Ticket #393: 393status48.dpatch

File 393status48.dpatch, 791.9 KB (added by kevan, at 2011-08-07T01:18:24Z)

fix broken test, fix pyflakes warnings
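The first patch in this bundle turns the Retrieve class into a Twisted IPushProducer: pauseProducing() parks a Deferred, _check_for_paused() chains each consumer write behind it, and resumeProducing() fires it so queued writes flow again. A minimal, dependency-free sketch of that gating idea (illustrative names only, not the actual Tahoe-LAFS code, which uses Twisted Deferreds and eventually()):

```python
class PauseGate:
    """Sketch of the Retrieve pause/resume gating: writes pass straight
    through to the consumer unless paused, in which case they queue
    until resumeProducing() is called."""

    def __init__(self, consumer_write):
        self._write = consumer_write   # callable that delivers data
        self._paused = False
        self._queued = []

    def pauseProducing(self):
        # Called by the download target when it has too much data.
        self._paused = True

    def resumeProducing(self):
        # Called by the download target when it is ready for more.
        self._paused = False
        queued, self._queued = self._queued, []
        for data in queued:
            self._write(data)

    def check_for_paused(self, data):
        # Called just before each write to the consumer.
        if self._paused:
            self._queued.append(data)
        else:
            self._write(data)

out = []
gate = PauseGate(out.append)
gate.check_for_paused(b"seg0")    # delivered immediately
gate.pauseProducing()
gate.check_for_paused(b"seg1")    # held while paused
gate.resumeProducing()            # flushes the queue
assert out == [b"seg0", b"seg1"]
```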

Mon Aug  1 18:35:24 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/retrieve: rework the mutable downloader to handle multiple-segment files
 
  The downloader needs substantial reworking to handle multiple segment
  mutable files, which it needs to handle for MDMF.

Mon Aug  1 18:39:31 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/publish: teach the publisher how to publish MDMF mutable files
 
  Like the downloader, the publisher needs some substantial changes to handle multiple segment mutable files.

Mon Aug  1 18:41:19 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * interfaces: change interfaces to work with MDMF
 
  A lot of this work concerns #993, in that it unifies (to an extent) the
  interfaces of mutable and immutable files.

Mon Aug  1 18:42:58 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * nodemaker: teach nodemaker how to create MDMF mutable files

Mon Aug  1 18:45:01 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/filenode: Modify mutable filenodes for use with MDMF
 
  In particular:
      - Break MutableFileNode and MutableFileVersion into distinct classes.
      - Implement the interface modifications made for MDMF.
      - Be aware of MDMF caps.
      - Learn how to create and work with MDMF files.

Mon Aug  1 18:48:11 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * client: teach client how to create and work with MDMF files

Mon Aug  1 18:49:26 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * nodemaker: teach nodemaker about MDMF caps

Mon Aug  1 18:51:40 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable: train checker and repairer to work with MDMF mutable files

Mon Aug  1 18:56:43 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/common: Alter common test code to work with MDMF.
 
  This mostly has to do with making the test code implement the new
  unified filenode interfaces.

Mon Aug  1 19:08:14 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * immutable/literal.py: Implement interface changes in literal nodes.

Mon Aug  1 19:09:05 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * immutable/filenode: implement unified filenode interface

Mon Aug  1 19:11:20 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/layout: Define MDMF share format, write tools for working with MDMF share format
 
  The changes in layout.py are mostly concerned with the MDMF share
  format. In particular, we define read and write proxy objects used by
  retrieval, publishing, and other code to write and read the MDMF share
  format. We create equivalent proxies for SDMF objects so that these
  objects can be suitably general.

Mon Aug  1 19:12:07 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * frontends/sftpd: Resolve incompatibilities between SFTP frontend and MDMF changes

Mon Aug  1 19:16:13 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * cli: teach CLI how to create MDMF mutable files
 
  Specifically, 'tahoe mkdir' and 'tahoe put' now take a --mutable-type
  argument.

Mon Aug  1 19:20:56 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * docs: amend configuration, webapi documentation to talk about MDMF

Mon Aug  1 20:28:10 PDT 2011  david-sarah@jacaranda.org
  * Fix some test failures caused by #393 patch.

Sat Aug  6 17:42:24 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * dirnode: teach dirnode to make MDMF directories

Sat Aug  6 17:42:59 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/servermap: Rework the servermap to work with MDMF mutable files

Sat Aug  6 17:43:48 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * webapi changes for MDMF
 
      - Learn how to create MDMF files and directories through the
        mutable-type argument.
      - Operate with the interface changes associated with MDMF and #993.
      - Learn how to do partial updates of mutable files.

Sat Aug  6 17:44:14 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_mutable: tests for MDMF
 
  These are their own patch because they cut across a lot of the changes
  I've made in implementing MDMF in such a way as to make it difficult to
  split them up into the other patches.

Sat Aug  6 17:44:36 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * uri: add MDMF and MDMF directory caps, add extension hint support

Sat Aug  6 17:44:59 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test: fix assorted tests broken by MDMF changes

Sat Aug  6 17:45:14 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * immutable/filenode: fix pyflakes warnings

New patches:

[mutable/retrieve: rework the mutable downloader to handle multiple-segment files
Kevan Carstensen <kevan@isnotajoke.com>**20110802013524
 Ignore-this: 398d11b5cb993b50e5e4fa6e7a3856dc
 
 The downloader needs substantial reworking to handle multiple segment
 mutable files, which it needs to handle for MDMF.
] {
hunk ./src/allmydata/mutable/retrieve.py 2
 
-import struct, time
+import time
 from itertools import count
 from zope.interface import implements
 from twisted.internet import defer
hunk ./src/allmydata/mutable/retrieve.py 7
 from twisted.python import failure
-from foolscap.api import DeadReferenceError, eventually, fireEventually
-from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError
-from allmydata.util import hashutil, idlib, log
+from twisted.internet.interfaces import IPushProducer, IConsumer
+from foolscap.api import eventually, fireEventually
+from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError, \
+                                 MDMF_VERSION, SDMF_VERSION
+from allmydata.util import hashutil, log, mathutil
 from allmydata.util.dictutil import DictOfSets
 from allmydata import hashtree, codec
 from allmydata.storage.server import si_b2a
hunk ./src/allmydata/mutable/retrieve.py 19
 from pycryptopp.publickey import rsa
 
 from allmydata.mutable.common import CorruptShareError, UncoordinatedWriteError
-from allmydata.mutable.layout import SIGNED_PREFIX, unpack_share_data
+from allmydata.mutable.layout import MDMFSlotReadProxy
 
 class RetrieveStatus:
     implements(IRetrieveStatus)
hunk ./src/allmydata/mutable/retrieve.py 86
     # times, and each will have a separate response chain. However the
     # Retrieve object will remain tied to a specific version of the file, and
     # will use a single ServerMap instance.
+    implements(IPushProducer)
 
hunk ./src/allmydata/mutable/retrieve.py 88
-    def __init__(self, filenode, servermap, verinfo, fetch_privkey=False):
+    def __init__(self, filenode, servermap, verinfo, fetch_privkey=False,
+                 verify=False):
         self._node = filenode
         assert self._node.get_pubkey()
         self._storage_index = filenode.get_storage_index()
hunk ./src/allmydata/mutable/retrieve.py 107
         self.verinfo = verinfo
         # during repair, we may be called upon to grab the private key, since
         # it wasn't picked up during a verify=False checker run, and we'll
-        # need it for repair to generate the a new version.
-        self._need_privkey = fetch_privkey
-        if self._node.get_privkey():
+        # need it for repair to generate a new version.
+        self._need_privkey = fetch_privkey or verify
+        if self._node.get_privkey() and not verify:
             self._need_privkey = False
 
hunk ./src/allmydata/mutable/retrieve.py 112
+        if self._need_privkey:
+            # TODO: Evaluate the need for this. We'll use it if we want
+            # to limit how many queries are on the wire for the privkey
+            # at once.
+            self._privkey_query_markers = [] # one Marker for each time we've
+                                             # tried to get the privkey.
+
+        # verify means that we are using the downloader logic to verify all
+        # of our shares. This tells the downloader a few things.
+        #
+        # 1. We need to download all of the shares.
+        # 2. We don't need to decode or decrypt the shares, since our
+        #    caller doesn't care about the plaintext, only the
+        #    information about which shares are or are not valid.
+        # 3. When we are validating readers, we need to validate the
+        #    signature on the prefix. Do we? We already do this in the
+        #    servermap update?
+        self._verify = False
+        if verify:
+            self._verify = True
+
         self._status = RetrieveStatus()
         self._status.set_storage_index(self._storage_index)
         self._status.set_helper(False)
hunk ./src/allmydata/mutable/retrieve.py 142
          offsets_tuple) = self.verinfo
         self._status.set_size(datalength)
         self._status.set_encoding(k, N)
+        self.readers = {}
+        self._paused = False
+        self._paused_deferred = None
+        self._offset = None
+        self._read_length = None
+        self.log("got seqnum %d" % self.verinfo[0])
+
 
     def get_status(self):
         return self._status
hunk ./src/allmydata/mutable/retrieve.py 160
             kwargs["facility"] = "tahoe.mutable.retrieve"
         return log.msg(*args, **kwargs)
 
-    def download(self):
+
+    ###################
+    # IPushProducer
+
+    def pauseProducing(self):
+        """
+        I am called by my download target if we have produced too much
+        data for it to handle. I make the downloader stop producing new
+        data until my resumeProducing method is called.
+        """
+        if self._paused:
+            return
+
+        # fired when the download is unpaused.
+        self._old_status = self._status.get_status()
+        self._status.set_status("Paused")
+
+        self._pause_deferred = defer.Deferred()
+        self._paused = True
+
+
+    def resumeProducing(self):
+        """
+        I am called by my download target once it is ready to begin
+        receiving data again.
+        """
+        if not self._paused:
+            return
+
+        self._paused = False
+        p = self._pause_deferred
+        self._pause_deferred = None
+        self._status.set_status(self._old_status)
+
+        eventually(p.callback, None)
+
+
+    def _check_for_paused(self, res):
+        """
+        I am called just before a write to the consumer. I return a
+        Deferred that eventually fires with the data that is to be
+        written to the consumer. If the download has not been paused,
+        the Deferred fires immediately. Otherwise, the Deferred fires
+        when the downloader is unpaused.
+        """
+        if self._paused:
+            d = defer.Deferred()
+            self._pause_deferred.addCallback(lambda ignored: d.callback(res))
+            return d
+        return defer.succeed(res)
+
+
+    def download(self, consumer=None, offset=0, size=None):
+        assert IConsumer.providedBy(consumer) or self._verify
+
+        if consumer:
+            self._consumer = consumer
+            # we provide IPushProducer, so streaming=True, per
+            # IConsumer.
+            self._consumer.registerProducer(self, streaming=True)
+
         self._done_deferred = defer.Deferred()
         self._started = time.time()
         self._status.set_status("Retrieving Shares")
hunk ./src/allmydata/mutable/retrieve.py 225
 
+        self._offset = offset
+        self._read_length = size
+
         # first, which servers can we use?
         versionmap = self.servermap.make_versionmap()
         shares = versionmap[self.verinfo]
hunk ./src/allmydata/mutable/retrieve.py 235
         self.remaining_sharemap = DictOfSets()
         for (shnum, peerid, timestamp) in shares:
             self.remaining_sharemap.add(shnum, peerid)
+            # If the servermap update fetched anything, it fetched at least 1
+            # KiB, so we ask for that much.
+            # TODO: Change the cache methods to allow us to fetch all of the
+            # data that they have, then change this method to do that.
+            any_cache = self._node._read_from_cache(self.verinfo, shnum,
+                                                    0, 1000)
+            ss = self.servermap.connections[peerid]
+            reader = MDMFSlotReadProxy(ss,
+                                       self._storage_index,
+                                       shnum,
+                                       any_cache)
+            reader.peerid = peerid
+            self.readers[shnum] = reader
+
 
         self.shares = {} # maps shnum to validated blocks
hunk ./src/allmydata/mutable/retrieve.py 251
+        self._active_readers = [] # list of active readers for this dl.
+        self._validated_readers = set() # set of readers that we have
+                                        # validated the prefix of
+        self._block_hash_trees = {} # shnum => hashtree
 
         # how many shares do we need?
hunk ./src/allmydata/mutable/retrieve.py 257
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+        (seqnum,
+         root_hash,
+         IV,
+         segsize,
+         datalength,
+         k,
+         N,
+         prefix,
          offsets_tuple) = self.verinfo
hunk ./src/allmydata/mutable/retrieve.py 266
+
+
+        # We need one share hash tree for the entire file; its leaves
+        # are the roots of the block hash trees for the shares that
+        # comprise it, and its root is in the verinfo.
+        self.share_hash_tree = hashtree.IncompleteHashTree(N)
+        self.share_hash_tree.set_hashes({0: root_hash})
+
+        # This will set up both the segment decoder and the tail segment
+        # decoder, as well as a variety of other instance variables that
+        # the download process will use.
+        self._setup_encoding_parameters()
         assert len(self.remaining_sharemap) >= k
hunk ./src/allmydata/mutable/retrieve.py 279
-        # we start with the lowest shnums we have available, since FEC is
-        # faster if we're using "primary shares"
-        self.active_shnums = set(sorted(self.remaining_sharemap.keys())[:k])
-        for shnum in self.active_shnums:
-            # we use an arbitrary peer who has the share. If shares are
-            # doubled up (more than one share per peer), we could make this
-            # run faster by spreading the load among multiple peers. But the
-            # algorithm to do that is more complicated than I want to write
-            # right now, and a well-provisioned grid shouldn't have multiple
-            # shares per peer.
-            peerid = list(self.remaining_sharemap[shnum])[0]
-            self.get_data(shnum, peerid)
 
hunk ./src/allmydata/mutable/retrieve.py 280
-        # control flow beyond this point: state machine. Receiving responses
-        # from queries is the input. We might send out more queries, or we
-        # might produce a result.
+        self.log("starting download")
+        self._paused = False
+        self._started_fetching = time.time()
 
hunk ./src/allmydata/mutable/retrieve.py 284
+        self._add_active_peers()
+        # The download process beyond this is a state machine.
+        # _add_active_peers will select the peers that we want to use
+        # for the download, and then attempt to start downloading. After
+        # each segment, it will check for doneness, reacting to broken
+        # peers and corrupt shares as necessary. If it runs out of good
+        # peers before downloading all of the segments, _done_deferred
+        # will errback.  Otherwise, it will eventually callback with the
+        # contents of the mutable file.
         return self._done_deferred
 
hunk ./src/allmydata/mutable/retrieve.py 295
-    def get_data(self, shnum, peerid):
-        self.log(format="sending sh#%(shnum)d request to [%(peerid)s]",
-                 shnum=shnum,
-                 peerid=idlib.shortnodeid_b2a(peerid),
-                 level=log.NOISY)
-        ss = self.servermap.connections[peerid]
-        started = time.time()
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+
+    def decode(self, blocks_and_salts, segnum):
+        """
+        I am a helper method that the mutable file update process uses
+        as a shortcut to decode and decrypt the segments that it needs
+        to fetch in order to perform a file update. I take in a
+        collection of blocks and salts, and pick some of those to make a
+        segment with. I return the plaintext associated with that
+        segment.
+        """
+        # shnum => block hash tree. Unused, but setup_encoding_parameters will
+        # want to set this.
+        # XXX: Make it so that it won't set this if we're just decoding.
+        self._block_hash_trees = {}
+        self._setup_encoding_parameters()
+        # This is the form expected by decode.
+        blocks_and_salts = blocks_and_salts.items()
+        blocks_and_salts = [(True, [d]) for d in blocks_and_salts]
+
+        d = self._decode_blocks(blocks_and_salts, segnum)
+        d.addCallback(self._decrypt_segment)
+        return d
+
+
+    def _setup_encoding_parameters(self):
+        """
+        I set up the encoding parameters, including k, n, the number
+        of segments associated with this file, and the segment decoder.
+        """
+        (seqnum,
+         root_hash,
+         IV,
+         segsize,
+         datalength,
+         k,
+         n,
+         known_prefix,
          offsets_tuple) = self.verinfo
hunk ./src/allmydata/mutable/retrieve.py 333
-        offsets = dict(offsets_tuple)
+        self._required_shares = k
+        self._total_shares = n
+        self._segment_size = segsize
+        self._data_length = datalength
+
+        if not IV:
+            self._version = MDMF_VERSION
+        else:
+            self._version = SDMF_VERSION
 
hunk ./src/allmydata/mutable/retrieve.py 343
-        # we read the checkstring, to make sure that the data we grab is from
-        # the right version.
-        readv = [ (0, struct.calcsize(SIGNED_PREFIX)) ]
+        if datalength and segsize:
+            self._num_segments = mathutil.div_ceil(datalength, segsize)
+            self._tail_data_size = datalength % segsize
+        else:
+            self._num_segments = 0
+            self._tail_data_size = 0
 
hunk ./src/allmydata/mutable/retrieve.py 350
-        # We also read the data, and the hashes necessary to validate them
-        # (share_hash_chain, block_hash_tree, share_data). We don't read the
-        # signature or the pubkey, since that was handled during the
-        # servermap phase, and we'll be comparing the share hash chain
-        # against the roothash that was validated back then.
+        self._segment_decoder = codec.CRSDecoder()
+        self._segment_decoder.set_params(segsize, k, n)
 
hunk ./src/allmydata/mutable/retrieve.py 353
-        readv.append( (offsets['share_hash_chain'],
-                       offsets['enc_privkey'] - offsets['share_hash_chain'] ) )
+        if  not self._tail_data_size:
+            self._tail_data_size = segsize
 
hunk ./src/allmydata/mutable/retrieve.py 356
-        # if we need the private key (for repair), we also fetch that
-        if self._need_privkey:
-            readv.append( (offsets['enc_privkey'],
-                           offsets['EOF'] - offsets['enc_privkey']) )
+        self._tail_segment_size = mathutil.next_multiple(self._tail_data_size,
+                                                         self._required_shares)
+        if self._tail_segment_size == self._segment_size:
+            self._tail_decoder = self._segment_decoder
+        else:
+            self._tail_decoder = codec.CRSDecoder()
+            self._tail_decoder.set_params(self._tail_segment_size,
+                                          self._required_shares,
+                                          self._total_shares)
+
+        self.log("got encoding parameters: "
+                 "k: %d "
+                 "n: %d "
+                 "%d segments of %d bytes each (%d byte tail segment)" % \
+                 (k, n, self._num_segments, self._segment_size,
+                  self._tail_segment_size))
 
hunk ./src/allmydata/mutable/retrieve.py 373
-        m = Marker()
-        self._outstanding_queries[m] = (peerid, shnum, started)
+        for i in xrange(self._total_shares):
+            # So we don't have to do this later.
+            self._block_hash_trees[i] = hashtree.IncompleteHashTree(self._num_segments)
 
hunk ./src/allmydata/mutable/retrieve.py 377
-        # ask the cache first
-        got_from_cache = False
-        datavs = []
-        for (offset, length) in readv:
-            data = self._node._read_from_cache(self.verinfo, shnum, offset, length)
-            if data is not None:
-                datavs.append(data)
-        if len(datavs) == len(readv):
-            self.log("got data from cache")
-            got_from_cache = True
-            d = fireEventually({shnum: datavs})
-            # datavs is a dict mapping shnum to a pair of strings
+        # Our last task is to tell the downloader where to start and
+        # where to stop. We use three parameters for that:
+        #   - self._start_segment: the segment that we need to start
+        #     downloading from.
+        #   - self._current_segment: the next segment that we need to
+        #     download.
+        #   - self._last_segment: The last segment that we were asked to
+        #     download.
+        #
+        #  We say that the download is complete when
+        #  self._current_segment > self._last_segment. We use
+        #  self._start_segment and self._last_segment to know when to
+        #  strip things off of segments, and how much to strip.
+        if self._offset:
+            self.log("got offset: %d" % self._offset)
+            # our start segment is the first segment containing the
+            # offset we were given.
+            start = mathutil.div_ceil(self._offset,
+                                      self._segment_size)
+            # this gets us the first segment after self._offset. Then
+            # our start segment is the one before it.
+            start -= 1
+
+            assert start < self._num_segments
+            self._start_segment = start
+            self.log("got start segment: %d" % self._start_segment)
         else:
hunk ./src/allmydata/mutable/retrieve.py 404
-            d = self._do_read(ss, peerid, self._storage_index, [shnum], readv)
-        self.remaining_sharemap.discard(shnum, peerid)
+            self._start_segment = 0
 
hunk ./src/allmydata/mutable/retrieve.py 406
-        d.addCallback(self._got_results, m, peerid, started, got_from_cache)
-        d.addErrback(self._query_failed, m, peerid)
-        # errors that aren't handled by _query_failed (and errors caused by
-        # _query_failed) get logged, but we still want to check for doneness.
-        def _oops(f):
-            self.log(format="problem in _query_failed for sh#%(shnum)d to %(peerid)s",
-                     shnum=shnum,
-                     peerid=idlib.shortnodeid_b2a(peerid),
-                     failure=f,
-                     level=log.WEIRD, umid="W0xnQA")
-        d.addErrback(_oops)
-        d.addBoth(self._check_for_done)
-        # any error during _check_for_done means the download fails. If the
-        # download is successful, _check_for_done will fire _done by itself.
-        d.addErrback(self._done)
-        d.addErrback(log.err)
-        return d # purely for testing convenience
 
hunk ./src/allmydata/mutable/retrieve.py 407
-    def _do_read(self, ss, peerid, storage_index, shnums, readv):
-        # isolate the callRemote to a separate method, so tests can subclass
-        # Publish and override it
-        d = ss.callRemote("slot_readv", storage_index, shnums, readv)
-        return d
+        if self._read_length:
+            # our end segment is the last segment containing part of the
+            # segment that we were asked to read.
+            self.log("got read length %d" % self._read_length)
+            end_data = self._offset + self._read_length
+            end = mathutil.div_ceil(end_data,
+                                    self._segment_size)
+            end -= 1
+            assert end < self._num_segments
+            self._last_segment = end
+            self.log("got end segment: %d" % self._last_segment)
+        else:
+            self._last_segment = self._num_segments - 1
 
hunk ./src/allmydata/mutable/retrieve.py 421
-    def remove_peer(self, peerid):
-        for shnum in list(self.remaining_sharemap.keys()):
-            self.remaining_sharemap.discard(shnum, peerid)
+        self._current_segment = self._start_segment
 
hunk ./src/allmydata/mutable/retrieve.py 423
-    def _got_results(self, datavs, marker, peerid, started, got_from_cache):
-        now = time.time()
-        elapsed = now - started
-        if not got_from_cache:
-            self._status.add_fetch_timing(peerid, elapsed)
-        self.log(format="got results (%(shares)d shares) from [%(peerid)s]",
-                 shares=len(datavs),
-                 peerid=idlib.shortnodeid_b2a(peerid),
-                 level=log.NOISY)
-        self._outstanding_queries.pop(marker, None)
-        if not self._running:
-            return
+    def _add_active_peers(self):
+        """
+        I populate self._active_readers with enough active readers to
+        retrieve the contents of this mutable file. I am called before
+        downloading starts, and (eventually) after each validation
+        error, connection error, or other problem in the download.
+        """
+        # TODO: It would be cool to investigate other heuristics for
+        # reader selection. For instance, the cost (in time the user
+        # spends waiting for their file) of selecting a really slow peer
+        # that happens to have a primary share is probably more than
+        # selecting a really fast peer that doesn't have a primary
+        # share. Maybe the servermap could be extended to provide this
+        # information; it could keep track of latency information while
+        # it gathers more important data, and then this routine could
+        # use that to select active readers.
+        #
+        # (these and other questions would be easier to answer with a
+        #  robust, configurable tahoe-lafs simulator, which modeled node
+        #  failures, differences in node speed, and other characteristics
+        #  that we expect storage servers to have.  You could have
+        #  presets for really stable grids (like allmydata.com),
+        #  friendnets, make it easy to configure your own settings, and
+        #  then simulate the effect of big changes on these use cases
+        #  instead of just reasoning about what the effect might be. Out
+        #  of scope for MDMF, though.)
 
hunk ./src/allmydata/mutable/retrieve.py 450
-        # note that we only ask for a single share per query, so we only
-        # expect a single share back. On the other hand, we use the extra
-        # shares if we get them.. seems better than an assert().
+        # We need at least self._required_shares readers to download a
+        # segment.
+        if self._verify:
+            needed = self._total_shares
+        else:
+            needed = self._required_shares - len(self._active_readers)
+        # XXX: Why don't format= log messages work here?
+        self.log("adding %d peers to the active peers list" % needed)
 
hunk ./src/allmydata/mutable/retrieve.py 459
-        for shnum,datav in datavs.items():
-            (prefix, hash_and_data) = datav[:2]
-            try:
-                self._got_results_one_share(shnum, peerid,
-                                            prefix, hash_and_data)
-            except CorruptShareError, e:
-                # log it and give the other shares a chance to be processed
-                f = failure.Failure()
-                self.log(format="bad share: %(f_value)s",
-                         f_value=str(f.value), failure=f,
-                         level=log.WEIRD, umid="7fzWZw")
-                self.notify_server_corruption(peerid, shnum, str(e))
-                self.remove_peer(peerid)
-                self.servermap.mark_bad_share(peerid, shnum, prefix)
-                self._bad_shares.add( (peerid, shnum) )
-                self._status.problems[peerid] = f
-                self._last_failure = f
-                pass
-            if self._need_privkey and len(datav) > 2:
-                lp = None
-                self._try_to_validate_privkey(datav[2], peerid, shnum, lp)
-        # all done!
+        # We favor lower numbered shares, since FEC is faster with
+        # primary shares than with other shares, and lower-numbered
+        # shares are more likely to be primary than higher numbered
+        # shares.
+        active_shnums = set(sorted(self.remaining_sharemap.keys()))
+        # We shouldn't consider adding shares that we already have; this
+        # will cause problems later.
+        active_shnums -= set([reader.shnum for reader in self._active_readers])
+        active_shnums = list(active_shnums)[:needed]
+        if len(active_shnums) < needed and not self._verify:
+            # We don't have enough readers to retrieve the file; fail.
+            return self._failed()
 
hunk ./src/allmydata/mutable/retrieve.py 472
-    def notify_server_corruption(self, peerid, shnum, reason):
-        ss = self.servermap.connections[peerid]
-        ss.callRemoteOnly("advise_corrupt_share",
-                          "mutable", self._storage_index, shnum, reason)
+        for shnum in active_shnums:
+            self._active_readers.append(self.readers[shnum])
+            self.log("added reader for share %d" % shnum)
+        assert len(self._active_readers) >= self._required_shares
+        # Conceptually, this is part of the _add_active_peers step. It
+        # validates the prefixes of newly added readers to make sure
+        # that they match what we are expecting for self.verinfo. If
+        # validation is successful, _validate_active_prefixes will call
+        # _download_current_segment for us. If validation is
+        # unsuccessful, then _validate_prefixes will remove the peer and
+        # call _add_active_peers again, where we will attempt to rectify
+        # the problem by choosing another peer.
+        return self._validate_active_prefixes()
 
682hunk ./src/allmydata/mutable/retrieve.py 486
683-    def _got_results_one_share(self, shnum, peerid,
684-                               got_prefix, got_hash_and_data):
685-        self.log("_got_results: got shnum #%d from peerid %s"
686-                 % (shnum, idlib.shortnodeid_b2a(peerid)))
687-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
688+
689+    def _validate_active_prefixes(self):
690+        """
691+        I check to make sure that the prefixes on the peers that I am
692+        currently reading from match the prefix that we want to see,
693+        as recorded in self.verinfo.
694+
695+        If I find that all of the active peers have acceptable prefixes,
696+        I pass control to _download_current_segment, which will use
697+        those peers to do cool things. If I find that some of the active
698+        peers have unacceptable prefixes, I will remove them from active
699+        peers (and from further consideration) and call
700+        _add_active_peers to attempt to rectify the situation. I keep
701+        track of which peers I have already validated so that I don't
702+        need to do so again.
703+        """
704+        assert self._active_readers, "No more active readers"
705+
706+        ds = []
707+        new_readers = set(self._active_readers) - self._validated_readers
708+        self.log('validating %d newly-added active readers' % len(new_readers))
709+
710+        for reader in new_readers:
711+            # We force a remote read here -- otherwise, we are relying
712+            # on cached data that we already verified as valid, and we
713+            # won't detect an uncoordinated write that has occurred
714+            # since the last servermap update.
715+            d = reader.get_prefix(force_remote=True)
716+            d.addCallback(self._try_to_validate_prefix, reader)
717+            ds.append(d)
718+        dl = defer.DeferredList(ds, consumeErrors=True)
719+        def _check_results(results):
720+            # Each result in results will be of the form (success, msg).
721+            # We don't care about msg, but success will tell us whether
722+            # or not the checkstring validated. If it didn't, we need to
723+            # remove the offending (peer,share) from our active readers,
724+            # and ensure that active readers is again populated.
725+            bad_readers = []
726+            for i, result in enumerate(results):
727+                if not result[0]:
728+                    reader = self._active_readers[i]
729+                    f = result[1]
730+                    assert isinstance(f, failure.Failure)
731+
732+                    self.log("The reader %s failed to "
733+                             "properly validate: %s" % \
734+                             (reader, str(f.value)))
735+                    bad_readers.append((reader, f))
736+                else:
737+                    reader = self._active_readers[i]
738+                    self.log("the reader %s checks out, so we'll use it" % \
739+                             reader)
740+                    self._validated_readers.add(reader)
741+                    # Each time we validate a reader, we check to see if
742+                    # we need the private key. If we do, we politely ask
743+                    # for it and then continue computing. If we find
744+                    # that we haven't gotten it at the end of
745+                    # segment decoding, then we'll take more drastic
746+                    # measures.
747+                    if self._need_privkey and not self._node.is_readonly():
748+                        d = reader.get_encprivkey()
749+                        d.addCallback(self._try_to_validate_privkey, reader)
750+            if bad_readers:
751+                # We do them all at once, or else we screw up list indexing.
752+                for (reader, f) in bad_readers:
753+                    self._mark_bad_share(reader, f)
754+                if self._verify:
755+                    if len(self._active_readers) >= self._required_shares:
756+                        return self._download_current_segment()
757+                    else:
758+                        return self._failed()
759+                else:
760+                    return self._add_active_peers()
761+            else:
762+                return self._download_current_segment()
763+            # (_download_current_segment asserts that it has enough
764+            # active readers; bad readers must be removed first.)
765+        dl.addCallback(_check_results)
766+        return dl
767+
768+
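The result pairs that `_check_results` walks come from a `DeferredList` built with `consumeErrors=True`: one `(success, value)` pair per deferred, where `value` is a `Failure` on failure. A stand-alone sketch of that partitioning (names hypothetical; zipping readers with results avoids any index bookkeeping):

```python
# Illustrative sketch (not part of the patch): splitting the
# (success, value) pairs a DeferredList produces into validated
# readers and bad readers, as _check_results does.
def partition_results(readers, results):
    validated, bad = [], []
    for reader, (success, value) in zip(readers, results):
        if success:
            validated.append(reader)
        else:
            bad.append((reader, value))  # value is a Failure here
    return validated, bad

v, b = partition_results(["r0", "r1", "r2"],
                         [(True, None), (False, "boom"), (True, None)])
print(v, b)  # ['r0', 'r2'] [('r1', 'boom')]
```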
769+    def _try_to_validate_prefix(self, prefix, reader):
770+        """
771+        I check that the prefix returned by a candidate server for
772+        retrieval matches the prefix that the servermap knows about
773+        (and, hence, the prefix that was validated earlier). If it does,
774+        I return True, which means that I approve of the use of the
775+        candidate server for segment retrieval. If it doesn't, I return
776+        False, which means that another server must be chosen.
777+        """
778+        (seqnum,
779+         root_hash,
780+         IV,
781+         segsize,
782+         datalength,
783+         k,
784+         N,
785+         known_prefix,
786          offsets_tuple) = self.verinfo
787hunk ./src/allmydata/mutable/retrieve.py 585
788-        assert len(got_prefix) == len(prefix), (len(got_prefix), len(prefix))
789-        if got_prefix != prefix:
790-            msg = "someone wrote to the data since we read the servermap: prefix changed"
791-            raise UncoordinatedWriteError(msg)
792-        (share_hash_chain, block_hash_tree,
793-         share_data) = unpack_share_data(self.verinfo, got_hash_and_data)
794+        if known_prefix != prefix:
795+            self.log("prefix from share %d doesn't match" % reader.shnum)
796+            raise UncoordinatedWriteError("Mismatched prefix -- this could "
797+                                          "indicate an uncoordinated write")
798+        # Otherwise, we're okay -- no issues.
799 
800hunk ./src/allmydata/mutable/retrieve.py 591
801-        assert isinstance(share_data, str)
802-        # build the block hash tree. SDMF has only one leaf.
803-        leaves = [hashutil.block_hash(share_data)]
804-        t = hashtree.HashTree(leaves)
805-        if list(t) != block_hash_tree:
806-            raise CorruptShareError(peerid, shnum, "block hash tree failure")
807-        share_hash_leaf = t[0]
808-        t2 = hashtree.IncompleteHashTree(N)
809-        # root_hash was checked by the signature
810-        t2.set_hashes({0: root_hash})
811-        try:
812-            t2.set_hashes(hashes=share_hash_chain,
813-                          leaves={shnum: share_hash_leaf})
814-        except (hashtree.BadHashError, hashtree.NotEnoughHashesError,
815-                IndexError), e:
816-            msg = "corrupt hashes: %s" % (e,)
817-            raise CorruptShareError(peerid, shnum, msg)
818-        self.log(" data valid! len=%d" % len(share_data))
819-        # each query comes down to this: placing validated share data into
820-        # self.shares
821-        self.shares[shnum] = share_data
822 
823hunk ./src/allmydata/mutable/retrieve.py 592
824-    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
825+    def _remove_reader(self, reader):
826+        """
827+        At various points, we will wish to remove a peer from
828+        consideration and/or use. These include, but are not necessarily
829+        limited to:
830 
831hunk ./src/allmydata/mutable/retrieve.py 598
832-        alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
833-        alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
834-        if alleged_writekey != self._node.get_writekey():
835-            self.log("invalid privkey from %s shnum %d" %
836-                     (idlib.nodeid_b2a(peerid)[:8], shnum),
837-                     parent=lp, level=log.WEIRD, umid="YIw4tA")
838-            return
839+            - A connection error.
840+            - A mismatched prefix (that is, a prefix that does not match
841+              our conception of the version information string).
842+            - A failing block hash, salt hash, or share hash, which can
843+              indicate disk failure/bit flips, or network trouble.
844 
845hunk ./src/allmydata/mutable/retrieve.py 604
846-        # it's good
847-        self.log("got valid privkey from shnum %d on peerid %s" %
848-                 (shnum, idlib.shortnodeid_b2a(peerid)),
849-                 parent=lp)
850-        privkey = rsa.create_signing_key_from_string(alleged_privkey_s)
851-        self._node._populate_encprivkey(enc_privkey)
852-        self._node._populate_privkey(privkey)
853-        self._need_privkey = False
854+        This method will do that. I will make sure that the
855+        (shnum,reader) combination represented by my reader argument is
856+        not used for anything else during this download. I will not
857+        advise the reader of any corruption, something that my callers
858+        may wish to do on their own.
859+        """
860+        # TODO: When you're done writing this, see if this is ever
861+        # actually used for something that _mark_bad_share isn't. I have
862+        # a feeling that they will be used for very similar things, and
863+        # that having them both here is just going to be an epic amount
864+        # of code duplication.
865+        #
866+        # (well, okay, not epic, but meaningful)
867+        self.log("removing reader %s" % reader)
868+        # Remove the reader from _active_readers
869+        self._active_readers.remove(reader)
870+        # TODO: self.readers.remove(reader)?
871+        for shnum in list(self.remaining_sharemap.keys()):
872+            self.remaining_sharemap.discard(shnum, reader.peerid)
873 
874hunk ./src/allmydata/mutable/retrieve.py 624
875-    def _query_failed(self, f, marker, peerid):
876-        self.log(format="query to [%(peerid)s] failed",
877-                 peerid=idlib.shortnodeid_b2a(peerid),
878-                 level=log.NOISY)
879-        self._status.problems[peerid] = f
880-        self._outstanding_queries.pop(marker, None)
881-        if not self._running:
882-            return
883+
884+    def _mark_bad_share(self, reader, f):
885+        """
886+        I mark the (peerid, shnum) encapsulated by my reader argument as
887+        a bad share, which means that it will not be used anywhere else.
888+
889+        There are several reasons to want to mark something as a bad
890+        share. These include:
891+
892+            - A connection error to the peer.
893+            - A mismatched prefix (that is, a prefix that does not match
894+              our local conception of the version information string).
895+            - A failing block hash, salt hash, share hash, or other
896+              integrity check.
897+
898+        This method will ensure that readers that we wish to mark bad
899+        (for these reasons or other reasons) are not used for the rest
900+        of the download. Additionally, it will attempt to tell the
901+        remote peer (with no guarantee of success) that its share is
902+        corrupt.
903+        """
904+        self.log("marking share %d on server %s as bad" % \
905+                 (reader.shnum, reader))
906+        prefix = self.verinfo[-2]
907+        self.servermap.mark_bad_share(reader.peerid,
908+                                      reader.shnum,
909+                                      prefix)
910+        self._remove_reader(reader)
911+        self._bad_shares.add((reader.peerid, reader.shnum, f))
912+        self._status.problems[reader.peerid] = f
913         self._last_failure = f
914hunk ./src/allmydata/mutable/retrieve.py 655
915-        self.remove_peer(peerid)
916-        level = log.WEIRD
917-        if f.check(DeadReferenceError):
918-            level = log.UNUSUAL
919-        self.log(format="error during query: %(f_value)s",
920-                 f_value=str(f.value), failure=f, level=level, umid="gOJB5g")
921+        self.notify_server_corruption(reader.peerid, reader.shnum,
922+                                      str(f.value))
923 
924hunk ./src/allmydata/mutable/retrieve.py 658
925-    def _check_for_done(self, res):
926-        # exit paths:
927-        #  return : keep waiting, no new queries
928-        #  return self._send_more_queries(outstanding) : send some more queries
929-        #  fire self._done(plaintext) : download successful
930-        #  raise exception : download fails
931 
932hunk ./src/allmydata/mutable/retrieve.py 659
933-        self.log(format="_check_for_done: running=%(running)s, decoding=%(decoding)s",
934-                 running=self._running, decoding=self._decoding,
935-                 level=log.NOISY)
936-        if not self._running:
937-            return
938-        if self._decoding:
939-            return
940-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
941-         offsets_tuple) = self.verinfo
942+    def _download_current_segment(self):
943+        """
944+        I download, validate, decode, decrypt, and assemble the segment
945+        that this Retrieve is currently responsible for downloading.
946+        """
947+        assert len(self._active_readers) >= self._required_shares
948+        if self._current_segment <= self._last_segment:
949+            d = self._process_segment(self._current_segment)
950+        else:
951+            d = defer.succeed(None)
952+        d.addBoth(self._turn_barrier)
953+        d.addCallback(self._check_for_done)
954+        return d
955 
956hunk ./src/allmydata/mutable/retrieve.py 673
957-        if len(self.shares) < k:
958-            # we don't have enough shares yet
959-            return self._maybe_send_more_queries(k)
960-        if self._need_privkey:
961-            # we got k shares, but none of them had a valid privkey. TODO:
962-            # look further. Adding code to do this is a bit complicated, and
963-            # I want to avoid that complication, and this should be pretty
964-            # rare (k shares with bitflips in the enc_privkey but not in the
965-            # data blocks). If we actually do get here, the subsequent repair
966-            # will fail for lack of a privkey.
967-            self.log("got k shares but still need_privkey, bummer",
968-                     level=log.WEIRD, umid="MdRHPA")
969 
970hunk ./src/allmydata/mutable/retrieve.py 674
971-        # we have enough to finish. All the shares have had their hashes
972-        # checked, so if something fails at this point, we don't know how
973-        # to fix it, so the download will fail.
974+    def _turn_barrier(self, result):
975+        """
976+        I help the download process avoid the recursion limit issues
977+        discussed in #237.
978+        """
979+        return fireEventually(result)
980 
981hunk ./src/allmydata/mutable/retrieve.py 681
982-        self._decoding = True # avoid reentrancy
983-        self._status.set_status("decoding")
984-        now = time.time()
985-        elapsed = now - self._started
986-        self._status.timings["fetch"] = elapsed
987 
988hunk ./src/allmydata/mutable/retrieve.py 682
989-        d = defer.maybeDeferred(self._decode)
990-        d.addCallback(self._decrypt, IV, self._node.get_readkey())
991-        d.addBoth(self._done)
992-        return d # purely for test convenience
993+    def _process_segment(self, segnum):
994+        """
995+        I download, validate, decode, and decrypt one segment of the
996+        file that this Retrieve is retrieving. This means coordinating
997+        the process of getting k blocks of that file, validating them,
998+        assembling them into one segment with the decoder, and then
999+        decrypting them.
1000+        """
1001+        self.log("processing segment %d" % segnum)
1002 
1003hunk ./src/allmydata/mutable/retrieve.py 692
1004-    def _maybe_send_more_queries(self, k):
1005-        # we don't have enough shares yet. Should we send out more queries?
1006-        # There are some number of queries outstanding, each for a single
1007-        # share. If we can generate 'needed_shares' additional queries, we do
1008-        # so. If we can't, then we know this file is a goner, and we raise
1009-        # NotEnoughSharesError.
1010-        self.log(format=("_maybe_send_more_queries, have=%(have)d, k=%(k)d, "
1011-                         "outstanding=%(outstanding)d"),
1012-                 have=len(self.shares), k=k,
1013-                 outstanding=len(self._outstanding_queries),
1014-                 level=log.NOISY)
1015+        # TODO: The old code uses a marker. Should this code do that
1016+        # too? What did the Marker do?
1017+        assert len(self._active_readers) >= self._required_shares
1018 
1019hunk ./src/allmydata/mutable/retrieve.py 696
1020-        remaining_shares = k - len(self.shares)
1021-        needed = remaining_shares - len(self._outstanding_queries)
1022-        if not needed:
1023-            # we have enough queries in flight already
1024+        # We need to ask each of our active readers for its block and
1025+        # salt. We will then validate those. If validation is
1026+        # successful, we will assemble the results into plaintext.
1027+        ds = []
1028+        for reader in self._active_readers:
1029+            started = time.time()
1030+            d = reader.get_block_and_salt(segnum, queue=True)
1031+            d2 = self._get_needed_hashes(reader, segnum)
1032+            dl = defer.DeferredList([d, d2], consumeErrors=True)
1033+            dl.addCallback(self._validate_block, segnum, reader, started)
1034+            dl.addErrback(self._validation_or_decoding_failed, [reader])
1035+            ds.append(dl)
1036+            reader.flush()
1037+        dl = defer.DeferredList(ds)
1038+        if self._verify:
1039+            dl.addCallback(lambda ignored: "")
1040+            dl.addCallback(self._set_segment)
1041+        else:
1042+            dl.addCallback(self._maybe_decode_and_decrypt_segment, segnum)
1043+        return dl
1044 
1045hunk ./src/allmydata/mutable/retrieve.py 717
1046-            # TODO: but if they've been in flight for a long time, and we
1047-            # have reason to believe that new queries might respond faster
1048-            # (i.e. we've seen other queries come back faster, then consider
1049-            # sending out new queries. This could help with peers which have
1050-            # silently gone away since the servermap was updated, for which
1051-            # we're still waiting for the 15-minute TCP disconnect to happen.
1052-            self.log("enough queries are in flight, no more are needed",
1053-                     level=log.NOISY)
1054-            return
1055 
1056hunk ./src/allmydata/mutable/retrieve.py 718
1057-        outstanding_shnums = set([shnum
1058-                                  for (peerid, shnum, started)
1059-                                  in self._outstanding_queries.values()])
1060-        # prefer low-numbered shares, they are more likely to be primary
1061-        available_shnums = sorted(self.remaining_sharemap.keys())
1062-        for shnum in available_shnums:
1063-            if shnum in outstanding_shnums:
1064-                # skip ones that are already in transit
1065-                continue
1066-            if shnum not in self.remaining_sharemap:
1067-                # no servers for that shnum. note that DictOfSets removes
1068-                # empty sets from the dict for us.
1069-                continue
1070-            peerid = list(self.remaining_sharemap[shnum])[0]
1071-            # get_data will remove that peerid from the sharemap, and add the
1072-            # query to self._outstanding_queries
1073-            self._status.set_status("Retrieving More Shares")
1074-            self.get_data(shnum, peerid)
1075-            needed -= 1
1076-            if not needed:
1077+    def _maybe_decode_and_decrypt_segment(self, blocks_and_salts, segnum):
1078+        """
1079+        I take the results of fetching and validating the blocks from a
1080+        callback chain in another method. If the results are such that
1081+        they tell me that validation and fetching succeeded without
1082+        incident, I will proceed with decoding and decryption.
1083+        Otherwise, I will do nothing.
1084+        """
1085+        self.log("trying to decode and decrypt segment %d" % segnum)
1086+        failures = False
1087+        for block_and_salt in blocks_and_salts:
1088+            if not block_and_salt[0] or block_and_salt[1] is None:
1089+                self.log("some validation operations failed; not proceeding")
1090+                failures = True
1091                 break
1092hunk ./src/allmydata/mutable/retrieve.py 733
1093+        if not failures:
1094+            self.log("everything looks ok, building segment %d" % segnum)
1095+            d = self._decode_blocks(blocks_and_salts, segnum)
1096+            d.addCallback(self._decrypt_segment)
1097+            d.addErrback(self._validation_or_decoding_failed,
1098+                         self._active_readers)
1099+            # check to see whether we've been paused before writing
1100+            # anything.
1101+            d.addCallback(self._check_for_paused)
1102+            d.addCallback(self._set_segment)
1103+            return d
1104+        else:
1105+            return defer.succeed(None)
1106 
1107hunk ./src/allmydata/mutable/retrieve.py 747
1108-        # at this point, we have as many outstanding queries as we can. If
1109-        # needed!=0 then we might not have enough to recover the file.
1110-        if needed:
1111-            format = ("ran out of peers: "
1112-                      "have %(have)d shares (k=%(k)d), "
1113-                      "%(outstanding)d queries in flight, "
1114-                      "need %(need)d more, "
1115-                      "found %(bad)d bad shares")
1116-            args = {"have": len(self.shares),
1117-                    "k": k,
1118-                    "outstanding": len(self._outstanding_queries),
1119-                    "need": needed,
1120-                    "bad": len(self._bad_shares),
1121-                    }
1122-            self.log(format=format,
1123-                     level=log.WEIRD, umid="ezTfjw", **args)
1124-            err = NotEnoughSharesError("%s, last failure: %s" %
1125-                                      (format % args, self._last_failure))
1126-            if self._bad_shares:
1127-                self.log("We found some bad shares this pass. You should "
1128-                         "update the servermap and try again to check "
1129-                         "more peers",
1130-                         level=log.WEIRD, umid="EFkOlA")
1131-                err.servermap = self.servermap
1132-            raise err
1133 
1134hunk ./src/allmydata/mutable/retrieve.py 748
1135+    def _set_segment(self, segment):
1136+        """
1137+        Given a plaintext segment, I register that segment with the
1138+        target that is handling the file download.
1139+        """
1140+        self.log("got plaintext for segment %d" % self._current_segment)
1141+        if self._current_segment == self._start_segment:
1142+            # We're on the first segment. It's possible that we want
1143+            # only some part of the end of this segment, and that we
1144+            # just downloaded the whole thing to get that part. If so,
1145+            # we need to account for that and give the reader just the
1146+            # data that they want.
1147+            n = self._offset % self._segment_size
1148+            self.log("stripping %d bytes off of the first segment" % n)
1149+            self.log("original segment length: %d" % len(segment))
1150+            segment = segment[n:]
1151+            self.log("new segment length: %d" % len(segment))
1152+
1153+        if self._current_segment == self._last_segment and self._read_length is not None:
1154+            # We're on the last segment. It's possible that we only want
1155+            # part of the beginning of this segment, and that we
1156+            # downloaded the whole thing anyway. Make sure to give the
1157+            # caller only the portion of the segment that they want to
1158+            # receive.
1159+            extra = self._read_length
1160+            if self._start_segment != self._last_segment:
1161+                extra -= self._segment_size - \
1162+                            (self._offset % self._segment_size)
1163+            extra %= self._segment_size
1164+            self.log("original segment length: %d" % len(segment))
1165+            segment = segment[:extra]
1166+            self.log("new segment length: %d" % len(segment))
1167+            self.log("only taking %d bytes of the last segment" % extra)
1168+
1169+        if not self._verify:
1170+            self._consumer.write(segment)
1171+        else:
1172+            # we don't care about the plaintext if we are doing a verify.
1173+            segment = None
1174+        self._current_segment += 1
1175+
1176+
1177+    def _validation_or_decoding_failed(self, f, readers):
1178+        """
1179+        I am called when a block or a salt fails to correctly validate, or when
1180+        the decryption or decoding operation fails for some reason.  I react to
1181+        this failure by notifying the remote server of corruption, and then
1182+        removing the remote peer from further activity.
1183+        """
1184+        assert isinstance(readers, list)
1185+        bad_shnums = [reader.shnum for reader in readers]
1186+
1187+        self.log("validation or decoding failed on share(s) %s, "
1188+                 "peer(s) %s, segment %d: %s" % \
1189+                 (bad_shnums, readers, self._current_segment, str(f)))
1190+        for reader in readers:
1191+            self._mark_bad_share(reader, f)
1192         return
1193 
1194hunk ./src/allmydata/mutable/retrieve.py 807
1195-    def _decode(self):
1196-        started = time.time()
1197-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
1198-         offsets_tuple) = self.verinfo
1199 
1200hunk ./src/allmydata/mutable/retrieve.py 808
1201-        # shares_dict is a dict mapping shnum to share data, but the codec
1202-        # wants two lists.
1203-        shareids = []; shares = []
1204-        for shareid, share in self.shares.items():
1205+    def _validate_block(self, results, segnum, reader, started):
1206+        """
1207+        I validate a block from one share on a remote server.
1208+        """
1209+        # Grab the part of the block hash tree that is necessary to
1210+        # validate this block, then generate the block hash root.
1211+        self.log("validating share %d for segment %d" % (reader.shnum,
1212+                                                             segnum))
1213+        self._status.add_fetch_timing(reader.peerid, started)
1214+        self._status.set_status("Validating blocks for segment %d" % segnum)
1215+        # Did we fail to fetch either of the things that we were
1216+        # supposed to? Fail if so.
1217+        if not results[0][0] or not results[1][0]:
1218+            # handled by the errback handler.
1219+
1220+            # Both fetches get batched into one query, so a failure
1221+            # of either one normally means both failed the same way;
1222+            # report whichever Failure we actually received.
1223+            f = results[0][1] if not results[0][0] else results[1][1]
1224+            assert isinstance(f, failure.Failure)
1225+
1226+            raise CorruptShareError(reader.peerid,
1227+                                    reader.shnum,
1228+                                    "Connection error: %s" % str(f))
1229+
1230+        block_and_salt, block_and_sharehashes = results
1231+        block, salt = block_and_salt[1]
1232+        blockhashes, sharehashes = block_and_sharehashes[1]
1233+
1234+        blockhashes = dict(enumerate(blockhashes[1]))
1235+        self.log("the reader gave me the following blockhashes: %s" % \
1236+                 blockhashes.keys())
1237+        self.log("the reader gave me the following sharehashes: %s" % \
1238+                 sharehashes[1].keys())
1239+        bht = self._block_hash_trees[reader.shnum]
1240+
1241+        if bht.needed_hashes(segnum, include_leaf=True):
1242+            try:
1243+                bht.set_hashes(blockhashes)
1244+            except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
1245+                    IndexError), e:
1246+                raise CorruptShareError(reader.peerid,
1247+                                        reader.shnum,
1248+                                        "block hash tree failure: %s" % e)
1249+
1250+        if self._version == MDMF_VERSION:
1251+            blockhash = hashutil.block_hash(salt + block)
1252+        else:
1253+            blockhash = hashutil.block_hash(block)
1254+        # If this works without an error, then validation is
1255+        # successful.
1256+        try:
1257+            bht.set_hashes(leaves={segnum: blockhash})
1258+        except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
1259+                IndexError), e:
1260+            raise CorruptShareError(reader.peerid,
1261+                                    reader.shnum,
1262+                                    "block hash tree failure: %s" % e)
1263+
1264+        # Reaching this point means that we know that this segment
1265+        # is correct. Now we need to check to see whether the share
1266+        # hash chain is also correct.
1267+        # SDMF wrote share hash chains that didn't contain the
1268+        # leaves, which would be produced from the block hash tree.
1269+        # So we need to validate the block hash tree first. If
1270+        # successful, then bht[0] will contain the root for the
1271+        # shnum, which will be a leaf in the share hash tree, which
1272+        # will allow us to validate the rest of the tree.
1273+        if self.share_hash_tree.needed_hashes(reader.shnum,
1274+                                              include_leaf=True) or \
1275+                                              self._verify:
1276+            try:
1277+                self.share_hash_tree.set_hashes(hashes=sharehashes[1],
1278+                                            leaves={reader.shnum: bht[0]})
1279+            except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
1280+                    IndexError), e:
1281+                raise CorruptShareError(reader.peerid,
1282+                                        reader.shnum,
1283+                                        "corrupt hashes: %s" % e)
1284+
1285+        self.log('share %d is valid for segment %d' % (reader.shnum,
1286+                                                       segnum))
1287+        return {reader.shnum: (block, salt)}
1288+
1289+
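The MDMF/SDMF distinction in `_validate_block` — MDMF leaf-hashes the salt plus the block, SDMF the block alone — can be sketched stand-alone; `hashlib.sha256` here is a stand-in for `hashutil.block_hash`, which is a tagged hash in the real code, and the function name is hypothetical:

```python
import hashlib

# Illustrative sketch: MDMF hashes salt+block for the
# block-hash-tree leaf, SDMF hashes the block alone.
def block_leaf_hash(block, salt, mdmf):
    data = (salt + block) if mdmf else block
    return hashlib.sha256(data).hexdigest()

print(block_leaf_hash(b"block", b"salt", True) ==
      block_leaf_hash(b"block", b"salt", False))  # -> False
```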
1290+    def _get_needed_hashes(self, reader, segnum):
1291+        """
1292+        I get the hashes needed to validate segnum from the reader, then return
1293+        to my caller when this is done.
1294+        """
1295+        bht = self._block_hash_trees[reader.shnum]
1296+        needed = bht.needed_hashes(segnum, include_leaf=True)
1297+        # The root of the block hash tree is also a leaf in the share
1298+        # hash tree. So we don't need to fetch it from the remote
1299+        # server. In the case of files with one segment, this means that
1300+        # we won't fetch any block hash tree from the remote server,
1301+        # since the hash of each share of the file is the entire block
1302+        # hash tree, and is a leaf in the share hash tree. This is fine,
1303+        # since any share corruption will be detected in the share hash
1304+        # tree.
1305+        #needed.discard(0)
1306+        self.log("getting blockhashes for segment %d, share %d: %s" % \
1307+                 (segnum, reader.shnum, str(needed)))
1308+        d1 = reader.get_blockhashes(needed, queue=True, force_remote=True)
1309+        if self.share_hash_tree.needed_hashes(reader.shnum):
1310+            need = self.share_hash_tree.needed_hashes(reader.shnum)
1311+            self.log("also need sharehashes for share %d: %s" % (reader.shnum,
1312+                                                                 str(need)))
1313+            d2 = reader.get_sharehashes(need, queue=True, force_remote=True)
1314+        else:
1315+            d2 = defer.succeed({}) # the logic in the next method
1316+                                   # expects a dict
1317+        dl = defer.DeferredList([d1, d2], consumeErrors=True)
1318+        return dl
1319+
1320+
1321+    def _decode_blocks(self, blocks_and_salts, segnum):
1322+        """
1323+        I take a list of k blocks and salts, and decode that into a
1324+        single encrypted segment.
1325+        """
1326+        d = {}
1327+        # We want to merge our dictionaries into the form
1328+        # {shnum: (block, salt)}
1329+        #
1330+        # The dictionaries come from _validate_block in that form, so
1331+        # we just need to merge them.
1332+        for block_and_salt in blocks_and_salts:
1333+            d.update(block_and_salt[1])
1334+
1335+        # All of these blocks should have the same salt; in SDMF, it is
1336+        # the file-wide IV, while in MDMF it is the per-segment salt. In
1337+        # either case, we just need to get one of them and use it.
1338+        #
1339+        # d.items()[0] is like (shnum, (block, salt))
1340+        # d.items()[0][1] is like (block, salt)
1341+        # d.items()[0][1][1] is the salt.
1342+        salt = d.items()[0][1][1]
1343+        # Next, extract just the blocks from the dict. We'll use the
1344+        # salt in the next step.
1345+        share_and_shareids = [(k, v[0]) for k, v in d.items()]
1346+        d2 = dict(share_and_shareids)
1347+        shareids = []
1348+        shares = []
1349+        for shareid, share in d2.items():
1350             shareids.append(shareid)
1351             shares.append(share)
1352 
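The merge-and-extract step above can be sketched in isolation. This is a hedged stand-in with hypothetical data, not Tahoe-LAFS code: the real `_decode_blocks` receives `(success, {shnum: (block, salt)})` pairs from a DeferredList and does the same merge.

```python
# Hypothetical stand-in for the DeferredList results that
# _decode_blocks receives: (success, {shnum: (block, salt)}) pairs.
results = [
    (True, {0: (b"block0", b"salt")}),
    (True, {1: (b"block1", b"salt")}),
    (True, {2: (b"block2", b"salt")}),
]

merged = {}
for _success, share_dict in results:
    merged.update(share_dict)

# Every entry carries the same salt (the file-wide IV for SDMF, the
# per-segment salt for MDMF), so any one entry will do.
salt = next(iter(merged.values()))[1]
shareids = sorted(merged)
shares = [merged[shnum][0] for shnum in shareids]
```

With the blocks and shareids separated like this, the salt rides along to the decrypt step while the blocks go to the erasure decoder.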
1353hunk ./src/allmydata/mutable/retrieve.py 956
1354-        assert len(shareids) >= k, len(shareids)
1355+        self._status.set_status("Decoding")
1356+        started = time.time()
1357+        assert len(shareids) >= self._required_shares, len(shareids)
1358         # zfec really doesn't want extra shares
1359hunk ./src/allmydata/mutable/retrieve.py 960
1360-        shareids = shareids[:k]
1361-        shares = shares[:k]
1362-
1363-        fec = codec.CRSDecoder()
1364-        fec.set_params(segsize, k, N)
1365-
1366-        self.log("params %s, we have %d shares" % ((segsize, k, N), len(shares)))
1367-        self.log("about to decode, shareids=%s" % (shareids,))
1368-        d = defer.maybeDeferred(fec.decode, shares, shareids)
1369-        def _done(buffers):
1370-            self._status.timings["decode"] = time.time() - started
1371-            self.log(" decode done, %d buffers" % len(buffers))
1372+        shareids = shareids[:self._required_shares]
1373+        shares = shares[:self._required_shares]
1374+        self.log("decoding segment %d" % segnum)
1375+        if segnum == self._num_segments - 1:
1376+            d = defer.maybeDeferred(self._tail_decoder.decode, shares, shareids)
1377+        else:
1378+            d = defer.maybeDeferred(self._segment_decoder.decode, shares, shareids)
1379+        def _process(buffers):
1380             segment = "".join(buffers)
1381hunk ./src/allmydata/mutable/retrieve.py 969
1382+            self.log(format="now decoding segment %(segnum)s of %(numsegs)s",
1383+                     segnum=segnum,
1384+                     numsegs=self._num_segments,
1385+                     level=log.NOISY)
1386             self.log(" joined length %d, datalength %d" %
1387hunk ./src/allmydata/mutable/retrieve.py 974
1388-                     (len(segment), datalength))
1389-            segment = segment[:datalength]
1390+                     (len(segment), self._data_length))
1391+            if segnum == self._num_segments - 1:
1392+                size_to_use = self._tail_data_size
1393+            else:
1394+                size_to_use = self._segment_size
1395+            segment = segment[:size_to_use]
1396             self.log(" segment len=%d" % len(segment))
1397hunk ./src/allmydata/mutable/retrieve.py 981
1398-            return segment
1399-        def _err(f):
1400-            self.log(" decode failed: %s" % f)
1401-            return f
1402-        d.addCallback(_done)
1403-        d.addErrback(_err)
1404+            self._status.timings.setdefault("decode", 0)
1405+            self._status.timings['decode'] = time.time() - started
1406+            return segment, salt
1407+        d.addCallback(_process)
1408         return d
1409 
1410hunk ./src/allmydata/mutable/retrieve.py 987
1411-    def _decrypt(self, crypttext, IV, readkey):
1412+
1413+    def _decrypt_segment(self, segment_and_salt):
1414+        """
1415+        I take a single segment and its salt, and decrypt it. I return
1416+        the plaintext of the segment that is in my argument.
1417+        """
1418+        segment, salt = segment_and_salt
1419         self._status.set_status("decrypting")
1420hunk ./src/allmydata/mutable/retrieve.py 995
1421+        self.log("decrypting segment %d" % self._current_segment)
1422         started = time.time()
1423hunk ./src/allmydata/mutable/retrieve.py 997
1424-        key = hashutil.ssk_readkey_data_hash(IV, readkey)
1425+        key = hashutil.ssk_readkey_data_hash(salt, self._node.get_readkey())
1426         decryptor = AES(key)
1427hunk ./src/allmydata/mutable/retrieve.py 999
1428-        plaintext = decryptor.process(crypttext)
1429-        self._status.timings["decrypt"] = time.time() - started
1430+        plaintext = decryptor.process(segment)
1431+        self._status.timings.setdefault("decrypt", 0)
1432+        self._status.timings['decrypt'] = time.time() - started
1433         return plaintext
1434 
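The key derivation in `_decrypt_segment` is what makes per-segment salts matter. A minimal sketch, assuming a simplified stand-in for `hashutil.ssk_readkey_data_hash` (the real function uses Tahoe's tagged-hash scheme, which differs from this):

```python
import hashlib

def readkey_data_hash(salt, readkey):
    # Simplified stand-in for hashutil.ssk_readkey_data_hash: derive an
    # AES key from the (salt, readkey) pair. The real tagged hash is
    # different; this only illustrates the shape of the derivation.
    return hashlib.sha256(b"ssk-readkey-data:" + salt + b":" + readkey).digest()

readkey = b"R" * 16
key0 = readkey_data_hash(b"per-segment salt 0", readkey)
key1 = readkey_data_hash(b"per-segment salt 1", readkey)
```

Because MDMF salts differ per segment, each segment decrypts under its own key; SDMF's single file-wide IV yields one key for the whole file.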
1435hunk ./src/allmydata/mutable/retrieve.py 1004
1436-    def _done(self, res):
1437-        if not self._running:
1438+
1439+    def notify_server_corruption(self, peerid, shnum, reason):
1440+        ss = self.servermap.connections[peerid]
1441+        ss.callRemoteOnly("advise_corrupt_share",
1442+                          "mutable", self._storage_index, shnum, reason)
1443+
1444+
1445+    def _try_to_validate_privkey(self, enc_privkey, reader):
1446+        alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
1447+        alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
1448+        if alleged_writekey != self._node.get_writekey():
1449+            self.log("invalid privkey from %s shnum %d" %
1450+                     (reader, reader.shnum),
1451+                     level=log.WEIRD, umid="YIw4tA")
1452+            if self._verify:
1453+                self.servermap.mark_bad_share(reader.peerid, reader.shnum,
1454+                                              self.verinfo[-2])
1455+                e = CorruptShareError(reader.peerid,
1456+                                      reader.shnum,
1457+                                      "invalid privkey")
1458+                f = failure.Failure(e)
1459+                self._bad_shares.add((reader.peerid, reader.shnum, f))
1460             return
1461hunk ./src/allmydata/mutable/retrieve.py 1027
1462+
1463+        # it's good
1464+        self.log("got valid privkey from shnum %d on reader %s" %
1465+                 (reader.shnum, reader))
1466+        privkey = rsa.create_signing_key_from_string(alleged_privkey_s)
1467+        self._node._populate_encprivkey(enc_privkey)
1468+        self._node._populate_privkey(privkey)
1469+        self._need_privkey = False
1470+
1471+
1472+    def _check_for_done(self, res):
1473+        """
1474+        I check to see if this Retrieve object has successfully finished
1475+        its work.
1476+
1477+        I can exit in the following ways:
1478+            - If there are no more segments to download, then I exit by
1479+              causing self._done_deferred to fire with the plaintext
1480+              content requested by the caller.
1481+            - If there are still segments to be downloaded, and there
1482+              are enough active readers (readers which have not broken
1483+              and have not given us corrupt data) to continue
1484+              downloading, I send control back to
1485+              _download_current_segment.
1486+            - If there are still segments to be downloaded but there are
1487+              not enough active peers to download them, I ask
1488+              _add_active_peers to add more peers. If it is successful,
1489+              it will call _download_current_segment. If there are not
1490+              enough peers to retrieve the file, then that will cause
1491+              _done_deferred to errback.
1492+        """
1493+        self.log("checking for doneness")
1494+        if self._current_segment > self._last_segment:
1495+            # No more segments to download, we're done.
1496+            self.log("got plaintext, done")
1497+            return self._done()
1498+
1499+        if len(self._active_readers) >= self._required_shares:
1500+            # More segments to download, but we have enough good peers
1501+            # in self._active_readers that we can do that without issue,
1502+            # so go nab the next segment.
1503+            self.log("not done yet: on segment %d of %d" % \
1504+                     (self._current_segment + 1, self._num_segments))
1505+            return self._download_current_segment()
1506+
1507+        self.log("not done yet: on segment %d of %d, need to add peers" % \
1508+                 (self._current_segment + 1, self._num_segments))
1509+        return self._add_active_peers()
1510+
1511+
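The three exits described in the docstring above reduce to a small decision. A pure-function sketch (not the actual method, which also drives the Deferred machinery):

```python
def next_action(current_segment, last_segment, active_readers, required_shares):
    # Pure-function sketch of _check_for_done's three exits.
    if current_segment > last_segment:
        return "done"              # fire _done_deferred with the plaintext
    if active_readers >= required_shares:
        return "download_segment"  # enough good readers; fetch next segment
    return "add_peers"             # recruit more readers (may errback)
```

The real method re-runs this check after every segment, so a reader that breaks mid-download shunts control into peer recruitment rather than failing outright.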
1512+    def _done(self):
1513+        """
1514+        I am called by _check_for_done when the download process has
1515+        finished successfully. After making some useful logging
1516+        statements, I return the decrypted contents to the owner of this
1517+        Retrieve object through self._done_deferred.
1518+        """
1519         self._running = False
1520         self._status.set_active(False)
1521hunk ./src/allmydata/mutable/retrieve.py 1086
1522-        self._status.timings["total"] = time.time() - self._started
1523-        # res is either the new contents, or a Failure
1524-        if isinstance(res, failure.Failure):
1525-            self.log("Retrieve done, with failure", failure=res,
1526-                     level=log.UNUSUAL)
1527-            self._status.set_status("Failed")
1528+        now = time.time()
1529+        self._status.timings['total'] = now - self._started
1530+        self._status.timings['fetch'] = now - self._started_fetching
1531+
1532+        if self._verify:
1533+            ret = list(self._bad_shares)
1534+            self.log("done verifying, found %d bad shares" % len(ret))
1535         else:
1536hunk ./src/allmydata/mutable/retrieve.py 1094
1537-            self.log("Retrieve done, success!")
1538-            self._status.set_status("Finished")
1539-            self._status.set_progress(1.0)
1540-            # remember the encoding parameters, use them again next time
1541-            (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
1542-             offsets_tuple) = self.verinfo
1543-            self._node._populate_required_shares(k)
1544-            self._node._populate_total_shares(N)
1545-        eventually(self._done_deferred.callback, res)
1546+            # TODO: upload status here?
1547+            ret = self._consumer
1548+            self._consumer.unregisterProducer()
1549+        eventually(self._done_deferred.callback, ret)
1550+
1551+
1552+    def _failed(self):
1553+        """
1554+        I am called by _add_active_peers when there are not enough
1555+        active peers left to complete the download. After making some
1556+        useful logging statements, I return an exception to that effect
1557+        to the caller of this Retrieve object through
1558+        self._done_deferred.
1559+        """
1560+        self._running = False
1561+        self._status.set_active(False)
1562+        now = time.time()
1563+        self._status.timings['total'] = now - self._started
1564+        self._status.timings['fetch'] = now - self._started_fetching
1565 
1566hunk ./src/allmydata/mutable/retrieve.py 1114
1567+        if self._verify:
1568+            ret = list(self._bad_shares)
1569+        else:
1570+            format = ("ran out of peers: "
1571+                      "have %(have)d of %(total)d segments "
1572+                      "found %(bad)d bad shares "
1573+                      "encoding %(k)d-of-%(n)d")
1574+            args = {"have": self._current_segment,
1575+                    "total": self._num_segments,
1576+                    "need": self._last_segment,
1577+                    "k": self._required_shares,
1578+                    "n": self._total_shares,
1579+                    "bad": len(self._bad_shares)}
1580+            e = NotEnoughSharesError("%s, last failure: %s" % \
1581+                                     (format % args, str(self._last_failure)))
1582+            f = failure.Failure(e)
1583+            ret = f
1584+        eventually(self._done_deferred.callback, ret)
1585}
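The downloader patch above trims the final segment to `_tail_data_size` rather than `_segment_size`. A sketch of that arithmetic, under the assumption that all segments except the last are full-sized (the helper name is illustrative, not Tahoe's):

```python
import math

def segment_layout(data_length, segment_size=128 * 1024):
    # All segments hold segment_size bytes except the last, which
    # holds whatever remains; an empty file still occupies one segment.
    num_segments = max(1, math.ceil(data_length / segment_size))
    tail_data_size = data_length - (num_segments - 1) * segment_size
    return num_segments, tail_data_size
```

For a 300 KiB file with 128 KiB segments this gives 3 segments and a 44 KiB tail, which is why the last segment needs its own decoder and its own trim size.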
1586[mutable/publish: teach the publisher how to publish MDMF mutable files
1587Kevan Carstensen <kevan@isnotajoke.com>**20110802013931
1588 Ignore-this: 115217ec2b289452ec774cb725da8a86
1589 
1590 Like the downloader, the publisher needs some substantial changes to handle multiple segment mutable files.
1591] {
1592hunk ./src/allmydata/mutable/publish.py 3
1593 
1594 
1595-import os, struct, time
1596+import os, time
1597+from StringIO import StringIO
1598 from itertools import count
1599 from zope.interface import implements
1600 from twisted.internet import defer
1601hunk ./src/allmydata/mutable/publish.py 9
1602 from twisted.python import failure
1603-from allmydata.interfaces import IPublishStatus
1604+from allmydata.interfaces import IPublishStatus, SDMF_VERSION, MDMF_VERSION, \
1605+                                 IMutableUploadable
1606 from allmydata.util import base32, hashutil, mathutil, idlib, log
1607 from allmydata.util.dictutil import DictOfSets
1608 from allmydata import hashtree, codec
1609hunk ./src/allmydata/mutable/publish.py 21
1610 from allmydata.mutable.common import MODE_WRITE, MODE_CHECK, \
1611      UncoordinatedWriteError, NotEnoughServersError
1612 from allmydata.mutable.servermap import ServerMap
1613-from allmydata.mutable.layout import pack_prefix, pack_share, unpack_header, pack_checkstring, \
1614-     unpack_checkstring, SIGNED_PREFIX
1615+from allmydata.mutable.layout import unpack_checkstring, MDMFSlotWriteProxy, \
1616+                                     SDMFSlotWriteProxy
1617+
1618+KiB = 1024
1619+DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB
1620+PUSHING_BLOCKS_STATE = 0
1621+PUSHING_EVERYTHING_ELSE_STATE = 1
1622+DONE_STATE = 2
1623 
1624 class PublishStatus:
1625     implements(IPublishStatus)
1626hunk ./src/allmydata/mutable/publish.py 112
1627         self._log_number = num
1628         self._running = True
1629         self._first_write_error = None
1630+        self._last_failure = None
1631 
1632         self._status = PublishStatus()
1633         self._status.set_storage_index(self._storage_index)
1634hunk ./src/allmydata/mutable/publish.py 119
1635         self._status.set_helper(False)
1636         self._status.set_progress(0.0)
1637         self._status.set_active(True)
1638+        self._version = self._node.get_version()
1639+        assert self._version in (SDMF_VERSION, MDMF_VERSION)
1640+
1641 
1642     def get_status(self):
1643         return self._status
1644hunk ./src/allmydata/mutable/publish.py 133
1645             kwargs["facility"] = "tahoe.mutable.publish"
1646         return log.msg(*args, **kwargs)
1647 
1648+
1649+    def update(self, data, offset, blockhashes, version):
1650+        """
1651+        I replace the contents of this file with the contents of data,
1652+        starting at offset. I return a Deferred that fires with None
1653+        when the replacement has been completed, or with an error if
1654+        something went wrong during the process.
1655+
1656+        Note that this process will not upload new shares. If the file
1657+        being updated is in need of repair, callers will have to repair
1658+        it on their own.
1659+        """
1660+        # How this works:
1661+        # 1: Make peer assignments. We'll assign each share that we know
1662+        # about on the grid to that peer that currently holds that
1663+        # share, and will not place any new shares.
1664+        # 2: Setup encoding parameters. Most of these will stay the same
1665+        # -- datalength will change, as will some of the offsets.
1666+        # 3. Upload the new segments.
1667+        # 4. Be done.
1668+        assert IMutableUploadable.providedBy(data)
1669+
1670+        self.data = data
1671+
1672+        # XXX: Use the MutableFileVersion instead.
1673+        self.datalength = self._node.get_size()
1674+        if data.get_size() > self.datalength:
1675+            self.datalength = data.get_size()
1676+
1677+        self.log("starting update")
1678+        self.log("adding new data of length %d at offset %d" % \
1679+                    (data.get_size(), offset))
1680+        self.log("new data length is %d" % self.datalength)
1681+        self._status.set_size(self.datalength)
1682+        self._status.set_status("Started")
1683+        self._started = time.time()
1684+
1685+        self.done_deferred = defer.Deferred()
1686+
1687+        self._writekey = self._node.get_writekey()
1688+        assert self._writekey, "need write capability to publish"
1689+
1690+        # first, which servers will we publish to? We require that the
1691+        # servermap was updated in MODE_WRITE, so we can depend upon the
1692+        # peerlist computed by that process instead of computing our own.
1693+        assert self._servermap
1694+        assert self._servermap.last_update_mode in (MODE_WRITE, MODE_CHECK)
1695+        # we will push a version that is one larger than anything present
1696+        # in the grid, according to the servermap.
1697+        self._new_seqnum = self._servermap.highest_seqnum() + 1
1698+        self._status.set_servermap(self._servermap)
1699+
1700+        self.log(format="new seqnum will be %(seqnum)d",
1701+                 seqnum=self._new_seqnum, level=log.NOISY)
1702+
1703+        # We're updating an existing file, so all of the following
1704+        # should be available.
1705+        self.readkey = self._node.get_readkey()
1706+        self.required_shares = self._node.get_required_shares()
1707+        assert self.required_shares is not None
1708+        self.total_shares = self._node.get_total_shares()
1709+        assert self.total_shares is not None
1710+        self._status.set_encoding(self.required_shares, self.total_shares)
1711+
1712+        self._pubkey = self._node.get_pubkey()
1713+        assert self._pubkey
1714+        self._privkey = self._node.get_privkey()
1715+        assert self._privkey
1716+        self._encprivkey = self._node.get_encprivkey()
1717+
1718+        sb = self._storage_broker
1719+        full_peerlist = [(s.get_serverid(), s.get_rref())
1720+                         for s in sb.get_servers_for_psi(self._storage_index)]
1721+        self.full_peerlist = full_peerlist # for use later, immutable
1722+        self.bad_peers = set() # peerids who have errbacked/refused requests
1723+
1724+        # This will set self.segment_size, self.num_segments, and
1725+        # self.fec. TODO: Does it know how to do the offset? Probably
1726+        # not. So do that part next.
1727+        self.setup_encoding_parameters(offset=offset)
1728+
1729+        # if we experience any surprises (writes which were rejected because
1730+        # our test vector did not match, or shares which we didn't expect to
1731+        # see), we set this flag and report an UncoordinatedWriteError at the
1732+        # end of the publish process.
1733+        self.surprised = False
1734+
1735+        # we keep track of three tables. The first is our goal: which share
1736+        # we want to see on which servers. This is initially populated by the
1737+        # existing servermap.
1738+        self.goal = set() # pairs of (peerid, shnum) tuples
1739+
1740+        # the second table is our list of outstanding queries: those which
1741+        # are in flight and may or may not be delivered, accepted, or
1742+        # acknowledged. Items are added to this table when the request is
1743+        # sent, and removed when the response returns (or errbacks).
1744+        self.outstanding = set() # (peerid, shnum) tuples
1745+
1746+        # the third is a table of successes: share which have actually been
1747+        # placed. These are populated when responses come back with success.
1748+        # When self.placed == self.goal, we're done.
1749+        self.placed = set() # (peerid, shnum) tuples
1750+
1751+        # we also keep a mapping from peerid to RemoteReference. Each time we
1752+        # pull a connection out of the full peerlist, we add it to this for
1753+        # use later.
1754+        self.connections = {}
1755+
1756+        self.bad_share_checkstrings = {}
1757+
1758+        # This is set at the last step of the publishing process.
1759+        self.versioninfo = ""
1760+
1761+        # we use the servermap to populate the initial goal: this way we will
1762+        # try to update each existing share in place. Since we're
1763+        # updating, we ignore damaged and missing shares -- callers must
1764+        # do a repair to repair and recreate these.
1765+        for (peerid, shnum) in self._servermap.servermap:
1766+            self.goal.add( (peerid, shnum) )
1767+            self.connections[peerid] = self._servermap.connections[peerid]
1768+        self.writers = {}
1769+
1770+        # SDMF files are updated differently.
1771+        self._version = MDMF_VERSION
1772+        writer_class = MDMFSlotWriteProxy
1773+
1774+        # For each (peerid, shnum) in self.goal, we make a
1775+        # write proxy for that peer. We'll use this to write
1776+        # shares to the peer.
1777+        for key in self.goal:
1778+            peerid, shnum = key
1779+            write_enabler = self._node.get_write_enabler(peerid)
1780+            renew_secret = self._node.get_renewal_secret(peerid)
1781+            cancel_secret = self._node.get_cancel_secret(peerid)
1782+            secrets = (write_enabler, renew_secret, cancel_secret)
1783+
1784+            self.writers[shnum] =  writer_class(shnum,
1785+                                                self.connections[peerid],
1786+                                                self._storage_index,
1787+                                                secrets,
1788+                                                self._new_seqnum,
1789+                                                self.required_shares,
1790+                                                self.total_shares,
1791+                                                self.segment_size,
1792+                                                self.datalength)
1793+            self.writers[shnum].peerid = peerid
1794+            assert (peerid, shnum) in self._servermap.servermap
1795+            old_versionid, old_timestamp = self._servermap.servermap[key]
1796+            (old_seqnum, old_root_hash, old_salt, old_segsize,
1797+             old_datalength, old_k, old_N, old_prefix,
1798+             old_offsets_tuple) = old_versionid
1799+            self.writers[shnum].set_checkstring(old_seqnum,
1800+                                                old_root_hash,
1801+                                                old_salt)
1802+
1803+        # Our remote shares will not have a complete checkstring until
1804+        # after we are done writing share data and have started to write
1805+        # blocks. In the meantime, we need to know what to look for when
1806+        # writing, so that we can detect UncoordinatedWriteErrors.
1807+        self._checkstring = self.writers.values()[0].get_checkstring()
1808+
1809+        # Now, we start pushing shares.
1810+        self._status.timings["setup"] = time.time() - self._started
1811+        # First, we encrypt, encode, and publish the shares that we need
1812+        # to encrypt, encode, and publish.
1813+
1814+        # Our update process fetched these for us. We need to update
1815+        # them in place as publishing happens.
1816+        self.blockhashes = {} # {shnum: [blockhashes]}
1817+        for (i, bht) in blockhashes.iteritems():
1818+            # We need to extract the leaves from our old hash tree.
1819+            old_segcount = mathutil.div_ceil(version[4],
1820+                                             version[3])
1821+            h = hashtree.IncompleteHashTree(old_segcount)
1822+            bht = dict(enumerate(bht))
1823+            h.set_hashes(bht)
1824+            leaves = h[h.get_leaf_index(0):]
1825+            for j in xrange(self.num_segments - len(leaves)):
1826+                leaves.append(None)
1827+
1828+            assert len(leaves) >= self.num_segments
1829+            self.blockhashes[i] = leaves
1830+            # This list will now be the leaves that were set during the
1831+            # initial upload + enough empty hashes to make it a
1832+            # power-of-two. If we exceed a power of two boundary, we
1833+            # should be encoding the file over again, and should not be
1834+            # here. So, we have
1835+            #assert len(self.blockhashes[i]) == \
1836+            #    hashtree.roundup_pow2(self.num_segments), \
1837+            #        len(self.blockhashes[i])
1838+            # XXX: Except this doesn't work. Figure out why.
1839+
1840+        # These are filled in later, after we've modified the block hash
1841+        # tree suitably.
1842+        self.sharehash_leaves = None # eventually [sharehashes]
1843+        self.sharehashes = {} # shnum -> [sharehash leaves necessary to
1844+                              # validate the share]
1845+
1846+        self.log("Starting push")
1847+
1848+        self._state = PUSHING_BLOCKS_STATE
1849+        self._push()
1850+
1851+        return self.done_deferred
1852+
1853+
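The leaf-padding loop in `update` above can be sketched as a small helper (a hypothetical name, not in the patch): keep the block-hash-tree leaves recovered from the old version and add `None` placeholders until there is one slot per segment of the new file.

```python
def pad_leaves(old_leaves, num_segments):
    # Extend the recovered leaves with None placeholders, one slot per
    # segment of the new file; a no-op when enough leaves already exist.
    leaves = list(old_leaves)
    leaves.extend(None for _ in range(num_segments - len(leaves)))
    return leaves
```

The placeholders are filled in as the new segments are encoded and pushed, so only the hashes for rewritten segments change.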
1854     def publish(self, newdata):
1855         """Publish the filenode's current contents.  Returns a Deferred that
1856         fires (with None) when the publish has done as much work as it's ever
1857hunk ./src/allmydata/mutable/publish.py 346
1858         simultaneous write.
1859         """
1860 
1861-        # 1: generate shares (SDMF: files are small, so we can do it in RAM)
1862-        # 2: perform peer selection, get candidate servers
1863-        #  2a: send queries to n+epsilon servers, to determine current shares
1864-        #  2b: based upon responses, create target map
1865-        # 3: send slot_testv_and_readv_and_writev messages
1866-        # 4: as responses return, update share-dispatch table
1867-        # 4a: may need to run recovery algorithm
1868-        # 5: when enough responses are back, we're done
1869+        # 0. Setup encoding parameters, encoder, and other such things.
1870+        # 1. Encrypt, encode, and publish segments.
1871+        assert IMutableUploadable.providedBy(newdata)
1872 
1873hunk ./src/allmydata/mutable/publish.py 350
1874-        self.log("starting publish, datalen is %s" % len(newdata))
1875-        self._status.set_size(len(newdata))
1876+        self.data = newdata
1877+        self.datalength = newdata.get_size()
1878+        #if self.datalength >= DEFAULT_MAX_SEGMENT_SIZE:
1879+        #    self._version = MDMF_VERSION
1880+        #else:
1881+        #    self._version = SDMF_VERSION
1882+
1883+        self.log("starting publish, datalen is %s" % self.datalength)
1884+        self._status.set_size(self.datalength)
1885         self._status.set_status("Started")
1886         self._started = time.time()
1887 
1888hunk ./src/allmydata/mutable/publish.py 407
1889         self.full_peerlist = full_peerlist # for use later, immutable
1890         self.bad_peers = set() # peerids who have errbacked/refused requests
1891 
1892-        self.newdata = newdata
1893-        self.salt = os.urandom(16)
1894-
1895+        # This will set self.segment_size, self.num_segments, and
1896+        # self.fec.
1897         self.setup_encoding_parameters()
1898 
1899         # if we experience any surprises (writes which were rejected because
1900hunk ./src/allmydata/mutable/publish.py 417
1901         # end of the publish process.
1902         self.surprised = False
1903 
1904-        # as a failsafe, refuse to iterate through self.loop more than a
1905-        # thousand times.
1906-        self.looplimit = 1000
1907-
1908         # we keep track of three tables. The first is our goal: which share
1909         # we want to see on which servers. This is initially populated by the
1910         # existing servermap.
1911hunk ./src/allmydata/mutable/publish.py 440
1912 
1913         self.bad_share_checkstrings = {}
1914 
1915+        # This is set at the last step of the publishing process.
1916+        self.versioninfo = ""
1917+
1918         # we use the servermap to populate the initial goal: this way we will
1919         # try to update each existing share in place.
1920         for (peerid, shnum) in self._servermap.servermap:
1921hunk ./src/allmydata/mutable/publish.py 456
1922             self.bad_share_checkstrings[key] = old_checkstring
1923             self.connections[peerid] = self._servermap.connections[peerid]
1924 
1925-        # create the shares. We'll discard these as they are delivered. SDMF:
1926-        # we're allowed to hold everything in memory.
1927+        # TODO: Make this part do peer selection.
1928+        self.update_goal()
1929+        self.writers = {}
1930+        if self._version == MDMF_VERSION:
1931+            writer_class = MDMFSlotWriteProxy
1932+        else:
1933+            writer_class = SDMFSlotWriteProxy
1934+
1935+        # For each (peerid, shnum) in self.goal, we make a
1936+        # write proxy for that peer. We'll use this to write
1937+        # shares to the peer.
1938+        for key in self.goal:
1939+            peerid, shnum = key
1940+            write_enabler = self._node.get_write_enabler(peerid)
1941+            renew_secret = self._node.get_renewal_secret(peerid)
1942+            cancel_secret = self._node.get_cancel_secret(peerid)
1943+            secrets = (write_enabler, renew_secret, cancel_secret)
1944 
1945hunk ./src/allmydata/mutable/publish.py 474
1946+            self.writers[shnum] =  writer_class(shnum,
1947+                                                self.connections[peerid],
1948+                                                self._storage_index,
1949+                                                secrets,
1950+                                                self._new_seqnum,
1951+                                                self.required_shares,
1952+                                                self.total_shares,
1953+                                                self.segment_size,
1954+                                                self.datalength)
1955+            self.writers[shnum].peerid = peerid
1956+            if (peerid, shnum) in self._servermap.servermap:
1957+                old_versionid, old_timestamp = self._servermap.servermap[key]
1958+                (old_seqnum, old_root_hash, old_salt, old_segsize,
1959+                 old_datalength, old_k, old_N, old_prefix,
1960+                 old_offsets_tuple) = old_versionid
1961+                self.writers[shnum].set_checkstring(old_seqnum,
1962+                                                    old_root_hash,
1963+                                                    old_salt)
1964+            elif (peerid, shnum) in self.bad_share_checkstrings:
1965+                old_checkstring = self.bad_share_checkstrings[(peerid, shnum)]
1966+                self.writers[shnum].set_checkstring(old_checkstring)
1967+
1968+        # Our remote shares will not have a complete checkstring until
1969+        # after we are done writing share data and have started to write
1970+        # blocks. In the meantime, we need to know what to look for when
1971+        # writing, so that we can detect UncoordinatedWriteErrors.
1972+        self._checkstring = self.writers.values()[0].get_checkstring()
1973+
1974+        # Now, we start pushing shares.
1975         self._status.timings["setup"] = time.time() - self._started
1976hunk ./src/allmydata/mutable/publish.py 504
1977-        d = self._encrypt_and_encode()
1978-        d.addCallback(self._generate_shares)
1979-        def _start_pushing(res):
1980-            self._started_pushing = time.time()
1981-            return res
1982-        d.addCallback(_start_pushing)
1983-        d.addCallback(self.loop) # trigger delivery
1984-        d.addErrback(self._fatal_error)
1985+        # First, we encrypt, encode, and publish the shares that we need
1986+        # to encrypt, encode, and publish.
1987+
1988+        # This will eventually hold the block hash chain for each share
1989+        # that we publish. We define it this way so that empty publishes
1990+        # will still have something to write to the remote slot.
1991+        self.blockhashes = dict([(i, []) for i in xrange(self.total_shares)])
1992+        for i in xrange(self.total_shares):
1993+            blocks = self.blockhashes[i]
1994+            for j in xrange(self.num_segments):
1995+                blocks.append(None)
1996+        self.sharehash_leaves = None # eventually [sharehashes]
1997+        self.sharehashes = {} # shnum -> [sharehash leaves necessary to
1998+                              # validate the share]
1999+
2000+        self.log("Starting push")
2001+
2002+        self._state = PUSHING_BLOCKS_STATE
2003+        self._push()
2004 
2005         return self.done_deferred
2006 
2007hunk ./src/allmydata/mutable/publish.py 526
2008-    def setup_encoding_parameters(self):
2009-        segment_size = len(self.newdata)
2010+
2011+    def _update_status(self):
2012+        self._status.set_status("Sending Shares: %d placed out of %d, "
2013+                                "%d messages outstanding" %
2014+                                (len(self.placed),
2015+                                 len(self.goal),
2016+                                 len(self.outstanding)))
2017+        self._status.set_progress(1.0 * len(self.placed) / len(self.goal))
2018+
2019+
2020+    def setup_encoding_parameters(self, offset=0):
2021+        if self._version == MDMF_VERSION:
2022+            segment_size = DEFAULT_MAX_SEGMENT_SIZE # 128 KiB by default
2023+        else:
2024+            segment_size = self.datalength # SDMF is only one segment
2025         # this must be a multiple of self.required_shares
2026         segment_size = mathutil.next_multiple(segment_size,
2027                                               self.required_shares)
2028hunk ./src/allmydata/mutable/publish.py 545
2029         self.segment_size = segment_size
2030+
2031+        # Calculate the starting segment for the upload.
2032         if segment_size:
2033hunk ./src/allmydata/mutable/publish.py 548
2034-            self.num_segments = mathutil.div_ceil(len(self.newdata),
2035+            # We use div_ceil instead of integer division here
2036+            # because integer division rounds down, which is not
2037+            # what we want. If datalength isn't an even multiple
2038+            # of segment_size, then datalength // segment_size
2039+            # counts only the complete segments and ignores the
2040+            # partial segment at the end. div_ceil rounds up, so
2041+            # it gives us the right number of segments for the
2042+            # data that we're given, including one for the
2043+            # leftover tail.
2044+            self.num_segments = mathutil.div_ceil(self.datalength,
2045                                                   segment_size)
2046hunk ./src/allmydata/mutable/publish.py 559
2047+
2048+            self.starting_segment = offset // segment_size
2049+
2050         else:
2051             self.num_segments = 0
2052hunk ./src/allmydata/mutable/publish.py 564
2053-        assert self.num_segments in [0, 1,] # SDMF restrictions
2054+            self.starting_segment = 0
2055 
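The segment arithmetic in the hunk above can be sketched in isolation. `div_ceil` mirrors `mathutil.div_ceil`; `segment_layout` is a hypothetical helper written for this sketch, not part of the patch:

```python
def div_ceil(n, d):
    # Smallest integer >= n / d, using only integer arithmetic.
    return (n + d - 1) // d

def segment_layout(datalength, segment_size, offset=0):
    """Mirror the num_segments / starting_segment bookkeeping above."""
    if segment_size:
        num_segments = div_ceil(datalength, segment_size)
        starting_segment = offset // segment_size
    else:
        num_segments = 0
        starting_segment = 0
    return num_segments, starting_segment

# 300 bytes in 128-byte segments: three segments, the last one partial.
assert segment_layout(300, 128) == (3, 0)
# An update beginning at byte 200 starts in segment 1.
assert segment_layout(300, 128, offset=200) == (3, 1)
```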
2056hunk ./src/allmydata/mutable/publish.py 566
2057-    def _fatal_error(self, f):
2058-        self.log("error during loop", failure=f, level=log.UNUSUAL)
2059-        self._done(f)
2060 
2061hunk ./src/allmydata/mutable/publish.py 567
2062-    def _update_status(self):
2063-        self._status.set_status("Sending Shares: %d placed out of %d, "
2064-                                "%d messages outstanding" %
2065-                                (len(self.placed),
2066-                                 len(self.goal),
2067-                                 len(self.outstanding)))
2068-        self._status.set_progress(1.0 * len(self.placed) / len(self.goal))
2069+        self.log("building encoding parameters for file")
2070+        self.log("got segsize %d" % self.segment_size)
2071+        self.log("got %d segments" % self.num_segments)
2072 
2073hunk ./src/allmydata/mutable/publish.py 571
2074-    def loop(self, ignored=None):
2075-        self.log("entering loop", level=log.NOISY)
2076-        if not self._running:
2077-            return
2078+        if self._version == SDMF_VERSION:
2079+            assert self.num_segments in (0, 1) # SDMF
2080+        # calculate the tail segment size.
2081 
2082hunk ./src/allmydata/mutable/publish.py 575
2083-        self.looplimit -= 1
2084-        if self.looplimit <= 0:
2085-            raise LoopLimitExceededError("loop limit exceeded")
2086+        if segment_size and self.datalength:
2087+            self.tail_segment_size = self.datalength % segment_size
2088+            self.log("got tail segment size %d" % self.tail_segment_size)
2089+        else:
2090+            self.tail_segment_size = 0
2091 
2092hunk ./src/allmydata/mutable/publish.py 581
2093-        if self.surprised:
2094-            # don't send out any new shares, just wait for the outstanding
2095-            # ones to be retired.
2096-            self.log("currently surprised, so don't send any new shares",
2097-                     level=log.NOISY)
2098+        if self.tail_segment_size == 0 and segment_size:
2099+            # The tail segment is the same size as the other segments.
2100+            self.tail_segment_size = segment_size
2101+
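The tail-segment sizing above reduces to a small function; this is a sketch of the same rule, not code from the patch:

```python
def tail_segment_size(datalength, segment_size):
    # Size of the final segment: the remainder of the data, unless
    # the data divides evenly, in which case the tail segment is a
    # full segment.
    if not (segment_size and datalength):
        return 0
    tail = datalength % segment_size
    return tail or segment_size

assert tail_segment_size(300, 128) == 44   # partial tail
assert tail_segment_size(256, 128) == 128  # even multiple: full tail
assert tail_segment_size(0, 128) == 0      # empty file
```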
2102+        # Make FEC encoders
2103+        fec = codec.CRSEncoder()
2104+        fec.set_params(self.segment_size,
2105+                       self.required_shares, self.total_shares)
2106+        self.piece_size = fec.get_block_size()
2107+        self.fec = fec
2108+
2109+        if self.tail_segment_size == self.segment_size:
2110+            self.tail_fec = self.fec
2111         else:
2112hunk ./src/allmydata/mutable/publish.py 595
2113-            self.update_goal()
2114-            # how far are we from our goal?
2115-            needed = self.goal - self.placed - self.outstanding
2116-            self._update_status()
2117+            tail_fec = codec.CRSEncoder()
2118+            tail_fec.set_params(self.tail_segment_size,
2119+                                self.required_shares,
2120+                                self.total_shares)
2121+            self.tail_fec = tail_fec
2122 
2123hunk ./src/allmydata/mutable/publish.py 601
2124-            if needed:
2125-                # we need to send out new shares
2126-                self.log(format="need to send %(needed)d new shares",
2127-                         needed=len(needed), level=log.NOISY)
2128-                self._send_shares(needed)
2129-                return
2130+        self._current_segment = self.starting_segment
2131+        self.end_segment = self.num_segments - 1
2132+        # Now figure out where the last segment should be.
2133+        if self.data.get_size() != self.datalength:
2134+            # We're updating a few segments in the middle of a mutable
2135+            # file, so we don't want to republish the whole thing.
2136+            # (we don't have enough data to do that even if we wanted
2137+            # to)
2138+            end = self.data.get_size()
2139+            self.end_segment = end // segment_size
2140+            if end % segment_size == 0:
2141+                self.end_segment -= 1
2142 
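The end-segment computation for partial updates can be checked on its own. A hypothetical stand-alone version of the same arithmetic:

```python
def last_touched_segment(nbytes, segment_size):
    # Inclusive index of the last segment covered by the first
    # nbytes of data (mirrors the end_segment computation above:
    # an exact multiple means the previous segment was the last).
    end = nbytes // segment_size
    if nbytes % segment_size == 0:
        end -= 1
    return end

assert last_touched_segment(256, 128) == 1  # exactly two segments
assert last_touched_segment(200, 128) == 1  # partial second segment
assert last_touched_segment(129, 128) == 1
```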
2143hunk ./src/allmydata/mutable/publish.py 614
2144-        if self.outstanding:
2145-            # queries are still pending, keep waiting
2146-            self.log(format="%(outstanding)d queries still outstanding",
2147-                     outstanding=len(self.outstanding),
2148-                     level=log.NOISY)
2149-            return
2150+        self.log("got start segment %d" % self.starting_segment)
2151+        self.log("got end segment %d" % self.end_segment)
2152+
2153+
2154+    def _push(self, ignored=None):
2155+        """
2156+        I manage state transitions. In particular, I check that we
2157+        still have enough working writers to complete the upload
2158+        successfully.
2159+        """
2160+        # Can we still successfully publish this file?
2161+        # TODO: Keep track of outstanding queries before aborting the
2162+        #       process.
2163+        if len(self.writers) < self.required_shares or self.surprised:
2164+            return self._failure()
2165+
2166+        # Figure out what we need to do next. Each of these needs to
2167+        # return a deferred so that we don't block execution when this
2168+        # is first called in the upload method.
2169+        if self._state == PUSHING_BLOCKS_STATE:
2170+            return self.push_segment(self._current_segment)
2171+
2172+        elif self._state == PUSHING_EVERYTHING_ELSE_STATE:
2173+            return self.push_everything_else()
2174+
2175+        # If we make it to this point, we were successful in placing the
2176+        # file.
2177+        return self._done()
2178+
2179+
2180+    def push_segment(self, segnum):
2181+        if self.num_segments == 0 and self._version == SDMF_VERSION:
2182+            self._add_dummy_salts()
2183+
2184+        if segnum > self.end_segment:
2185+            # We don't have any more segments to push.
2186+            self._state = PUSHING_EVERYTHING_ELSE_STATE
2187+            return self._push()
2188+
2189+        d = self._encode_segment(segnum)
2190+        d.addCallback(self._push_segment, segnum)
2191+        def _increment_segnum(ign):
2192+            self._current_segment += 1
2193+        # XXX: I don't think we need to do addBoth here -- any
2194+        # errbacks should be handled within push_segment.
2195+        d.addCallback(_increment_segnum)
2196+        d.addCallback(self._turn_barrier)
2197+        d.addCallback(self._push)
2198+        d.addErrback(self._failure)
2199+
2200+
2201+    def _turn_barrier(self, result):
2202+        """
2203+        I help the publish process avoid the recursion limit issues
2204+        described in #237.
2205+        """
2206+        return fireEventually(result)
2207+
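The patch uses Twisted's `fireEventually` to break the callback chain between segments. An asyncio analogue of the same turn-barrier idea (a sketch, not the Twisted code in the patch):

```python
import asyncio

async def turn_barrier(result):
    # Yield to the event loop before continuing, so a long chain of
    # per-segment callbacks doesn't grow the stack (the issue #237
    # works around with fireEventually).
    await asyncio.sleep(0)
    return result

async def push_all(segments):
    out = []
    for seg in segments:
        out.append(seg * 2)       # stand-in for pushing one segment
        await turn_barrier(None)  # break the synchronous chain
    return out

assert asyncio.run(push_all([1, 2, 3])) == [2, 4, 6]
```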
2208+
2209+    def _add_dummy_salts(self):
2210+        """
2211+        SDMF files need a salt even if they're empty, or the signature
2212+        won't make sense. This method adds a dummy salt to each of our
2213+        SDMF writers so that they can write the signature later.
2214+        """
2215+        salt = os.urandom(16)
2216+        assert self._version == SDMF_VERSION
2217+
2218+        for writer in self.writers.itervalues():
2219+            writer.put_salt(salt)
2220+
2221+
2222+    def _encode_segment(self, segnum):
2223+        """
2224+        I encrypt and encode the segment segnum.
2225+        """
2226+        started = time.time()
2227+
2228+        if segnum + 1 == self.num_segments:
2229+            segsize = self.tail_segment_size
2230+        else:
2231+            segsize = self.segment_size
2232+
2233+
2234+        self.log("Pushing segment %d of %d" % (segnum + 1, self.num_segments))
2235+        data = self.data.read(segsize)
2236+        # XXX: This is dumb. Why return a list?
2237+        data = "".join(data)
2238+
2239+        assert len(data) == segsize, len(data)
2240+
2241+        salt = os.urandom(16)
2242+
2243+        key = hashutil.ssk_readkey_data_hash(salt, self.readkey)
2244+        self._status.set_status("Encrypting")
2245+        enc = AES(key)
2246+        crypttext = enc.process(data)
2247+        assert len(crypttext) == len(data)
2248 
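Each MDMF segment gets a fresh salt, and the per-segment AES key is derived from that salt plus the file's readkey. A sketch of the derivation step only, using a plain SHA-256 stand-in for `hashutil.ssk_readkey_data_hash` (the real tagged-hash construction differs, and the AES step is omitted):

```python
import hashlib, os

def segment_key(salt, readkey):
    # Hypothetical stand-in for ssk_readkey_data_hash: derive a
    # distinct encryption key for each segment from its salt and
    # the file's readkey.
    return hashlib.sha256(b"segment-key:" + salt + readkey).digest()

readkey = b"\x01" * 16
k1 = segment_key(os.urandom(16), readkey)
k2 = segment_key(os.urandom(16), readkey)
assert len(k1) == 32
assert k1 != k2  # a fresh salt yields a fresh key per segment
```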
2249hunk ./src/allmydata/mutable/publish.py 713
2250-        # no queries outstanding, no placements needed: we're done
2251-        self.log("no queries outstanding, no placements needed: done",
2252-                 level=log.OPERATIONAL)
2253         now = time.time()
2254hunk ./src/allmydata/mutable/publish.py 714
2255-        elapsed = now - self._started_pushing
2256-        self._status.timings["push"] = elapsed
2257-        return self._done(None)
2258+        self._status.timings["encrypt"] = now - started
2259+        started = now
2260+
2261+        # now apply FEC
2262+        if segnum + 1 == self.num_segments:
2263+            fec = self.tail_fec
2264+        else:
2265+            fec = self.fec
2266+
2267+        self._status.set_status("Encoding")
2268+        crypttext_pieces = [None] * self.required_shares
2269+        piece_size = fec.get_block_size()
2270+        for i in range(len(crypttext_pieces)):
2271+            offset = i * piece_size
2272+            piece = crypttext[offset:offset+piece_size]
2273+            piece = piece + "\x00"*(piece_size - len(piece)) # padding
2274+            crypttext_pieces[i] = piece
2275+            assert len(piece) == piece_size
2276+        d = fec.encode(crypttext_pieces)
2277+        def _done_encoding(res):
2278+            elapsed = time.time() - started
2279+            self._status.timings["encode"] = elapsed
2280+            return (res, salt)
2281+        d.addCallback(_done_encoding)
2282+        return d
2283+
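Before FEC encoding, the ciphertext is split into `required_shares` equal pieces, zero-padding the last. A sketch of that splitting step (in the patch, `piece_size` comes from `fec.get_block_size()` on zfec's CRSEncoder; here it is computed directly):

```python
def make_fec_input(crypttext, required_shares):
    # Split ciphertext into required_shares equal-length pieces,
    # zero-padding the final piece, as the FEC encoder expects.
    piece_size = (len(crypttext) + required_shares - 1) // required_shares
    pieces = []
    for i in range(required_shares):
        piece = crypttext[i * piece_size:(i + 1) * piece_size]
        piece = piece + b"\x00" * (piece_size - len(piece))
        pieces.append(piece)
    return pieces

pieces = make_fec_input(b"x" * 10, 3)
assert len(pieces) == 3
assert all(len(p) == 4 for p in pieces)
assert pieces[-1].endswith(b"\x00\x00")  # last piece is padded
```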
2284+
2285+    def _push_segment(self, encoded_and_salt, segnum):
2286+        """
2287+        I push (data, salt) as segment number segnum.
2288+        """
2289+        results, salt = encoded_and_salt
2290+        shares, shareids = results
2291+        self._status.set_status("Pushing segment")
2292+        for i in xrange(len(shares)):
2293+            sharedata = shares[i]
2294+            shareid = shareids[i]
2295+            if self._version == MDMF_VERSION:
2296+                hashed = salt + sharedata
2297+            else:
2298+                hashed = sharedata
2299+            block_hash = hashutil.block_hash(hashed)
2300+            self.blockhashes[shareid][segnum] = block_hash
2301+            # find the writer for this share
2302+            writer = self.writers[shareid]
2303+            writer.put_block(sharedata, segnum, salt)
2304+
2305+
2306+    def push_everything_else(self):
2307+        """
2308+        I put everything else associated with a share.
2309+        """
2310+        self._pack_started = time.time()
2311+        self.push_encprivkey()
2312+        self.push_blockhashes()
2313+        self.push_sharehashes()
2314+        self.push_toplevel_hashes_and_signature()
2315+        d = self.finish_publishing()
2316+        def _change_state(ignored):
2317+            self._state = DONE_STATE
2318+        d.addCallback(_change_state)
2319+        d.addCallback(self._push)
2320+        return d
2321+
2322+
2323+    def push_encprivkey(self):
2324+        encprivkey = self._encprivkey
2325+        self._status.set_status("Pushing encrypted private key")
2326+        for writer in self.writers.itervalues():
2327+            writer.put_encprivkey(encprivkey)
2328+
2329+
2330+    def push_blockhashes(self):
2331+        self.sharehash_leaves = [None] * len(self.blockhashes)
2332+        self._status.set_status("Building and pushing block hash tree")
2333+        for shnum, blockhashes in self.blockhashes.iteritems():
2334+            t = hashtree.HashTree(blockhashes)
2335+            self.blockhashes[shnum] = list(t)
2336+            # set the leaf for future use.
2337+            self.sharehash_leaves[shnum] = t[0]
2338+
2339+            writer = self.writers[shnum]
2340+            writer.put_blockhashes(self.blockhashes[shnum])
2341+
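Each share's block hashes form a Merkle tree, and each tree's root becomes that share's leaf in the share hash tree. A minimal Merkle-root sketch of that shape only — `allmydata.hashtree` pads and tags its hashes differently:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Pair up nodes and hash upward until one root remains,
    # duplicating the last node on odd-sized levels.
    nodes = list(leaves)
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

blocks = [h(b"block%d" % i) for i in range(4)]
root = merkle_root(blocks)
assert len(root) == 32
# Any change to a block changes the root.
assert merkle_root([h(b"tampered")] + blocks[1:]) != root
```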
2342+
2343+    def push_sharehashes(self):
2344+        self._status.set_status("Building and pushing share hash chain")
2345+        share_hash_tree = hashtree.HashTree(self.sharehash_leaves)
2346+        for shnum in xrange(len(self.sharehash_leaves)):
2347+            needed_indices = share_hash_tree.needed_hashes(shnum)
2348+            self.sharehashes[shnum] = dict( [ (i, share_hash_tree[i])
2349+                                             for i in needed_indices] )
2350+            writer = self.writers[shnum]
2351+            writer.put_sharehashes(self.sharehashes[shnum])
2352+        self.root_hash = share_hash_tree[0]
2353+
2354+
2355+    def push_toplevel_hashes_and_signature(self):
2356+        # We need to do three things here:
2357+        #   - Push the root hash and salt hash
2358+        #   - Get the checkstring of the resulting layout; sign that.
2359+        #   - Push the signature
2360+        self._status.set_status("Pushing root hashes and signature")
2361+        for shnum in xrange(self.total_shares):
2362+            writer = self.writers[shnum]
2363+            writer.put_root_hash(self.root_hash)
2364+        self._update_checkstring()
2365+        self._make_and_place_signature()
2366+
2367+
2368+    def _update_checkstring(self):
2369+        """
2370+        After putting the root hash, MDMF files will have the
2371+        checkstring written to the storage server. This means that we
2372+        can update our copy of the checkstring so we can detect
2373+        uncoordinated writes. SDMF files will have the same checkstring,
2374+        so we need not do anything.
2375+        """
2376+        self._checkstring = self.writers.values()[0].get_checkstring()
2377+
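The checkstring the writers compare against is a packed prefix of the share. A sketch of an SDMF-style layout — the exact field format here is assumed for illustration, not read from the patch:

```python
import struct

def pack_checkstring(seqnum, root_hash, salt):
    # Assumed SDMF-style checkstring: version byte, big-endian
    # 64-bit sequence number, 32-byte root hash, 16-byte salt.
    return struct.pack(">BQ32s16s", 0, seqnum, root_hash, salt)

cs = pack_checkstring(5, b"\x00" * 32, b"\x01" * 16)
assert len(cs) == 1 + 8 + 32 + 16
# A bumped sequence number changes the checkstring, which is how
# uncoordinated writes are detected.
assert pack_checkstring(6, b"\x00" * 32, b"\x01" * 16) != cs
```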
2378+
2379+    def _make_and_place_signature(self):
2380+        """
2381+        I create and place the signature.
2382+        """
2383+        started = time.time()
2384+        self._status.set_status("Signing prefix")
2385+        signable = self.writers[0].get_signable()
2386+        self.signature = self._privkey.sign(signable)
2387+
2388+        for (shnum, writer) in self.writers.iteritems():
2389+            writer.put_signature(self.signature)
2390+        self._status.timings['sign'] = time.time() - started
2391+
2392+
2393+    def finish_publishing(self):
2394+        # We're almost done -- we just need to put the verification key
2395+        # and the offsets
2396+        started = time.time()
2397+        self._status.set_status("Pushing shares")
2398+        self._started_pushing = started
2399+        ds = []
2400+        verification_key = self._pubkey.serialize()
2401+
2402+
2403+        # TODO: Bad, since we remove from this same dict. We need to
2404+        # make a copy, or just use a non-iterated value.
2405+        for (shnum, writer) in self.writers.iteritems():
2406+            writer.put_verification_key(verification_key)
2407+            d = writer.finish_publishing()
2408+            # Add the (peerid, shnum) tuple to our list of outstanding
2409+            # queries. This gets used by _loop if some of our queries
2410+            # fail to place shares.
2411+            self.outstanding.add((writer.peerid, writer.shnum))
2412+            d.addCallback(self._got_write_answer, writer, started)
2413+            d.addErrback(self._connection_problem, writer)
2414+            ds.append(d)
2415+        self._record_verinfo()
2416+        self._status.timings['pack'] = time.time() - started
2417+        return defer.DeferredList(ds)
2418+
2419+
2420+    def _record_verinfo(self):
2421+        self.versioninfo = self.writers.values()[0].get_verinfo()
2422+
2423+
2424+    def _connection_problem(self, f, writer):
2425+        """
2426+        We ran into a connection problem while working with writer, and
2427+        need to deal with that.
2428+        """
2429+        self.log("found problem: %s" % str(f))
2430+        self._last_failure = f
2431+        del(self.writers[writer.shnum])
2432+
2433 
2434     def log_goal(self, goal, message=""):
2435         logmsg = [message]
2436hunk ./src/allmydata/mutable/publish.py 971
2437             self.log_goal(self.goal, "after update: ")
2438 
2439 
2440+    def _got_write_answer(self, answer, writer, started):
2441+        if not answer:
2442+            # SDMF writers only pretend to write when callers set their
2443+            # blocks, salts, and so on -- they actually just write once,
2444+            # at the end of the upload process. In fake writes, they
2445+            # return defer.succeed(None). If we see that, we shouldn't
2446+            # bother checking it.
2447+            return
2448 
2449hunk ./src/allmydata/mutable/publish.py 980
2450-    def _encrypt_and_encode(self):
2451-        # this returns a Deferred that fires with a list of (sharedata,
2452-        # sharenum) tuples. TODO: cache the ciphertext, only produce the
2453-        # shares that we care about.
2454-        self.log("_encrypt_and_encode")
2455-
2456-        self._status.set_status("Encrypting")
2457-        started = time.time()
2458-
2459-        key = hashutil.ssk_readkey_data_hash(self.salt, self.readkey)
2460-        enc = AES(key)
2461-        crypttext = enc.process(self.newdata)
2462-        assert len(crypttext) == len(self.newdata)
2463+        peerid = writer.peerid
2464+        lp = self.log("_got_write_answer from %s, share %d" %
2465+                      (idlib.shortnodeid_b2a(peerid), writer.shnum))
2466 
2467         now = time.time()
2468hunk ./src/allmydata/mutable/publish.py 985
2469-        self._status.timings["encrypt"] = now - started
2470-        started = now
2471-
2472-        # now apply FEC
2473-
2474-        self._status.set_status("Encoding")
2475-        fec = codec.CRSEncoder()
2476-        fec.set_params(self.segment_size,
2477-                       self.required_shares, self.total_shares)
2478-        piece_size = fec.get_block_size()
2479-        crypttext_pieces = [None] * self.required_shares
2480-        for i in range(len(crypttext_pieces)):
2481-            offset = i * piece_size
2482-            piece = crypttext[offset:offset+piece_size]
2483-            piece = piece + "\x00"*(piece_size - len(piece)) # padding
2484-            crypttext_pieces[i] = piece
2485-            assert len(piece) == piece_size
2486-
2487-        d = fec.encode(crypttext_pieces)
2488-        def _done_encoding(res):
2489-            elapsed = time.time() - started
2490-            self._status.timings["encode"] = elapsed
2491-            return res
2492-        d.addCallback(_done_encoding)
2493-        return d
2494-
2495-    def _generate_shares(self, shares_and_shareids):
2496-        # this sets self.shares and self.root_hash
2497-        self.log("_generate_shares")
2498-        self._status.set_status("Generating Shares")
2499-        started = time.time()
2500-
2501-        # we should know these by now
2502-        privkey = self._privkey
2503-        encprivkey = self._encprivkey
2504-        pubkey = self._pubkey
2505-
2506-        (shares, share_ids) = shares_and_shareids
2507-
2508-        assert len(shares) == len(share_ids)
2509-        assert len(shares) == self.total_shares
2510-        all_shares = {}
2511-        block_hash_trees = {}
2512-        share_hash_leaves = [None] * len(shares)
2513-        for i in range(len(shares)):
2514-            share_data = shares[i]
2515-            shnum = share_ids[i]
2516-            all_shares[shnum] = share_data
2517-
2518-            # build the block hash tree. SDMF has only one leaf.
2519-            leaves = [hashutil.block_hash(share_data)]
2520-            t = hashtree.HashTree(leaves)
2521-            block_hash_trees[shnum] = list(t)
2522-            share_hash_leaves[shnum] = t[0]
2523-        for leaf in share_hash_leaves:
2524-            assert leaf is not None
2525-        share_hash_tree = hashtree.HashTree(share_hash_leaves)
2526-        share_hash_chain = {}
2527-        for shnum in range(self.total_shares):
2528-            needed_hashes = share_hash_tree.needed_hashes(shnum)
2529-            share_hash_chain[shnum] = dict( [ (i, share_hash_tree[i])
2530-                                              for i in needed_hashes ] )
2531-        root_hash = share_hash_tree[0]
2532-        assert len(root_hash) == 32
2533-        self.log("my new root_hash is %s" % base32.b2a(root_hash))
2534-        self._new_version_info = (self._new_seqnum, root_hash, self.salt)
2535-
2536-        prefix = pack_prefix(self._new_seqnum, root_hash, self.salt,
2537-                             self.required_shares, self.total_shares,
2538-                             self.segment_size, len(self.newdata))
2539-
2540-        # now pack the beginning of the share. All shares are the same up
2541-        # to the signature, then they have divergent share hash chains,
2542-        # then completely different block hash trees + salt + share data,
2543-        # then they all share the same encprivkey at the end. The sizes
2544-        # of everything are the same for all shares.
2545-
2546-        sign_started = time.time()
2547-        signature = privkey.sign(prefix)
2548-        self._status.timings["sign"] = time.time() - sign_started
2549-
2550-        verification_key = pubkey.serialize()
2551-
2552-        final_shares = {}
2553-        for shnum in range(self.total_shares):
2554-            final_share = pack_share(prefix,
2555-                                     verification_key,
2556-                                     signature,
2557-                                     share_hash_chain[shnum],
2558-                                     block_hash_trees[shnum],
2559-                                     all_shares[shnum],
2560-                                     encprivkey)
2561-            final_shares[shnum] = final_share
2562-        elapsed = time.time() - started
2563-        self._status.timings["pack"] = elapsed
2564-        self.shares = final_shares
2565-        self.root_hash = root_hash
2566-
2567-        # we also need to build up the version identifier for what we're
2568-        # pushing. Extract the offsets from one of our shares.
2569-        assert final_shares
2570-        offsets = unpack_header(final_shares.values()[0])[-1]
2571-        offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
2572-        verinfo = (self._new_seqnum, root_hash, self.salt,
2573-                   self.segment_size, len(self.newdata),
2574-                   self.required_shares, self.total_shares,
2575-                   prefix, offsets_tuple)
2576-        self.versioninfo = verinfo
2577-
2578-
2579-
2580-    def _send_shares(self, needed):
2581-        self.log("_send_shares")
2582-
2583-        # we're finally ready to send out our shares. If we encounter any
2584-        # surprises here, it's because somebody else is writing at the same
2585-        # time. (Note: in the future, when we remove the _query_peers() step
2586-        # and instead speculate about [or remember] which shares are where,
2587-        # surprises here are *not* indications of UncoordinatedWriteError,
2588-        # and we'll need to respond to them more gracefully.)
2589-
2590-        # needed is a set of (peerid, shnum) tuples. The first thing we do is
2591-        # organize it by peerid.
2592-
2593-        peermap = DictOfSets()
2594-        for (peerid, shnum) in needed:
2595-            peermap.add(peerid, shnum)
2596-
2597-        # the next thing is to build up a bunch of test vectors. The
2598-        # semantics of Publish are that we perform the operation if the world
2599-        # hasn't changed since the ServerMap was constructed (more or less).
2600-        # For every share we're trying to place, we create a test vector that
2601-        # tests to see if the server*share still corresponds to the
2602-        # map.
2603-
2604-        all_tw_vectors = {} # maps peerid to tw_vectors
2605-        sm = self._servermap.servermap
2606-
2607-        for key in needed:
2608-            (peerid, shnum) = key
2609-
2610-            if key in sm:
2611-                # an old version of that share already exists on the
2612-                # server, according to our servermap. We will create a
2613-                # request that attempts to replace it.
2614-                old_versionid, old_timestamp = sm[key]
2615-                (old_seqnum, old_root_hash, old_salt, old_segsize,
2616-                 old_datalength, old_k, old_N, old_prefix,
2617-                 old_offsets_tuple) = old_versionid
2618-                old_checkstring = pack_checkstring(old_seqnum,
2619-                                                   old_root_hash,
2620-                                                   old_salt)
2621-                testv = (0, len(old_checkstring), "eq", old_checkstring)
2622-
2623-            elif key in self.bad_share_checkstrings:
2624-                old_checkstring = self.bad_share_checkstrings[key]
2625-                testv = (0, len(old_checkstring), "eq", old_checkstring)
2626-
2627-            else:
2628-                # add a testv that requires the share not exist
2629-
2630-                # Unfortunately, foolscap-0.2.5 has a bug in the way inbound
2631-                # constraints are handled. If the same object is referenced
2632-                # multiple times inside the arguments, foolscap emits a
2633-                # 'reference' token instead of a distinct copy of the
2634-                # argument. The bug is that these 'reference' tokens are not
2635-                # accepted by the inbound constraint code. To work around
2636-                # this, we need to prevent python from interning the
2637-                # (constant) tuple, by creating a new copy of this vector
2638-                # each time.
2639-
2640-                # This bug is fixed in foolscap-0.2.6, and even though this
2641-                # version of Tahoe requires foolscap-0.3.1 or newer, we are
2642-                # supposed to be able to interoperate with older versions of
2643-                # Tahoe which are allowed to use older versions of foolscap,
2644-                # including foolscap-0.2.5 . In addition, I've seen other
2645-                # foolscap problems triggered by 'reference' tokens (see #541
2646-                # for details). So we must keep this workaround in place.
2647-
2648-                #testv = (0, 1, 'eq', "")
2649-                testv = tuple([0, 1, 'eq', ""])
2650-
2651-            testvs = [testv]
2652-            # the write vector is simply the share
2653-            writev = [(0, self.shares[shnum])]
2654-
2655-            if peerid not in all_tw_vectors:
2656-                all_tw_vectors[peerid] = {}
2657-                # maps shnum to (testvs, writevs, new_length)
2658-            assert shnum not in all_tw_vectors[peerid]
2659-
2660-            all_tw_vectors[peerid][shnum] = (testvs, writev, None)
2661-
2662-        # we read the checkstring back from each share, however we only use
2663-        # it to detect whether there was a new share that we didn't know
2664-        # about. The success or failure of the write will tell us whether
2665-        # there was a collision or not. If there is a collision, the first
2666-        # thing we'll do is update the servermap, which will find out what
2667-        # happened. We could conceivably reduce a roundtrip by using the
2668-        # readv checkstring to populate the servermap, but really we'd have
2669-        # to read enough data to validate the signatures too, so it wouldn't
2670-        # be an overall win.
2671-        read_vector = [(0, struct.calcsize(SIGNED_PREFIX))]
2672-
2673-        # ok, send the messages!
2674-        self.log("sending %d shares" % len(all_tw_vectors), level=log.NOISY)
2675-        started = time.time()
2676-        for (peerid, tw_vectors) in all_tw_vectors.items():
2677-
2678-            write_enabler = self._node.get_write_enabler(peerid)
2679-            renew_secret = self._node.get_renewal_secret(peerid)
2680-            cancel_secret = self._node.get_cancel_secret(peerid)
2681-            secrets = (write_enabler, renew_secret, cancel_secret)
2682-            shnums = tw_vectors.keys()
2683-
2684-            for shnum in shnums:
2685-                self.outstanding.add( (peerid, shnum) )
2686-
2687-            d = self._do_testreadwrite(peerid, secrets,
2688-                                       tw_vectors, read_vector)
2689-            d.addCallbacks(self._got_write_answer, self._got_write_error,
2690-                           callbackArgs=(peerid, shnums, started),
2691-                           errbackArgs=(peerid, shnums, started))
2692-            # tolerate immediate errback, like with DeadReferenceError
2693-            d.addBoth(fireEventually)
2694-            d.addCallback(self.loop)
2695-            d.addErrback(self._fatal_error)
2696-
2697-        self._update_status()
2698-        self.log("%d shares sent" % len(all_tw_vectors), level=log.NOISY)
2699+        elapsed = now - started
2700 
2701hunk ./src/allmydata/mutable/publish.py 987
2702-    def _do_testreadwrite(self, peerid, secrets,
2703-                          tw_vectors, read_vector):
2704-        storage_index = self._storage_index
2705-        ss = self.connections[peerid]
2706+        self._status.add_per_server_time(peerid, elapsed)
2707 
2708hunk ./src/allmydata/mutable/publish.py 989
2709-        #print "SS[%s] is %s" % (idlib.shortnodeid_b2a(peerid), ss), ss.tracker.interfaceName
2710-        d = ss.callRemote("slot_testv_and_readv_and_writev",
2711-                          storage_index,
2712-                          secrets,
2713-                          tw_vectors,
2714-                          read_vector)
2715-        return d
2716+        wrote, read_data = answer
2717 
2718hunk ./src/allmydata/mutable/publish.py 991
2719-    def _got_write_answer(self, answer, peerid, shnums, started):
2720-        lp = self.log("_got_write_answer from %s" %
2721-                      idlib.shortnodeid_b2a(peerid))
2722-        for shnum in shnums:
2723-            self.outstanding.discard( (peerid, shnum) )
2724+        surprise_shares = set(read_data.keys()) - set([writer.shnum])
2725 
2726hunk ./src/allmydata/mutable/publish.py 993
2727-        now = time.time()
2728-        elapsed = now - started
2729-        self._status.add_per_server_time(peerid, elapsed)
2730+        # We need to remove from surprise_shares any shares that we are
2731+        # knowingly also writing to that peer from other writers.
2732 
2733hunk ./src/allmydata/mutable/publish.py 996
2734-        wrote, read_data = answer
2735+        # TODO: Precompute this.
2736+        known_shnums = [x.shnum for x in self.writers.values()
2737+                        if x.peerid == peerid]
2738+        surprise_shares -= set(known_shnums)
2739+        self.log("found the following surprise shares: %s" %
2740+                 str(surprise_shares))
2741 
2742hunk ./src/allmydata/mutable/publish.py 1003
2743-        surprise_shares = set(read_data.keys()) - set(shnums)
2744+        # Now surprise shares contains all of the shares that we did not
2745+        # expect to be there.
2746 
2747         surprised = False
2748         for shnum in surprise_shares:
2749hunk ./src/allmydata/mutable/publish.py 1010
2750             # read_data is a dict mapping shnum to checkstring (SIGNED_PREFIX)
2751             checkstring = read_data[shnum][0]
2752-            their_version_info = unpack_checkstring(checkstring)
2753-            if their_version_info == self._new_version_info:
2754+            # What we want to do here is to see if their (seqnum,
2755+            # roothash, salt) is the same as our (seqnum, roothash,
2756+            # salt), or the equivalent for MDMF. The best way to do this
2757+            # is to store a packed representation of our checkstring
2758+            # somewhere, then not bother unpacking the other
2759+            # checkstring.
2760+            if checkstring == self._checkstring:
2761                 # they have the right share, somehow
2762 
2763                 if (peerid,shnum) in self.goal:
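The comment above about storing a packed representation of the checkstring can be sketched standalone: pack `(seqnum, roothash, salt)` once, then compare raw bytes instead of unpacking the remote side's copy. The `">BQ32s16s"` layout (version byte, sequence number, 32-byte root hash, 16-byte salt) is an assumption modeled on the SDMF prefix, not necessarily the exact format this patch uses.

```python
import struct

# Assumed checkstring layout: version byte, seqnum, root hash, salt.
CHECKSTRING = ">BQ32s16s"

def pack_checkstring(seqnum, root_hash, salt, version=0):
    # Pack the version info once; later comparisons are plain byte equality.
    return struct.pack(CHECKSTRING, version, seqnum, root_hash, salt)

ours = pack_checkstring(3, b"\x11" * 32, b"\x22" * 16)
theirs = pack_checkstring(3, b"\x11" * 32, b"\x22" * 16)
print(ours == theirs)   # byte-equal packs mean identical version info
```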
2764hunk ./src/allmydata/mutable/publish.py 1095
2765             self.log("our testv failed, so the write did not happen",
2766                      parent=lp, level=log.WEIRD, umid="8sc26g")
2767             self.surprised = True
2768-            self.bad_peers.add(peerid) # don't ask them again
2769+            self.bad_peers.add(writer) # don't ask them again
2770             # use the checkstring to add information to the log message
2771             for (shnum,readv) in read_data.items():
2772                 checkstring = readv[0]
2773hunk ./src/allmydata/mutable/publish.py 1117
2774                 # if expected_version==None, then we didn't expect to see a
2775                 # share on that peer, and the 'surprise_shares' clause above
2776                 # will have logged it.
2777-            # self.loop() will take care of finding new homes
2778             return
2779 
2780hunk ./src/allmydata/mutable/publish.py 1119
2781-        for shnum in shnums:
2782-            self.placed.add( (peerid, shnum) )
2783-            # and update the servermap
2784-            self._servermap.add_new_share(peerid, shnum,
2785+        # and update the servermap
2786+        # self.versioninfo is set during the last phase of publishing.
2787+        # If we get there, we know that responses correspond to placed
2788+        # shares, and can safely execute these statements.
2789+        if self.versioninfo:
2790+            self.log("wrote successfully: adding new share to servermap")
2791+            self._servermap.add_new_share(peerid, writer.shnum,
2792                                           self.versioninfo, started)
2793hunk ./src/allmydata/mutable/publish.py 1127
2794-
2795-        # self.loop() will take care of checking to see if we're done
2796-        return
2797-
2798-    def _got_write_error(self, f, peerid, shnums, started):
2799-        for shnum in shnums:
2800-            self.outstanding.discard( (peerid, shnum) )
2801-        self.bad_peers.add(peerid)
2802-        if self._first_write_error is None:
2803-            self._first_write_error = f
2804-        self.log(format="error while writing shares %(shnums)s to peerid %(peerid)s",
2805-                 shnums=list(shnums), peerid=idlib.shortnodeid_b2a(peerid),
2806-                 failure=f,
2807-                 level=log.UNUSUAL)
2808-        # self.loop() will take care of checking to see if we're done
2809+            self.placed.add( (peerid, writer.shnum) )
2810+        self._update_status()
2811+        # the next method in the deferred chain will check to see if
2812+        # we're done and successful.
2813         return
2814 
2815 
2816hunk ./src/allmydata/mutable/publish.py 1134
2817-    def _done(self, res):
2818+    def _done(self):
2819         if not self._running:
2820             return
2821         self._running = False
2822hunk ./src/allmydata/mutable/publish.py 1140
2823         now = time.time()
2824         self._status.timings["total"] = now - self._started
2825+
2826+        elapsed = now - self._started_pushing
2827+        self._status.timings['push'] = elapsed
2828+
2829         self._status.set_active(False)
2830hunk ./src/allmydata/mutable/publish.py 1145
2831-        if isinstance(res, failure.Failure):
2832-            self.log("Publish done, with failure", failure=res,
2833-                     level=log.WEIRD, umid="nRsR9Q")
2834-            self._status.set_status("Failed")
2835-        elif self.surprised:
2836-            self.log("Publish done, UncoordinatedWriteError", level=log.UNUSUAL)
2837-            self._status.set_status("UncoordinatedWriteError")
2838-            # deliver a failure
2839-            res = failure.Failure(UncoordinatedWriteError())
2840-            # TODO: recovery
2841+        self.log("Publish done, success")
2842+        self._status.set_status("Finished")
2843+        self._status.set_progress(1.0)
2844+        # Get k and segsize, then give them to the caller.
2845+        hints = {}
2846+        hints['segsize'] = self.segment_size
2847+        hints['k'] = self.required_shares
2848+        self._node.set_downloader_hints(hints)
2849+        eventually(self.done_deferred.callback, None)
2850+
2851+    def _failure(self, f=None):
2852+        if f:
2853+            self._last_failure = f
2854+
2855+        if not self.surprised:
2856+            # We ran out of servers
2857+            msg = "Publish ran out of good servers"
2858+            if self._last_failure:
2859+                msg += ", last failure was: %s" % str(self._last_failure)
2860+            self.log(msg)
2861+            e = NotEnoughServersError(msg)
2862+
2863+        else:
2864+            # We ran into shares that we didn't recognize, which means
2865+            # that we need to return an UncoordinatedWriteError.
2866+            self.log("Publish failed with UncoordinatedWriteError")
2867+            e = UncoordinatedWriteError()
2868+        f = failure.Failure(e)
2869+        eventually(self.done_deferred.callback, f)
2870+
2871+
2872+class MutableFileHandle:
2873+    """
2874+    I am a mutable uploadable built around a filehandle-like object,
2875+    usually either a StringIO instance or a handle to an actual file.
2876+    """
2877+    implements(IMutableUploadable)
2878+
2879+    def __init__(self, filehandle):
2880+        # The filehandle is defined as a generally file-like object that
2881+        # has these two methods. We don't care beyond that.
2882+        assert hasattr(filehandle, "read")
2883+        assert hasattr(filehandle, "close")
2884+
2885+        self._filehandle = filehandle
2886+        # We must start reading at the beginning of the file, or we risk
2887+        # encountering errors when the data read does not match the size
2888+        # reported to the uploader.
2889+        self._filehandle.seek(0)
2890+
2891+        # We have not yet read anything, so our position is 0.
2892+        self._marker = 0
2893+
2894+
2895+    def get_size(self):
2896+        """
2897+        I return the amount of data in my filehandle.
2898+        """
2899+        if not hasattr(self, "_size"):
2900+            old_position = self._filehandle.tell()
2901+            # Seek to the end of the file by seeking 0 bytes from the
2902+            # file's end
2903+            self._filehandle.seek(0, 2) # 2 == os.SEEK_END in 2.5+
2904+            self._size = self._filehandle.tell()
2905+            # Restore the previous position, in case this was called
2906+            # after a read.
2907+            self._filehandle.seek(old_position)
2908+            assert self._filehandle.tell() == old_position
2909+
2910+        assert hasattr(self, "_size")
2911+        return self._size
2912+
2913+
2914+    def pos(self):
2915+        """
2916+        I return the position of my read marker -- i.e., how much data I
2917+        have already read and returned to callers.
2918+        """
2919+        return self._marker
2920+
2921+
2922+    def read(self, length):
2923+        """
2924+        I return some data (up to length bytes) from my filehandle.
2925+
2926+        In most cases, I return length bytes, but sometimes I won't --
2927+        for example, if I am asked to read beyond the end of a file, or
2928+        an error occurs.
2929+        """
2930+        results = self._filehandle.read(length)
2931+        self._marker += len(results)
2932+        return [results]
2933+
2934+
2935+    def close(self):
2936+        """
2937+        I close the underlying filehandle. Any further operations on the
2938+        filehandle fail at this point.
2939+        """
2940+        self._filehandle.close()
2941+
2942+
2943+class MutableData(MutableFileHandle):
2944+    """
2945+    I am a mutable uploadable built around a string, which I then cast
2946+    into a StringIO and treat as a filehandle.
2947+    """
2948+
2949+    def __init__(self, s):
2950+        # Take a string and return a file-like uploadable.
2951+        assert isinstance(s, str)
2952+
2953+        MutableFileHandle.__init__(self, StringIO(s))
2954+
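The uploadable semantics these two classes implement (seek to the start on construction, probe the size without disturbing the read position, and return read data as a list while advancing a marker) can be sketched with a small standalone mimic that does not import Tahoe:

```python
import io

class FileHandleUploadable:
    """Standalone mimic of the MutableFileHandle semantics above;
    a sketch, not Tahoe's actual class."""
    def __init__(self, filehandle):
        self._f = filehandle
        self._f.seek(0)      # always start reading at the beginning
        self._marker = 0     # bytes handed out so far

    def get_size(self):
        old = self._f.tell()
        self._f.seek(0, io.SEEK_END)   # probe the size...
        size = self._f.tell()
        self._f.seek(old)              # ...then restore the position
        return size

    def pos(self):
        return self._marker

    def read(self, length):
        data = self._f.read(length)
        self._marker += len(data)
        return [data]        # uploadables return a list of strings

u = FileHandleUploadable(io.BytesIO(b"abcdefgh"))
print(u.get_size())          # 8
print(b"".join(u.read(3)))   # b'abc'
print(u.pos())               # 3
```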
2955+
2956+class TransformingUploadable:
2957+    """
2958+    I am an IMutableUploadable that wraps another IMutableUploadable,
2959+    and some segments that are already on the grid. When I am called to
2960+    read, I handle merging of boundary segments.
2961+    """
2962+    implements(IMutableUploadable)
2963+
2964+
2965+    def __init__(self, data, offset, segment_size, start, end):
2966+        assert IMutableUploadable.providedBy(data)
2967+
2968+        self._newdata = data
2969+        self._offset = offset
2970+        self._segment_size = segment_size
2971+        self._start = start
2972+        self._end = end
2973+
2974+        self._read_marker = 0
2975+
2976+        self._first_segment_offset = offset % segment_size
2977+
2978+        num = self.log("TransformingUploadable: starting", parent=None)
2979+        self._log_number = num
2980+        self.log("got fso: %d" % self._first_segment_offset)
2981+        self.log("got offset: %d" % self._offset)
2982+
2983+
2984+    def log(self, *args, **kwargs):
2985+        if 'parent' not in kwargs:
2986+            kwargs['parent'] = self._log_number
2987+        if "facility" not in kwargs:
2988+            kwargs["facility"] = "tahoe.mutable.transforminguploadable"
2989+        return log.msg(*args, **kwargs)
2990+
2991+
2992+    def get_size(self):
2993+        return self._offset + self._newdata.get_size()
2994+
2995+
2996+    def read(self, length):
2997+        # We can get data from 3 sources here.
2998+        #   1. The first of the segments provided to us.
2999+        #   2. The data that we're replacing things with.
3000+        #   3. The last of the segments provided to us.
3001+
3002+        # are we still reading from source 1 (the old start segment)?
3003+        self.log("reading %d bytes" % length)
3004+
3005+        old_start_data = ""
3006+        old_data_length = self._first_segment_offset - self._read_marker
3007+        if old_data_length > 0:
3008+            if old_data_length > length:
3009+                old_data_length = length
3010+            self.log("returning %d bytes of old start data" % old_data_length)
3011+
3012+            old_data_end = old_data_length + self._read_marker
3013+            old_start_data = self._start[self._read_marker:old_data_end]
3014+            length -= old_data_length
3015         else:
3016hunk ./src/allmydata/mutable/publish.py 1320
3017-            self.log("Publish done, success")
3018-            self._status.set_status("Finished")
3019-            self._status.set_progress(1.0)
3020-        eventually(self.done_deferred.callback, res)
3021+            # otherwise calculations later get screwed up.
3022+            old_data_length = 0
3023+
3024+        # Is there enough new data to satisfy this read? If not, we need
3025+        # to pad the end of the data with data from our last segment.
3026+        old_end_length = length - \
3027+            (self._newdata.get_size() - self._newdata.pos())
3028+        old_end_data = ""
3029+        if old_end_length > 0:
3030+            self.log("reading %d bytes of old end data" % old_end_length)
3031+
3032+            # TODO: We're not explicitly checking for tail segment size
3033+            # here. Is that a problem?
3034+            old_data_offset = (length - old_end_length + \
3035+                               old_data_length) % self._segment_size
3036+            self.log("reading at offset %d" % old_data_offset)
3037+            old_end = old_data_offset + old_end_length
3038+            old_end_data = self._end[old_data_offset:old_end]
3039+            length -= old_end_length
3040+            assert length == self._newdata.get_size() - self._newdata.pos()
3041+
3042+        self.log("reading %d bytes of new data" % length)
3043+        new_data = self._newdata.read(length)
3044+        new_data = "".join(new_data)
3045+
3046+        self._read_marker += len(old_start_data + new_data + old_end_data)
3047 
3048hunk ./src/allmydata/mutable/publish.py 1347
3049+        return old_start_data + new_data + old_end_data
3050 
3051hunk ./src/allmydata/mutable/publish.py 1349
3052+    def close(self):
3053+        pass
3054}
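The net effect TransformingUploadable aims for, stripped of segment bookkeeping, is a splice: overwrite the old contents at `offset` with the new data, extending the file if the write runs past the old end. A minimal sketch of that end result:

```python
def splice(old, new_data, offset):
    # Overwrite old[offset:offset+len(new_data)] with new_data,
    # growing the result if the write extends past the old end.
    end = offset + len(new_data)
    return old[:offset] + new_data + old[end:]

print(splice(b"0123456789", b"XY", 4))   # b'0123XY6789'
print(splice(b"0123", b"ABCD", 2))       # b'01ABCD' (file grows)
```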
3055[interfaces: change interfaces to work with MDMF
3056Kevan Carstensen <kevan@isnotajoke.com>**20110802014119
3057 Ignore-this: 2f441022cf888c044bc9e6dd609db139
3058 
3059 A lot of this work concerns #993, in that it unifies (to an extent) the
3060 interfaces of mutable and immutable files.
3061] {
3062hunk ./src/allmydata/interfaces.py 7
3063      ChoiceOf, IntegerConstraint, Any, RemoteInterface, Referenceable
3064 
3065 HASH_SIZE=32
3066+SALT_SIZE=16
3067+
3068+SDMF_VERSION=0
3069+MDMF_VERSION=1
3070 
3071 Hash = StringConstraint(maxLength=HASH_SIZE,
3072                         minLength=HASH_SIZE)# binary format 32-byte SHA256 hash
3073hunk ./src/allmydata/interfaces.py 424
3074         """
3075 
3076 
3077+class IMutableSlotWriter(Interface):
3078+    """
3079+    The interface for a writer around a mutable slot on a remote server.
3080+    """
3081+    def set_checkstring(checkstring, *args):
3082+        """
3083+        Set the checkstring that I will pass to the remote server when
3084+        writing.
3085+
3086+            @param checkstring A packed checkstring to use.
3087+
3088+        Note that implementations can differ in which semantics they
3089+        wish to support for set_checkstring -- they can, for example,
3090+        build the checkstring themselves from its constituents, or
3091+        accept one that has already been packed.
3092+        """
3093+
3094+    def get_checkstring():
3095+        """
3096+        Get the checkstring that I think currently exists on the remote
3097+        server.
3098+        """
3099+
3100+    def put_block(data, segnum, salt):
3101+        """
3102+        Add a block and salt to the share.
3103+        """
3104+
3105+    def put_encprivey(encprivkey):
3106+        """
3107+        Add the encrypted private key to the share.
3108+        """
3109+
3110+    def put_blockhashes(blockhashes=list):
3111+        """
3112+        Add the block hash tree to the share.
3113+        """
3114+
3115+    def put_sharehashes(sharehashes=dict):
3116+        """
3117+        Add the share hash chain to the share.
3118+        """
3119+
3120+    def get_signable():
3121+        """
3122+        Return the part of the share that needs to be signed.
3123+        """
3124+
3125+    def put_signature(signature):
3126+        """
3127+        Add the signature to the share.
3128+        """
3129+
3130+    def put_verification_key(verification_key):
3131+        """
3132+        Add the verification key to the share.
3133+        """
3134+
3135+    def finish_publishing():
3136+        """
3137+        Do anything necessary to finish writing the share to a remote
3138+        server. I require that no further publishing needs to take place
3139+        after this method has been called.
3140+        """
3141+
3142+
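The publish sequence implied by this interface (blocks first, then the encrypted private key, hash structures, signature, verification key, and a final finish call) can be illustrated with a toy recorder. This is a stand-in that just logs the call order, not an implementation that talks to a storage server; method names (including the spelling of `put_encprivey`) follow the interface as written.

```python
class RecordingSlotWriter:
    # Toy IMutableSlotWriter stand-in: records the order of calls.
    def __init__(self):
        self.calls = []
    def put_block(self, data, segnum, salt):
        self.calls.append("block:%d" % segnum)
    def put_encprivey(self, encprivkey):
        self.calls.append("encprivkey")
    def put_blockhashes(self, blockhashes):
        self.calls.append("blockhashes")
    def put_sharehashes(self, sharehashes):
        self.calls.append("sharehashes")
    def get_signable(self):
        return b"prefix-to-sign"
    def put_signature(self, signature):
        self.calls.append("signature")
    def put_verification_key(self, verification_key):
        self.calls.append("verification_key")
    def finish_publishing(self):
        self.calls.append("finish")

w = RecordingSlotWriter()
for segnum in range(2):                 # one block+salt per segment
    w.put_block(b"crypttext", segnum, b"salt" * 4)
w.put_encprivey(b"encrypted-private-key")
w.put_blockhashes([b"h0", b"h1"])
w.put_sharehashes({0: b"sh0"})
w.put_signature(b"sig-over-" + w.get_signable())
w.put_verification_key(b"pubkey")
w.finish_publishing()
print(w.calls)
```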
3143 class IURI(Interface):
3144     def init_from_string(uri):
3145         """Accept a string (as created by my to_string() method) and populate
3146hunk ./src/allmydata/interfaces.py 546
3147 
3148 class IMutableFileURI(Interface):
3149     """I am a URI which represents a mutable filenode."""
3150+    def get_extension_params():
3151+        """Return the extension parameters in the URI"""
3152+
3153+    def set_extension_params():
3154+        """Set the extension parameters that should be in the URI"""
3155 
3156 class IDirectoryURI(Interface):
3157     pass
3158hunk ./src/allmydata/interfaces.py 574
3159 class MustNotBeUnknownRWError(CapConstraintError):
3160     """Cannot add an unknown child cap specified in a rw_uri field."""
3161 
3162+
3163+class IReadable(Interface):
3164+    """I represent a readable object -- either an immutable file, or a
3165+    specific version of a mutable file.
3166+    """
3167+
3168+    def is_readonly():
3169+        """Return True if this reference provides mutable access to the given
3170+        """Return True if this reference provides only read access to the
3171+        given file or directory (i.e. you cannot modify it), or False if not. Note
3172+        read-write reference to it.
3173+
3174+        For an IReadable returned by get_best_readable_version(), this will
3175+        always return True, but for instances of subinterfaces such as
3176+        IMutableFileVersion, it may return False."""
3177+
3178+    def is_mutable():
3179+        """Return True if this file or directory is mutable (by *somebody*,
3180+        not necessarily you), False if it is is immutable. Note that a file
3181+        not necessarily you), False if it is immutable. Note that a file
3182+        read-only. On the other hand, all references to an immutable file
3183+        will be read-only; there are no read-write references to an immutable
3184+        file."""
3185+
3186+    def get_storage_index():
3187+        """Return the storage index of the file."""
3188+
3189+    def get_size():
3190+        """Return the length (in bytes) of this readable object."""
3191+
3192+    def download_to_data():
3193+        """Download all of the file contents. I return a Deferred that fires
3194+        with the contents as a byte string."""
3195+
3196+    def read(consumer, offset=0, size=None):
3197+        """Download a portion (possibly all) of the file's contents, making
3198+        them available to the given IConsumer. Return a Deferred that fires
3199+        (with the consumer) when the consumer is unregistered (either because
3200+        the last byte has been given to it, or because the consumer threw an
3201+        exception during write(), possibly because it no longer wants to
3202+        receive data). The portion downloaded will start at 'offset' and
3203+        contain 'size' bytes (or the remainder of the file if size==None).
3204+
3205+        The consumer will be used in non-streaming mode: an IPullProducer
3206+        will be attached to it.
3207+
3208+        The consumer will not receive data right away: several network trips
3209+        must occur first. The order of events will be::
3210+
3211+         consumer.registerProducer(p, streaming)
3212+          (if streaming == False)::
3213+           consumer does p.resumeProducing()
3214+            consumer.write(data)
3215+           consumer does p.resumeProducing()
3216+            consumer.write(data).. (repeat until all data is written)
3217+         consumer.unregisterProducer()
3218+         deferred.callback(consumer)
3219+
3220+        If a download error occurs, or an exception is raised by
3221+        consumer.registerProducer() or consumer.write(), I will call
3222+        consumer.unregisterProducer() and then deliver the exception via
3223+        deferred.errback(). To cancel the download, the consumer should call
3224+        p.stopProducing(), which will result in an exception being delivered
3225+        via deferred.errback().
3226+
3227+        See src/allmydata/util/consumer.py for an example of a simple
3228+        download-to-memory consumer.
3229+        """
3230+
3231+
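The registerProducer/resumeProducing/write/unregisterProducer event order spelled out in the read() docstring above can be mimicked with two toy objects; these are a sketch of the non-streaming (IPullProducer) handshake, not Twisted's or Tahoe's real classes.

```python
class MemoryConsumer:
    # Toy download-to-memory consumer: collect write()s until the
    # producer unregisters (the pattern util/consumer.py describes).
    def __init__(self):
        self.chunks = []
        self.done = False
    def registerProducer(self, producer, streaming):
        self.producer = producer
        if not streaming:
            # non-streaming mode: the consumer drives the producer
            while not self.done:
                producer.resumeProducing()
    def write(self, data):
        self.chunks.append(data)
    def unregisterProducer(self):
        self.done = True

class ListProducer:
    # Toy pull producer: one chunk per resumeProducing() call.
    def __init__(self, consumer, chunks):
        self.consumer = consumer
        self.chunks = list(chunks)
    def resumeProducing(self):
        if self.chunks:
            self.consumer.write(self.chunks.pop(0))
        else:
            self.consumer.unregisterProducer()

c = MemoryConsumer()
c.registerProducer(ListProducer(c, [b"he", b"llo"]), False)
print(b"".join(c.chunks))   # b'hello'
```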
3232+class IWritable(Interface):
3233+    """
3234+    I define methods that callers can use to update SDMF and MDMF
3235+    mutable files on a Tahoe-LAFS grid.
3236+    """
3237+    # XXX: For the moment, we have only this. It is possible that we
3238+    #      want to move overwrite() and modify() in here too.
3239+    def update(data, offset):
3240+        """
3241+        I write the data from my data argument to the MDMF file,
3242+        starting at offset. I continue writing data until my data
3243+        argument is exhausted, appending data to the file as necessary.
3244+        """
3245+        # assert IMutableUploadable.providedBy(data)
3246+        # to append data: offset=node.get_size_of_best_version()
3247+        # do we want to support compacting MDMF?
3248+        # for an MDMF file, this can be done with O(data.get_size())
3249+        # memory. For an SDMF file, any modification takes
3250+        # O(node.get_size_of_best_version()).
3251+
3252+
3253+class IMutableFileVersion(IReadable):
3254+    """I provide access to a particular version of a mutable file. The
3255+    access is read/write if I was obtained from a filenode derived from
3256+    a write cap, or read-only if the filenode was derived from a read cap.
3257+    """
3258+
3259+    def get_sequence_number():
3260+        """Return the sequence number of this version."""
3261+
3262+    def get_servermap():
3263+        """Return the IMutableFileServerMap instance that was used to create
3264+        this object.
3265+        """
3266+
3267+    def get_writekey():
3268+        """Return this filenode's writekey, or None if the node does not have
3269+        write-capability. This may be used to assist with data structures
3270+        that need to make certain data available only to writers, such as the
3271+        read-write child caps in dirnodes. The recommended process is to have
3272+        reader-visible data be submitted to the filenode in the clear (where
3273+        it will be encrypted by the filenode using the readkey), but encrypt
3274+        writer-visible data using this writekey.
3275+        """
3276+
3277+    # TODO: Can this be overwrite instead of replace?
3278+    def replace(new_contents):
3279+        """Replace the contents of the mutable file, provided that no other
3280+        node has published (or is attempting to publish, concurrently) a
3281+        newer version of the file than this one.
3282+
3283+        I will avoid modifying any share that is different than the version
3284+        given by get_sequence_number(). However, if another node is writing
3285+        to the file at the same time as me, I may manage to update some shares
3286+        while they update others. If I see any evidence of this, I will signal
3287+        UncoordinatedWriteError, and the file will be left in an inconsistent
3288+        state (possibly the version you provided, possibly the old version,
3289+        possibly somebody else's version, and possibly a mix of shares from
3290+        all of these).
3291+
3292+        The recommended response to UncoordinatedWriteError is to either
3293+        return it to the caller (since they failed to coordinate their
3294+        writes), or to attempt some sort of recovery. It may be sufficient to
3295+        wait a random interval (with exponential backoff) and repeat your
3296+        operation. If I do not signal UncoordinatedWriteError, then I was
3297+        able to write the new version without incident.
3298+
3299+        I return a Deferred that fires (with a PublishStatus object) when the
3300+        update has completed.
3301+        """
3302+
3303+    def modify(modifier_cb):
3304+        """Modify the contents of the file, by downloading this version,
3305+        applying the modifier function (or bound method), then uploading
3306+        the new version. This will succeed as long as no other node
3307+        publishes a version between the download and the upload.
3308+        I return a Deferred that fires (with a PublishStatus object) when
3309+        the update is complete.
3310+
3311+        The modifier callable will be given three arguments: a string (with
3312+        the old contents), a 'first_time' boolean, and a servermap. As with
3313+        download_to_data(), the old contents will be from this version,
3314+        but the modifier can use the servermap to make other decisions
3315+        (such as refusing to apply the delta if there are multiple parallel
3316+        versions, or if there is evidence of a newer unrecoverable version).
3317+        'first_time' will be True the first time the modifier is called,
3318+        and False on any subsequent calls.
3319+
3320+        The callable should return a string with the new contents. The
3321+        callable must be prepared to be called multiple times, and must
3322+        examine the input string to see if the change that it wants to make
3323+        is already present in the old version. If it does not need to make
3324+        any changes, it can either return None, or return its input string.
3325+
3326+        If the modifier raises an exception, it will be returned in the
3327+        errback.
3328+        """
3329+
3330+
3331 # The hierarchy looks like this:
3332 #  IFilesystemNode
3333 #   IFileNode
3334hunk ./src/allmydata/interfaces.py 833
3335     def raise_error():
3336         """Raise any error associated with this node."""
3337 
3338+    # XXX: These may not be appropriate outside the context of an IReadable.
3339     def get_size():
3340         """Return the length (in bytes) of the data this node represents. For
3341         directory nodes, I return the size of the backing store. I return
3342hunk ./src/allmydata/interfaces.py 850
3343 class IFileNode(IFilesystemNode):
3344     """I am a node which represents a file: a sequence of bytes. I am not a
3345     container, like IDirectoryNode."""
3346+    def get_best_readable_version():
3347+        """Return a Deferred that fires with an IReadable for the 'best'
3348+        available version of the file. The IReadable provides only read
3349+        access, even if this filenode was derived from a write cap.
3350 
3351hunk ./src/allmydata/interfaces.py 855
3352-class IImmutableFileNode(IFileNode):
3353-    def read(consumer, offset=0, size=None):
3354-        """Download a portion (possibly all) of the file's contents, making
3355-        them available to the given IConsumer. Return a Deferred that fires
3356-        (with the consumer) when the consumer is unregistered (either because
3357-        the last byte has been given to it, or because the consumer threw an
3358-        exception during write(), possibly because it no longer wants to
3359-        receive data). The portion downloaded will start at 'offset' and
3360-        contain 'size' bytes (or the remainder of the file if size==None).
3361-
3362-        The consumer will be used in non-streaming mode: an IPullProducer
3363-        will be attached to it.
3364+        For an immutable file, there is only one version. For a mutable
3365+        file, the 'best' version is the recoverable version with the
3366+        highest sequence number. If no uncoordinated writes have occurred,
3367+        and if enough shares are available, then this will be the most
3368+        recent version that has been uploaded. If no version is recoverable,
3369+        the Deferred will errback with an UnrecoverableFileError.
3370+        """
3371 
3372hunk ./src/allmydata/interfaces.py 863
3373-        The consumer will not receive data right away: several network trips
3374-        must occur first. The order of events will be::
3375+    def download_best_version():
3376+        """Download the contents of the version that would be returned
3377+        by get_best_readable_version(). This is equivalent to calling
3378+        download_to_data() on the IReadable given by that method.
3379 
3380hunk ./src/allmydata/interfaces.py 868
3381-         consumer.registerProducer(p, streaming)
3382-          (if streaming == False)::
3383-           consumer does p.resumeProducing()
3384-            consumer.write(data)
3385-           consumer does p.resumeProducing()
3386-            consumer.write(data).. (repeat until all data is written)
3387-         consumer.unregisterProducer()
3388-         deferred.callback(consumer)
3389+        I return a Deferred that fires with a byte string when the file
3390+        has been fully downloaded. To support streaming download, use
3391+        the 'read' method of IReadable. If no version is recoverable,
3392+        the Deferred will errback with an UnrecoverableFileError.
3393+        """
3394 
3395hunk ./src/allmydata/interfaces.py 874
3396-        If a download error occurs, or an exception is raised by
3397-        consumer.registerProducer() or consumer.write(), I will call
3398-        consumer.unregisterProducer() and then deliver the exception via
3399-        deferred.errback(). To cancel the download, the consumer should call
3400-        p.stopProducing(), which will result in an exception being delivered
3401-        via deferred.errback().
3402+    def get_size_of_best_version():
3403+        """Find the size of the version that would be returned by
3404+        get_best_readable_version().
3405 
3406hunk ./src/allmydata/interfaces.py 878
3407-        See src/allmydata/util/consumer.py for an example of a simple
3408-        download-to-memory consumer.
3409+        I return a Deferred that fires with an integer. If no version
3410+        is recoverable, the Deferred will errback with an
3411+        UnrecoverableFileError.
3412         """
3413 
3414hunk ./src/allmydata/interfaces.py 883
3415+
3416+class IImmutableFileNode(IFileNode, IReadable):
3417+    """I am a node representing an immutable file. Immutable files have
3418+    only one version."""
3419+
3420+
3421 class IMutableFileNode(IFileNode):
3422     """I provide access to a 'mutable file', which retains its identity
3423     regardless of what contents are put in it.
3424hunk ./src/allmydata/interfaces.py 948
3425     only be retrieved and updated all-at-once, as a single big string. Future
3426     versions of our mutable files will remove this restriction.
3427     """
3428-
3429-    def download_best_version():
3430-        """Download the 'best' available version of the file, meaning one of
3431-        the recoverable versions with the highest sequence number. If no
3432+    def get_best_mutable_version():
3433+        """Return a Deferred that fires with an IMutableFileVersion for
3434+        the 'best' available version of the file. The best version is
3435+        the recoverable version with the highest sequence number. If no
3436         uncoordinated writes have occurred, and if enough shares are
3437hunk ./src/allmydata/interfaces.py 953
3438-        available, then this will be the most recent version that has been
3439-        uploaded.
3440+        available, then this will be the most recent version that has
3441+        been uploaded.
3442 
3443hunk ./src/allmydata/interfaces.py 956
3444-        I update an internal servermap with MODE_READ, determine which
3445-        version of the file is indicated by
3446-        servermap.best_recoverable_version(), and return a Deferred that
3447-        fires with its contents. If no version is recoverable, the Deferred
3448-        will errback with UnrecoverableFileError.
3449-        """
3450-
3451-    def get_size_of_best_version():
3452-        """Find the size of the version that would be downloaded with
3453-        download_best_version(), without actually downloading the whole file.
3454-
3455-        I return a Deferred that fires with an integer.
3456+        If no version is recoverable, the Deferred will errback with an
3457+        UnrecoverableFileError.
3458         """
3459 
3460     def overwrite(new_contents):
3461hunk ./src/allmydata/interfaces.py 996
3462         errback.
3463         """
3464 
3465-
3466     def get_servermap(mode):
3467         """Return a Deferred that fires with an IMutableFileServerMap
3468         instance, updated using the given mode.
3469hunk ./src/allmydata/interfaces.py 1049
3470         writer-visible data using this writekey.
3471         """
3472 
3473+    def get_version():
3474+        """Returns the mutable file protocol version."""
3475+
3476 class NotEnoughSharesError(Exception):
3477     """Download was unable to get enough shares"""
3478 
3479hunk ./src/allmydata/interfaces.py 1888
3480         """The upload is finished, and whatever filehandle was in use may be
3481         closed."""
3482 
3483+
3484+class IMutableUploadable(Interface):
3485+    """
3486+    I represent content that is due to be uploaded to a mutable filecap.
3487+    """
3488+    # This is somewhat simpler than the IUploadable interface above
3489+    # because mutable files do not need to be concerned with possibly
3490+    # generating a CHK, nor with per-file keys. It is a subset of the
3491+    # methods in IUploadable, though, so we could just as well implement
3492+    # the mutable uploadables as IUploadables that don't happen to use
3493+    # those methods (with the understanding that the unused methods will
3494+    # never be called on such objects).
3495+    def get_size():
3496+        """
3497+        Returns a Deferred that fires with the size of the content held
3498+        by the uploadable.
3499+        """
3500+
3501+    def read(length):
3502+        """
3503+        Returns a list of strings which, when concatenated, are the next
3504+        length bytes of the file, or fewer if there are fewer bytes
3505+        between the current location and the end of the file.
3506+        """
3507+
3508+    def close():
3509+        """
3510+        The process that used the Uploadable is finished using it, so
3511+        the uploadable may be closed.
3512+        """
3513+
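A minimal in-memory implementation of the IMutableUploadable contract might look like the following sketch. In Tahoe this role is played by `MutableData` in `mutable/publish.py`; the class here is illustrative only, and it returns the size directly rather than via a Deferred (as the interface specifies) to stay dependency-free:

```python
class StringUploadable(object):
    """Illustrative in-memory uploadable. read() returns a list of
    strings totalling at most `length` bytes, per IMutableUploadable;
    get_size() returns a plain integer in this sketch instead of a
    Deferred."""
    def __init__(self, data):
        self._data = data
        self._offset = 0

    def get_size(self):
        return len(self._data)

    def read(self, length):
        # Return the next `length` bytes, or fewer near end-of-data.
        chunk = self._data[self._offset:self._offset + length]
        self._offset += len(chunk)
        return [chunk]

    def close(self):
        # Nothing to release for an in-memory string.
        pass
```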
3514 class IUploadResults(Interface):
3515     """I am returned by upload() methods. I contain a number of public
3516     attributes which can be read to determine the results of the upload. Some
3517}
3518[nodemaker: teach nodemaker how to create MDMF mutable files
3519Kevan Carstensen <kevan@isnotajoke.com>**20110802014258
3520 Ignore-this: 2bf1fd4f8c1d1ad0e855c678347b76c2
3521] {
3522hunk ./src/allmydata/nodemaker.py 3
3523 import weakref
3524 from zope.interface import implements
3525-from allmydata.interfaces import INodeMaker
3526+from allmydata.util.assertutil import precondition
3527+from allmydata.interfaces import INodeMaker, SDMF_VERSION
3528 from allmydata.immutable.literal import LiteralFileNode
3529 from allmydata.immutable.filenode import ImmutableFileNode, CiphertextFileNode
3530 from allmydata.immutable.upload import Data
3531hunk ./src/allmydata/nodemaker.py 9
3532 from allmydata.mutable.filenode import MutableFileNode
3533+from allmydata.mutable.publish import MutableData
3534 from allmydata.dirnode import DirectoryNode, pack_children
3535 from allmydata.unknown import UnknownNode
3536 from allmydata import uri
3537hunk ./src/allmydata/nodemaker.py 92
3538             return self._create_dirnode(filenode)
3539         return None
3540 
3541-    def create_mutable_file(self, contents=None, keysize=None):
3542+    def create_mutable_file(self, contents=None, keysize=None,
3543+                            version=SDMF_VERSION):
3544         n = MutableFileNode(self.storage_broker, self.secret_holder,
3545                             self.default_encoding_parameters, self.history)
3546         d = self.key_generator.generate(keysize)
3547hunk ./src/allmydata/nodemaker.py 97
3548-        d.addCallback(n.create_with_keys, contents)
3549+        d.addCallback(n.create_with_keys, contents, version=version)
3550         d.addCallback(lambda res: n)
3551         return d
3552 
3553hunk ./src/allmydata/nodemaker.py 101
3554-    def create_new_mutable_directory(self, initial_children={}):
3555+    def create_new_mutable_directory(self, initial_children={},
3556+                                     version=SDMF_VERSION):
3557+        # initial_children must have metadata (i.e. {} instead of None)
3558+        for (name, (node, metadata)) in initial_children.iteritems():
3559+            precondition(isinstance(metadata, dict),
3560+                         "create_new_mutable_directory requires metadata to be a dict, not None", metadata)
3561+            node.raise_error()
3562         d = self.create_mutable_file(lambda n:
3563hunk ./src/allmydata/nodemaker.py 109
3564-                                     pack_children(initial_children, n.get_writekey()))
3565+                                     MutableData(pack_children(initial_children,
3566+                                                    n.get_writekey())),
3567+                                     version=version)
3568         d.addCallback(self._create_dirnode)
3569         return d
3570 
3571}
3572[mutable/filenode: Modify mutable filenodes for use with MDMF
3573Kevan Carstensen <kevan@isnotajoke.com>**20110802014501
3574 Ignore-this: 3c230bb0ebe60a94c667b0ee0c3b28e0
3575 
3576 In particular:
3577     - Break MutableFileNode and MutableFileVersion into distinct classes.
3578     - Implement the interface modifications made for MDMF.
3579     - Be aware of MDMF caps.
3580     - Learn how to create and work with MDMF files.
3581] {
3582hunk ./src/allmydata/mutable/filenode.py 7
3583 from zope.interface import implements
3584 from twisted.internet import defer, reactor
3585 from foolscap.api import eventually
3586-from allmydata.interfaces import IMutableFileNode, \
3587-     ICheckable, ICheckResults, NotEnoughSharesError
3588-from allmydata.util import hashutil, log
3589+from allmydata.interfaces import IMutableFileNode, ICheckable, ICheckResults, \
3590+     NotEnoughSharesError, MDMF_VERSION, SDMF_VERSION, IMutableUploadable, \
3591+     IMutableFileVersion, IWritable
3592+from allmydata.util import hashutil, log, consumer, deferredutil, mathutil
3593 from allmydata.util.assertutil import precondition
3594hunk ./src/allmydata/mutable/filenode.py 12
3595-from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI
3596+from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI, \
3597+                          WritableMDMFFileURI, ReadonlyMDMFFileURI
3598 from allmydata.monitor import Monitor
3599 from pycryptopp.cipher.aes import AES
3600 
3601hunk ./src/allmydata/mutable/filenode.py 17
3602-from allmydata.mutable.publish import Publish
3603-from allmydata.mutable.common import MODE_READ, MODE_WRITE, UnrecoverableFileError, \
3604+from allmydata.mutable.publish import Publish, MutableData,\
3605+                                      TransformingUploadable
3606+from allmydata.mutable.common import MODE_READ, MODE_WRITE, MODE_CHECK, UnrecoverableFileError, \
3607      ResponseCache, UncoordinatedWriteError
3608 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
3609 from allmydata.mutable.retrieve import Retrieve
3610hunk ./src/allmydata/mutable/filenode.py 70
3611         self._sharemap = {} # known shares, shnum-to-[nodeids]
3612         self._cache = ResponseCache()
3613         self._most_recent_size = None
3614+        # filled in after __init__ if we're being created for the first time;
3615+        # filled in by the servermap updater before publishing, otherwise.
3616+        # set to this default value in case neither of those things happen,
3617+        # or in case the servermap can't find any shares to tell us what
3618+        # to publish as.
3619+        self._protocol_version = None
3620 
3621         # all users of this MutableFileNode go through the serializer. This
3622         # takes advantage of the fact that Deferreds discard the callbacks
3623hunk ./src/allmydata/mutable/filenode.py 83
3624         # forever without consuming more and more memory.
3625         self._serializer = defer.succeed(None)
3626 
3627+        # Starting with MDMF, we can get these from caps if they're
3628+        # there. Leave them alone for now; they'll be filled in by my
3629+        # init_from_cap method if necessary.
3630+        self._downloader_hints = {}
3631+
3632     def __repr__(self):
3633         if hasattr(self, '_uri'):
3634             return "<%s %x %s %s>" % (self.__class__.__name__, id(self), self.is_readonly() and 'RO' or 'RW', self._uri.abbrev())
3635hunk ./src/allmydata/mutable/filenode.py 99
3636         # verification key, nor things like 'k' or 'N'. If and when someone
3637         # wants to get our contents, we'll pull from shares and fill those
3638         # in.
3639-        assert isinstance(filecap, (ReadonlySSKFileURI, WriteableSSKFileURI))
3640+        if isinstance(filecap, (WritableMDMFFileURI, ReadonlyMDMFFileURI)):
3641+            self._protocol_version = MDMF_VERSION
3642+        elif isinstance(filecap, (ReadonlySSKFileURI, WriteableSSKFileURI)):
3643+            self._protocol_version = SDMF_VERSION
3644+
3645         self._uri = filecap
3646         self._writekey = None
3647hunk ./src/allmydata/mutable/filenode.py 106
3648-        if isinstance(filecap, WriteableSSKFileURI):
3649+
3650+        if not filecap.is_readonly() and filecap.is_mutable():
3651             self._writekey = self._uri.writekey
3652         self._readkey = self._uri.readkey
3653         self._storage_index = self._uri.storage_index
3654hunk ./src/allmydata/mutable/filenode.py 120
3655         # if possible, otherwise by the first peer that Publish talks to.
3656         self._privkey = None
3657         self._encprivkey = None
3658+
3659+        # Starting with MDMF caps, we allowed arbitrary extensions in
3660+        # caps. If we were initialized with a cap that had extensions,
3661+        # we want to remember them so we can tell MutableFileVersions
3662+        # about them.
3663+        extensions = self._uri.get_extension_params()
3664+        if extensions:
3665+            extensions = map(int, extensions)
3666+            suspected_k, suspected_segsize = extensions
3667+            self._downloader_hints['k'] = suspected_k
3668+            self._downloader_hints['segsize'] = suspected_segsize
3669+
3670         return self
3671 
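The two integer extension fields decoded above (the suspected encoding parameter k and the suspected segment size) can be expressed as a small standalone helper; the function name is illustrative, not part of the patch:

```python
def parse_downloader_hints(extension_params):
    # MDMF caps may carry two integer extension fields: the suspected
    # k and the suspected segment size. SDMF caps carry no extensions,
    # in which case there are no hints to record.
    if not extension_params:
        return {}
    suspected_k, suspected_segsize = [int(p) for p in extension_params]
    return {"k": suspected_k, "segsize": suspected_segsize}
```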
3672hunk ./src/allmydata/mutable/filenode.py 134
3673-    def create_with_keys(self, (pubkey, privkey), contents):
3674+    def create_with_keys(self, (pubkey, privkey), contents,
3675+                         version=SDMF_VERSION):
3676         """Call this to create a brand-new mutable file. It will create the
3677         shares, find homes for them, and upload the initial contents (created
3678         with the same rules as IClient.create_mutable_file() ). Returns a
3679hunk ./src/allmydata/mutable/filenode.py 148
3680         self._writekey = hashutil.ssk_writekey_hash(privkey_s)
3681         self._encprivkey = self._encrypt_privkey(self._writekey, privkey_s)
3682         self._fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
3683-        self._uri = WriteableSSKFileURI(self._writekey, self._fingerprint)
3684+        if version == MDMF_VERSION:
3685+            self._uri = WritableMDMFFileURI(self._writekey, self._fingerprint)
3686+            self._protocol_version = version
3687+        elif version == SDMF_VERSION:
3688+            self._uri = WriteableSSKFileURI(self._writekey, self._fingerprint)
3689+            self._protocol_version = version
3690         self._readkey = self._uri.readkey
3691         self._storage_index = self._uri.storage_index
3692         initial_contents = self._get_initial_contents(contents)
3693hunk ./src/allmydata/mutable/filenode.py 160
3694         return self._upload(initial_contents, None)
3695 
3696     def _get_initial_contents(self, contents):
3697+        if contents is None:
3698+            return MutableData("")
3699+
3700         if isinstance(contents, str):
3701hunk ./src/allmydata/mutable/filenode.py 164
3702+            return MutableData(contents)
3703+
3704+        if IMutableUploadable.providedBy(contents):
3705             return contents
3706hunk ./src/allmydata/mutable/filenode.py 168
3707-        if contents is None:
3708-            return ""
3709+
3710         assert callable(contents), "%s should be callable, not %s" % \
3711                (contents, type(contents))
3712         return contents(self)
3713hunk ./src/allmydata/mutable/filenode.py 238
3714 
3715     def get_size(self):
3716         return self._most_recent_size
3717+
3718     def get_current_size(self):
3719         d = self.get_size_of_best_version()
3720         d.addCallback(self._stash_size)
3721hunk ./src/allmydata/mutable/filenode.py 243
3722         return d
3723+
3724     def _stash_size(self, size):
3725         self._most_recent_size = size
3726         return size
3727hunk ./src/allmydata/mutable/filenode.py 302
3728             return cmp(self.__class__, them.__class__)
3729         return cmp(self._uri, them._uri)
3730 
3731-    def _do_serialized(self, cb, *args, **kwargs):
3732-        # note: to avoid deadlock, this callable is *not* allowed to invoke
3733-        # other serialized methods within this (or any other)
3734-        # MutableFileNode. The callable should be a bound method of this same
3735-        # MFN instance.
3736-        d = defer.Deferred()
3737-        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
3738-        # we need to put off d.callback until this Deferred is finished being
3739-        # processed. Otherwise the caller's subsequent activities (like,
3740-        # doing other things with this node) can cause reentrancy problems in
3741-        # the Deferred code itself
3742-        self._serializer.addBoth(lambda res: eventually(d.callback, res))
3743-        # add a log.err just in case something really weird happens, because
3744-        # self._serializer stays around forever, therefore we won't see the
3745-        # usual Unhandled Error in Deferred that would give us a hint.
3746-        self._serializer.addErrback(log.err)
3747-        return d
3748 
3749     #################################
3750     # ICheckable
3751hunk ./src/allmydata/mutable/filenode.py 327
3752 
3753 
3754     #################################
3755-    # IMutableFileNode
3756+    # IFileNode
3757+
3758+    def get_best_readable_version(self):
3759+        """
3760+        I return a Deferred that fires with a MutableFileVersion
3761+        represent.
3762+        represent
3763+        """
3764+        return self.get_readable_version()
3765+
3766+
3767+    def get_readable_version(self, servermap=None, version=None):
3768+        """
3769+        I return a Deferred that fires with a MutableFileVersion for my
3770+        version argument, if there is a recoverable file of that version
3771+        on the grid. If there is no recoverable version, I fire with an
3772+        UnrecoverableFileError.
3773+
3774+        If a servermap is provided, I look in there for the requested
3775+        version. If no servermap is provided, I create and update a new
3776+        one.
3777+
3778+        If no version is provided, then I return a MutableFileVersion
3779+        representing the best recoverable version of the file.
3780+        """
3781+        d = self._get_version_from_servermap(MODE_READ, servermap, version)
3782+        def _build_version((servermap, their_version)):
3783+            assert their_version in servermap.recoverable_versions()
3784+            assert their_version in servermap.make_versionmap()
3785+
3786+            mfv = MutableFileVersion(self,
3787+                                     servermap,
3788+                                     their_version,
3789+                                     self._storage_index,
3790+                                     self._storage_broker,
3791+                                     self._readkey,
3792+                                     history=self._history)
3793+            assert mfv.is_readonly()
3794+            mfv.set_downloader_hints(self._downloader_hints)
3795+            # our caller can use this to download the contents of the
3796+            # mutable file.
3797+            return mfv
3798+        return d.addCallback(_build_version)
3799+
3800+
3801+    def _get_version_from_servermap(self,
3802+                                    mode,
3803+                                    servermap=None,
3804+                                    version=None):
3805+        """
3806+        I return a Deferred that fires with (servermap, version).
3807+
3808+        This function performs validation and a servermap update. If it
3809+        returns (servermap, version), the caller can assume that:
3810+            - servermap was last updated in mode.
3811+            - version is recoverable, and corresponds to the servermap.
3812+
3813+        If version and servermap are provided to me, I will validate
3814+        that version exists in the servermap, and that the servermap was
3815+        updated correctly.
3816+
3817+        If version is not provided, but servermap is, I will validate
3818+        the servermap and return the best recoverable version that I can
3819+        find in the servermap.
3820+
3821+        If the version is provided but the servermap isn't, I will
3822+        obtain a servermap that has been updated in the correct mode and
3823+        validate that version is found and recoverable.
3824+
3825+        If neither servermap nor version are provided, I will obtain a
3826+        servermap updated in the correct mode, and return the best
3827+        recoverable version that I can find in there.
3828+        """
3829+        # XXX: wording ^^^^
3830+        if servermap and servermap.last_update_mode == mode:
3831+            d = defer.succeed(servermap)
3832+        else:
3833+            d = self._get_servermap(mode)
3834+
3835+        def _get_version(servermap, v):
3836+            if v and v not in servermap.recoverable_versions():
3837+                v = None
3838+            elif not v:
3839+                v = servermap.best_recoverable_version()
3840+            if not v:
3841+                raise UnrecoverableFileError("no recoverable versions")
3842+
3843+            return (servermap, v)
3844+        return d.addCallback(_get_version, version)
3845+
3846 
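The selection rule inside `_get_version_from_servermap`'s `_get_version` callback can be isolated as a pure function. Note the asymmetry it preserves: an explicitly requested version that turns out to be unrecoverable causes an error rather than silently falling back to the best version. The names below and the use of a generic exception (in place of UnrecoverableFileError) are illustrative:

```python
def choose_version(recoverable, best, requested=None):
    # Honor a requested version only if it is recoverable; with no
    # request, fall back to the best recoverable version; if nothing
    # usable remains, raise.
    v = requested
    if v and v not in recoverable:
        v = None
    elif not v:
        v = best
    if not v:
        raise RuntimeError("no recoverable versions")
    return v
```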
3847     def download_best_version(self):
3848hunk ./src/allmydata/mutable/filenode.py 419
3849+        """
3850+        I return a Deferred that fires with the contents of the best
3851+        version of this mutable file.
3852+        """
3853         return self._do_serialized(self._download_best_version)
3854hunk ./src/allmydata/mutable/filenode.py 424
3855+
3856+
3857     def _download_best_version(self):
3858hunk ./src/allmydata/mutable/filenode.py 427
3859-        servermap = ServerMap()
3860-        d = self._try_once_to_download_best_version(servermap, MODE_READ)
3861-        def _maybe_retry(f):
3862-            f.trap(NotEnoughSharesError)
3863-            # the download is worth retrying once. Make sure to use the
3864-            # old servermap, since it is what remembers the bad shares,
3865-            # but use MODE_WRITE to make it look for even more shares.
3866-            # TODO: consider allowing this to retry multiple times.. this
3867-            # approach will let us tolerate about 8 bad shares, I think.
3868-            return self._try_once_to_download_best_version(servermap,
3869-                                                           MODE_WRITE)
3870+        """
3871+        I am the serialized sibling of download_best_version.
3872+        """
3873+        d = self.get_best_readable_version()
3874+        d.addCallback(self._record_size)
3875+        d.addCallback(lambda version: version.download_to_data())
3876+
3877+        # It is possible that the download will fail because there
3878+        # aren't enough shares to be had. If so, we will try again after
3879+        # updating the servermap in MODE_WRITE, which may find more
3880+        # shares than updating in MODE_READ, as we just did. We can do
3881+        # this by getting the best mutable version and downloading from
3882+        # that -- the best mutable version will be a MutableFileVersion
3883+        # with a servermap that was last updated in MODE_WRITE, as we
3884+        # want. If this fails, then we give up.
3885+        def _maybe_retry(failure):
3886+            failure.trap(NotEnoughSharesError)
3887+
3888+            d = self.get_best_mutable_version()
3889+            d.addCallback(self._record_size)
3890+            d.addCallback(lambda version: version.download_to_data())
3891+            return d
3892+
3893         d.addErrback(_maybe_retry)
3894         return d
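The retry flow above can be sketched synchronously, with the Deferred chaining and servermap details elided. The exception class is a stand-in for allmydata's NotEnoughSharesError, and the two callables stand for the MODE_READ and MODE_WRITE download paths:

```python
class NotEnoughShares(Exception):
    """Stand-in for allmydata's NotEnoughSharesError."""

def download_with_retry(read_mode_download, write_mode_download):
    # Try the cheaper MODE_READ path first; only when too few shares
    # were found do we pay for the deeper MODE_WRITE query, which may
    # locate additional shares. Any other failure propagates unchanged.
    try:
        return read_mode_download()
    except NotEnoughShares:
        return write_mode_download()
```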
3895hunk ./src/allmydata/mutable/filenode.py 452
3896-    def _try_once_to_download_best_version(self, servermap, mode):
3897-        d = self._update_servermap(servermap, mode)
3898-        d.addCallback(self._once_updated_download_best_version, servermap)
3899-        return d
3900-    def _once_updated_download_best_version(self, ignored, servermap):
3901-        goal = servermap.best_recoverable_version()
3902-        if not goal:
3903-            raise UnrecoverableFileError("no recoverable versions")
3904-        return self._try_once_to_download_version(servermap, goal)
3905+
3906+
3907+    def _record_size(self, mfv):
3908+        """
3909+        I record the size of a mutable file version.
3910+        """
3911+        self._most_recent_size = mfv.get_size()
3912+        return mfv
3913+
3914 
3915     def get_size_of_best_version(self):
3916hunk ./src/allmydata/mutable/filenode.py 463
3917-        d = self.get_servermap(MODE_READ)
3918-        def _got_servermap(smap):
3919-            ver = smap.best_recoverable_version()
3920-            if not ver:
3921-                raise UnrecoverableFileError("no recoverable version")
3922-            return smap.size_of_version(ver)
3923-        d.addCallback(_got_servermap)
3924-        return d
3925+        """
3926+        I return the size of the best version of this mutable file.
3927+
3928+        This is equivalent to calling get_size() on the result of
3929+        get_best_readable_version().
3930+        """
3931+        d = self.get_best_readable_version()
3932+        return d.addCallback(lambda mfv: mfv.get_size())
3933+
3934+
3935+    #################################
3936+    # IMutableFileNode
3937+
3938+    def get_best_mutable_version(self, servermap=None):
3939+        """
3940+        I return a Deferred that fires with a MutableFileVersion
3941+        representing the best readable version of the file that I
3942+        represent. I am like get_best_readable_version, except that I
3943+        will try to make a writable version if I can.
3944+        """
3945+        return self.get_mutable_version(servermap=servermap)
3946+
3947+
3948+    def get_mutable_version(self, servermap=None, version=None):
3949+        """
3950+        I return a version of this mutable file. I return a Deferred
3951+        that fires with a MutableFileVersion.
3952+
3953+        If version is provided, the Deferred will fire with a
3954+        MutableFileVersion initialized with that version. Otherwise, it
3955+        will fire with the best version that I can recover.
3956+
3957+        If servermap is provided, I will use that to find versions
3958+        instead of performing my own servermap update.
3959+        """
3960+        if self.is_readonly():
3961+            return self.get_readable_version(servermap=servermap,
3962+                                             version=version)
3963+
3964+        # get_mutable_version => write intent, so we require that the
3965+        # servermap is updated in MODE_WRITE
3966+        d = self._get_version_from_servermap(MODE_WRITE, servermap, version)
3967+        def _build_version((servermap, smap_version)):
3968+            # these should have been set by the servermap update.
3969+            assert self._secret_holder
3970+            assert self._writekey
3971+
3972+            mfv = MutableFileVersion(self,
3973+                                     servermap,
3974+                                     smap_version,
3975+                                     self._storage_index,
3976+                                     self._storage_broker,
3977+                                     self._readkey,
3978+                                     self._writekey,
3979+                                     self._secret_holder,
3980+                                     history=self._history)
3981+            assert not mfv.is_readonly()
3982+            mfv.set_downloader_hints(self._downloader_hints)
3983+            return mfv
3984+
3985+        return d.addCallback(_build_version)
3986 
3987hunk ./src/allmydata/mutable/filenode.py 525
3988+
3989+    # XXX: I'm uncomfortable with the difference between upload and
3990+    #      overwrite, which, FWICT, is basically that you don't have to
3991+    #      do a servermap update before you overwrite. We split them up
3992+    #      that way anyway, so I guess there's no real difficulty in
3993+    #      offering both ways to callers, but it also makes the
3994+    #      public-facing API cluttery, and makes it hard to discern the
3995+    #      right way of doing things.
3996+
3997+    # In general, we leave it to callers to ensure that they aren't
3998+    # going to cause UncoordinatedWriteErrors when working with
3999+    # MutableFileVersions. We know that the next three operations
4000+    # (upload, overwrite, and modify) will all operate on the same
4001+    # version, so we say that only one of them can be going on at once,
4002+    # and serialize them to ensure that that actually happens, since as
4003+    # the caller in this situation it is our job to do that.
4004     def overwrite(self, new_contents):
4005hunk ./src/allmydata/mutable/filenode.py 542
4006+        """
4007+        I overwrite the contents of the best recoverable version of this
4008+        mutable file with new_contents. This is equivalent to calling
4009+        overwrite on the result of get_best_mutable_version with
4010+        new_contents as an argument. I return a Deferred that eventually
4011+        fires with the results of my replacement process.
4012+        """
4013+        # TODO: Update downloader hints.
4014         return self._do_serialized(self._overwrite, new_contents)
4015hunk ./src/allmydata/mutable/filenode.py 551
4016+
4017+
4018     def _overwrite(self, new_contents):
4019hunk ./src/allmydata/mutable/filenode.py 554
4020+        """
4021+        I am the serialized sibling of overwrite.
4022+        """
4023+        d = self.get_best_mutable_version()
4024+        d.addCallback(lambda mfv: mfv.overwrite(new_contents))
4025+        d.addCallback(self._did_upload, new_contents.get_size())
4026+        return d
4027+
4028+
4029+    def upload(self, new_contents, servermap):
4030+        """
4031+        I overwrite the contents of the best recoverable version of this
4032+        mutable file with new_contents, using servermap instead of
4033+        creating/updating our own servermap. I return a Deferred that
4034+        fires with the results of my upload.
4035+        """
4036+        # TODO: Update downloader hints
4037+        return self._do_serialized(self._upload, new_contents, servermap)
4038+
4039+
4040+    def modify(self, modifier, backoffer=None):
4041+        """
4042+        I modify the contents of the best recoverable version of this
4043+        mutable file with the modifier. This is equivalent to calling
4044+        modify on the result of get_best_mutable_version. I return a
4045+        Deferred that eventually fires with an UploadResults instance
4046+        describing this process.
4047+        """
4048+        # TODO: Update downloader hints.
4049+        return self._do_serialized(self._modify, modifier, backoffer)
4050+
4051+
4052+    def _modify(self, modifier, backoffer):
4053+        """
4054+        I am the serialized sibling of modify.
4055+        """
4056+        d = self.get_best_mutable_version()
4057+        d.addCallback(lambda mfv: mfv.modify(modifier, backoffer))
4058+        return d
4059+
4060+
4061+    def download_version(self, servermap, version, fetch_privkey=False):
4062+        """
4063+        Download the specified version of this mutable file. I return a
4064+        Deferred that fires with the contents of the specified version
4065+        as a bytestring, or errbacks if the file is not recoverable.
4066+        """
4067+        d = self.get_readable_version(servermap, version)
4068+        return d.addCallback(lambda mfv: mfv.download_to_data(fetch_privkey))
4069+
4070+
4071+    def get_servermap(self, mode):
4072+        """
4073+        I return a servermap that has been updated in mode.
4074+
4075+        mode should be one of MODE_READ, MODE_WRITE, MODE_CHECK or
4076+        MODE_ANYTHING. See servermap.py for more on what these mean.
4077+        """
4078+        return self._do_serialized(self._get_servermap, mode)
4079+
4080+
4081+    def _get_servermap(self, mode):
4082+        """
4083+        I am a serialized twin to get_servermap.
4084+        """
4085         servermap = ServerMap()
4086hunk ./src/allmydata/mutable/filenode.py 620
4087-        d = self._update_servermap(servermap, mode=MODE_WRITE)
4088-        d.addCallback(lambda ignored: self._upload(new_contents, servermap))
4089+        d = self._update_servermap(servermap, mode)
4090+        # The servermap will tell us about the most recent size of the
4091+        # file, so we may as well record it so that callers can get
4092+        # more data about us.
4093+        if not self._most_recent_size:
4094+            d.addCallback(self._get_size_from_servermap)
4095+        return d
4096+
4097+
4098+    def _get_size_from_servermap(self, servermap):
4099+        """
4100+        I extract the size of the best version of this file and record
4101+        it in self._most_recent_size. I return the servermap that I was
4102+        given.
4103+        """
4104+        if servermap.recoverable_versions():
4105+            v = servermap.best_recoverable_version()
4106+            size = v[4] # verinfo[4] == size
4107+            self._most_recent_size = size
4108+        return servermap
4109+
4110+
4111+    def _update_servermap(self, servermap, mode):
4112+        u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap,
4113+                             mode)
4114+        if self._history:
4115+            self._history.notify_mapupdate(u.get_status())
4116+        return u.update()
4117+
4118+
4119+    #def set_version(self, version):
4120+        # I can be set in two ways:
4121+        #  1. When the node is created.
4122+        #  2. (for an existing share) when the Servermap is updated
4123+        #     before I am read.
4124+    #    assert version in (MDMF_VERSION, SDMF_VERSION)
4125+    #    self._protocol_version = version
4126+
4127+
4128+    def get_version(self):
4129+        return self._protocol_version
4130+
4131+
4132+    def _do_serialized(self, cb, *args, **kwargs):
4133+        # note: to avoid deadlock, this callable is *not* allowed to invoke
4134+        # other serialized methods within this (or any other)
4135+        # MutableFileNode. The callable should be a bound method of this same
4136+        # MFN instance.
4137+        d = defer.Deferred()
4138+        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
4139+        # we need to put off d.callback until this Deferred is finished being
4140+        # processed. Otherwise the caller's subsequent activities (like,
4141+        # doing other things with this node) can cause reentrancy problems in
4142+        # the Deferred code itself
4143+        self._serializer.addBoth(lambda res: eventually(d.callback, res))
4144+        # add a log.err just in case something really weird happens, because
4145+        # self._serializer stays around forever, therefore we won't see the
4146+        # usual Unhandled Error in Deferred that would give us a hint.
4147+        self._serializer.addErrback(log.err)
4148+        return d
4149+
4150+
4151+    def _upload(self, new_contents, servermap):
4152+        """
4153+        A MutableFileNode still has to have some way of getting
4154+        published initially, which is what I am here for. After that,
4155+        all publishing, updating, modifying and so on happens through
4156+        MutableFileVersions.
4157+        """
4158+        assert self._pubkey, "update_servermap must be called before publish"
4159+
4160+        # Define IPublishInvoker with a set_downloader_hints method?
4161+        # Then have the publisher call that method when it's done publishing?
4162+        p = Publish(self, self._storage_broker, servermap)
4163+        if self._history:
4164+            self._history.notify_publish(p.get_status(),
4165+                                         new_contents.get_size())
4166+        d = p.publish(new_contents)
4167+        d.addCallback(self._did_upload, new_contents.get_size())
4168         return d
4169 
4170 
4171hunk ./src/allmydata/mutable/filenode.py 702
4172+    def set_downloader_hints(self, hints):
4173+        self._downloader_hints = hints
4174+        extensions = hints.values()
4175+        self._uri.set_extension_params(extensions)
4176+
4177+
4178+    def _did_upload(self, res, size):
4179+        self._most_recent_size = size
4180+        return res
4181+
4182+
4183+class MutableFileVersion:
4184+    """
4185+    I represent a specific version (most likely the best version) of a
4186+    mutable file.
4187+
4188+    Since I implement IReadable, instances which hold a
4189+    reference to an instance of me are guaranteed the ability (absent
4190+    connection difficulties or unrecoverable versions) to read the file
4191+    that I represent. Depending on whether I was initialized with a
4192+    write capability or not, I may also provide callers the ability to
4193+    overwrite or modify the contents of the mutable file that I
4194+    reference.
4195+    """
4196+    implements(IMutableFileVersion, IWritable)
4197+
4198+    def __init__(self,
4199+                 node,
4200+                 servermap,
4201+                 version,
4202+                 storage_index,
4203+                 storage_broker,
4204+                 readcap,
4205+                 writekey=None,
4206+                 write_secrets=None,
4207+                 history=None):
4208+
4209+        self._node = node
4210+        self._servermap = servermap
4211+        self._version = version
4212+        self._storage_index = storage_index
4213+        self._write_secrets = write_secrets
4214+        self._history = history
4215+        self._storage_broker = storage_broker
4216+
4217+        #assert isinstance(readcap, IURI)
4218+        self._readcap = readcap
4219+
4220+        self._writekey = writekey
4221+        self._serializer = defer.succeed(None)
4222+
4223+
4224+    def get_sequence_number(self):
4225+        """
4226+        Get the sequence number of the mutable version that I represent.
4227+        """
4228+        return self._version[0] # verinfo[0] == the sequence number
4229+
4230+
4231+    # TODO: Terminology?
4232+    def get_writekey(self):
4233+        """
4234+        I return a writekey or None if I don't have a writekey.
4235+        """
4236+        return self._writekey
4237+
4238+
4239+    def set_downloader_hints(self, hints):
4240+        """
4241+        I set the downloader hints.
4242+        """
4243+        assert isinstance(hints, dict)
4244+
4245+        self._downloader_hints = hints
4246+
4247+
4248+    def get_downloader_hints(self):
4249+        """
4250+        I return the downloader hints.
4251+        """
4252+        return self._downloader_hints
4253+
4254+
4255+    def overwrite(self, new_contents):
4256+        """
4257+        I overwrite the contents of this mutable file version with the
4258+        data in new_contents.
4259+        """
4260+        assert not self.is_readonly()
4261+
4262+        return self._do_serialized(self._overwrite, new_contents)
4263+
4264+
4265+    def _overwrite(self, new_contents):
4266+        assert IMutableUploadable.providedBy(new_contents)
4267+        assert self._servermap.last_update_mode == MODE_WRITE
4268+
4269+        return self._upload(new_contents)
4270+
4271+
4272     def modify(self, modifier, backoffer=None):
4273         """I use a modifier callback to apply a change to the mutable file.
4274         I implement the following pseudocode::
4275hunk ./src/allmydata/mutable/filenode.py 842
4276         backoffer should not invoke any methods on this MutableFileNode
4277         instance, and it needs to be highly conscious of deadlock issues.
4278         """
4279+        assert not self.is_readonly()
4280+
4281         return self._do_serialized(self._modify, modifier, backoffer)
4282hunk ./src/allmydata/mutable/filenode.py 845
4283+
4284+
4285     def _modify(self, modifier, backoffer):
4286hunk ./src/allmydata/mutable/filenode.py 848
4287-        servermap = ServerMap()
4288         if backoffer is None:
4289             backoffer = BackoffAgent().delay
4290hunk ./src/allmydata/mutable/filenode.py 850
4291-        return self._modify_and_retry(servermap, modifier, backoffer, True)
4292-    def _modify_and_retry(self, servermap, modifier, backoffer, first_time):
4293-        d = self._modify_once(servermap, modifier, first_time)
4294+        return self._modify_and_retry(modifier, backoffer, True)
4295+
4296+
4297+    def _modify_and_retry(self, modifier, backoffer, first_time):
4298+        """
4299+        I try to apply modifier to the contents of this version of the
4300+        mutable file. If I succeed, I return an UploadResults instance
4301+        describing my success. If I fail, I try again after waiting for
4302+        a little bit.
4303+        """
4304+        log.msg("doing modify")
4305+        if first_time:
4306+            d = self._update_servermap()
4307+        else:
4308+            # We ran into trouble; do MODE_CHECK so we're a little more
4309+            # careful on subsequent tries.
4310+            d = self._update_servermap(mode=MODE_CHECK)
4311+
4312+        d.addCallback(lambda ignored:
4313+            self._modify_once(modifier, first_time))
4314         def _retry(f):
4315             f.trap(UncoordinatedWriteError)
4316hunk ./src/allmydata/mutable/filenode.py 872
4317+            # Uh oh, it broke. We're allowed to trust the servermap for our
4318+            # first try, but after that we need to update it. It's
4319+            # possible that we've failed due to a race with another
4320+            # uploader, and if the race is to converge correctly, we
4321+            # need to know about that upload.
4322             d2 = defer.maybeDeferred(backoffer, self, f)
4323             d2.addCallback(lambda ignored:
4324hunk ./src/allmydata/mutable/filenode.py 879
4325-                           self._modify_and_retry(servermap, modifier,
4326+                           self._modify_and_retry(modifier,
4327                                                   backoffer, False))
4328             return d2
4329         d.addErrback(_retry)
4330hunk ./src/allmydata/mutable/filenode.py 884
4331         return d
4332-    def _modify_once(self, servermap, modifier, first_time):
4333-        d = self._update_servermap(servermap, MODE_WRITE)
4334-        d.addCallback(self._once_updated_download_best_version, servermap)
4335+
4336+
4337+    def _modify_once(self, modifier, first_time):
4338+        """
4339+        I attempt to apply a modifier to the contents of the mutable
4340+        file.
4341+        """
4342+        assert self._servermap.last_update_mode != MODE_READ
4343+
4344+        # download_to_data is serialized, so we call its unserialized
4345+        # cousin, _try_to_download_data, to avoid deadlock.
4346+        d = self._try_to_download_data()
4347         def _apply(old_contents):
4348hunk ./src/allmydata/mutable/filenode.py 897
4349-            new_contents = modifier(old_contents, servermap, first_time)
4350+            new_contents = modifier(old_contents, self._servermap, first_time)
4351+            precondition((isinstance(new_contents, str) or
4352+                          new_contents is None),
4353+                         "Modifier function must return a string "
4354+                         "or None")
4355+
4356             if new_contents is None or new_contents == old_contents:
4357hunk ./src/allmydata/mutable/filenode.py 904
4358+                log.msg("no changes")
4359                 # no changes need to be made
4360                 if first_time:
4361                     return
4362hunk ./src/allmydata/mutable/filenode.py 912
4363                 # recovery when it observes UCWE, we need to do a second
4364                 # publish. See #551 for details. We'll basically loop until
4365                 # we managed an uncontested publish.
4366-                new_contents = old_contents
4367-            precondition(isinstance(new_contents, str),
4368-                         "Modifier function must return a string or None")
4369-            return self._upload(new_contents, servermap)
4370+                old_uploadable = MutableData(old_contents)
4371+                new_contents = old_uploadable
4372+            else:
4373+                new_contents = MutableData(new_contents)
4374+
4375+            return self._upload(new_contents)
4376         d.addCallback(_apply)
4377         return d
4378 
4379hunk ./src/allmydata/mutable/filenode.py 921
4380-    def get_servermap(self, mode):
4381-        return self._do_serialized(self._get_servermap, mode)
4382-    def _get_servermap(self, mode):
4383-        servermap = ServerMap()
4384-        return self._update_servermap(servermap, mode)
4385-    def _update_servermap(self, servermap, mode):
4386-        u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap,
4387-                             mode)
4388-        if self._history:
4389-            self._history.notify_mapupdate(u.get_status())
4390-        return u.update()
4391 
4392hunk ./src/allmydata/mutable/filenode.py 922
4393-    def download_version(self, servermap, version, fetch_privkey=False):
4394-        return self._do_serialized(self._try_once_to_download_version,
4395-                                   servermap, version, fetch_privkey)
4396-    def _try_once_to_download_version(self, servermap, version,
4397-                                      fetch_privkey=False):
4398-        r = Retrieve(self, servermap, version, fetch_privkey)
4399+    def is_readonly(self):
4400+        """
4401+        I return True if this MutableFileVersion provides no write
4402+        access to the file that it encapsulates, and False if it
4403+        provides the ability to modify the file.
4404+        """
4405+        return self._writekey is None
4406+
4407+
4408+    def is_mutable(self):
4409+        """
4410+        I return True, since mutable files are always mutable by
4411+        somebody.
4412+        """
4413+        return True
4414+
4415+
4416+    def get_storage_index(self):
4417+        """
4418+        I return the storage index of the reference that I encapsulate.
4419+        """
4420+        return self._storage_index
4421+
4422+
4423+    def get_size(self):
4424+        """
4425+        I return the length, in bytes, of this readable object.
4426+        """
4427+        return self._servermap.size_of_version(self._version)
4428+
4429+
4430+    def download_to_data(self, fetch_privkey=False):
4431+        """
4432+        I return a Deferred that fires with the contents of this
4433+        readable object as a byte string.
4434+
4435+        """
4436+        c = consumer.MemoryConsumer()
4437+        d = self.read(c, fetch_privkey=fetch_privkey)
4438+        d.addCallback(lambda mc: "".join(mc.chunks))
4439+        return d
4440+
4441+
4442+    def _try_to_download_data(self):
4443+        """
4444+        I am an unserialized cousin of download_to_data; I am called
4445+        from the children of modify() to download the data associated
4446+        with this mutable version.
4447+        """
4448+        c = consumer.MemoryConsumer()
4449+        # modify will almost certainly write, so we need the privkey.
4450+        d = self._read(c, fetch_privkey=True)
4451+        d.addCallback(lambda mc: "".join(mc.chunks))
4452+        return d
4453+
4454+
4455+    def read(self, consumer, offset=0, size=None, fetch_privkey=False):
4456+        """
4457+        I read a portion (possibly all) of the mutable file that I
4458+        reference into consumer.
4459+        """
4460+        return self._do_serialized(self._read, consumer, offset, size,
4461+                                   fetch_privkey)
4462+
4463+
4464+    def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
4465+        """
4466+        I am the serialized companion of read.
4467+        """
4468+        r = Retrieve(self._node, self._servermap, self._version, fetch_privkey)
4469         if self._history:
4470             self._history.notify_retrieve(r.get_status())
4471hunk ./src/allmydata/mutable/filenode.py 994
4472-        d = r.download()
4473-        d.addCallback(self._downloaded_version)
4474+        d = r.download(consumer, offset, size)
4475         return d
4476hunk ./src/allmydata/mutable/filenode.py 996
4477-    def _downloaded_version(self, data):
4478-        self._most_recent_size = len(data)
4479-        return data
4480 
4481hunk ./src/allmydata/mutable/filenode.py 997
4482-    def upload(self, new_contents, servermap):
4483-        return self._do_serialized(self._upload, new_contents, servermap)
4484-    def _upload(self, new_contents, servermap):
4485-        assert self._pubkey, "update_servermap must be called before publish"
4486-        p = Publish(self, self._storage_broker, servermap)
4487+
4488+    def _do_serialized(self, cb, *args, **kwargs):
4489+        # note: to avoid deadlock, this callable is *not* allowed to invoke
4490+        # other serialized methods within this (or any other)
4491+        # MutableFileVersion. The callable should be a bound method of
4492+        # this same MutableFileVersion instance.
4493+        d = defer.Deferred()
4494+        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
4495+        # we need to put off d.callback until this Deferred is finished being
4496+        # processed. Otherwise the caller's subsequent activities (like,
4497+        # doing other things with this node) can cause reentrancy problems in
4498+        # the Deferred code itself
4499+        self._serializer.addBoth(lambda res: eventually(d.callback, res))
4500+        # add a log.err just in case something really weird happens, because
4501+        # self._serializer stays around forever, therefore we won't see the
4502+        # usual Unhandled Error in Deferred that would give us a hint.
4503+        self._serializer.addErrback(log.err)
4504+        return d
4505+
4506+
4507+    def _upload(self, new_contents):
4508+        #assert self._pubkey, "update_servermap must be called before publish"
4509+        p = Publish(self._node, self._storage_broker, self._servermap)
4510         if self._history:
4511hunk ./src/allmydata/mutable/filenode.py 1021
4512-            self._history.notify_publish(p.get_status(), len(new_contents))
4513+            self._history.notify_publish(p.get_status(),
4514+                                         new_contents.get_size())
4515         d = p.publish(new_contents)
4516hunk ./src/allmydata/mutable/filenode.py 1024
4517-        d.addCallback(self._did_upload, len(new_contents))
4518+        d.addCallback(self._did_upload, new_contents.get_size())
4519         return d
4520hunk ./src/allmydata/mutable/filenode.py 1026
4521+
4522+
4523     def _did_upload(self, res, size):
4524         self._most_recent_size = size
4525         return res
4526hunk ./src/allmydata/mutable/filenode.py 1031
4527+
4528+    def update(self, data, offset):
4529+        """
4530+        Do an update of this mutable file version by inserting data at
4531+        offset within the file. If offset is the EOF, this is an append
4532+        operation. I return a Deferred that fires with the results of
4533+        the update operation when it has completed.
4534+
4535+        In cases where update does not append any data, or where it does
4536+        not append so many blocks that the block count crosses a
4537+        power-of-two boundary, this operation will use roughly
4538+        O(data.get_size()) memory/bandwidth/CPU to perform the update.
4539+        Otherwise, it must download, re-encode, and upload the entire
4540+        file again, which will use O(filesize) resources.
4541+        """
4542+        return self._do_serialized(self._update, data, offset)
4543+
4544+
4545+    def _update(self, data, offset):
4546+        """
4547+        I update the mutable file version represented by this particular
4548+        IMutableVersion by inserting the data in data at the offset
4549+        offset. I return a Deferred that fires when this has been
4550+        completed.
4551+        """
4552+        new_size = data.get_size() + offset
4553+        old_size = self.get_size()
4554+        segment_size = self._version[3]
4555+        num_old_segments = mathutil.div_ceil(old_size,
4556+                                             segment_size)
4557+        num_new_segments = mathutil.div_ceil(new_size,
4558+                                             segment_size)
4559+        log.msg("got %d old segments, %d new segments" % \
4560+                        (num_old_segments, num_new_segments))
4561+
4562+        # We do a whole file re-encode if the file is an SDMF file.
4563+        if self._version[2]: # version[2] == SDMF salt, which MDMF lacks
4564+            log.msg("doing re-encode instead of in-place update")
4565+            return self._do_modify_update(data, offset)
4566+
4567+        # Otherwise, we can replace just the parts that are changing.
4568+        log.msg("updating in place")
4569+        d = self._do_update_update(data, offset)
4570+        d.addCallback(self._decode_and_decrypt_segments, data, offset)
4571+        d.addCallback(self._build_uploadable_and_finish, data, offset)
4572+        return d
4573+
4574+
4575+    def _do_modify_update(self, data, offset):
4576+        """
4577+        I perform a file update by modifying the contents of the file
4578+        after downloading it, then reuploading it. I am less efficient
4579+        than _do_update_update, but am necessary for certain updates.
4580+        """
4581+        def m(old, servermap, first_time):
4582+            start = offset
4583+            rest = offset + data.get_size()
4584+            new = old[:start]
4585+            new += "".join(data.read(data.get_size()))
4586+            new += old[rest:]
4587+            return new
4588+        return self._modify(m, None)
4589+
4590+
4591+    def _do_update_update(self, data, offset):
4592+        """
4593+        I start the Servermap update that gets us the data we need to
4594+        continue the update process. I return a Deferred that fires when
4595+        the servermap update is done.
4596+        """
4597+        assert IMutableUploadable.providedBy(data)
4598+        assert self.is_mutable()
4599+        # offset == self.get_size() is valid and means that we are
4600+        # appending data to the file.
4601+        assert offset <= self.get_size()
4602+
4603+        segsize = self._version[3]
4604+        # We'll need the segment that the data starts in, regardless of
4605+        # what we'll do later.
4606+        start_segment = offset // segsize
4607+
4608+        # We only need the end segment if the data we append does not go
4609+        # beyond the current end-of-file.
4610+        end_segment = start_segment
4611+        if offset + data.get_size() < self.get_size():
4612+            end_data = offset + data.get_size()
4613+            end_segment = end_data // segsize
4614+
4615+        self._start_segment = start_segment
4616+        self._end_segment = end_segment
4617+
4618+        # Now ask for the servermap to be updated in MODE_WRITE with
4619+        # this update range.
4620+        return self._update_servermap(update_range=(start_segment,
4621+                                                    end_segment))
4622+
4623+
4624+    def _decode_and_decrypt_segments(self, ignored, data, offset):
4625+        """
4626+        After the servermap update, I take the encrypted and encoded
4627+        data that the servermap fetched while doing its update and
4628+        transform it into decoded-and-decrypted plaintext that can be
4629+        used by the new uploadable. I return a Deferred that fires with
4630+        the segments.
4631+        """
4632+        r = Retrieve(self._node, self._servermap, self._version)
4633+        # decode: takes in our blocks and salts from the servermap,
4634+        # returns a Deferred that fires with the corresponding plaintext
4635+        # segments. Does not download -- simply takes advantage of
4636+        # existing infrastructure within the Retrieve class to avoid
4637+        # duplicating code.
4638+        sm = self._servermap
4639+        # XXX: If the methods in the servermap don't work as
4640+        # abstractions, you should rewrite them instead of going around
4641+        # them.
4642+        update_data = sm.update_data
4643+        start_segments = {} # shnum -> start segment
4644+        end_segments = {} # shnum -> end segment
4645+        blockhashes = {} # shnum -> blockhash tree
4646+        for (shnum, data) in update_data.iteritems():
4647+            data = [d[1] for d in data if d[0] == self._version]
4648+
4649+            # Every data entry in our list should now be for share shnum
4650+            # of a particular version of the mutable file, so all of the
4651+            # entries should be identical.
4652+            datum = data[0]
4653+            assert filter(lambda x: x != datum, data) == []
4654+
4655+            blockhashes[shnum] = datum[0]
4656+            start_segments[shnum] = datum[1]
4657+            end_segments[shnum] = datum[2]
4658+
4659+        d1 = r.decode(start_segments, self._start_segment)
4660+        d2 = r.decode(end_segments, self._end_segment)
4661+        d3 = defer.succeed(blockhashes)
4662+        return deferredutil.gatherResults([d1, d2, d3])
4663+
4664+
4665+    def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
4666+        """
4667+        After the process has the plaintext segments, I build the
4668+        TransformingUploadable that the publisher will eventually
4669+        re-upload to the grid. I then invoke the publisher with that
4670+        uploadable, and return a Deferred when the publish operation has
4671+        completed without issue.
4672+        """
4673+        u = TransformingUploadable(data, offset,
4674+                                   self._version[3],
4675+                                   segments_and_bht[0],
4676+                                   segments_and_bht[1])
4677+        p = Publish(self._node, self._storage_broker, self._servermap)
4678+        return p.update(u, offset, segments_and_bht[2], self._version)
4679+
4680+
4681+    def _update_servermap(self, mode=MODE_WRITE, update_range=None):
4682+        """
4683+        I update the servermap. I return a Deferred that fires when the
4684+        servermap update is done.
4685+        """
4686+        if update_range:
4687+            u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
4688+                                 self._servermap,
4689+                                 mode=mode,
4690+                                 update_range=update_range)
4691+        else:
4692+            u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
4693+                                 self._servermap,
4694+                                 mode=mode)
4695+        return u.update()
4696}
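The `_do_update_update` method in the patch above derives the range of segments an in-place MDMF update must touch from the write offset, the size of the new data, and the current file size. A standalone sketch of that arithmetic (the function name `affected_segments` is mine, not part of the patch):

```python
def affected_segments(offset, data_size, file_size, segsize):
    """Return (start_segment, end_segment) touched by writing data_size
    bytes at offset into a file of file_size bytes, mirroring the logic
    in MutableFileVersion._do_update_update."""
    # The segment in which the new data begins is always needed.
    start_segment = offset // segsize
    # The end segment only matters when the write stops short of the
    # current end-of-file; a write that reaches or extends past EOF
    # simply appends new segments after the start segment.
    end_segment = start_segment
    if offset + data_size < file_size:
        end_segment = (offset + data_size) // segsize
    return start_segment, end_segment
```

With a 64-byte segment size, a 10-byte write at offset 60 into a 1000-byte file spans segments 0 through 1, while a write that extends past EOF needs only its start segment.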
4697[client: teach client how to create and work with MDMF files
4698Kevan Carstensen <kevan@isnotajoke.com>**20110802014811
4699 Ignore-this: d72fbc4c2ca63f00d9ab9dc2919098ff
4700] {
4701hunk ./src/allmydata/client.py 25
4702 from allmydata.util.time_format import parse_duration, parse_date
4703 from allmydata.stats import StatsProvider
4704 from allmydata.history import History
4705-from allmydata.interfaces import IStatsProducer, RIStubClient
4706+from allmydata.interfaces import IStatsProducer, RIStubClient, \
4707+                                 SDMF_VERSION, MDMF_VERSION
4708 from allmydata.nodemaker import NodeMaker
4709 
4710 
4711hunk ./src/allmydata/client.py 341
4712                                    self.terminator,
4713                                    self.get_encoding_parameters(),
4714                                    self._key_generator)
4715+        default = self.get_config("client", "mutable.format", default="sdmf")
4716+        if default == "mdmf":
4717+            self.mutable_file_default = MDMF_VERSION
4718+        else:
4719+            self.mutable_file_default = SDMF_VERSION
4720 
4721     def get_history(self):
4722         return self.history
4723hunk ./src/allmydata/client.py 477
4724         # may get an opaque node if there were any problems.
4725         return self.nodemaker.create_from_cap(write_uri, read_uri, deep_immutable=deep_immutable, name=name)
4726 
4727-    def create_dirnode(self, initial_children={}):
4728-        d = self.nodemaker.create_new_mutable_directory(initial_children)
4729+    def create_dirnode(self, initial_children={}, version=SDMF_VERSION):
4730+        d = self.nodemaker.create_new_mutable_directory(initial_children, version=version)
4731         return d
4732 
4733     def create_immutable_dirnode(self, children, convergence=None):
4734hunk ./src/allmydata/client.py 484
4735         return self.nodemaker.create_immutable_directory(children, convergence)
4736 
4737-    def create_mutable_file(self, contents=None, keysize=None):
4738-        return self.nodemaker.create_mutable_file(contents, keysize)
4739+    def create_mutable_file(self, contents=None, keysize=None, version=None):
4740+        if not version:
4741+            version = self.mutable_file_default
4742+        return self.nodemaker.create_mutable_file(contents, keysize,
4743+                                                  version=version)
4744 
4745     def upload(self, uploadable):
4746         uploader = self.getServiceNamed("uploader")
4747}
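The client patch above maps the `[client]mutable.format` config value onto a default version constant. As a hedged, standalone sketch of that selection logic — `choose_mutable_default` and its `get_config` parameter are hypothetical stand-ins; only the constant names, the config key, and the `"mdmf"`/`"sdmf"` strings come from the patch (the constant values mirror `allmydata.interfaces` but are restated here as an assumption):

```python
# Hypothetical sketch of the format-selection logic the patch adds to
# Client. The constant values are assumed, not imported.
SDMF_VERSION = 0
MDMF_VERSION = 1

def choose_mutable_default(get_config):
    # Anything other than an explicit "mdmf" (including a missing
    # setting, which falls back to the supplied default) selects SDMF,
    # matching the patch's conservative behavior.
    fmt = get_config("client", "mutable.format", default="sdmf")
    if fmt == "mdmf":
        return MDMF_VERSION
    return SDMF_VERSION
```

Per-call `version=` arguments (as in `create_mutable_file`) then override this default only when explicitly passed.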
4748[nodemaker: teach nodemaker about MDMF caps
4749Kevan Carstensen <kevan@isnotajoke.com>**20110802014926
4750 Ignore-this: 430c73121b6883b99626cfd652fc65c4
4751] {
4752hunk ./src/allmydata/nodemaker.py 82
4753             return self._create_immutable(cap)
4754         if isinstance(cap, uri.CHKFileVerifierURI):
4755             return self._create_immutable_verifier(cap)
4756-        if isinstance(cap, (uri.ReadonlySSKFileURI, uri.WriteableSSKFileURI)):
4757+        if isinstance(cap, (uri.ReadonlySSKFileURI, uri.WriteableSSKFileURI,
4758+                            uri.WritableMDMFFileURI, uri.ReadonlyMDMFFileURI)):
4759             return self._create_mutable(cap)
4760         if isinstance(cap, (uri.DirectoryURI,
4761                             uri.ReadonlyDirectoryURI,
4762hunk ./src/allmydata/nodemaker.py 88
4763                             uri.ImmutableDirectoryURI,
4764-                            uri.LiteralDirectoryURI)):
4765+                            uri.LiteralDirectoryURI,
4766+                            uri.MDMFDirectoryURI,
4767+                            uri.ReadonlyMDMFDirectoryURI)):
4768             filenode = self._create_from_single_cap(cap.get_filenode_cap())
4769             return self._create_dirnode(filenode)
4770         return None
4771}
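The nodemaker patch above teaches cap dispatch about MDMF by extending `isinstance` tuples. A toy sketch of that pattern — all classes here are hypothetical stand-ins for `allmydata.uri` types, and `classify_cap` is invented for illustration:

```python
# Toy sketch of the cap-dispatch pattern the nodemaker uses: node
# creation branches on the cap's class, so supporting MDMF is mostly a
# matter of adding the new URI classes to the right tuples.
class CHKFileURI(object): pass
class WriteableSSKFileURI(object): pass
class ReadonlySSKFileURI(object): pass
class WritableMDMFFileURI(object): pass
class ReadonlyMDMFFileURI(object): pass

MUTABLE_FILE_CAPS = (ReadonlySSKFileURI, WriteableSSKFileURI,
                     WritableMDMFFileURI, ReadonlyMDMFFileURI)

def classify_cap(cap):
    if isinstance(cap, CHKFileURI):
        return "immutable"
    if isinstance(cap, MUTABLE_FILE_CAPS):
        return "mutable"
    return "unknown"
```

The directory branch in the real patch works the same way, unwrapping `cap.get_filenode_cap()` before dispatching.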
4772[mutable: train checker and repairer to work with MDMF mutable files
4773Kevan Carstensen <kevan@isnotajoke.com>**20110802015140
4774 Ignore-this: 8b1928925bed63708b71ab0de8d4306f
4775] {
4776hunk ./src/allmydata/mutable/checker.py 2
4777 
4778-from twisted.internet import defer
4779-from twisted.python import failure
4780-from allmydata import hashtree
4781 from allmydata.uri import from_string
4782hunk ./src/allmydata/mutable/checker.py 3
4783-from allmydata.util import hashutil, base32, idlib, log
4784+from allmydata.util import base32, idlib, log
4785 from allmydata.check_results import CheckAndRepairResults, CheckResults
4786 
4787 from allmydata.mutable.common import MODE_CHECK, CorruptShareError
4788hunk ./src/allmydata/mutable/checker.py 8
4789 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
4790-from allmydata.mutable.layout import unpack_share, SIGNED_PREFIX_LENGTH
4791+from allmydata.mutable.retrieve import Retrieve # for verifying
4792 
4793 class MutableChecker:
4794 
4795hunk ./src/allmydata/mutable/checker.py 25
4796 
4797     def check(self, verify=False, add_lease=False):
4798         servermap = ServerMap()
4799+        # Updating the servermap in MODE_CHECK will stand a good chance
4800+        # of finding all of the shares, and getting a good idea of
4801+        # recoverability, etc, without verifying.
4802         u = ServermapUpdater(self._node, self._storage_broker, self._monitor,
4803                              servermap, MODE_CHECK, add_lease=add_lease)
4804         if self._history:
4805hunk ./src/allmydata/mutable/checker.py 51
4806         if num_recoverable:
4807             self.best_version = servermap.best_recoverable_version()
4808 
4809+        # The file is unhealthy and needs to be repaired if:
4810+        # - There are unrecoverable versions.
4811         if servermap.unrecoverable_versions():
4812             self.need_repair = True
4813hunk ./src/allmydata/mutable/checker.py 55
4814+        # - There isn't a recoverable version.
4815         if num_recoverable != 1:
4816             self.need_repair = True
4817hunk ./src/allmydata/mutable/checker.py 58
4818+        # - The best recoverable version is missing some shares.
4819         if self.best_version:
4820             available_shares = servermap.shares_available()
4821             (num_distinct_shares, k, N) = available_shares[self.best_version]
4822hunk ./src/allmydata/mutable/checker.py 69
4823 
4824     def _verify_all_shares(self, servermap):
4825         # read every byte of each share
4826+        #
4827+        # This logic is going to be very nearly the same as the
4828+        # downloader. I bet we could pass the downloader a flag that
4829+        # makes it do this, and piggyback onto that instead of
4830+        # duplicating a bunch of code.
4831+        #
4832+        # Like:
4833+        #  r = Retrieve(blah, blah, blah, verify=True)
4834+        #  d = r.download()
4835+        #  (wait, wait, wait, d.callback)
4836+        # 
4837+        #  Then, when it has finished, we can check the servermap (which
4838+        #  we provided to Retrieve) to figure out which shares are bad,
4839+        #  since the Retrieve process will have updated the servermap as
4840+        #  it went along.
4841+        #
4842+        #  By passing the verify=True flag to the constructor, we are
4843+        #  telling the downloader a few things.
4844+        #
4845+        #  1. It needs to download all N shares, not just K shares.
4846+        #  2. It doesn't need to decrypt or decode the shares, only
4847+        #     verify them.
4848         if not self.best_version:
4849             return
4850hunk ./src/allmydata/mutable/checker.py 93
4851-        versionmap = servermap.make_versionmap()
4852-        shares = versionmap[self.best_version]
4853-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
4854-         offsets_tuple) = self.best_version
4855-        offsets = dict(offsets_tuple)
4856-        readv = [ (0, offsets["EOF"]) ]
4857-        dl = []
4858-        for (shnum, peerid, timestamp) in shares:
4859-            ss = servermap.connections[peerid]
4860-            d = self._do_read(ss, peerid, self._storage_index, [shnum], readv)
4861-            d.addCallback(self._got_answer, peerid, servermap)
4862-            dl.append(d)
4863-        return defer.DeferredList(dl, fireOnOneErrback=True, consumeErrors=True)
4864 
4865hunk ./src/allmydata/mutable/checker.py 94
4866-    def _do_read(self, ss, peerid, storage_index, shnums, readv):
4867-        # isolate the callRemote to a separate method, so tests can subclass
4868-        # Publish and override it
4869-        d = ss.callRemote("slot_readv", storage_index, shnums, readv)
4870+        r = Retrieve(self._node, servermap, self.best_version, verify=True)
4871+        d = r.download()
4872+        d.addCallback(self._process_bad_shares)
4873         return d
4874 
4875hunk ./src/allmydata/mutable/checker.py 99
4876-    def _got_answer(self, datavs, peerid, servermap):
4877-        for shnum,datav in datavs.items():
4878-            data = datav[0]
4879-            try:
4880-                self._got_results_one_share(shnum, peerid, data)
4881-            except CorruptShareError:
4882-                f = failure.Failure()
4883-                self.need_repair = True
4884-                self.bad_shares.append( (peerid, shnum, f) )
4885-                prefix = data[:SIGNED_PREFIX_LENGTH]
4886-                servermap.mark_bad_share(peerid, shnum, prefix)
4887-                ss = servermap.connections[peerid]
4888-                self.notify_server_corruption(ss, shnum, str(f.value))
4889-
4890-    def check_prefix(self, peerid, shnum, data):
4891-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
4892-         offsets_tuple) = self.best_version
4893-        got_prefix = data[:SIGNED_PREFIX_LENGTH]
4894-        if got_prefix != prefix:
4895-            raise CorruptShareError(peerid, shnum,
4896-                                    "prefix mismatch: share changed while we were reading it")
4897-
4898-    def _got_results_one_share(self, shnum, peerid, data):
4899-        self.check_prefix(peerid, shnum, data)
4900-
4901-        # the [seqnum:signature] pieces are validated by _compare_prefix,
4902-        # which checks their signature against the pubkey known to be
4903-        # associated with this file.
4904 
4905hunk ./src/allmydata/mutable/checker.py 100
4906-        (seqnum, root_hash, IV, k, N, segsize, datalen, pubkey, signature,
4907-         share_hash_chain, block_hash_tree, share_data,
4908-         enc_privkey) = unpack_share(data)
4909-
4910-        # validate [share_hash_chain,block_hash_tree,share_data]
4911-
4912-        leaves = [hashutil.block_hash(share_data)]
4913-        t = hashtree.HashTree(leaves)
4914-        if list(t) != block_hash_tree:
4915-            raise CorruptShareError(peerid, shnum, "block hash tree failure")
4916-        share_hash_leaf = t[0]
4917-        t2 = hashtree.IncompleteHashTree(N)
4918-        # root_hash was checked by the signature
4919-        t2.set_hashes({0: root_hash})
4920-        try:
4921-            t2.set_hashes(hashes=share_hash_chain,
4922-                          leaves={shnum: share_hash_leaf})
4923-        except (hashtree.BadHashError, hashtree.NotEnoughHashesError,
4924-                IndexError), e:
4925-            msg = "corrupt hashes: %s" % (e,)
4926-            raise CorruptShareError(peerid, shnum, msg)
4927-
4928-        # validate enc_privkey: only possible if we have a write-cap
4929-        if not self._node.is_readonly():
4930-            alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
4931-            alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
4932-            if alleged_writekey != self._node.get_writekey():
4933-                raise CorruptShareError(peerid, shnum, "invalid privkey")
4934+    def _process_bad_shares(self, bad_shares):
4935+        if bad_shares:
4936+            self.need_repair = True
4937+        self.bad_shares = bad_shares
4938 
4939hunk ./src/allmydata/mutable/checker.py 105
4940-    def notify_server_corruption(self, ss, shnum, reason):
4941-        ss.callRemoteOnly("advise_corrupt_share",
4942-                          "mutable", self._storage_index, shnum, reason)
4943 
4944     def _count_shares(self, smap, version):
4945         available_shares = smap.shares_available()
4946hunk ./src/allmydata/mutable/repairer.py 5
4947 from zope.interface import implements
4948 from twisted.internet import defer
4949 from allmydata.interfaces import IRepairResults, ICheckResults
4950+from allmydata.mutable.publish import MutableData
4951 
4952 class RepairResults:
4953     implements(IRepairResults)
4954hunk ./src/allmydata/mutable/repairer.py 108
4955             raise RepairRequiresWritecapError("Sorry, repair currently requires a writecap, to set the write-enabler properly.")
4956 
4957         d = self.node.download_version(smap, best_version, fetch_privkey=True)
4958+        d.addCallback(lambda data:
4959+            MutableData(data))
4960         d.addCallback(self.node.upload, smap)
4961         d.addCallback(self.get_results, smap)
4962         return d
4963}
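The checker rework above replaces hand-rolled share validation with a single call to the downloader in verify mode. A hedged sketch of that control flow — `VerifyingRetrieve` and the share dicts are stand-ins for the real `Retrieve` class and servermap entries, not the actual implementation:

```python
# Sketch of the "verify by downloading" pattern: run the downloader
# with verify=True, then treat its bad-share report as the check
# result instead of re-validating hashes in the checker itself.
class VerifyingRetrieve(object):
    def __init__(self, shares, verify=False):
        self._shares = shares
        self._verify = verify

    def download(self):
        # With verify=True the downloader fetches all N shares (not
        # just K), validates each one, and reports the bad ones
        # rather than returning plaintext.
        assert self._verify
        return [s for s in self._shares if s.get("corrupt")]

def verify_all_shares(shares):
    bad_shares = VerifyingRetrieve(shares, verify=True).download()
    return {"need_repair": bool(bad_shares), "bad_shares": bad_shares}
```

This mirrors `_process_bad_shares` in the patch: any nonempty bad-share list flips `need_repair`.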
4964[test/common: Alter common test code to work with MDMF.
4965Kevan Carstensen <kevan@isnotajoke.com>**20110802015643
4966 Ignore-this: e564403182d0030439b168dd9f8726fa
4967 
4968 This mostly has to do with making the test code implement the new
4969 unified filenode interfaces.
4970] {
4971hunk ./src/allmydata/test/common.py 11
4972 from foolscap.api import flushEventualQueue, fireEventually
4973 from allmydata import uri, dirnode, client
4974 from allmydata.introducer.server import IntroducerNode
4975-from allmydata.interfaces import IMutableFileNode, IImmutableFileNode, \
4976-     FileTooLargeError, NotEnoughSharesError, ICheckable
4977+from allmydata.interfaces import IMutableFileNode, IImmutableFileNode,\
4978+                                 NotEnoughSharesError, ICheckable, \
4979+                                 IMutableUploadable, SDMF_VERSION, \
4980+                                 MDMF_VERSION
4981 from allmydata.check_results import CheckResults, CheckAndRepairResults, \
4982      DeepCheckResults, DeepCheckAndRepairResults
4983 from allmydata.mutable.common import CorruptShareError
4984hunk ./src/allmydata/test/common.py 19
4985 from allmydata.mutable.layout import unpack_header
4986+from allmydata.mutable.publish import MutableData
4987+from allmydata.storage.server import storage_index_to_dir
4988 from allmydata.storage.mutable import MutableShareFile
4989 from allmydata.util import hashutil, log, fileutil, pollmixin
4990 from allmydata.util.assertutil import precondition
4991hunk ./src/allmydata/test/common.py 152
4992         consumer.write(data[start:end])
4993         return consumer
4994 
4995+
4996+    def get_best_readable_version(self):
4997+        return defer.succeed(self)
4998+
4999+
5000+    def download_to_data(self):
5001+        return download_to_data(self)
5002+
5003+
5004+    download_best_version = download_to_data
5005+
5006+
5007+    def get_size_of_best_version(self):
5008+        return defer.succeed(self.get_size())
5009+
5010+
5011 def make_chk_file_cap(size):
5012     return uri.CHKFileURI(key=os.urandom(16),
5013                           uri_extension_hash=os.urandom(32),
5014hunk ./src/allmydata/test/common.py 192
5015     MUTABLE_SIZELIMIT = 10000
5016     all_contents = {}
5017     bad_shares = {}
5018+    file_types = {} # storage index => MDMF_VERSION or SDMF_VERSION
5019 
5020     def __init__(self, storage_broker, secret_holder,
5021                  default_encoding_parameters, history):
5022hunk ./src/allmydata/test/common.py 197
5023         self.init_from_cap(make_mutable_file_cap())
5024-    def create(self, contents, key_generator=None, keysize=None):
5025+        self._k = default_encoding_parameters['k']
5026+        self._segsize = default_encoding_parameters['max_segment_size']
5027+    def create(self, contents, key_generator=None, keysize=None,
5028+               version=SDMF_VERSION):
5029+        if version == MDMF_VERSION and \
5030+            isinstance(self.my_uri, (uri.ReadonlySSKFileURI,
5031+                                 uri.WriteableSSKFileURI)):
5032+            self.init_from_cap(make_mdmf_mutable_file_cap())
5033+        self.file_types[self.storage_index] = version
5034         initial_contents = self._get_initial_contents(contents)
5035hunk ./src/allmydata/test/common.py 207
5036-        if len(initial_contents) > self.MUTABLE_SIZELIMIT:
5037-            raise FileTooLargeError("SDMF is limited to one segment, and "
5038-                                    "%d > %d" % (len(initial_contents),
5039-                                                 self.MUTABLE_SIZELIMIT))
5040-        self.all_contents[self.storage_index] = initial_contents
5041+        data = initial_contents.read(initial_contents.get_size())
5042+        data = "".join(data)
5043+        self.all_contents[self.storage_index] = data
5044+        self.my_uri.set_extension_params([self._k, self._segsize])
5045         return defer.succeed(self)
5046     def _get_initial_contents(self, contents):
5047hunk ./src/allmydata/test/common.py 213
5048-        if isinstance(contents, str):
5049-            return contents
5050         if contents is None:
5051hunk ./src/allmydata/test/common.py 214
5052-            return ""
5053+            return MutableData("")
5054+
5055+        if IMutableUploadable.providedBy(contents):
5056+            return contents
5057+
5058         assert callable(contents), "%s should be callable, not %s" % \
5059                (contents, type(contents))
5060         return contents(self)
5061hunk ./src/allmydata/test/common.py 224
5062     def init_from_cap(self, filecap):
5063         assert isinstance(filecap, (uri.WriteableSSKFileURI,
5064-                                    uri.ReadonlySSKFileURI))
5065+                                    uri.ReadonlySSKFileURI,
5066+                                    uri.WritableMDMFFileURI,
5067+                                    uri.ReadonlyMDMFFileURI))
5068         self.my_uri = filecap
5069         self.storage_index = self.my_uri.get_storage_index()
5070hunk ./src/allmydata/test/common.py 229
5071+        if isinstance(filecap, (uri.WritableMDMFFileURI,
5072+                                uri.ReadonlyMDMFFileURI)):
5073+            self.file_types[self.storage_index] = MDMF_VERSION
5074+
5075+        else:
5076+            self.file_types[self.storage_index] = SDMF_VERSION
5077+
5078         return self
5079     def get_cap(self):
5080         return self.my_uri
5081hunk ./src/allmydata/test/common.py 253
5082         return self.my_uri.get_readonly().to_string()
5083     def get_verify_cap(self):
5084         return self.my_uri.get_verify_cap()
5085+    def get_repair_cap(self):
5086+        if self.my_uri.is_readonly():
5087+            return None
5088+        return self.my_uri
5089     def is_readonly(self):
5090         return self.my_uri.is_readonly()
5091     def is_mutable(self):
5092hunk ./src/allmydata/test/common.py 279
5093     def get_storage_index(self):
5094         return self.storage_index
5095 
5096+    def get_servermap(self, mode):
5097+        return defer.succeed(None)
5098+
5099+    def get_version(self):
5100+        assert self.storage_index in self.file_types
5101+        return self.file_types[self.storage_index]
5102+
5103     def check(self, monitor, verify=False, add_lease=False):
5104         r = CheckResults(self.my_uri, self.storage_index)
5105         is_bad = self.bad_shares.get(self.storage_index, None)
5106hunk ./src/allmydata/test/common.py 344
5107         return d
5108 
5109     def download_best_version(self):
5110+        return defer.succeed(self._download_best_version())
5111+
5112+
5113+    def _download_best_version(self, ignored=None):
5114         if isinstance(self.my_uri, uri.LiteralFileURI):
5115hunk ./src/allmydata/test/common.py 349
5116-            return defer.succeed(self.my_uri.data)
5117+            return self.my_uri.data
5118         if self.storage_index not in self.all_contents:
5119hunk ./src/allmydata/test/common.py 351
5120-            return defer.fail(NotEnoughSharesError(None, 0, 3))
5121-        return defer.succeed(self.all_contents[self.storage_index])
5122+            raise NotEnoughSharesError(None, 0, 3)
5123+        return self.all_contents[self.storage_index]
5124+
5125 
5126     def overwrite(self, new_contents):
5127hunk ./src/allmydata/test/common.py 356
5128-        if len(new_contents) > self.MUTABLE_SIZELIMIT:
5129-            raise FileTooLargeError("SDMF is limited to one segment, and "
5130-                                    "%d > %d" % (len(new_contents),
5131-                                                 self.MUTABLE_SIZELIMIT))
5132         assert not self.is_readonly()
5133hunk ./src/allmydata/test/common.py 357
5134-        self.all_contents[self.storage_index] = new_contents
5135+        new_data = new_contents.read(new_contents.get_size())
5136+        new_data = "".join(new_data)
5137+        self.all_contents[self.storage_index] = new_data
5138+        self.my_uri.set_extension_params([self._k, self._segsize])
5139         return defer.succeed(None)
5140     def modify(self, modifier):
5141         # this does not implement FileTooLargeError, but the real one does
5142hunk ./src/allmydata/test/common.py 368
5143     def _modify(self, modifier):
5144         assert not self.is_readonly()
5145         old_contents = self.all_contents[self.storage_index]
5146-        self.all_contents[self.storage_index] = modifier(old_contents, None, True)
5147+        new_data = modifier(old_contents, None, True)
5148+        self.all_contents[self.storage_index] = new_data
5149+        self.my_uri.set_extension_params([self._k, self._segsize])
5150         return None
5151 
5152hunk ./src/allmydata/test/common.py 373
5153+    # As actually implemented, MutableFilenode and MutableFileVersion
5154+    # are distinct. However, nothing in the webapi uses (yet) that
5155+    # distinction -- it just uses the unified download interface
5156+    # provided by get_best_readable_version and read. When we start
5157+    # doing cooler things like LDMF, we will want to revise this code to
5158+    # be less simplistic.
5159+    def get_best_readable_version(self):
5160+        return defer.succeed(self)
5161+
5162+
5163+    def get_best_mutable_version(self):
5164+        return defer.succeed(self)
5165+
5166+    # Ditto for this, which is an implementation of IWritable.
5167+    # XXX: Declare that the same is implemented.
5168+    def update(self, data, offset):
5169+        assert not self.is_readonly()
5170+        def modifier(old, servermap, first_time):
5171+            new = old[:offset] + "".join(data.read(data.get_size()))
5172+            new += old[len(new):]
5173+            return new
5174+        return self.modify(modifier)
5175+
5176+
5177+    def read(self, consumer, offset=0, size=None):
5178+        data = self._download_best_version()
5179+        if size:
5180+            data = data[offset:offset+size]
5181+        consumer.write(data)
5182+        return defer.succeed(consumer)
5183+
5184+
5185 def make_mutable_file_cap():
5186     return uri.WriteableSSKFileURI(writekey=os.urandom(16),
5187                                    fingerprint=os.urandom(32))
5188hunk ./src/allmydata/test/common.py 408
5189-def make_mutable_file_uri():
5190-    return make_mutable_file_cap().to_string()
5191+
5192+def make_mdmf_mutable_file_cap():
5193+    return uri.WritableMDMFFileURI(writekey=os.urandom(16),
5194+                                   fingerprint=os.urandom(32))
5195+
5196+def make_mutable_file_uri(mdmf=False):
5197+    if mdmf:
5198+        uri = make_mdmf_mutable_file_cap()
5199+    else:
5200+        uri = make_mutable_file_cap()
5201+
5202+    return uri.to_string()
5203 
5204 def make_verifier_uri():
5205     return uri.SSKVerifierURI(storage_index=os.urandom(16),
5206hunk ./src/allmydata/test/common.py 425
5207                               fingerprint=os.urandom(32)).to_string()
5208 
5209+def create_mutable_filenode(contents, mdmf=False):
5210+    # XXX: All of these arguments are kind of stupid.
5211+    if mdmf:
5212+        cap = make_mdmf_mutable_file_cap()
5213+    else:
5214+        cap = make_mutable_file_cap()
5215+
5216+    encoding_params = {}
5217+    encoding_params['k'] = 3
5218+    encoding_params['max_segment_size'] = 128*1024
5219+
5220+    filenode = FakeMutableFileNode(None, None, encoding_params, None)
5221+    filenode.init_from_cap(cap)
5222+    if mdmf:
5223+        filenode.create(MutableData(contents), version=MDMF_VERSION)
5224+    else:
5225+        filenode.create(MutableData(contents), version=SDMF_VERSION)
5226+    return filenode
5227+
5228+
5229 class FakeDirectoryNode(dirnode.DirectoryNode):
5230     """This offers IDirectoryNode, but uses a FakeMutableFileNode for the
5231     backing store, so it doesn't go to the grid. The child data is still
5232}
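The test/common patch above repeatedly does `"".join(contents.read(contents.get_size()))`, because an `IMutableUploadable` hands back an iterable of chunks rather than a flat string. A standalone sketch of that surface — `MutableDataSketch` only mimics the `read`/`get_size` shape of `MutableData` (`allmydata.mutable.publish`) and is not the real class:

```python
# Sketch of the IMutableUploadable surface the fake filenode consumes.
class MutableDataSketch(object):
    def __init__(self, data):
        self._data = data

    def get_size(self):
        return len(self._data)

    def read(self, length):
        # Return a list of chunks rather than a flat string; callers
        # are expected to join the pieces themselves, which is why the
        # fake filenode does "".join(contents.read(...)).
        chunk, self._data = self._data[:length], self._data[length:]
        return [chunk]

def drain(uploadable):
    return "".join(uploadable.read(uploadable.get_size()))
```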
5233[immutable/literal.py: Implement interface changes in literal nodes.
5234Kevan Carstensen <kevan@isnotajoke.com>**20110802020814
5235 Ignore-this: 4371e71a50e65ce2607c4d67d3a32171
5236] {
5237hunk ./src/allmydata/immutable/literal.py 106
5238         d.addCallback(lambda lastSent: consumer)
5239         return d
5240 
5241+    # IReadable, IFileNode, IFilesystemNode
5242+    def get_best_readable_version(self):
5243+        return defer.succeed(self)
5244+
5245+
5246+    def download_best_version(self):
5247+        return defer.succeed(self.u.data)
5248+
5249+
5250+    download_to_data = download_best_version
5251+    get_size_of_best_version = get_current_size
5252+
5253hunk ./src/allmydata/test/test_filenode.py 98
5254         def _check_segment(res):
5255             self.failUnlessEqual(res, DATA[1:1+5])
5256         d.addCallback(_check_segment)
5257+        d.addCallback(lambda ignored: fn1.get_best_readable_version())
5258+        d.addCallback(lambda fn2: self.failUnlessEqual(fn1, fn2))
5259+        d.addCallback(lambda ignored:
5260+            fn1.get_size_of_best_version())
5261+        d.addCallback(lambda size:
5262+            self.failUnlessEqual(size, len(DATA)))
5263+        d.addCallback(lambda ignored:
5264+            fn1.download_to_data())
5265+        d.addCallback(lambda data:
5266+            self.failUnlessEqual(data, DATA))
5267+        d.addCallback(lambda ignored:
5268+            fn1.download_best_version())
5269+        d.addCallback(lambda data:
5270+            self.failUnlessEqual(data, DATA))
5271 
5272         return d
5273 
5274}
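The literal-node patch above unifies the filenode interfaces by aliasing: where two interface methods coincide for a node type, one is defined and the other is bound at class level. A sketch of that trick — `LiteralNodeSketch` is a hypothetical stand-in; only the method names mirror the patch:

```python
# Sketch of the interface-unification aliasing used for literal nodes.
class LiteralNodeSketch(object):
    def __init__(self, data):
        self._data = data

    def get_current_size(self):
        return len(self._data)

    def download_best_version(self):
        # A literal cap carries its data inline, so "downloading" is
        # just returning it.
        return self._data

    # For a single-version file, download_to_data (IReadable) and
    # download_best_version (IFileNode) are the same operation, as are
    # the two size accessors.
    download_to_data = download_best_version
    get_size_of_best_version = get_current_size
```

The real methods return Deferreds (`defer.succeed(...)`); that wrapping is omitted here to keep the aliasing visible.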
5275[immutable/filenode: implement unified filenode interface
5276Kevan Carstensen <kevan@isnotajoke.com>**20110802020905
5277 Ignore-this: d9a442fc285157f134f5d1b4607c6a48
5278] {
5279hunk ./src/allmydata/immutable/filenode.py 8
5280 now = time.time
5281 from zope.interface import implements
5282 from twisted.internet import defer
5283-from twisted.internet.interfaces import IConsumer
5284 
5285hunk ./src/allmydata/immutable/filenode.py 9
5286-from allmydata.interfaces import IImmutableFileNode, IUploadResults
5287 from allmydata import uri
5288hunk ./src/allmydata/immutable/filenode.py 10
5289+from twisted.internet.interfaces import IConsumer
5290+from twisted.protocols import basic
5291+from foolscap.api import eventually
5292+from allmydata.interfaces import IImmutableFileNode, ICheckable, \
5293+     IDownloadTarget, IUploadResults
5294+from allmydata.util import dictutil, log, base32, consumer
5295+from allmydata.immutable.checker import Checker
5296 from allmydata.check_results import CheckResults, CheckAndRepairResults
5297 from allmydata.util.dictutil import DictOfSets
5298 from pycryptopp.cipher.aes import AES
5299hunk ./src/allmydata/immutable/filenode.py 285
5300         return self._cnode.check_and_repair(monitor, verify, add_lease)
5301     def check(self, monitor, verify=False, add_lease=False):
5302         return self._cnode.check(monitor, verify, add_lease)
5303+
5304+    def get_best_readable_version(self):
5305+        """
5306+        Return an IReadable of the best version of this file. Since
5307+        immutable files can have only one version, we just return the
5308+        current filenode.
5309+        """
5310+        return defer.succeed(self)
5311+
5312+
5313+    def download_best_version(self):
5314+        """
5315+        Download the best version of this file, returning its contents
5316+        as a bytestring. Since there is only one version of an immutable
5317+        file, we download and return the contents of this file.
5318+        """
5319+        d = consumer.download_to_data(self)
5320+        return d
5321+
5322+    # for an immutable file, download_to_data (specified in IReadable)
5323+    # is the same as download_best_version (specified in IFileNode). For
5324+    # mutable files, the difference is more meaningful, since they can
5325+    # have multiple versions.
5326+    download_to_data = download_best_version
5327+
5328+
5329+    # get_size() (IReadable), get_current_size() (IFilesystemNode), and
5330+    # get_size_of_best_version(IFileNode) are all the same for immutable
5331+    # files.
5332+    get_size_of_best_version = get_current_size
5333hunk ./src/allmydata/test/test_immutable.py 290
5334         d.addCallback(_try_download)
5335         return d
5336 
5337+    def test_download_to_data(self):
5338+        d = self.n.download_to_data()
5339+        d.addCallback(lambda data:
5340+            self.failUnlessEqual(data, common.TEST_DATA))
5341+        return d
5342+
5343+
5344+    def test_download_best_version(self):
5345+        d = self.n.download_best_version()
5346+        d.addCallback(lambda data:
5347+            self.failUnlessEqual(data, common.TEST_DATA))
5348+        return d
5349+
5350+
5351+    def test_get_best_readable_version(self):
5352+        d = self.n.get_best_readable_version()
5353+        d.addCallback(lambda n2:
5354+            self.failUnlessEqual(n2, self.n))
5355+        return d
5356+
5357+    def test_get_size_of_best_version(self):
5358+        d = self.n.get_size_of_best_version()
5359+        d.addCallback(lambda size:
5360+            self.failUnlessEqual(size, len(common.TEST_DATA)))
5361+        return d
5362+
5363 
5364 # XXX extend these tests to show bad behavior of various kinds from servers:
5365 # raising exception from each remove_foo() method, for example
5366}
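The layout patch documents packed-struct format strings such as the share `PREFIX`, `>BQ32s16s`. As a hedged sketch of how Python's `struct` module consumes that string (field meanings taken from the comments in the patch; the helper functions are invented for illustration):

```python
# Packing/unpacking the mutable-share PREFIX: big-endian (>), one-byte
# version (B), 8-byte sequence number (Q), 32-byte root hash (32s),
# 16-byte salt/IV (16s).
import struct

PREFIX = ">BQ32s16s"
PREFIX_LENGTH = struct.calcsize(PREFIX)  # 1 + 8 + 32 + 16 = 57 bytes

def pack_prefix(version, seqnum, root_hash, salt):
    return struct.pack(PREFIX, version, seqnum, root_hash, salt)

def unpack_prefix(data):
    # Only the first PREFIX_LENGTH bytes belong to the prefix; the
    # rest of the share follows it.
    return struct.unpack(PREFIX, data[:PREFIX_LENGTH])
```

The longer `SIGNED_PREFIX` and `HEADER` strings in the patch extend this same pattern with the k/N/segsize/datalen fields and the offset table.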
5367[mutable/layout: Define MDMF share format, write tools for working with MDMF share format
5368Kevan Carstensen <kevan@isnotajoke.com>**20110802021120
5369 Ignore-this: fa76ef4800939e19ba3cbc22a2eab4e
5370 
5371 The changes in layout.py are mostly concerned with the MDMF share
5372 format. In particular, we define read and write proxy objects used by
5373 retrieval, publishing, and other code to write and read the MDMF share
5374 format. We create equivalent proxies for SDMF objects so that these
5375 objects can be suitably general.
5376] {
5377hunk ./src/allmydata/mutable/layout.py 2
5378 
5379-import struct
5380+import struct, math
5381 from allmydata.mutable.common import NeedMoreDataError, UnknownVersionError
5382hunk ./src/allmydata/mutable/layout.py 4
5383+from allmydata.interfaces import HASH_SIZE, SALT_SIZE, SDMF_VERSION, \
5384+                                 MDMF_VERSION, IMutableSlotWriter
5385+from allmydata.util import mathutil, observer
5386+from twisted.python import failure
5387+from twisted.internet import defer
5388+from zope.interface import implements
5389+
5390+
5391+# These strings describe the format of the packed structs they help process
5392+# Here's what they mean:
5393+#
5394+#  PREFIX:
5395+#    >: Big-endian byte order; the most significant byte is first (leftmost).
5396+#    B: The version information; an 8 bit version identifier. Stored as
5397+#       an unsigned char. This is currently 0 for SDMF; our modifications
5398+#       will turn it into 1 for MDMF.
5399+#    Q: The sequence number; effectively a revision number for mutable
5400+#       files. Sequence numbers start at 1 and increase each time the
5401+#       file is changed after upload. Stored as an unsigned long long,
5402+#       which is 8 bytes in length.
5403+#  32s: The root hash of the share hash tree. We use SHA-256d, so this
5404+#       occupies 32 bytes.
5405+#  16s: The salt for the readkey. This is a 16-byte random value, stored as
5406+#       16 characters.
5407+#
5408+#  SIGNED_PREFIX additions, things that are covered by the signature:
5409+#    B: The "k" encoding parameter. We store this as an 8-bit character,
5410+#       which is convenient because our erasure coding scheme cannot
5411+#       encode if you ask for more than 255 pieces.
5412+#    B: The "N" encoding parameter. Stored as an 8-bit character for the
5413+#       same reasons as above.
5414+#    Q: The segment size of the uploaded file. This will essentially be the
5415+#       length of the file in SDMF. An unsigned long long, so we can store
5416+#       files of quite large size.
5417+#    Q: The data length of the uploaded file. Modulo padding, this will be
5418+#       the same as the segment size field in SDMF. Like the segment size
5419+#       field, it is an unsigned long long and can be quite large.
5420+#
5421+#   HEADER additions:
5422+#     L: The offset of the signature of this. An unsigned long.
5423+#     L: The offset of the share hash chain. An unsigned long.
5424+#     L: The offset of the block hash tree. An unsigned long.
5425+#     L: The offset of the share data. An unsigned long.
5426+#     Q: The offset of the encrypted private key. An unsigned long long, to
5427+#        account for the possibility of a lot of share data.
5428+#     Q: The offset of the EOF. An unsigned long long, to account for the
5429+#        possibility of a lot of share data.
5430+#
5431+#  After all of these, we have the following:
5432+#    - The verification key: Occupies the space between the end of the header
5433+#      and the start of the signature (i.e. data[HEADER_LENGTH:o['signature']]).
5434+#    - The signature, which goes from the signature offset to the share hash
5435+#      chain offset.
5436+#    - The share hash chain, which goes from the share hash chain offset to
5437+#      the block hash tree offset.
5438+#    - The share data, which goes from the share data offset to the encrypted
5439+#      private key offset.
5440+#    - The encrypted private key, which goes from its offset to the end of the file.
5441+#
5442+#  The block hash tree in this encoding has only one leaf (SDMF has a single
5443+#  segment), so the share data offset will be 32 bytes past the block hash tree offset.
5444+#  Given this, we may need to check to see how many bytes a reasonably sized
5445+#  block hash tree will take up.
5446 
5447 PREFIX = ">BQ32s16s" # each version has a different prefix
5448 SIGNED_PREFIX = ">BQ32s16s BBQQ" # this is covered by the signature
5449hunk ./src/allmydata/mutable/layout.py 73
5450 SIGNED_PREFIX_LENGTH = struct.calcsize(SIGNED_PREFIX)
5451 HEADER = ">BQ32s16s BBQQ LLLLQQ" # includes offsets
5452 HEADER_LENGTH = struct.calcsize(HEADER)
5453+OFFSETS = ">LLLLQQ"
5454+OFFSETS_LENGTH = struct.calcsize(OFFSETS)
5455 
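The sizes implied by these format strings can be checked mechanically with `struct.calcsize`. A small standalone sketch (the constants are duplicated here so the snippet runs on its own):

```python
import struct

# Duplicated from the layout constants above so this sketch is
# self-contained.
PREFIX = ">BQ32s16s"              # version, seqnum, root hash, IV
SIGNED_PREFIX = ">BQ32s16s BBQQ"  # adds k, N, segsize, datalen
OFFSETS = ">LLLLQQ"               # four L offsets, two Q offsets
HEADER = ">BQ32s16s BBQQ LLLLQQ"  # signed prefix + offset table

# ">" disables alignment padding, so each size is just the sum of the
# field widths.
assert struct.calcsize(PREFIX) == 1 + 8 + 32 + 16            # 57
assert struct.calcsize(SIGNED_PREFIX) == 57 + 1 + 1 + 8 + 8  # 75
assert struct.calcsize(OFFSETS) == 4 * 4 + 2 * 8             # 32
assert struct.calcsize(HEADER) == 75 + 32                    # 107
```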
5456hunk ./src/allmydata/mutable/layout.py 76
5457+# These are still used for some tests.
5458 def unpack_header(data):
5459     o = {}
5460     (version,
5461hunk ./src/allmydata/mutable/layout.py 92
5462      o['EOF']) = struct.unpack(HEADER, data[:HEADER_LENGTH])
5463     return (version, seqnum, root_hash, IV, k, N, segsize, datalen, o)
5464 
5465-def unpack_prefix_and_signature(data):
5466-    assert len(data) >= HEADER_LENGTH, len(data)
5467-    prefix = data[:SIGNED_PREFIX_LENGTH]
5468-
5469-    (version,
5470-     seqnum,
5471-     root_hash,
5472-     IV,
5473-     k, N, segsize, datalen,
5474-     o) = unpack_header(data)
5475-
5476-    if version != 0:
5477-        raise UnknownVersionError("got mutable share version %d, but I only understand version 0" % version)
5478-
5479-    if len(data) < o['share_hash_chain']:
5480-        raise NeedMoreDataError(o['share_hash_chain'],
5481-                                o['enc_privkey'], o['EOF']-o['enc_privkey'])
5482-
5483-    pubkey_s = data[HEADER_LENGTH:o['signature']]
5484-    signature = data[o['signature']:o['share_hash_chain']]
5485-
5486-    return (seqnum, root_hash, IV, k, N, segsize, datalen,
5487-            pubkey_s, signature, prefix)
5488-
5489 def unpack_share(data):
5490     assert len(data) >= HEADER_LENGTH
5491     o = {}
5492hunk ./src/allmydata/mutable/layout.py 139
5493             pubkey, signature, share_hash_chain, block_hash_tree,
5494             share_data, enc_privkey)
5495 
5496-def unpack_share_data(verinfo, hash_and_data):
5497-    (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, o_t) = verinfo
5498-
5499-    # hash_and_data starts with the share_hash_chain, so figure out what the
5500-    # offsets really are
5501-    o = dict(o_t)
5502-    o_share_hash_chain = 0
5503-    o_block_hash_tree = o['block_hash_tree'] - o['share_hash_chain']
5504-    o_share_data = o['share_data'] - o['share_hash_chain']
5505-    o_enc_privkey = o['enc_privkey'] - o['share_hash_chain']
5506-
5507-    share_hash_chain_s = hash_and_data[o_share_hash_chain:o_block_hash_tree]
5508-    share_hash_format = ">H32s"
5509-    hsize = struct.calcsize(share_hash_format)
5510-    assert len(share_hash_chain_s) % hsize == 0, len(share_hash_chain_s)
5511-    share_hash_chain = []
5512-    for i in range(0, len(share_hash_chain_s), hsize):
5513-        chunk = share_hash_chain_s[i:i+hsize]
5514-        (hid, h) = struct.unpack(share_hash_format, chunk)
5515-        share_hash_chain.append( (hid, h) )
5516-    share_hash_chain = dict(share_hash_chain)
5517-    block_hash_tree_s = hash_and_data[o_block_hash_tree:o_share_data]
5518-    assert len(block_hash_tree_s) % 32 == 0, len(block_hash_tree_s)
5519-    block_hash_tree = []
5520-    for i in range(0, len(block_hash_tree_s), 32):
5521-        block_hash_tree.append(block_hash_tree_s[i:i+32])
5522-
5523-    share_data = hash_and_data[o_share_data:o_enc_privkey]
5524-
5525-    return (share_hash_chain, block_hash_tree, share_data)
5526-
5527-
5528-def pack_checkstring(seqnum, root_hash, IV):
5529-    return struct.pack(PREFIX,
5530-                       0, # version,
5531-                       seqnum,
5532-                       root_hash,
5533-                       IV)
5534-
5535 def unpack_checkstring(checkstring):
5536     cs_len = struct.calcsize(PREFIX)
5537     version, seqnum, root_hash, IV = struct.unpack(PREFIX, checkstring[:cs_len])
5538hunk ./src/allmydata/mutable/layout.py 146
5539         raise UnknownVersionError("got mutable share version %d, but I only understand version 0" % version)
5540     return (seqnum, root_hash, IV)
5541 
5542-def pack_prefix(seqnum, root_hash, IV,
5543-                required_shares, total_shares,
5544-                segment_size, data_length):
5545-    prefix = struct.pack(SIGNED_PREFIX,
5546-                         0, # version,
5547-                         seqnum,
5548-                         root_hash,
5549-                         IV,
5550-
5551-                         required_shares,
5552-                         total_shares,
5553-                         segment_size,
5554-                         data_length,
5555-                         )
5556-    return prefix
5557 
5558 def pack_offsets(verification_key_length, signature_length,
5559                  share_hash_chain_length, block_hash_tree_length,
5560hunk ./src/allmydata/mutable/layout.py 192
5561                            encprivkey])
5562     return final_share
5563 
5564+def pack_prefix(seqnum, root_hash, IV,
5565+                required_shares, total_shares,
5566+                segment_size, data_length):
5567+    prefix = struct.pack(SIGNED_PREFIX,
5568+                         0, # version,
5569+                         seqnum,
5570+                         root_hash,
5571+                         IV,
5572+                         required_shares,
5573+                         total_shares,
5574+                         segment_size,
5575+                         data_length,
5576+                         )
5577+    return prefix
5578+
5579+
5580+class SDMFSlotWriteProxy:
5581+    """
5582+    I represent a remote write slot for an SDMF mutable file. I build a
5583+    share in memory, and then write it in one piece to the remote
5584+    server. This mimics how SDMF shares were built before MDMF (and the
5585+    new MDMF uploader), but provides that functionality in a way that
5586+    allows the MDMF uploader to be built without much special-casing for
5587+    file format, which makes the uploader code more readable.
5588+    """
5589+    implements(IMutableSlotWriter)
5590+    def __init__(self,
5591+                 shnum,
5592+                 rref, # a remote reference to a storage server
5593+                 storage_index,
5594+                 secrets, # (write_enabler, renew_secret, cancel_secret)
5595+                 seqnum, # the sequence number of the mutable file
5596+                 required_shares,
5597+                 total_shares,
5598+                 segment_size,
5599+                 data_length): # the length of the original file
5600+        self.shnum = shnum
5601+        self._rref = rref
5602+        self._storage_index = storage_index
5603+        self._secrets = secrets
5604+        self._seqnum = seqnum
5605+        self._required_shares = required_shares
5606+        self._total_shares = total_shares
5607+        self._segment_size = segment_size
5608+        self._data_length = data_length
5609+
5610+        # This is an SDMF file, so it should have only one segment, so,
5611+        # modulo padding of the data length, the segment size and the
5612+        # data length should be the same.
5613+        expected_segment_size = mathutil.next_multiple(data_length,
5614+                                                       self._required_shares)
5615+        assert expected_segment_size == segment_size
5616+
5617+        self._block_size = self._segment_size / self._required_shares
5618+
5619+        # This is meant to mimic how SDMF files were built before MDMF
5620+        # entered the picture: we generate each share in its entirety,
5621+        # then push it off to the storage server in one write. When
5622+        # callers call set_*, they are just populating this dict.
5623+        # finish_publishing will stitch these pieces together into a
5624+        # coherent share, and then write the coherent share to the
5625+        # storage server.
5626+        self._share_pieces = {}
5627+
5628+        # This tells the write logic what checkstring to use when
5629+        # writing remote shares.
5630+        self._testvs = []
5631+
5632+        self._readvs = [(0, struct.calcsize(PREFIX))]
5633+
5634+
5635+    def set_checkstring(self, checkstring_or_seqnum,
5636+                              root_hash=None,
5637+                              salt=None):
5638+        """
5639+        Set the checkstring that I will pass to the remote server when
5640+        writing.
5641+
5642+            @param checkstring_or_seqnum: A packed checkstring to use,
5643+                   or a sequence number. I will treat it as a packed
5644+                   checkstring unless root_hash and salt are also given.
5645+
5645+        Note that implementations can differ in which semantics they
5646+        wish to support for set_checkstring -- they can, for example,
5647+        build the checkstring themselves from its constituents, or
5648+        accept one already packed.
5649+        """
5650+        if root_hash and salt:
5651+            checkstring = struct.pack(PREFIX,
5652+                                      0,
5653+                                      checkstring_or_seqnum,
5654+                                      root_hash,
5655+                                      salt)
5656+        else:
5657+            checkstring = checkstring_or_seqnum
5658+        self._testvs = [(0, len(checkstring), "eq", checkstring)]
5659+
5660+
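Both calling conventions of set_checkstring can be sketched standalone (PREFIX is duplicated from the layout above; the seqnum, hash, and salt values are illustrative):

```python
import struct

PREFIX = ">BQ32s16s"  # version, seqnum, root hash, IV

def make_testv(checkstring_or_seqnum, root_hash=None, salt=None):
    # Mirrors set_checkstring: with root_hash and salt, pack a version-0
    # checkstring from the pieces; otherwise treat the argument as an
    # already-packed checkstring.
    if root_hash and salt:
        checkstring = struct.pack(PREFIX, 0, checkstring_or_seqnum,
                                  root_hash, salt)
    else:
        checkstring = checkstring_or_seqnum
    return (0, len(checkstring), "eq", checkstring)

packed = struct.pack(PREFIX, 0, 3, b"\x11" * 32, b"\x22" * 16)
# Two-argument form and literal form produce the same test vector.
assert make_testv(3, b"\x11" * 32, b"\x22" * 16) == make_testv(packed)
assert make_testv(packed)[1] == struct.calcsize(PREFIX)
```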
5661+    def get_checkstring(self):
5662+        """
5663+        Get the checkstring that I think currently exists on the remote
5664+        server.
5665+        """
5666+        if self._testvs:
5667+            return self._testvs[0][3]
5668+        return ""
5669+
5670+
5671+    def put_block(self, data, segnum, salt):
5672+        """
5673+        Add a block and salt to the share.
5674+        """
5675+        # SDMF files have only one segment
5676+        assert segnum == 0
5677+        assert len(data) == self._block_size
5678+        assert len(salt) == SALT_SIZE
5679+
5680+        self._share_pieces['sharedata'] = data
5681+        self._share_pieces['salt'] = salt
5682+
5683+        # TODO: Figure out something intelligent to return.
5684+        return defer.succeed(None)
5685+
5686+
5687+    def put_encprivkey(self, encprivkey):
5688+        """
5689+        Add the encrypted private key to the share.
5690+        """
5691+        self._share_pieces['encprivkey'] = encprivkey
5692+
5693+        return defer.succeed(None)
5694+
5695+
5696+    def put_blockhashes(self, blockhashes):
5697+        """
5698+        Add the block hash tree to the share.
5699+        """
5700+        assert isinstance(blockhashes, list)
5701+        for h in blockhashes:
5702+            assert len(h) == HASH_SIZE
5703+
5704+        # serialize the blockhashes, then set them.
5705+        blockhashes_s = "".join(blockhashes)
5706+        self._share_pieces['block_hash_tree'] = blockhashes_s
5707+
5708+        return defer.succeed(None)
5709+
5710+
5711+    def put_sharehashes(self, sharehashes):
5712+        """
5713+        Add the share hash chain to the share.
5714+        """
5715+        assert isinstance(sharehashes, dict)
5716+        for h in sharehashes.itervalues():
5717+            assert len(h) == HASH_SIZE
5718+
5719+        # serialize the sharehashes, then set them.
5720+        sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
5721+                                 for i in sorted(sharehashes.keys())])
5722+        self._share_pieces['share_hash_chain'] = sharehashes_s
5723+
5724+        return defer.succeed(None)
5725+
5726+
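The serialization done by put_sharehashes can be shown in isolation; this sketch reimplements the `">H32s"` framing on illustrative hash values:

```python
import struct

HASH_SIZE = 32  # hash length, as in the layout above

def serialize_sharehashes(sharehashes):
    # Each chain entry is a 2-byte node number followed by the 32-byte
    # hash, in sorted node order -- the same ">H32s" records that
    # put_sharehashes builds.
    return b"".join(struct.pack(">H32s", i, sharehashes[i])
                    for i in sorted(sharehashes))

chain = {2: b"\xbb" * HASH_SIZE, 1: b"\xaa" * HASH_SIZE}
s = serialize_sharehashes(chain)
assert len(s) == 2 * (2 + HASH_SIZE)
# Round-trip the first record to show the framing:
nodenum, h = struct.unpack(">H32s", s[:34])
assert (nodenum, h) == (1, b"\xaa" * HASH_SIZE)
```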
5727+    def put_root_hash(self, root_hash):
5728+        """
5729+        Add the root hash to the share.
5730+        """
5731+        assert len(root_hash) == HASH_SIZE
5732+
5733+        self._share_pieces['root_hash'] = root_hash
5734+
5735+        return defer.succeed(None)
5736+
5737+
5738+    def put_salt(self, salt):
5739+        """
5740+        Add a salt to an empty SDMF file.
5741+        """
5742+        assert len(salt) == SALT_SIZE
5743+
5744+        self._share_pieces['salt'] = salt
5745+        self._share_pieces['sharedata'] = ""
5746+
5747+
5748+    def get_signable(self):
5749+        """
5750+        Return the part of the share that needs to be signed.
5751+
5752+        SDMF writers need to sign the packed representation of the
5753+        first eight fields of the remote share, that is:
5754+            - version number (0)
5755+            - sequence number
5756+            - root of the share hash tree
5757+            - salt
5758+            - k
5759+            - n
5760+            - segsize
5761+            - datalen
5762+
5763+        This method is responsible for returning that to callers.
5764+        """
5765+        return struct.pack(SIGNED_PREFIX,
5766+                           0,
5767+                           self._seqnum,
5768+                           self._share_pieces['root_hash'],
5769+                           self._share_pieces['salt'],
5770+                           self._required_shares,
5771+                           self._total_shares,
5772+                           self._segment_size,
5773+                           self._data_length)
5774+
5775+
5776+    def put_signature(self, signature):
5777+        """
5778+        Add the signature to the share.
5779+        """
5780+        self._share_pieces['signature'] = signature
5781+
5782+        return defer.succeed(None)
5783+
5784+
5785+    def put_verification_key(self, verification_key):
5786+        """
5787+        Add the verification key to the share.
5788+        """
5789+        self._share_pieces['verification_key'] = verification_key
5790+
5791+        return defer.succeed(None)
5792+
5793+
5794+    def get_verinfo(self):
5795+        """
5796+        I return my verinfo tuple. This is used by the ServermapUpdater
5797+        to keep track of versions of mutable files.
5798+
5799+        The verinfo tuple for SDMF files contains:
5800+            - seqnum
5801+            - root hash
5802+            - the 16-byte IV (salt)
5803+            - segsize
5804+            - datalen
5805+            - k
5806+            - n
5807+            - prefix (the thing that you sign)
5808+            - a tuple of offsets
5809+
5810+        The verinfo tuple for MDMF files has the same shape, but carries
5811+        a value derived from all of the per-segment salts instead of a
5812+        literal 16-byte IV.
5815+        """
5816+        return (self._seqnum,
5817+                self._share_pieces['root_hash'],
5818+                self._share_pieces['salt'],
5819+                self._segment_size,
5820+                self._data_length,
5821+                self._required_shares,
5822+                self._total_shares,
5823+                self.get_signable(),
5824+                self._get_offsets_tuple())
5825+
5826+    def _get_offsets_dict(self):
5827+        post_offset = HEADER_LENGTH
5828+        offsets = {}
5829+
5830+        verification_key_length = len(self._share_pieces['verification_key'])
5831+        o1 = offsets['signature'] = post_offset + verification_key_length
5832+
5833+        signature_length = len(self._share_pieces['signature'])
5834+        o2 = offsets['share_hash_chain'] = o1 + signature_length
5835+
5836+        share_hash_chain_length = len(self._share_pieces['share_hash_chain'])
5837+        o3 = offsets['block_hash_tree'] = o2 + share_hash_chain_length
5838+
5839+        block_hash_tree_length = len(self._share_pieces['block_hash_tree'])
5840+        o4 = offsets['share_data'] = o3 + block_hash_tree_length
5841+
5842+        share_data_length = len(self._share_pieces['sharedata'])
5843+        o5 = offsets['enc_privkey'] = o4 + share_data_length
5844+
5845+        encprivkey_length = len(self._share_pieces['encprivkey'])
5846+        offsets['EOF'] = o5 + encprivkey_length
5847+        return offsets
5848+
5849+
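The running-sum layout computed by _get_offsets_dict can be sketched standalone. The piece lengths below are illustrative placeholders, not values mandated by the format; only the 107-byte header length comes from the layout above:

```python
HEADER_LENGTH = 107  # struct.calcsize(">BQ32s16s BBQQ LLLLQQ")

def sdmf_offsets(lengths):
    # Mirrors _get_offsets_dict: each field starts where the previous
    # one ends, beginning right after the fixed-size header.
    offsets = {}
    pos = HEADER_LENGTH + lengths['verification_key']
    offsets['signature'] = pos
    pos += lengths['signature']
    offsets['share_hash_chain'] = pos
    pos += lengths['share_hash_chain']
    offsets['block_hash_tree'] = pos
    pos += lengths['block_hash_tree']
    offsets['share_data'] = pos
    pos += lengths['sharedata']
    offsets['enc_privkey'] = pos
    offsets['EOF'] = pos + lengths['encprivkey']
    return offsets

o = sdmf_offsets(dict(verification_key=292, signature=256,
                      share_hash_chain=3 * 34, block_hash_tree=32,
                      sharedata=1000, encprivkey=1216))
assert o['signature'] == 107 + 292
assert o['EOF'] == 107 + 292 + 256 + 102 + 32 + 1000 + 1216
```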
5850+    def _get_offsets_tuple(self):
5851+        offsets = self._get_offsets_dict()
5852+        return tuple([(key, value) for key, value in offsets.items()])
5853+
5854+
5855+    def _pack_offsets(self):
5856+        offsets = self._get_offsets_dict()
5857+        return struct.pack(">LLLLQQ",
5858+                           offsets['signature'],
5859+                           offsets['share_hash_chain'],
5860+                           offsets['block_hash_tree'],
5861+                           offsets['share_data'],
5862+                           offsets['enc_privkey'],
5863+                           offsets['EOF'])
5864+
5865+
5866+    def finish_publishing(self):
5867+        """
5868+        Do anything necessary to finish writing the share to a remote
5869+        server. I require that no further publishing needs to take place
5870+        after this method has been called.
5871+        """
5872+        for k in ["sharedata", "encprivkey", "signature", "verification_key",
5873+                  "share_hash_chain", "block_hash_tree"]:
5874+            assert k in self._share_pieces
5875+        # This is the only method that actually writes something to the
5876+        # remote server.
5877+        # First, we need to pack the share into data that we can write
5878+        # to the remote server in one write.
5879+        offsets = self._pack_offsets()
5880+        prefix = self.get_signable()
5881+        final_share = "".join([prefix,
5882+                               offsets,
5883+                               self._share_pieces['verification_key'],
5884+                               self._share_pieces['signature'],
5885+                               self._share_pieces['share_hash_chain'],
5886+                               self._share_pieces['block_hash_tree'],
5887+                               self._share_pieces['sharedata'],
5888+                               self._share_pieces['encprivkey']])
5889+
5890+        # Our only data vector is going to be writing the final share,
5891+        # in its entirety.
5892+        datavs = [(0, final_share)]
5893+
5894+        if not self._testvs:
5895+            # Our caller has not provided us with another checkstring
5896+            # yet, so we assume that we are writing a new share, and set
5897+            # a test vector that will allow a new share to be written.
5898+            self._testvs = []
5899+            self._testvs.append(tuple([0, 1, "eq", ""]))
5900+
5901+        tw_vectors = {}
5902+        tw_vectors[self.shnum] = (self._testvs, datavs, None)
5903+        return self._rref.callRemote("slot_testv_and_readv_and_writev",
5904+                                     self._storage_index,
5905+                                     self._secrets,
5906+                                     tw_vectors,
5907+                                     # TODO is it useful to read something?
5908+                                     self._readvs)
5909+
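The shape of the `tw_vectors` argument built at the end of finish_publishing can be sketched on its own; this is a minimal reconstruction of that tail logic, not a storage-server client:

```python
def build_tw_vectors(shnum, final_share, testvs):
    # Mirrors the tail of finish_publishing: one data vector writing the
    # whole share at offset 0, guarded by the caller's test vectors. An
    # empty test list is replaced by "offset 0, length 1, must equal the
    # empty string", which only passes if no share exists yet.
    if not testvs:
        testvs = [(0, 1, "eq", b"")]
    datavs = [(0, final_share)]
    return {shnum: (testvs, datavs, None)}

tw = build_tw_vectors(0, b"\x00" * 128, [])
testvs, datavs, new_length = tw[0]
assert testvs == [(0, 1, "eq", b"")]
assert datavs[0][0] == 0 and len(datavs[0][1]) == 128
assert new_length is None
```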
5910+
5911+MDMFHEADER = ">BQ32sBBQQ QQQQQQQQ"
5912+MDMFHEADERWITHOUTOFFSETS = ">BQ32sBBQQ"
5913+MDMFHEADERSIZE = struct.calcsize(MDMFHEADER)
5914+MDMFHEADERWITHOUTOFFSETSSIZE = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
5915+MDMFCHECKSTRING = ">BQ32s"
5916+MDMFSIGNABLEHEADER = ">BQ32sBBQQ"
5917+MDMFOFFSETS = ">QQQQQQQQ"
5918+MDMFOFFSETS_LENGTH = struct.calcsize(MDMFOFFSETS)
5919+
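The MDMF format strings above can be checked against the byte offsets documented in the class comment below (signed part ends at byte 59, offset table ends at byte 123). A self-contained sketch, duplicating the constants:

```python
import struct

# Duplicated MDMF format strings from above, for a self-contained check.
MDMFHEADER = ">BQ32sBBQQ QQQQQQQQ"
MDMFCHECKSTRING = ">BQ32s"
MDMFOFFSETS = ">QQQQQQQQ"

# Signed part: version(1) + seqnum(8) + root hash(32) + k(1) + N(1) +
# segsize(8) + datalen(8) = 59 bytes; the offset table adds eight Qs.
assert struct.calcsize(MDMFCHECKSTRING) == 1 + 8 + 32   # 41
assert struct.calcsize(">BQ32sBBQQ") == 59
assert struct.calcsize(MDMFOFFSETS) == 8 * 8            # 64
assert struct.calcsize(MDMFHEADER) == 59 + 64           # 123
```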
5920+PRIVATE_KEY_SIZE = 1220
5921+SIGNATURE_SIZE = 260
5922+VERIFICATION_KEY_SIZE = 292
5923+# We know we won't have more than 256 shares, so the share hash chain
5924+# holds at most lg(256) = 8 nodes; each node is a 2-byte node number
5925+# plus a 32-byte hash.
5926+SHARE_HASH_CHAIN_SIZE = (2 + HASH_SIZE) * int(math.ceil(math.log(256, 2)))
5927+
5928+class MDMFSlotWriteProxy:
5929+    """
5930+    I represent a remote write slot for an MDMF mutable file.
5931+
5932+    I abstract away from my caller the details of block and salt
5933+    management, and the implementation of the on-disk format for MDMF
5934+    shares.
5935+    """
5936+    implements(IMutableSlotWriter)
5938+    # Expected layout, MDMF:
5939+    # offset:     size:       name:
5940+    #-- signed part --
5941+    # 0           1           version number (01)
5942+    # 1           8           sequence number
5943+    # 9           32          share tree root hash
5944+    # 41          1           The "k" encoding parameter
5945+    # 42          1           The "N" encoding parameter
5946+    # 43          8           The segment size of the uploaded file
5947+    # 51          8           The data length of the original plaintext
5948+    #-- end signed part --
5949+    # 59          8           The offset of the encrypted private key
5950+    # 67          8           The offset of the signature
5951+    # 75          8           The offset of the verification key
5952+    # 83          8           The offset of the end of the v. key.
5953+    # 91          8           The offset of the share data
5954+    # 99          8           The offset of the block hash tree
5955+    # 107         8           The offset of the share hash chain
5956+    # 115         8           The offset of EOF
5957+    #
5958+    # followed by the encrypted private key, signature, verification
5959+    # key, share hash chain, data, and block hash tree. We order the
5960+    # fields that way to make smart downloaders -- downloaders which
5961+    # preemptively read a big part of the share -- possible.
5962+    #
5963+    # The checkstring is the first three fields -- the version number,
5964+    # sequence number, and root hash. This is consistent
5965+    # in meaning to what we have with SDMF files, except now instead of
5966+    # using the literal salt, we use a value derived from all of the
5967+    # salts -- the share hash root.
5968+    #
5969+    # The salt is stored before the block for each segment. The block
5970+    # hash tree is computed over the combination of block and salt for
5971+    # each segment. In this way, we get integrity checking for both
5972+    # block and salt with the current block hash tree arrangement.
5973+    #
5974+    # The ordering of the offsets is different to reflect the dependencies
5975+    # that we'll run into with an MDMF file. The expected write flow is
5976+    # something like this:
5977+    #
5978+    #   0: Initialize with the sequence number, encoding parameters and
5979+    #      data length. From this, we can deduce the number of segments,
5980+    #      and where they should go.. We can also figure out where the
5981+    #      and where they should go. We can also figure out where the
5982+    #      big the share data will be.
5983+    #
5984+    #   1: Encrypt, encode, and upload the file in chunks. Do something
5985+    #      like
5986+    #
5987+    #       put_block(data, segnum, salt)
5988+    #
5989+    #      to write a block and a salt to the remote share. We can do both of
5990+    #      these operations now because we have enough of the offsets to
5991+    #      know where to put them.
5992+    #
5993+    #   2: Put the encrypted private key. Use:
5994+    #
5995+    #        put_encprivkey(encprivkey)
5996+    #
5997+    #      Now that we know the length of the private key, we can fill
5998+    #      in the offset for the block hash tree.
5999+    #
6000+    #   3: We're now in a position to upload the block hash tree for
6001+    #      a share. Put that using something like:
6002+    #       
6003+    #        put_blockhashes(block_hash_tree)
6004+    #
6005+    #      Note that block_hash_tree is a list of hashes -- we'll take
6006+    #      care of the details of serializing that appropriately. When
6007+    #      we get the block hash tree, we are also in a position to
6008+    #      calculate the offset for the share hash chain, and fill that
6009+    #      into the offsets table.
6010+    #
6011+    #   4: We're now in a position to upload the share hash chain for
6012+    #      a share. Do that with something like:
6013+    #     
6014+    #        put_sharehashes(share_hash_chain)
6015+    #
6016+    #      share_hash_chain should be a dictionary mapping shnums to
6017+    #      32-byte hashes -- the wrapper handles serialization.
6018+    #      We'll know where to put the signature at this point, also.
6019+    #      The root of this tree will be put explicitly in the next
6020+    #      step.
6021+    #
6022+    #   5: Before putting the signature, we must first put the
6023+    #      root_hash. Do this with:
6024+    #
6025+    #        put_root_hash(root_hash).
6026+    #     
6027+    #      In terms of knowing where to put this value, it was always
6028+    #      possible to place it, but it makes sense semantically to
6029+    #      place it after the share hash tree, so that's why you do it
6030+    #      in this order.
6031+    #
6032+    #   6: With the root hash put, we can now sign the header. Use:
6033+    #
6034+    #        get_signable()
6035+    #
6036+    #      to get the part of the header that you want to sign, and use:
6037+    #       
6038+    #        put_signature(signature)
6039+    #
6040+    #      to write your signature to the remote server.
6041+    #
6042+    #   7: Add the verification key, and finish. Do:
6043+    #
6044+    #        put_verification_key(key)
6045+    #
6046+    #      and
6047+    #
6048+    #        finish_publishing()
6049+    #
6050+    # Checkstring management:
6051+    #
6052+    # To write to a mutable slot, we have to provide test vectors to ensure
6053+    # that we are writing to the same data that we think we are. These
6054+    # vectors allow us to detect uncoordinated writes; that is, writes
6055+    # where both we and some other shareholder are writing to the
6056+    # mutable slot, and to report those back to the parts of the program
6057+    # doing the writing.
6058+    #
6059+    # With SDMF, this was easy -- all of the share data was written in
6060+    # one go, so it was easy to detect uncoordinated writes, and we only
6061+    # had to do it once. With MDMF, not all of the file is written at
6062+    # once.
6063+    #
6064+    # If a share is new, we write out as much of the header as we can
6065+    # before writing out anything else. This gives other writers a
6066+    # canary that they can use to detect uncoordinated writes, and, if
6067+    # they do the same thing, gives us the same canary. We then update
6068+    # the share. We won't be able to write out two fields of the header
6069+    # -- the share tree hash and the salt hash -- until we finish
6070+    # writing out the share. We only require the writer to provide the
6071+    # initial checkstring, and keep track of what it should be after
6072+    # updates ourselves.
6073+    #
6074+    # If we haven't written anything yet, then on the first write (which
6075+    # will probably be a block + salt of a share), we'll also write out
6076+    # the header. On subsequent passes, we'll expect to see the header.
6077+    # This changes in two places:
6078+    #
6079+    #   - When we write out the salt hash
6080+    #   - When we write out the root of the share hash tree
6081+    #
6082+    # since these values will change the header. It is possible that we
6083+    # can just make those be written in one operation to minimize
6084+    # disruption.
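The write flow in the numbered comment above can be exercised as a sequence. This driver sketch runs against a stub rather than a real MDMFSlotWriteProxy (the stub, the `publish` helper, and all argument values are hypothetical, introduced only to show the ordering of the put_* calls):

```python
class StubWriter(object):
    """Records the order of method calls instead of talking to a server."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def record(*args, **kwargs):
            self.calls.append(name)
            return "signable" if name == "get_signable" else None
        return record

def publish(writer, blocks_and_salts, encprivkey, blockhashes,
            sharehashes, root_hash, sign):
    # Steps 1-7 from the comment above, in order.
    for segnum, (block, salt) in enumerate(blocks_and_salts):
        writer.put_block(block, segnum, salt)
    writer.put_encprivkey(encprivkey)
    writer.put_blockhashes(blockhashes)
    writer.put_sharehashes(sharehashes)
    writer.put_root_hash(root_hash)
    writer.put_signature(sign(writer.get_signable()))
    writer.put_verification_key(b"key")
    writer.finish_publishing()

w = StubWriter()
publish(w, [(b"block", b"s" * 16)], b"privkey", [b"h" * 32],
        {0: b"h" * 32}, b"r" * 32, lambda signable: b"sig")
assert w.calls == ["put_block", "put_encprivkey", "put_blockhashes",
                   "put_sharehashes", "put_root_hash", "get_signable",
                   "put_signature", "put_verification_key",
                   "finish_publishing"]
```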
6085+    def __init__(self,
6086+                 shnum,
6087+                 rref, # a remote reference to a storage server
6088+                 storage_index,
6089+                 secrets, # (write_enabler, renew_secret, cancel_secret)
6090+                 seqnum, # the sequence number of the mutable file
6091+                 required_shares,
6092+                 total_shares,
6093+                 segment_size,
6094+                 data_length): # the length of the original file
6095+        self.shnum = shnum
6096+        self._rref = rref
6097+        self._storage_index = storage_index
6098+        self._seqnum = seqnum
6099+        self._required_shares = required_shares
6100+        assert self.shnum >= 0 and self.shnum < total_shares
6101+        self._total_shares = total_shares
6102+        # We build up the offset table as we write things. It is the
6103+        # last thing we write to the remote server.
6104+        self._offsets = {}
6105+        self._testvs = []
6106+        # This is a list of write vectors that will be sent to our
6107+        # remote server once we are directed to write things there.
6108+        self._writevs = []
6109+        self._secrets = secrets
6110+        # The segment size needs to be a multiple of the k parameter --
6111+        # any padding should have been carried out by the publisher
6112+        # already.
6113+        assert segment_size % required_shares == 0
6114+        self._segment_size = segment_size
6115+        self._data_length = data_length
6116+
6117+        # These are set later -- we define them here so that we can
6118+        # check for their existence easily
6119+
6120+        # This is the root of the share hash tree -- the Merkle tree
6121+        # over the roots of the block hash trees computed for shares in
6122+        # this upload.
6123+        self._root_hash = None
6124+
6125+        # We haven't yet written anything to the remote bucket. By
6126+        # setting this, we tell the _write method as much. The write
6127+        # method will then know that it also needs to add a write vector
6128+        # for the checkstring (or what we have of it) to the first write
6129+        # request. We'll then record that value for future use.  If
6130+        # we're expecting something to be there already, we need to call
6131+        # set_checkstring before we write anything to tell the first
6132+        # write about that.
6133+        self._written = False
6134+
6135+        # When writing data to the storage servers, we get a read vector
6136+        # for free. We'll read the checkstring, which will help us
6137+        # figure out what's gone wrong if a write fails.
6138+        self._readv = [(0, struct.calcsize(MDMFCHECKSTRING))]
6139+
6140+        # We calculate the number of segments because it tells us
6141+        # where the salt part of the file ends/share segment begins,
6142+        # and also because it provides a useful amount of bounds checking.
6143+        self._num_segments = mathutil.div_ceil(self._data_length,
6144+                                               self._segment_size)
6145+        self._block_size = self._segment_size / self._required_shares
6146+        # We also calculate the share size, to help us with block
6147+        # constraints later.
6148+        tail_size = self._data_length % self._segment_size
6149+        if not tail_size:
6150+            self._tail_block_size = self._block_size
6151+        else:
6152+            self._tail_block_size = mathutil.next_multiple(tail_size,
6153+                                                           self._required_shares)
6154+            self._tail_block_size /= self._required_shares
6155+
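The segment/block arithmetic above can be sketched standalone. This reimplements mathutil.div_ceil and mathutil.next_multiple inline (with floor division, matching the original Python 2 integer semantics); the parameter values are illustrative:

```python
def mdmf_block_sizes(data_length, segment_size, k):
    # segment_size is assumed to be a multiple of k, as asserted in
    # __init__ above.
    num_segments = -(-data_length // segment_size)   # div_ceil
    block_size = segment_size // k
    tail_size = data_length % segment_size
    if not tail_size:
        tail_block_size = block_size
    else:
        # Round the tail up to a multiple of k, then split it k ways.
        padded = ((tail_size + k - 1) // k) * k      # next_multiple
        tail_block_size = padded // k
    return num_segments, block_size, tail_block_size

# 250 bytes in 99-byte segments with k=3: two full segments plus a
# 52-byte tail, padded to 54 bytes and split into 18-byte tail blocks.
assert mdmf_block_sizes(250, 99, 3) == (3, 33, 18)
```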
6156+        # We already know where the sharedata starts; right after the end
6157+        # of the header (which is defined as the signable part + the offsets)
6158+        # We can also calculate where the encrypted private key begins
6159+        # from what we now know.
6160+        self._actual_block_size = self._block_size + SALT_SIZE
6161+        data_size = self._actual_block_size * (self._num_segments - 1)
6162+        data_size += self._tail_block_size
6163+        data_size += SALT_SIZE
6164+        self._offsets['enc_privkey'] = MDMFHEADERSIZE
6165+
6166+        # We don't define offsets for these because we want them to be
6167+        # tightly packed -- this allows us to ignore the responsibility
6168+        # of padding individual values, and of removing that padding
6169+        # later. So nonconstant_start is where we start writing
6170+        # nonconstant data.
6171+        nonconstant_start = self._offsets['enc_privkey']
6172+        nonconstant_start += PRIVATE_KEY_SIZE
6173+        nonconstant_start += SIGNATURE_SIZE
6174+        nonconstant_start += VERIFICATION_KEY_SIZE
6175+        nonconstant_start += SHARE_HASH_CHAIN_SIZE
6176+
6177+        self._offsets['share_data'] = nonconstant_start
6178+
6179+        # Finally, we know how big the share data will be, so we can
6180+        # figure out where the block hash tree needs to go.
6181+        # XXX: But this will go away if Zooko wants to make it so that
6182+        # you don't need to know the size of the file before you start
6183+        # uploading it.
6184+        self._offsets['block_hash_tree'] = self._offsets['share_data'] + \
6185+                    data_size
6186+
6187+        # Done. We can now start writing.
6188+
6189+
6190+    def set_checkstring(self,
6191+                        seqnum_or_checkstring,
6192+                        root_hash=None,
6193+                        salt=None):
6194+        """
6195+        Set the checkstring for the given shnum.
6196+
6197+        This can be invoked in one of two ways.
6198+
6199+        With one argument, I assume that you are giving me a literal
6200+        checkstring -- e.g., the output of get_checkstring. I will then
6201+        set that checkstring as it is. This form is used by unit tests.
6202+
6203+        With two arguments, I assume that you are giving me a sequence
6204+        number and root hash to make a checkstring from. In that case, I
6205+        will build a checkstring and set it for you. This form is used
6206+        by the publisher.
6207+
6208+        By default, I assume that I am writing new shares to the grid.
6209+        If you don't explicitly set your own checkstring, I will use
6210+        one that requires that the remote share not exist. You will want
6211+        to use this method if you are updating a share in-place;
6212+        otherwise, writes will fail.
6213+        """
6214+        # You're allowed to overwrite checkstrings with this method;
6215+        # I assume that users know what they are doing when they call
6216+        # it.
6217+        if root_hash:
6218+            checkstring = struct.pack(MDMFCHECKSTRING,
6219+                                      1,
6220+                                      seqnum_or_checkstring,
6221+                                      root_hash)
6222+        else:
6223+            checkstring = seqnum_or_checkstring
6224+
6225+        if checkstring == "":
6226+            # We special-case this, since len("") = 0, but the test
6227+            # vector for an empty share needs length 1 to work on the
6228+            # storage server -- and an empty share is what an empty
6229+            # checkstring means.
6230+            self._testvs = []
6231+        else:
6232+            self._testvs = []
6233+            self._testvs.append((0, len(checkstring), "eq", checkstring))
6234+
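The two-argument form of set_checkstring can be sketched like this (a sketch with assumed values; the `">BQ32s"` layout -- version byte, sequence number, 32-byte root hash -- is inferred from the pack calls above, and bytes literals are used for illustration):

```python
import struct

MDMFCHECKSTRING = ">BQ32s"  # assumed: version, seqnum, root hash

def make_checkstring(seqnum, root_hash):
    # The two-argument form of set_checkstring builds this literal;
    # the one-argument form takes it ready-made.
    return struct.pack(MDMFCHECKSTRING, 1, seqnum, root_hash)

cs = make_checkstring(5, b"\x00" * 32)
# The test vector that set_checkstring queues: compare the first
# len(cs) bytes of the remote share against this literal.
testv = (0, len(cs), "eq", cs)
```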
6235+
6236+    def __repr__(self):
6237+        return "MDMFSlotWriteProxy for share %d" % self.shnum
6238+
6239+
6240+    def get_checkstring(self):
6241+        """
6242+        I return a representation of what the checkstring for this
6243+        share on the server will look like.
6244+
6245+        I am mostly used for tests.
6246+        """
6247+        if self._root_hash:
6248+            roothash = self._root_hash
6249+        else:
6250+            roothash = "\x00" * 32
6251+        return struct.pack(MDMFCHECKSTRING,
6252+                           1,
6253+                           self._seqnum,
6254+                           roothash)
6255+
6256+
6257+    def put_block(self, data, segnum, salt):
6258+        """
6259+        I queue a write vector for the data, salt, and segment number
6260+        provided to me. I return None, as I do not actually cause
6261+        anything to be written yet.
6262+        """
6263+        if segnum >= self._num_segments:
6264+            raise LayoutInvalid("I won't overwrite the block hash tree")
6265+        if len(salt) != SALT_SIZE:
6266+            raise LayoutInvalid("I was given a salt of size %d, but "
6267+                                "I wanted a salt of size %d" % (len(salt), SALT_SIZE))
6268+        if segnum + 1 == self._num_segments:
6269+            if len(data) != self._tail_block_size:
6270+                raise LayoutInvalid("I was given the wrong size block to write")
6271+        elif len(data) != self._block_size:
6272+            raise LayoutInvalid("I was given the wrong size block to write")
6273+
6274+        # We want to write at len(MDMFHEADER) + segnum * block_size.
6275+        offset = self._offsets['share_data'] + \
6276+            (self._actual_block_size * segnum)
6277+        data = salt + data
6278+
6279+        self._writevs.append(tuple([offset, data]))
6280+
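The offset computation in put_block amounts to the following (a sketch; the example numbers are made up):

```python
SALT_SIZE = 16  # per the MDMF layout above

def block_write_offset(share_data_offset, block_size, segnum):
    # Each queued write is salt + block, so on-disk blocks are spaced
    # (block_size + SALT_SIZE) bytes apart from the share_data offset.
    actual_block_size = block_size + SALT_SIZE
    return share_data_offset + actual_block_size * segnum
```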
6281+
6282+    def put_encprivkey(self, encprivkey):
6283+        """
6284+        I queue a write vector for the encrypted private key provided to
6285+        me.
6286+        """
6287+        assert self._offsets
6288+        assert self._offsets['enc_privkey']
6289+        # You shouldn't re-write the encprivkey after the share hash
6290+        # chain is written, since that could cause the private key to
6291+        # run into the share hash chain. Before it writes the share
6292+        # hash chain, the share hash chain writing method writes the
6293+        # offset of the signature. So that's a good indicator of
6294+        # whether or not the share hash chain has been written.
6295+        if "signature" in self._offsets:
6296+            raise LayoutInvalid("You can't put the encrypted private key "
6297+                                "after putting the share hash chain")
6298+
6299+        self._offsets['share_hash_chain'] = self._offsets['enc_privkey'] + \
6300+                len(encprivkey)
6301+
6302+        self._writevs.append(tuple([self._offsets['enc_privkey'], encprivkey]))
6303+
6304+
6305+    def put_blockhashes(self, blockhashes):
6306+        """
6307+        I queue a write vector to put the block hash tree in blockhashes
6308+        onto the remote server.
6309+
6310+        The encrypted private key must be queued before the block hash
6311+        tree, since we need to know how large it is to know where the
6312+        block hash tree should go. The block hash tree must be put
6313+        before the share hash chain, since its size determines the
6314+        offset of the share hash chain.
6315+        """
6316+        assert self._offsets
6317+        assert "block_hash_tree" in self._offsets
6318+
6319+        assert isinstance(blockhashes, list)
6320+
6321+        blockhashes_s = "".join(blockhashes)
6322+        self._offsets['EOF'] = self._offsets['block_hash_tree'] + len(blockhashes_s)
6323+
6324+        self._writevs.append(tuple([self._offsets['block_hash_tree'],
6325+                                  blockhashes_s]))
6326+
6327+
6328+    def put_sharehashes(self, sharehashes):
6329+        """
6330+        I queue a write vector to put the share hash chain in my
6331+        argument onto the remote server.
6332+
6333+        The block hash tree must be queued before the share hash chain,
6334+        since we need to know where the block hash tree ends before we
6335+        can know where the share hash chain starts. The share hash chain
6336+        must be put before the signature, since the length of the packed
6337+        share hash chain determines the offset of the signature. Also,
6338+        semantically, you must know what the root of the block hash tree
6339+        is before you can generate a valid signature.
6340+        """
6341+        assert isinstance(sharehashes, dict)
6342+        assert self._offsets
6343+        if "share_hash_chain" not in self._offsets:
6344+            raise LayoutInvalid("You must put the block hash tree before "
6345+                                "putting the share hash chain")
6346+
6347+        # The signature comes after the share hash chain. If the
6348+        # signature has already been written, we must not write another
6349+        # share hash chain. The signature writes the verification key
6350+        # offset when it gets sent to the remote server, so we look for
6351+        # that.
6352+        if "verification_key" in self._offsets:
6353+            raise LayoutInvalid("You must write the share hash chain "
6354+                                "before you write the signature")
6355+        sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
6356+                                  for i in sorted(sharehashes.keys())])
6357+        self._offsets['signature'] = self._offsets['share_hash_chain'] + \
6358+            len(sharehashes_s)
6359+        self._writevs.append(tuple([self._offsets['share_hash_chain'],
6360+                            sharehashes_s]))
6361+
6362+
6363+    def put_root_hash(self, roothash):
6364+        """
6365+        Put the root hash (the root of the share hash tree) in the
6366+        remote slot.
6367+        """
6368+        # It does not make sense to be able to put the root
6369+        # hash without first putting the share hashes, since you need
6370+        # the share hashes to generate the root hash.
6371+        #
6372+        # Signature is defined by the routine that places the share hash
6373+        # chain, so it's a good thing to look for in finding out whether
6374+        # or not the share hash chain exists on the remote server.
6375+        if len(roothash) != HASH_SIZE:
6376+            raise LayoutInvalid("hashes and salts must be exactly %d bytes"
6377+                                 % HASH_SIZE)
6378+        self._root_hash = roothash
6379+        # To write both of these values, we update the checkstring on
6380+        # the remote server, which includes them
6381+        checkstring = self.get_checkstring()
6382+        self._writevs.append(tuple([0, checkstring]))
6383+        # This write, if successful, changes the checkstring, so we need
6384+        # to update our internal checkstring to be consistent with the
6385+        # one on the server.
6386+
6387+
6388+    def get_signable(self):
6389+        """
6390+        Get the first seven fields of the mutable file; the parts that
6391+        are signed.
6392+        """
6393+        if not self._root_hash:
6394+            raise LayoutInvalid("You need to set the root hash "
6395+                                "before getting something to "
6396+                                "sign")
6397+        return struct.pack(MDMFSIGNABLEHEADER,
6398+                           1,
6399+                           self._seqnum,
6400+                           self._root_hash,
6401+                           self._required_shares,
6402+                           self._total_shares,
6403+                           self._segment_size,
6404+                           self._data_length)
6405+
6406+
6407+    def put_signature(self, signature):
6408+        """
6409+        I queue a write vector for the signature of the MDMF share.
6410+
6411+        I require that the root hash and share hash chain have been put
6412+        to the grid before I will write the signature to the grid.
6413+        """
6414+        # It does not make sense to put a signature without first
6415+        # putting the root hash and the salt hash (since otherwise
6416+        # the signature would be incomplete), so we don't allow that.
6417+        if "signature" not in self._offsets:
6418+            raise LayoutInvalid("You must put the share hash chain "
6419+                                "before putting the signature")
6420+        if not self._root_hash:
6421+            raise LayoutInvalid("You must complete the signed prefix "
6422+                                "before computing a signature")
6423+        # If we put the signature after we put the verification key, we
6424+        # could end up running into the verification key, and will
6425+        # probably screw up the offsets as well. So we don't allow that.
6426+        if "verification_key_end" in self._offsets:
6427+            raise LayoutInvalid("You can't put the signature after the "
6428+                                "verification key")
6429+        # The method that writes the verification key defines the
6430+        # verification_key_end offset when it runs, so look for that.
6431+        self._offsets['verification_key'] = self._offsets['signature'] +\
6432+            len(signature)
6433+        self._writevs.append(tuple([self._offsets['signature'], signature]))
6434+
6435+
6436+    def put_verification_key(self, verification_key):
6437+        """
6438+        I queue a write vector for the verification key.
6439+
6440+        I require that the signature have been written to the storage
6441+        server before I allow the verification key to be written to the
6442+        remote server.
6443+        """
6444+        if "verification_key" not in self._offsets:
6445+            raise LayoutInvalid("You must put the signature before you "
6446+                                "can put the verification key")
6447+
6448+        self._offsets['verification_key_end'] = \
6449+            self._offsets['verification_key'] + len(verification_key)
6450+        assert self._offsets['verification_key_end'] <= self._offsets['share_data']
6451+        self._writevs.append(tuple([self._offsets['verification_key'],
6452+                            verification_key]))
6453+
6454+
6455+    def _get_offsets_tuple(self):
6456+        return tuple([(key, value) for key, value in self._offsets.items()])
6457+
6458+
6459+    def get_verinfo(self):
6460+        return (self._seqnum,
6461+                self._root_hash,
6462+                self._required_shares,
6463+                self._total_shares,
6464+                self._segment_size,
6465+                self._data_length,
6466+                self.get_signable(),
6467+                self._get_offsets_tuple())
6468+
6469+
6470+    def finish_publishing(self):
6471+        """
6472+        I add a write vector for the offsets table, and then cause all
6473+        of the write vectors that I've dealt with so far to be published
6474+        to the remote server, ending the write process.
6475+        """
6476+        if "verification_key_end" not in self._offsets:
6477+            raise LayoutInvalid("You must put the verification key before "
6478+                                "you can publish the offsets")
6479+        offsets_offset = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
6480+        offsets = struct.pack(MDMFOFFSETS,
6481+                              self._offsets['enc_privkey'],
6482+                              self._offsets['share_hash_chain'],
6483+                              self._offsets['signature'],
6484+                              self._offsets['verification_key'],
6485+                              self._offsets['verification_key_end'],
6486+                              self._offsets['share_data'],
6487+                              self._offsets['block_hash_tree'],
6488+                              self._offsets['EOF'])
6489+        self._writevs.append(tuple([offsets_offset, offsets]))
6490+        encoding_parameters_offset = struct.calcsize(MDMFCHECKSTRING)
6491+        params = struct.pack(">BBQQ",
6492+                             self._required_shares,
6493+                             self._total_shares,
6494+                             self._segment_size,
6495+                             self._data_length)
6496+        self._writevs.append(tuple([encoding_parameters_offset, params]))
6497+        return self._write(self._writevs)
6498+
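The put_* methods above enforce a strict call order: each step records the offset that the next step's sanity check looks for. A sketch of that sequence (a summary derived from the checks above, not part of the real class):

```python
def mdmf_publish_order():
    """Return the call order MDMFSlotWriteProxy accepts, as inferred
    from the offset each step defines for its successor's check."""
    return ["put_block (once per segment)",
            "put_encprivkey",        # defines offsets['share_hash_chain']
            "put_blockhashes",       # defines offsets['EOF']
            "put_sharehashes",       # defines offsets['signature']
            "put_root_hash",         # queues the updated checkstring
            "put_signature",         # defines offsets['verification_key']
            "put_verification_key",  # defines offsets['verification_key_end']
            "finish_publishing"]     # packs the offsets table, flushes
```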
6499+
6500+    def _write(self, datavs, on_failure=None, on_success=None):
6501+        """I write the data vectors in datavs to the remote slot."""
6502+        tw_vectors = {}
6503+        if not self._testvs:
6504+            self._testvs = []
6505+            self._testvs.append(tuple([0, 1, "eq", ""]))
6506+        if not self._written:
6507+            # Write a new checkstring to the share when we write it, so
6508+            # that we have something to check later.
6509+            new_checkstring = self.get_checkstring()
6510+            datavs.append((0, new_checkstring))
6511+            def _first_write():
6512+                self._written = True
6513+                self._testvs = [(0, len(new_checkstring), "eq", new_checkstring)]
6514+            on_success = _first_write
6515+        tw_vectors[self.shnum] = (self._testvs, datavs, None)
6516+        d = self._rref.callRemote("slot_testv_and_readv_and_writev",
6517+                                  self._storage_index,
6518+                                  self._secrets,
6519+                                  tw_vectors,
6520+                                  self._readv)
6521+        def _result(results):
6522+            if isinstance(results, failure.Failure) or not results[0]:
6523+                # Do nothing; the write was unsuccessful.
6524+                if on_failure: on_failure()
6525+            else:
6526+                if on_success: on_success()
6527+            return results
6528+        d.addCallback(_result)
6529+        return d
6530+
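The server-side test-and-set semantics that _write relies on can be modeled like this (a sketch of the "eq" test-vector check, with a nonexistent share modeled as empty bytes; not the real storage-server code):

```python
def check_testvs(share_data, testvs):
    # Every "eq" test vector must match the current share contents
    # byte-for-byte for the write to proceed.
    for (offset, length, op, literal) in testvs:
        assert op == "eq"
        if share_data[offset:offset + length] != literal:
            return False
    return True

# A first write uses (0, 1, "eq", b""): a 1-byte read of an empty
# slot returns b"", so the test passes only when the share is new.
```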
6531+
6532+class MDMFSlotReadProxy:
6533+    """
6534+    I read from a mutable slot filled with data written in the MDMF data
6535+    format (which is described above).
6536+
6537+    I can be initialized with some amount of data, which I will use (if
6538+    it is valid) to eliminate some of the need to fetch it from servers.
6539+    """
6540+    def __init__(self,
6541+                 rref,
6542+                 storage_index,
6543+                 shnum,
6544+                 data=""):
6545+        # Start the initialization process.
6546+        self._rref = rref
6547+        self._storage_index = storage_index
6548+        self.shnum = shnum
6549+
6550+        # Before doing anything, the reader is probably going to want to
6551+        # verify that the signature is correct. To do that, they'll need
6552+        # the verification key, and the signature. To get those, we'll
6553+        # need the offset table. So fetch the offset table on the
6554+        # assumption that that will be the first thing that a reader is
6555+        # going to do.
6556+
6557+        # The fact that these encoding parameters are None tells us
6558+        # that we haven't yet fetched them from the remote share, so we
6559+        # should. We could just not set them, but the checks will be
6560+        # easier to read if we don't have to use hasattr.
6561+        self._version_number = None
6562+        self._sequence_number = None
6563+        self._root_hash = None
6564+        # Filled in if we're dealing with an SDMF file. Unused
6565+        # otherwise.
6566+        self._salt = None
6567+        self._required_shares = None
6568+        self._total_shares = None
6569+        self._segment_size = None
6570+        self._data_length = None
6571+        self._offsets = None
6572+
6573+        # If the user has chosen to initialize us with some data, we'll
6574+        # try to satisfy subsequent data requests with that data before
6575+        # asking the storage server for it.
6576+        self._data = data
6577+        # The way callers interact with cache in the filenode returns
6578+        # None if there isn't any cached data, but the way we index the
6579+        # cached data requires a string, so convert None to "".
6580+        if self._data is None:
6581+            self._data = ""
6582+
6583+        self._queue_observers = observer.ObserverList()
6584+        self._queue_errbacks = observer.ObserverList()
6585+        self._readvs = []
6586+
6587+
6588+    def _maybe_fetch_offsets_and_header(self, force_remote=False):
6589+        """
6590+        I fetch the offset table and the header from the remote slot if
6591+        I don't already have them. If I do have them, I do nothing and
6592+        return an empty Deferred.
6593+        """
6594+        if self._offsets:
6595+            return defer.succeed(None)
6596+        # At this point, we may be either SDMF or MDMF. Fetching 123
6597+        # bytes will be enough to get header and offsets for both SDMF and
6598+        # MDMF, though we'll be left with more bytes than we
6599+        # need if this ends up being SDMF. This is probably less
6600+        # expensive than the cost of a second roundtrip.
6601+        readvs = [(0, 123)]
6602+        d = self._read(readvs, force_remote)
6603+        d.addCallback(self._process_encoding_parameters)
6604+        d.addCallback(self._process_offsets)
6605+        return d
6606+
6607+
6608+    def _process_encoding_parameters(self, encoding_parameters):
6609+        assert self.shnum in encoding_parameters
6610+        encoding_parameters = encoding_parameters[self.shnum][0]
6611+        # The first byte is the version number. It will tell us what
6612+        # to do next.
6613+        (verno,) = struct.unpack(">B", encoding_parameters[:1])
6614+        if verno == MDMF_VERSION:
6615+            read_size = MDMFHEADERWITHOUTOFFSETSSIZE
6616+            (verno,
6617+             seqnum,
6618+             root_hash,
6619+             k,
6620+             n,
6621+             segsize,
6622+             datalen) = struct.unpack(MDMFHEADERWITHOUTOFFSETS,
6623+                                      encoding_parameters[:read_size])
6624+            if segsize == 0 and datalen == 0:
6625+                # Empty file, no segments.
6626+                self._num_segments = 0
6627+            else:
6628+                self._num_segments = mathutil.div_ceil(datalen, segsize)
6629+
6630+        elif verno == SDMF_VERSION:
6631+            read_size = SIGNED_PREFIX_LENGTH
6632+            (verno,
6633+             seqnum,
6634+             root_hash,
6635+             salt,
6636+             k,
6637+             n,
6638+             segsize,
6639+             datalen) = struct.unpack(">BQ32s16s BBQQ",
6640+                                encoding_parameters[:SIGNED_PREFIX_LENGTH])
6641+            self._salt = salt
6642+            if segsize == 0 and datalen == 0:
6643+                # empty file
6644+                self._num_segments = 0
6645+            else:
6646+                # non-empty SDMF files have one segment.
6647+                self._num_segments = 1
6648+        else:
6649+            raise UnknownVersionError("You asked me to read mutable file "
6650+                                      "version %d, but I only understand "
6651+                                      "%d and %d" % (verno, SDMF_VERSION,
6652+                                                     MDMF_VERSION))
6653+
6654+        self._version_number = verno
6655+        self._sequence_number = seqnum
6656+        self._root_hash = root_hash
6657+        self._required_shares = k
6658+        self._total_shares = n
6659+        self._segment_size = segsize
6660+        self._data_length = datalen
6661+
6662+        self._block_size = self._segment_size / self._required_shares
6663+        # We can upload empty files, and need to account for this fact
6664+        # so as to avoid zero-division and zero-modulo errors.
6665+        if datalen > 0:
6666+            tail_size = self._data_length % self._segment_size
6667+        else:
6668+            tail_size = 0
6669+        if not tail_size:
6670+            self._tail_block_size = self._block_size
6671+        else:
6672+            self._tail_block_size = mathutil.next_multiple(tail_size,
6673+                                                    self._required_shares)
6674+            self._tail_block_size /= self._required_shares
6675+
6676+        return encoding_parameters
6677+
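The version dispatch above keys off a single leading byte; the header layouts are inferred from the unpack calls (a sketch with made-up field values, using bytes literals):

```python
import struct

# Assumed layouts from the unpack calls above: MDMF header without
# offsets, and the SDMF signed prefix (which also carries the salt).
MDMF_HEADER_WITHOUT_OFFSETS = ">BQ32sBBQQ"
SDMF_SIGNED_PREFIX = ">BQ32s16sBBQQ"

def header_version(header_bytes):
    # The first byte alone tells us which format to parse.
    (verno,) = struct.unpack(">B", header_bytes[:1])
    return verno

mdmf_header = struct.pack(MDMF_HEADER_WITHOUT_OFFSETS,
                          1, 7, b"\x01" * 32, 3, 10, 131072, 250000)
```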
6678+
6679+    def _process_offsets(self, offsets):
6680+        if self._version_number == 0:
6681+            read_size = OFFSETS_LENGTH
6682+            read_offset = SIGNED_PREFIX_LENGTH
6683+            end = read_size + read_offset
6684+            (signature,
6685+             share_hash_chain,
6686+             block_hash_tree,
6687+             share_data,
6688+             enc_privkey,
6689+             EOF) = struct.unpack(">LLLLQQ",
6690+                                  offsets[read_offset:end])
6691+            self._offsets = {}
6692+            self._offsets['signature'] = signature
6693+            self._offsets['share_data'] = share_data
6694+            self._offsets['block_hash_tree'] = block_hash_tree
6695+            self._offsets['share_hash_chain'] = share_hash_chain
6696+            self._offsets['enc_privkey'] = enc_privkey
6697+            self._offsets['EOF'] = EOF
6698+
6699+        elif self._version_number == 1:
6700+            read_offset = MDMFHEADERWITHOUTOFFSETSSIZE
6701+            read_length = MDMFOFFSETS_LENGTH
6702+            end = read_offset + read_length
6703+            (encprivkey,
6704+             sharehashes,
6705+             signature,
6706+             verification_key,
6707+             verification_key_end,
6708+             sharedata,
6709+             blockhashes,
6710+             eof) = struct.unpack(MDMFOFFSETS,
6711+                                  offsets[read_offset:end])
6712+            self._offsets = {}
6713+            self._offsets['enc_privkey'] = encprivkey
6714+            self._offsets['block_hash_tree'] = blockhashes
6715+            self._offsets['share_hash_chain'] = sharehashes
6716+            self._offsets['signature'] = signature
6717+            self._offsets['verification_key'] = verification_key
6718+            self._offsets['verification_key_end'] = \
6719+                verification_key_end
6720+            self._offsets['EOF'] = eof
6721+            self._offsets['share_data'] = sharedata
6722+
6723+
6724+    def get_block_and_salt(self, segnum, queue=False):
6725+        """
6726+        I return (block, salt), where block is the block data and
6727+        salt is the salt used to encrypt that segment.
6728+        """
6729+        d = self._maybe_fetch_offsets_and_header()
6730+        def _then(ignored):
6731+            base_share_offset = self._offsets['share_data']
6732+
6733+            if segnum + 1 > self._num_segments:
6734+                raise LayoutInvalid("Not a valid segment number")
6735+
6736+            if self._version_number == 0:
6737+                share_offset = base_share_offset + self._block_size * segnum
6738+            else:
6739+                share_offset = base_share_offset + (self._block_size + \
6740+                                                    SALT_SIZE) * segnum
6741+            if segnum + 1 == self._num_segments:
6742+                data = self._tail_block_size
6743+            else:
6744+                data = self._block_size
6745+
6746+            if self._version_number == 1:
6747+                data += SALT_SIZE
6748+
6749+            readvs = [(share_offset, data)]
6750+            return readvs
6751+        d.addCallback(_then)
6752+        d.addCallback(lambda readvs:
6753+            self._read(readvs, queue=queue))
6754+        def _process_results(results):
6755+            assert self.shnum in results
6756+            if self._version_number == 0:
6757+                # We only read the share data, but we know the salt from
6758+                # when we fetched the header
6759+                data = results[self.shnum]
6760+                if not data:
6761+                    data = ""
6762+                else:
6763+                    assert len(data) == 1
6764+                    data = data[0]
6765+                salt = self._salt
6766+            else:
6767+                data = results[self.shnum]
6768+                if not data:
6769+                    salt = data = ""
6770+                else:
6771+                    salt_and_data = results[self.shnum][0]
6772+                    salt = salt_and_data[:SALT_SIZE]
6773+                    data = salt_and_data[SALT_SIZE:]
6774+            return data, salt
6775+        d.addCallback(_process_results)
6776+        return d
6777+
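The read-vector computation in get_block_and_salt can be sketched as a pure function (a sketch with made-up numbers; MDMF is version 1 with a salt per block, SDMF is version 0 with one segment and a header-resident salt):

```python
SALT_SIZE = 16

def block_readv(share_data_offset, block_size, tail_block_size,
                num_segments, segnum, version):
    # MDMF stores salt + block per segment, so blocks are spaced
    # (block_size + SALT_SIZE) apart and each read includes the salt.
    per_block = block_size + (SALT_SIZE if version == 1 else 0)
    offset = share_data_offset + per_block * segnum
    length = tail_block_size if segnum + 1 == num_segments else block_size
    if version == 1:
        length += SALT_SIZE
    return (offset, length)
```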
6778+
6779+    def get_blockhashes(self, needed=None, queue=False, force_remote=False):
6780+        """
6781+        I return the block hash tree
6782+
6783+        I take an optional argument, needed, which is a set of indices
6784+        that correspond to hashes that I should fetch. If this argument is
6785+        missing, I will fetch the entire block hash tree; otherwise, I
6786+        may attempt to fetch fewer hashes, based on what needed says
6787+        that I should do. Note that I may fetch as many hashes as I
6788+        want, so long as the set of hashes that I do fetch is a superset
6789+        of the ones that I am asked for, so callers should be prepared
6790+        to tolerate additional hashes.
6791+        """
6792+        # TODO: Return only the parts of the block hash tree necessary
6793+        # to validate the blocknum provided?
6794+        # This is a good idea, but it is hard to implement correctly. It
6795+        # is bad to fetch any one block hash more than once, so we
6796+        # probably just want to fetch the whole thing at once and then
6797+        # serve it.
6798+        if needed == set([]):
6799+            return defer.succeed([])
6800+        d = self._maybe_fetch_offsets_and_header()
6801+        def _then(ignored):
6802+            blockhashes_offset = self._offsets['block_hash_tree']
6803+            if self._version_number == 1:
6804+                blockhashes_length = self._offsets['EOF'] - blockhashes_offset
6805+            else:
6806+                blockhashes_length = self._offsets['share_data'] - blockhashes_offset
6807+            readvs = [(blockhashes_offset, blockhashes_length)]
6808+            return readvs
6809+        d.addCallback(_then)
6810+        d.addCallback(lambda readvs:
6811+            self._read(readvs, queue=queue, force_remote=force_remote))
6812+        def _build_block_hash_tree(results):
6813+            assert self.shnum in results
6814+
6815+            rawhashes = results[self.shnum][0]
6816+            results = [rawhashes[i:i+HASH_SIZE]
6817+                       for i in range(0, len(rawhashes), HASH_SIZE)]
6818+            return results
6819+        d.addCallback(_build_block_hash_tree)
6820+        return d
6821+
6822+
6823+    def get_sharehashes(self, needed=None, queue=False, force_remote=False):
6824+        """
6825+        I return the part of the share hash chain needed to validate
6826+        this share.
6827+
6828+        I take an optional argument, needed. Needed is a set of indices
6829+        that correspond to the hashes that I should fetch. If needed is
6830+        not present, I will fetch and return the entire share hash
6831+        chain. Otherwise, I may fetch and return any part of the share
6832+        hash chain that is a superset of the part that I am asked to
6833+        fetch. Callers should be prepared to deal with more hashes than
6834+        they've asked for.
6835+        """
6836+        if needed == set([]):
6837+            return defer.succeed([])
6838+        d = self._maybe_fetch_offsets_and_header()
6839+
6840+        def _make_readvs(ignored):
6841+            sharehashes_offset = self._offsets['share_hash_chain']
6842+            if self._version_number == 0:
6843+                sharehashes_length = self._offsets['block_hash_tree'] - sharehashes_offset
6844+            else:
6845+                sharehashes_length = self._offsets['signature'] - sharehashes_offset
6846+            readvs = [(sharehashes_offset, sharehashes_length)]
6847+            return readvs
6848+        d.addCallback(_make_readvs)
6849+        d.addCallback(lambda readvs:
6850+            self._read(readvs, queue=queue, force_remote=force_remote))
6851+        def _build_share_hash_chain(results):
6852+            assert self.shnum in results
6853+
6854+            sharehashes = results[self.shnum][0]
6855+            results = [sharehashes[i:i+(HASH_SIZE + 2)]
6856+                       for i in range(0, len(sharehashes), HASH_SIZE + 2)]
6857+            results = dict([struct.unpack(">H32s", data)
6858+                            for data in results])
6859+            return results
6860+        d.addCallback(_build_share_hash_chain)
6861+        return d
6862+
6863+
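Each share hash chain record is an index/hash pair packed as `">H32s"` (34 bytes), which is how `_build_share_hash_chain` above rebuilds its dict. A minimal round-trip of that serialization; the helper names are illustrative, not Tahoe's:

```python
import struct

RECORD = ">H32s"                       # 2-byte share index, 32-byte hash
RECORD_SIZE = struct.calcsize(RECORD)  # 34 bytes per chain entry

def pack_share_hash_chain(chain):
    # chain maps share index -> 32-byte hash
    return b"".join(struct.pack(RECORD, i, h)
                    for i, h in sorted(chain.items()))

def unpack_share_hash_chain(data):
    # slice into fixed-size records, then unpack each into (index, hash)
    return dict(struct.unpack(RECORD, data[i:i + RECORD_SIZE])
                for i in range(0, len(data), RECORD_SIZE))

chain = {0: b"a" * 32, 3: b"b" * 32}
assert unpack_share_hash_chain(pack_share_hash_chain(chain)) == chain
```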
6864+    def get_encprivkey(self, queue=False):
6865+        """
6866+        I return the encrypted private key.
6867+        """
6868+        d = self._maybe_fetch_offsets_and_header()
6869+
6870+        def _make_readvs(ignored):
6871+            privkey_offset = self._offsets['enc_privkey']
6872+            if self._version_number == 0:
6873+                privkey_length = self._offsets['EOF'] - privkey_offset
6874+            else:
6875+                privkey_length = self._offsets['share_hash_chain'] - privkey_offset
6876+            readvs = [(privkey_offset, privkey_length)]
6877+            return readvs
6878+        d.addCallback(_make_readvs)
6879+        d.addCallback(lambda readvs:
6880+            self._read(readvs, queue=queue))
6881+        def _process_results(results):
6882+            assert self.shnum in results
6883+            privkey = results[self.shnum][0]
6884+            return privkey
6885+        d.addCallback(_process_results)
6886+        return d
6887+
6888+
6889+    def get_signature(self, queue=False):
6890+        """
6891+        I return the signature of my share.
6892+        """
6893+        d = self._maybe_fetch_offsets_and_header()
6894+
6895+        def _make_readvs(ignored):
6896+            signature_offset = self._offsets['signature']
6897+            if self._version_number == 1:
6898+                signature_length = self._offsets['verification_key'] - signature_offset
6899+            else:
6900+                signature_length = self._offsets['share_hash_chain'] - signature_offset
6901+            readvs = [(signature_offset, signature_length)]
6902+            return readvs
6903+        d.addCallback(_make_readvs)
6904+        d.addCallback(lambda readvs:
6905+            self._read(readvs, queue=queue))
6906+        def _process_results(results):
6907+            assert self.shnum in results
6908+            signature = results[self.shnum][0]
6909+            return signature
6910+        d.addCallback(_process_results)
6911+        return d
6912+
6913+
6914+    def get_verification_key(self, queue=False):
6915+        """
6916+        I return the verification key.
6917+        """
6918+        d = self._maybe_fetch_offsets_and_header()
6919+
6920+        def _make_readvs(ignored):
6921+            if self._version_number == 1:
6922+                vk_offset = self._offsets['verification_key']
6923+                vk_length = self._offsets['verification_key_end'] - vk_offset
6924+            else:
6925+                vk_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
6926+                vk_length = self._offsets['signature'] - vk_offset
6927+            readvs = [(vk_offset, vk_length)]
6928+            return readvs
6929+        d.addCallback(_make_readvs)
6930+        d.addCallback(lambda readvs:
6931+            self._read(readvs, queue=queue))
6932+        def _process_results(results):
6933+            assert self.shnum in results
6934+            verification_key = results[self.shnum][0]
6935+            return verification_key
6936+        d.addCallback(_process_results)
6937+        return d
6938+
6939+
6940+    def get_encoding_parameters(self):
6941+        """
6942+        I return (k, n, segsize, datalen)
6943+        """
6944+        d = self._maybe_fetch_offsets_and_header()
6945+        d.addCallback(lambda ignored:
6946+            (self._required_shares,
6947+             self._total_shares,
6948+             self._segment_size,
6949+             self._data_length))
6950+        return d
6951+
6952+
6953+    def get_seqnum(self):
6954+        """
6955+        I return the sequence number for this share.
6956+        """
6957+        d = self._maybe_fetch_offsets_and_header()
6958+        d.addCallback(lambda ignored:
6959+            self._sequence_number)
6960+        return d
6961+
6962+
6963+    def get_root_hash(self):
6964+        """
6965+        I return the root of the block hash tree
6966+        """
6967+        d = self._maybe_fetch_offsets_and_header()
6968+        d.addCallback(lambda ignored: self._root_hash)
6969+        return d
6970+
6971+
6972+    def get_checkstring(self):
6973+        """
6974+        I return the packed representation of the following:
6975+
6976+            - version number
6977+            - sequence number
6978+            - root hash
6979+            - salt hash
6980+
6981+        which my users use as a checkstring to detect other writers.
6982+        """
6983+        d = self._maybe_fetch_offsets_and_header()
6984+        def _build_checkstring(ignored):
6985+            if self._salt:
6986+                checkstring = struct.pack(PREFIX,
6987+                                          self._version_number,
6988+                                          self._sequence_number,
6989+                                          self._root_hash,
6990+                                          self._salt)
6991+            else:
6992+                checkstring = struct.pack(MDMFCHECKSTRING,
6993+                                          self._version_number,
6994+                                          self._sequence_number,
6995+                                          self._root_hash)
6996+
6997+            return checkstring
6998+        d.addCallback(_build_checkstring)
6999+        return d
7000+
7001+
7002+    def get_prefix(self, force_remote):
7003+        d = self._maybe_fetch_offsets_and_header(force_remote)
7004+        d.addCallback(lambda ignored:
7005+            self._build_prefix())
7006+        return d
7007+
7008+
7009+    def _build_prefix(self):
7010+        # The prefix is another name for the part of the remote share
7011+        # that gets signed. It consists of everything up to and
7012+        # including the datalength, packed by struct.
7013+        if self._version_number == SDMF_VERSION:
7014+            return struct.pack(SIGNED_PREFIX,
7015+                           self._version_number,
7016+                           self._sequence_number,
7017+                           self._root_hash,
7018+                           self._salt,
7019+                           self._required_shares,
7020+                           self._total_shares,
7021+                           self._segment_size,
7022+                           self._data_length)
7023+
7024+        else:
7025+            return struct.pack(MDMFSIGNABLEHEADER,
7026+                           self._version_number,
7027+                           self._sequence_number,
7028+                           self._root_hash,
7029+                           self._required_shares,
7030+                           self._total_shares,
7031+                           self._segment_size,
7032+                           self._data_length)
7033+
7034+
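The signed prefix built above can be sketched concretely. The two format strings below mirror `_build_prefix`: `SIGNED_PREFIX` is the SDMF signed header, and `MDMFSIGNABLEHEADER` is the same minus the 16-byte salt; both strings and all field values here are assumptions for illustration:

```python
import struct

SIGNED_PREFIX = ">BQ32s16s BBQQ"   # SDMF: includes the 16-byte salt/IV
MDMFSIGNABLEHEADER = ">BQ32s BBQQ" # MDMF: no per-file salt in the prefix

sdmf_prefix = struct.pack(SIGNED_PREFIX,
                          0,          # version number (0 = SDMF)
                          1,          # sequence number
                          b"r" * 32,  # root hash
                          b"s" * 16,  # salt / IV
                          3, 10,      # k (required shares), n (total)
                          36, 36)     # segment size, data length
mdmf_prefix = struct.pack(MDMFSIGNABLEHEADER,
                          1, 1, b"r" * 32, 3, 10, 36, 36)

# big-endian formats have no padding, so sizes are just field sums
assert len(sdmf_prefix) == 1 + 8 + 32 + 16 + 1 + 1 + 8 + 8   # 75
assert len(mdmf_prefix) == len(sdmf_prefix) - 16             # 59
```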
7035+    def _get_offsets_tuple(self):
7036+        # The offsets tuple is another component of the version
7037+        # information tuple. Despite the name, it is currently a copy
7038+        # of our offsets dictionary rather than a literal tuple.
7039+        return self._offsets.copy()
7040+
7041+
7042+    def get_verinfo(self):
7043+        """
7044+        I return my verinfo tuple. This is used by the ServermapUpdater
7045+        to keep track of versions of mutable files.
7046+
7047+        The verinfo tuple for MDMF files contains:
7048+            - seqnum
7049+            - root hash
7050+            - a blank entry (MDMF has no single salt)
7051+            - segsize
7052+            - datalen
7053+            - k
7054+            - n
7055+            - prefix (the thing that you sign)
7056+            - a tuple of offsets
7057+
7058+        We include the blank entry in MDMF so that version information
7059+        tuples have the same shape for both formats.
7060+
7061+        The verinfo tuple for SDMF files is the same, but carries its
7062+        16-byte IV in the otherwise-blank slot.
7063+        """
7064+        d = self._maybe_fetch_offsets_and_header()
7065+        def _build_verinfo(ignored):
7066+            if self._version_number == SDMF_VERSION:
7067+                salt_to_use = self._salt
7068+            else:
7069+                salt_to_use = None
7070+            return (self._sequence_number,
7071+                    self._root_hash,
7072+                    salt_to_use,
7073+                    self._segment_size,
7074+                    self._data_length,
7075+                    self._required_shares,
7076+                    self._total_shares,
7077+                    self._build_prefix(),
7078+                    self._get_offsets_tuple())
7079+        d.addCallback(_build_verinfo)
7080+        return d
7081+
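The docstring above can be made concrete: both formats yield same-shape verinfo tuples, differing only in the salt slot (a 16-byte IV for SDMF, `None` for MDMF). A hypothetical sketch with made-up field values:

```python
# Same-shape verinfo tuples for both formats, per the docstring above.
# All values here are invented for illustration.
def build_verinfo(seqnum, root_hash, salt, segsize, datalen, k, n,
                  prefix, offsets):
    return (seqnum, root_hash, salt, segsize, datalen, k, n,
            prefix, offsets)

sdmf_verinfo = build_verinfo(1, b"r" * 32, b"s" * 16, 36, 36, 3, 10,
                             b"prefix", {})
mdmf_verinfo = build_verinfo(1, b"r" * 32, None, 131073, 10 ** 6, 3, 10,
                             b"prefix", {})
# the salt slot is the only structural difference between the two
assert sdmf_verinfo[2] == b"s" * 16 and mdmf_verinfo[2] is None
```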
7082+
7083+    def flush(self):
7084+        """
7085+        I flush my queue of read vectors.
7086+        """
7087+        d = self._read(self._readvs)
7088+        def _then(results):
7089+            self._readvs = []
7090+            if isinstance(results, failure.Failure):
7091+                self._queue_errbacks.notify(results)
7092+            else:
7093+                self._queue_observers.notify(results)
7094+            self._queue_observers = observer.ObserverList()
7095+            self._queue_errbacks = observer.ObserverList()
7096+        d.addBoth(_then)
7097+
7098+
7099+    def _read(self, readvs, force_remote=False, queue=False):
7100+        unsatisfiable = filter(lambda x: x[0] + x[1] > len(self._data), readvs)
7101+        # TODO: It's entirely possible to tweak this so that it just
7102+        # fulfills the requests that it can, and not demand that all
7103+        # requests are satisfiable before running it.
7104+        if not unsatisfiable and not force_remote:
7105+            results = [self._data[offset:offset+length]
7106+                       for (offset, length) in readvs]
7107+            results = {self.shnum: results}
7108+            return defer.succeed(results)
7109+        else:
7110+            if queue:
7111+                start = len(self._readvs)
7112+                self._readvs += readvs
7113+                end = len(self._readvs)
7114+                def _get_results(results, start, end):
7115+                    if not self.shnum in results:
7116+                        return {self.shnum: [""]}
7117+                    return {self.shnum: results[self.shnum][start:end]}
7118+                d = defer.Deferred()
7119+                d.addCallback(_get_results, start, end)
7120+                self._queue_observers.subscribe(d.callback)
7121+                self._queue_errbacks.subscribe(d.errback)
7122+                return d
7123+            return self._rref.callRemote("slot_readv",
7124+                                         self._storage_index,
7125+                                         [self.shnum],
7126+                                         readvs)
7127+
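The cache-first path in `_read` above answers read vectors from locally held data whenever every request fits, and only otherwise falls back to a remote `slot_readv`. A minimal sketch of that decision; names here are illustrative, not Tahoe's API:

```python
# Answer (offset, length) read vectors from a local cache, or signal
# that a remote read is needed. Simplified stand-in for _read above.
def read_from_cache(cached_data, readvs):
    unsatisfiable = [(o, l) for (o, l) in readvs
                     if o + l > len(cached_data)]
    if unsatisfiable:
        return None  # caller must fall back to a remote slot_readv
    return [cached_data[offset:offset + length]
            for (offset, length) in readvs]

assert read_from_cache(b"abcdefgh", [(0, 2), (4, 3)]) == [b"ab", b"efg"]
assert read_from_cache(b"abc", [(1, 5)]) is None
```

As the TODO in the original notes, a finer-grained version could satisfy the fitting requests locally and fetch only the rest remotely.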
7128+
7129+    def is_sdmf(self):
7130+        """I tell my caller whether my remote file is SDMF (True) or MDMF (False).
7131+        """
7132+        d = self._maybe_fetch_offsets_and_header()
7133+        d.addCallback(lambda ignored:
7134+            self._version_number == 0)
7135+        return d
7136+
7137+
7138+class LayoutInvalid(Exception):
7139+    """
7140+    This isn't a valid MDMF mutable file
7141+    """
7142hunk ./src/allmydata/test/test_storage.py 1
7143-import time, os.path, platform, stat, re, simplejson, struct
7144+import time, os.path, platform, stat, re, simplejson, struct, shutil
7145 
7146 import mock
7147 
7148hunk ./src/allmydata/test/test_storage.py 23
7149 from allmydata.storage.expirer import LeaseCheckingCrawler
7150 from allmydata.immutable.layout import WriteBucketProxy, WriteBucketProxy_v2, \
7151      ReadBucketProxy
7152+from allmydata.mutable.layout import MDMFSlotWriteProxy, MDMFSlotReadProxy, \
7153+                                     LayoutInvalid, MDMFSIGNABLEHEADER, \
7154+                                     SIGNED_PREFIX, MDMFHEADER, \
7155+                                     MDMFOFFSETS, SDMFSlotWriteProxy, \
7156+                                     PRIVATE_KEY_SIZE, \
7157+                                     SIGNATURE_SIZE, \
7158+                                     VERIFICATION_KEY_SIZE, \
7159+                                     SHARE_HASH_CHAIN_SIZE
7160 from allmydata.interfaces import BadWriteEnablerError
7161hunk ./src/allmydata/test/test_storage.py 32
7162-from allmydata.test.common import LoggingServiceParent
7163+from allmydata.test.common import LoggingServiceParent, ShouldFailMixin
7164 from allmydata.test.common_web import WebRenderingMixin
7165 from allmydata.test.no_network import NoNetworkServer
7166 from allmydata.web.storage import StorageStatus, remove_prefix
7167hunk ./src/allmydata/test/test_storage.py 111
7168 
7169 class RemoteBucket:
7170 
7171+    def __init__(self):
7172+        self.read_count = 0
7173+        self.write_count = 0
7174+
7175     def callRemote(self, methname, *args, **kwargs):
7176         def _call():
7177             meth = getattr(self.target, "remote_" + methname)
7178hunk ./src/allmydata/test/test_storage.py 119
7179             return meth(*args, **kwargs)
7180+
7181+        if methname == "slot_readv":
7182+            self.read_count += 1
7183+        if "writev" in methname:
7184+            self.write_count += 1
7185+
7186         return defer.maybeDeferred(_call)
7187 
7188hunk ./src/allmydata/test/test_storage.py 127
7189+
7190 class BucketProxy(unittest.TestCase):
7191     def make_bucket(self, name, size):
7192         basedir = os.path.join("storage", "BucketProxy", name)
7193hunk ./src/allmydata/test/test_storage.py 1310
7194         self.failUnless(os.path.exists(prefixdir), prefixdir)
7195         self.failIf(os.path.exists(bucketdir), bucketdir)
7196 
7197+
7198+class MDMFProxies(unittest.TestCase, ShouldFailMixin):
7199+    def setUp(self):
7200+        self.sparent = LoggingServiceParent()
7201+        self._lease_secret = itertools.count()
7202+        self.ss = self.create("MDMFProxies storage test server")
7203+        self.rref = RemoteBucket()
7204+        self.rref.target = self.ss
7205+        self.secrets = (self.write_enabler("we_secret"),
7206+                        self.renew_secret("renew_secret"),
7207+                        self.cancel_secret("cancel_secret"))
7208+        self.segment = "aaaaaa"
7209+        self.block = "aa"
7210+        self.salt = "a" * 16
7211+        self.block_hash = "a" * 32
7212+        self.block_hash_tree = [self.block_hash for i in xrange(6)]
7213+        self.share_hash = self.block_hash
7214+        self.share_hash_chain = dict([(i, self.share_hash) for i in xrange(6)])
7215+        self.signature = "foobarbaz"
7216+        self.verification_key = "vvvvvv"
7217+        self.encprivkey = "private"
7218+        self.root_hash = self.block_hash
7219+        self.salt_hash = self.root_hash
7220+        self.salt_hash_tree = [self.salt_hash for i in xrange(6)]
7221+        self.block_hash_tree_s = self.serialize_blockhashes(self.block_hash_tree)
7222+        self.share_hash_chain_s = self.serialize_sharehashes(self.share_hash_chain)
7223+        # blockhashes and salt hashes are serialized in the same way,
7224+        # only we lop off the first element and store that in the
7225+        # header.
7226+        self.salt_hash_tree_s = self.serialize_blockhashes(self.salt_hash_tree[1:])
7227+
7228+
7229+    def tearDown(self):
7230+        self.sparent.stopService()
7231+        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
7232+
7233+
7234+    def write_enabler(self, we_tag):
7235+        return hashutil.tagged_hash("we_blah", we_tag)
7236+
7237+
7238+    def renew_secret(self, tag):
7239+        return hashutil.tagged_hash("renew_blah", str(tag))
7240+
7241+
7242+    def cancel_secret(self, tag):
7243+        return hashutil.tagged_hash("cancel_blah", str(tag))
7244+
7245+
7246+    def workdir(self, name):
7247+        basedir = os.path.join("storage", "MutableServer", name)
7248+        return basedir
7249+
7250+
7251+    def create(self, name):
7252+        workdir = self.workdir(name)
7253+        ss = StorageServer(workdir, "\x00" * 20)
7254+        ss.setServiceParent(self.sparent)
7255+        return ss
7256+
7257+
7258+    def build_test_mdmf_share(self, tail_segment=False, empty=False):
7259+        # Start with the checkstring
7260+        data = struct.pack(">BQ32s",
7261+                           1,
7262+                           0,
7263+                           self.root_hash)
7264+        self.checkstring = data
7265+        # Next, the encoding parameters
7266+        if tail_segment:
7267+            data += struct.pack(">BBQQ",
7268+                                3,
7269+                                10,
7270+                                6,
7271+                                33)
7272+        elif empty:
7273+            data += struct.pack(">BBQQ",
7274+                                3,
7275+                                10,
7276+                                0,
7277+                                0)
7278+        else:
7279+            data += struct.pack(">BBQQ",
7280+                                3,
7281+                                10,
7282+                                6,
7283+                                36)
7284+        # Now we'll build the offsets.
7285+        sharedata = ""
7286+        if not tail_segment and not empty:
7287+            for i in xrange(6):
7288+                sharedata += self.salt + self.block
7289+        elif tail_segment:
7290+            for i in xrange(5):
7291+                sharedata += self.salt + self.block
7292+            sharedata += self.salt + "a"
7293+
7294+        # The encrypted private key comes after the shares + salts
7295+        offset_size = struct.calcsize(MDMFOFFSETS)
7296+        encrypted_private_key_offset = len(data) + offset_size
7297+        # The share hash chain comes after the private key
7298+        sharehashes_offset = encrypted_private_key_offset + \
7299+            len(self.encprivkey)
7300+
7301+        # The signature comes after the share hash chain.
7302+        signature_offset = sharehashes_offset + len(self.share_hash_chain_s)
7303+
7304+        verification_key_offset = signature_offset + len(self.signature)
7305+        verification_key_end = verification_key_offset + \
7306+            len(self.verification_key)
7307+
7308+        share_data_offset = offset_size
7309+        share_data_offset += PRIVATE_KEY_SIZE
7310+        share_data_offset += SIGNATURE_SIZE
7311+        share_data_offset += VERIFICATION_KEY_SIZE
7312+        share_data_offset += SHARE_HASH_CHAIN_SIZE
7313+
7314+        blockhashes_offset = share_data_offset + len(sharedata)
7315+        eof_offset = blockhashes_offset + len(self.block_hash_tree_s)
7316+
7317+        data += struct.pack(MDMFOFFSETS,
7318+                            encrypted_private_key_offset,
7319+                            sharehashes_offset,
7320+                            signature_offset,
7321+                            verification_key_offset,
7322+                            verification_key_end,
7323+                            share_data_offset,
7324+                            blockhashes_offset,
7325+                            eof_offset)
7326+
7327+        self.offsets = {}
7328+        self.offsets['enc_privkey'] = encrypted_private_key_offset
7329+        self.offsets['block_hash_tree'] = blockhashes_offset
7330+        self.offsets['share_hash_chain'] = sharehashes_offset
7331+        self.offsets['signature'] = signature_offset
7332+        self.offsets['verification_key'] = verification_key_offset
7333+        self.offsets['share_data'] = share_data_offset
7334+        self.offsets['verification_key_end'] = verification_key_end
7335+        self.offsets['EOF'] = eof_offset
7336+
7337+        # the private key,
7338+        data += self.encprivkey
7339+        # the sharehashes
7340+        data += self.share_hash_chain_s
7341+        # the signature,
7342+        data += self.signature
7343+        # and the verification key
7344+        data += self.verification_key
7345+        # Then we'll add in gibberish until we get to the right point.
7346+        nulls = "".join([" " for i in xrange(len(data), share_data_offset)])
7347+        data += nulls
7348+
7349+        # Then the share data
7350+        data += sharedata
7351+        # the blockhashes
7352+        data += self.block_hash_tree_s
7353+        return data
7354+
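The offset bookkeeping in `build_test_mdmf_share` above can be checked in isolation: with a fixed header and a fixed-size offset table, the encrypted private key always starts at a computable position. The format strings are assumptions matching the fields packed above (eight 8-byte offsets for `MDMFOFFSETS`), not necessarily the real constants:

```python
import struct

MDMFOFFSETS = ">QQQQQQQQ"  # eight 8-byte offsets (assumed), matching
                           # the eight values packed in the share above
HEADER = ">BQ32s"          # version, sequence number, root hash
PARAMS = ">BBQQ"           # k, n, segment size, data length

offset_size = struct.calcsize(MDMFOFFSETS)
header_size = struct.calcsize(HEADER) + struct.calcsize(PARAMS)
# the encrypted private key starts right after header + offset table
encrypted_private_key_offset = header_size + offset_size

assert offset_size == 64
assert encrypted_private_key_offset == (1 + 8 + 32) + (1 + 1 + 8 + 8) + 64
```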
7355+
7356+    def write_test_share_to_server(self,
7357+                                   storage_index,
7358+                                   tail_segment=False,
7359+                                   empty=False):
7360+        """
7361+        I write some data for the read tests to read to self.ss
7362+
7363+        If tail_segment=True, then I will write a share that has a
7364+        smaller tail segment than other segments.
7365+        """
7366+        write = self.ss.remote_slot_testv_and_readv_and_writev
7367+        data = self.build_test_mdmf_share(tail_segment, empty)
7368+        # Finally, we write the whole thing to the storage server in one
7369+        # pass.
7370+        testvs = [(0, 1, "eq", "")]
7371+        tws = {}
7372+        tws[0] = (testvs, [(0, data)], None)
7373+        readv = [(0, 1)]
7374+        results = write(storage_index, self.secrets, tws, readv)
7375+        self.failUnless(results[0])
7376+
7377+
7378+    def build_test_sdmf_share(self, empty=False):
7379+        if empty:
7380+            sharedata = ""
7381+        else:
7382+            sharedata = self.segment * 6
7383+        self.sharedata = sharedata
7384+        blocksize = len(sharedata) / 3
7385+        block = sharedata[:blocksize]
7386+        self.blockdata = block
7387+        prefix = struct.pack(">BQ32s16s BBQQ",
7388+                             0, # version,
7389+                             0,
7390+                             self.root_hash,
7391+                             self.salt,
7392+                             3,
7393+                             10,
7394+                             len(sharedata),
7395+                             len(sharedata),
7396+                            )
7397+        post_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
7398+        signature_offset = post_offset + len(self.verification_key)
7399+        sharehashes_offset = signature_offset + len(self.signature)
7400+        blockhashes_offset = sharehashes_offset + len(self.share_hash_chain_s)
7401+        sharedata_offset = blockhashes_offset + len(self.block_hash_tree_s)
7402+        encprivkey_offset = sharedata_offset + len(block)
7403+        eof_offset = encprivkey_offset + len(self.encprivkey)
7404+        offsets = struct.pack(">LLLLQQ",
7405+                              signature_offset,
7406+                              sharehashes_offset,
7407+                              blockhashes_offset,
7408+                              sharedata_offset,
7409+                              encprivkey_offset,
7410+                              eof_offset)
7411+        final_share = "".join([prefix,
7412+                           offsets,
7413+                           self.verification_key,
7414+                           self.signature,
7415+                           self.share_hash_chain_s,
7416+                           self.block_hash_tree_s,
7417+                           block,
7418+                           self.encprivkey])
7419+        self.offsets = {}
7420+        self.offsets['signature'] = signature_offset
7421+        self.offsets['share_hash_chain'] = sharehashes_offset
7422+        self.offsets['block_hash_tree'] = blockhashes_offset
7423+        self.offsets['share_data'] = sharedata_offset
7424+        self.offsets['enc_privkey'] = encprivkey_offset
7425+        self.offsets['EOF'] = eof_offset
7426+        return final_share
7427+
7428+
7429+    def write_sdmf_share_to_server(self,
7430+                                   storage_index,
7431+                                   empty=False):
7432+        # Some tests need SDMF shares to verify that we can still
7433+        # read them. This method writes one, which resembles but is not an exact copy of a publisher-created share.
7434+        assert self.rref
7435+        write = self.ss.remote_slot_testv_and_readv_and_writev
7436+        share = self.build_test_sdmf_share(empty)
7437+        testvs = [(0, 1, "eq", "")]
7438+        tws = {}
7439+        tws[0] = (testvs, [(0, share)], None)
7440+        readv = []
7441+        results = write(storage_index, self.secrets, tws, readv)
7442+        self.failUnless(results[0])
7443+
7444+
7445+    def test_read(self):
7446+        self.write_test_share_to_server("si1")
7447+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
7448+        # Check that every method equals what we expect it to.
7449+        d = defer.succeed(None)
7450+        def _check_block_and_salt((block, salt)):
7451+            self.failUnlessEqual(block, self.block)
7452+            self.failUnlessEqual(salt, self.salt)
7453+
7454+        for i in xrange(6):
7455+            d.addCallback(lambda ignored, i=i:
7456+                mr.get_block_and_salt(i))
7457+            d.addCallback(_check_block_and_salt)
7458+
7459+        d.addCallback(lambda ignored:
7460+            mr.get_encprivkey())
7461+        d.addCallback(lambda encprivkey:
7462+            self.failUnlessEqual(self.encprivkey, encprivkey))
7463+
7464+        d.addCallback(lambda ignored:
7465+            mr.get_blockhashes())
7466+        d.addCallback(lambda blockhashes:
7467+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
7468+
7469+        d.addCallback(lambda ignored:
7470+            mr.get_sharehashes())
7471+        d.addCallback(lambda sharehashes:
7472+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
7473+
7474+        d.addCallback(lambda ignored:
7475+            mr.get_signature())
7476+        d.addCallback(lambda signature:
7477+            self.failUnlessEqual(signature, self.signature))
7478+
7479+        d.addCallback(lambda ignored:
7480+            mr.get_verification_key())
7481+        d.addCallback(lambda verification_key:
7482+            self.failUnlessEqual(verification_key, self.verification_key))
7483+
7484+        d.addCallback(lambda ignored:
7485+            mr.get_seqnum())
7486+        d.addCallback(lambda seqnum:
7487+            self.failUnlessEqual(seqnum, 0))
7488+
7489+        d.addCallback(lambda ignored:
7490+            mr.get_root_hash())
7491+        d.addCallback(lambda root_hash:
7492+            self.failUnlessEqual(self.root_hash, root_hash))
7493+
7494+        d.addCallback(lambda ignored:
7495+            mr.get_seqnum())
7496+        d.addCallback(lambda seqnum:
7497+            self.failUnlessEqual(0, seqnum))
7498+
7499+        d.addCallback(lambda ignored:
7500+            mr.get_encoding_parameters())
7501+        def _check_encoding_parameters((k, n, segsize, datalen)):
7502+            self.failUnlessEqual(k, 3)
7503+            self.failUnlessEqual(n, 10)
7504+            self.failUnlessEqual(segsize, 6)
7505+            self.failUnlessEqual(datalen, 36)
7506+        d.addCallback(_check_encoding_parameters)
7507+
7508+        d.addCallback(lambda ignored:
7509+            mr.get_checkstring())
7510+        d.addCallback(lambda checkstring:
7511+            self.failUnlessEqual(checkstring, self.checkstring))
7512+        return d
7513+
7514+
7515+    def test_read_with_different_tail_segment_size(self):
7516+        self.write_test_share_to_server("si1", tail_segment=True)
7517+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
7518+        d = mr.get_block_and_salt(5)
7519+        def _check_tail_segment(results):
7520+            block, salt = results
7521+            self.failUnlessEqual(len(block), 1)
7522+            self.failUnlessEqual(block, "a")
7523+        d.addCallback(_check_tail_segment)
7524+        return d
7525+
7526+
7527+    def test_get_block_with_invalid_segnum(self):
7528+        self.write_test_share_to_server("si1")
7529+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
7530+        d = defer.succeed(None)
7531+        d.addCallback(lambda ignored:
7532+            self.shouldFail(LayoutInvalid, "test invalid segnum",
7533+                            None,
7534+                            mr.get_block_and_salt, 7))
7535+        return d
7536+
7537+
7538+    def test_get_encoding_parameters_first(self):
7539+        self.write_test_share_to_server("si1")
7540+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
7541+        d = mr.get_encoding_parameters()
7542+        def _check_encoding_parameters((k, n, segment_size, datalen)):
7543+            self.failUnlessEqual(k, 3)
7544+            self.failUnlessEqual(n, 10)
7545+            self.failUnlessEqual(segment_size, 6)
7546+            self.failUnlessEqual(datalen, 36)
7547+        d.addCallback(_check_encoding_parameters)
7548+        return d
7549+
7550+
7551+    def test_get_seqnum_first(self):
7552+        self.write_test_share_to_server("si1")
7553+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
7554+        d = mr.get_seqnum()
7555+        d.addCallback(lambda seqnum:
7556+            self.failUnlessEqual(seqnum, 0))
7557+        return d
7558+
7559+
7560+    def test_get_root_hash_first(self):
7561+        self.write_test_share_to_server("si1")
7562+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
7563+        d = mr.get_root_hash()
7564+        d.addCallback(lambda root_hash:
7565+            self.failUnlessEqual(root_hash, self.root_hash))
7566+        return d
7567+
7568+
7569+    def test_get_checkstring_first(self):
7570+        self.write_test_share_to_server("si1")
7571+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
7572+        d = mr.get_checkstring()
7573+        d.addCallback(lambda checkstring:
7574+            self.failUnlessEqual(checkstring, self.checkstring))
7575+        return d
7576+
7577+
7578+    def test_write_read_vectors(self):
7579+        # When writing for us, the storage server will return to us a
7580+        # read vector, along with its result. If a write fails because
7581+        # the test vectors failed, this read vector can help us to
7582+        # diagnose the problem. This test ensures that the read vector
7583+        # is working appropriately.
7584+        mw = self._make_new_mw("si1", 0)
7585+
7586+        for i in xrange(6):
7587+            mw.put_block(self.block, i, self.salt)
7588+        mw.put_encprivkey(self.encprivkey)
7589+        mw.put_blockhashes(self.block_hash_tree)
7590+        mw.put_sharehashes(self.share_hash_chain)
7591+        mw.put_root_hash(self.root_hash)
7592+        mw.put_signature(self.signature)
7593+        mw.put_verification_key(self.verification_key)
7594+        d = mw.finish_publishing()
7595+        def _then(results):
7596+            self.failUnlessEqual(len(results), 2)
7597+            result, readv = results
7598+            self.failUnless(result)
7599+            self.failIf(readv)
7600+            self.old_checkstring = mw.get_checkstring()
7601+            mw.set_checkstring("")
7602+        d.addCallback(_then)
7603+        d.addCallback(lambda ignored:
7604+            mw.finish_publishing())
7605+        def _then_again(results):
7606+            self.failUnlessEqual(len(results), 2)
7607+            result, readvs = results
7608+            self.failIf(result)
7609+            self.failUnlessIn(0, readvs)
7610+            readv = readvs[0][0]
7611+            self.failUnlessEqual(readv, self.old_checkstring)
7612+        d.addCallback(_then_again)
7613+        # The checkstring remains the same for the rest of the process.
7614+        return d
7615+
7616+
7617+    def test_private_key_after_share_hash_chain(self):
7618+        mw = self._make_new_mw("si1", 0)
7619+        d = defer.succeed(None)
7620+        for i in xrange(6):
7621+            d.addCallback(lambda ignored, i=i:
7622+                mw.put_block(self.block, i, self.salt))
7623+        d.addCallback(lambda ignored:
7624+            mw.put_encprivkey(self.encprivkey))
7625+        d.addCallback(lambda ignored:
7626+            mw.put_sharehashes(self.share_hash_chain))
7627+
7628+        # Now try to put the private key again.
7629+        d.addCallback(lambda ignored:
7630+            self.shouldFail(LayoutInvalid, "test repeat private key",
7631+                            None,
7632+                            mw.put_encprivkey, self.encprivkey))
7633+        return d
7634+
7635+
7636+    def test_signature_after_verification_key(self):
7637+        mw = self._make_new_mw("si1", 0)
7638+        d = defer.succeed(None)
7639+        # Put everything up to and including the verification key.
7640+        for i in xrange(6):
7641+            d.addCallback(lambda ignored, i=i:
7642+                mw.put_block(self.block, i, self.salt))
7643+        d.addCallback(lambda ignored:
7644+            mw.put_encprivkey(self.encprivkey))
7645+        d.addCallback(lambda ignored:
7646+            mw.put_blockhashes(self.block_hash_tree))
7647+        d.addCallback(lambda ignored:
7648+            mw.put_sharehashes(self.share_hash_chain))
7649+        d.addCallback(lambda ignored:
7650+            mw.put_root_hash(self.root_hash))
7651+        d.addCallback(lambda ignored:
7652+            mw.put_signature(self.signature))
7653+        d.addCallback(lambda ignored:
7654+            mw.put_verification_key(self.verification_key))
7655+        # Now try to put the signature again. This should fail
7656+        d.addCallback(lambda ignored:
7657+            self.shouldFail(LayoutInvalid, "signature after verification",
7658+                            None,
7659+                            mw.put_signature, self.signature))
7660+        return d
7661+
7662+
7663+    def test_uncoordinated_write(self):
7664+        # Make two mutable writers, both pointing to the same storage
7665+        # server, both at the same storage index, and try writing to the
7666+        # same share.
7667+        mw1 = self._make_new_mw("si1", 0)
7668+        mw2 = self._make_new_mw("si1", 0)
7669+
7670+        def _check_success(results):
7671+            result, readvs = results
7672+            self.failUnless(result)
7673+
7674+        def _check_failure(results):
7675+            result, readvs = results
7676+            self.failIf(result)
7677+
7678+        def _write_share(mw):
7679+            for i in xrange(6):
7680+                mw.put_block(self.block, i, self.salt)
7681+            mw.put_encprivkey(self.encprivkey)
7682+            mw.put_blockhashes(self.block_hash_tree)
7683+            mw.put_sharehashes(self.share_hash_chain)
7684+            mw.put_root_hash(self.root_hash)
7685+            mw.put_signature(self.signature)
7686+            mw.put_verification_key(self.verification_key)
7687+            return mw.finish_publishing()
7688+        d = _write_share(mw1)
7689+        d.addCallback(_check_success)
7690+        d.addCallback(lambda ignored:
7691+            _write_share(mw2))
7692+        d.addCallback(_check_failure)
7693+        return d
7694+
7695+
7696+    def test_invalid_salt_size(self):
7697+        # Salts need to be 16 bytes in size. Writes that attempt to
7698+        # write more or less than this should be rejected.
7699+        mw = self._make_new_mw("si1", 0)
7700+        invalid_salt = "a" * 17 # 17 bytes
7701+        another_invalid_salt = "b" * 15 # 15 bytes
7702+        d = defer.succeed(None)
7703+        d.addCallback(lambda ignored:
7704+            self.shouldFail(LayoutInvalid, "salt too big",
7705+                            None,
7706+                            mw.put_block, self.block, 0, invalid_salt))
7707+        d.addCallback(lambda ignored:
7708+            self.shouldFail(LayoutInvalid, "salt too small",
7709+                            None,
7710+                            mw.put_block, self.block, 0,
7711+                            another_invalid_salt))
7712+        return d
7713+
7714+
7715+    def test_write_test_vectors(self):
7716+        # If we give the write proxy a bogus test vector at
7717+        # any point during the process, it should fail to write when we
7718+        # tell it to write.
7719+        def _check_failure(results):
7720+            self.failUnlessEqual(len(results), 2)
7721+            res, readvs = results
7722+            self.failIf(res)
7723+
7724+        def _check_success(results):
7725+            self.failUnlessEqual(len(results), 2)
7726+            res, readvs = results
7727+            self.failUnless(res)
7728+
7729+        mw = self._make_new_mw("si1", 0)
7730+        mw.set_checkstring("this is a lie")
7731+        for i in xrange(6):
7732+            mw.put_block(self.block, i, self.salt)
7733+        mw.put_encprivkey(self.encprivkey)
7734+        mw.put_blockhashes(self.block_hash_tree)
7735+        mw.put_sharehashes(self.share_hash_chain)
7736+        mw.put_root_hash(self.root_hash)
7737+        mw.put_signature(self.signature)
7738+        mw.put_verification_key(self.verification_key)
7739+        d = mw.finish_publishing()
7740+        d.addCallback(_check_failure)
7741+        d.addCallback(lambda ignored:
7742+            mw.set_checkstring(""))
7743+        d.addCallback(lambda ignored:
7744+            mw.finish_publishing())
7745+        d.addCallback(_check_success)
7746+        return d
7747+
7748+
7749+    def serialize_blockhashes(self, blockhashes):
7750+        return "".join(blockhashes)
7751+
7752+
7753+    def serialize_sharehashes(self, sharehashes):
7754+        ret = "".join([struct.pack(">H32s", i, sharehashes[i])
7755+                        for i in sorted(sharehashes.keys())])
7756+        return ret
7757+
7758+
7759+    def test_write(self):
7760+        # This translates to a file with 6 6-byte segments, and with 2-byte
7761+        # blocks.
7762+        mw = self._make_new_mw("si1", 0)
7763+        # Test writing some blocks.
7764+        read = self.ss.remote_slot_readv
7765+        expected_private_key_offset = struct.calcsize(MDMFHEADER)
7766+        expected_sharedata_offset = struct.calcsize(MDMFHEADER) + \
7767+                                    PRIVATE_KEY_SIZE + \
7768+                                    SIGNATURE_SIZE + \
7769+                                    VERIFICATION_KEY_SIZE + \
7770+                                    SHARE_HASH_CHAIN_SIZE
7771+        written_block_size = 2 + len(self.salt)
7772+        written_block = self.block + self.salt
7773+        for i in xrange(6):
7774+            mw.put_block(self.block, i, self.salt)
7775+
7776+        mw.put_encprivkey(self.encprivkey)
7777+        mw.put_blockhashes(self.block_hash_tree)
7778+        mw.put_sharehashes(self.share_hash_chain)
7779+        mw.put_root_hash(self.root_hash)
7780+        mw.put_signature(self.signature)
7781+        mw.put_verification_key(self.verification_key)
7782+        d = mw.finish_publishing()
7783+        def _check_publish(results):
7784+            self.failUnlessEqual(len(results), 2)
7785+            result, ign = results
7786+            self.failUnless(result, "publish failed")
7787+            for i in xrange(6):
7788+                self.failUnlessEqual(read("si1", [0], [(expected_sharedata_offset + (i * written_block_size), written_block_size)]),
7789+                                {0: [written_block]})
7790+
7791+            self.failUnlessEqual(len(self.encprivkey), 7)
7792+            self.failUnlessEqual(read("si1", [0], [(expected_private_key_offset, 7)]),
7793+                                 {0: [self.encprivkey]})
7794+
7795+            expected_block_hash_offset = expected_sharedata_offset + \
7796+                        (6 * written_block_size)
7797+            self.failUnlessEqual(len(self.block_hash_tree_s), 32 * 6)
7798+            self.failUnlessEqual(read("si1", [0], [(expected_block_hash_offset, 32 * 6)]),
7799+                                 {0: [self.block_hash_tree_s]})
7800+
7801+            expected_share_hash_offset = expected_private_key_offset + len(self.encprivkey)
7802+            self.failUnlessEqual(read("si1", [0],[(expected_share_hash_offset, (32 + 2) * 6)]),
7803+                                 {0: [self.share_hash_chain_s]})
7804+
7805+            self.failUnlessEqual(read("si1", [0], [(9, 32)]),
7806+                                 {0: [self.root_hash]})
7807+            expected_signature_offset = expected_share_hash_offset + \
7808+                len(self.share_hash_chain_s)
7809+            self.failUnlessEqual(len(self.signature), 9)
7810+            self.failUnlessEqual(read("si1", [0], [(expected_signature_offset, 9)]),
7811+                                 {0: [self.signature]})
7812+
7813+            expected_verification_key_offset = expected_signature_offset + len(self.signature)
7814+            self.failUnlessEqual(len(self.verification_key), 6)
7815+            self.failUnlessEqual(read("si1", [0], [(expected_verification_key_offset, 6)]),
7816+                                 {0: [self.verification_key]})
7817+
7818+            signable = mw.get_signable()
7819+            verno, seq, roothash, k, n, segsize, datalen = \
7820+                                            struct.unpack(">BQ32sBBQQ",
7821+                                                          signable)
7822+            self.failUnlessEqual(verno, 1)
7823+            self.failUnlessEqual(seq, 0)
7824+            self.failUnlessEqual(roothash, self.root_hash)
7825+            self.failUnlessEqual(k, 3)
7826+            self.failUnlessEqual(n, 10)
7827+            self.failUnlessEqual(segsize, 6)
7828+            self.failUnlessEqual(datalen, 36)
7829+            expected_eof_offset = expected_block_hash_offset + \
7830+                len(self.block_hash_tree_s)
7831+
7832+            # Check the version number to make sure that it is correct.
7833+            expected_version_number = struct.pack(">B", 1)
7834+            self.failUnlessEqual(read("si1", [0], [(0, 1)]),
7835+                                 {0: [expected_version_number]})
7836+            # Check the sequence number to make sure that it is correct
7837+            expected_sequence_number = struct.pack(">Q", 0)
7838+            self.failUnlessEqual(read("si1", [0], [(1, 8)]),
7839+                                 {0: [expected_sequence_number]})
7840+            # Check that the encoding parameters (k, N, segment size,
7841+            # data length) are what they should be: 3, 10, 6, 36.
7842+            expected_k = struct.pack(">B", 3)
7843+            self.failUnlessEqual(read("si1", [0], [(41, 1)]),
7844+                                 {0: [expected_k]})
7845+            expected_n = struct.pack(">B", 10)
7846+            self.failUnlessEqual(read("si1", [0], [(42, 1)]),
7847+                                 {0: [expected_n]})
7848+            expected_segment_size = struct.pack(">Q", 6)
7849+            self.failUnlessEqual(read("si1", [0], [(43, 8)]),
7850+                                 {0: [expected_segment_size]})
7851+            expected_data_length = struct.pack(">Q", 36)
7852+            self.failUnlessEqual(read("si1", [0], [(51, 8)]),
7853+                                 {0: [expected_data_length]})
7854+            expected_offset = struct.pack(">Q", expected_private_key_offset)
7855+            self.failUnlessEqual(read("si1", [0], [(59, 8)]),
7856+                                 {0: [expected_offset]})
7857+            expected_offset = struct.pack(">Q", expected_share_hash_offset)
7858+            self.failUnlessEqual(read("si1", [0], [(67, 8)]),
7859+                                 {0: [expected_offset]})
7860+            expected_offset = struct.pack(">Q", expected_signature_offset)
7861+            self.failUnlessEqual(read("si1", [0], [(75, 8)]),
7862+                                 {0: [expected_offset]})
7863+            expected_offset = struct.pack(">Q", expected_verification_key_offset)
7864+            self.failUnlessEqual(read("si1", [0], [(83, 8)]),
7865+                                 {0: [expected_offset]})
7866+            expected_offset = struct.pack(">Q", expected_verification_key_offset + len(self.verification_key))
7867+            self.failUnlessEqual(read("si1", [0], [(91, 8)]),
7868+                                 {0: [expected_offset]})
7869+            expected_offset = struct.pack(">Q", expected_sharedata_offset)
7870+            self.failUnlessEqual(read("si1", [0], [(99, 8)]),
7871+                                 {0: [expected_offset]})
7872+            expected_offset = struct.pack(">Q", expected_block_hash_offset)
7873+            self.failUnlessEqual(read("si1", [0], [(107, 8)]),
7874+                                 {0: [expected_offset]})
7875+            expected_offset = struct.pack(">Q", expected_eof_offset)
7876+            self.failUnlessEqual(read("si1", [0], [(115, 8)]),
7877+                                 {0: [expected_offset]})
7878+        d.addCallback(_check_publish)
7879+        return d
7880+
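The fixed offsets asserted in _check_publish above (version at 0, sequence number at 1, root hash at 9, k at 41, segment size at 43, data length at 51, then the offset table from 59) follow from the MDMF header layout. A small sketch of that arithmetic, assuming the header is the big-endian prefix (version, seqnum, root hash, k, N, segment size, data length) followed by eight 8-byte offset-table entries; the format strings here are reconstructed for illustration:

```python
import struct

# Hypothetical reconstruction of the MDMF header layout implied by the
# offset assertions in test_write; field names are illustrative only.
PREFIX = ">B Q 32s B B Q Q"   # version, seqnum, roothash, k, N, segsize, datalen
OFFSETS = ">QQQQQQQQ"         # eight 8-byte offset-table entries

prefix_size = struct.calcsize(PREFIX)                  # offset table starts here
header_size = prefix_size + struct.calcsize(OFFSETS)   # total fixed header

# Cumulative field sizes line up with the reads in the test:
# 1 (version) + 8 (seqnum) + 32 (roothash) = 41 -> k, then 42 -> N,
# 43 -> segsize, 51 -> datalen, 59 -> first offset-table entry.
print(prefix_size)   # 59
print(header_size)   # 123
```

The 123-byte figure is the same prefetch length used later in test_read_with_prefetched_mdmf_data, which is why that much data suffices to build the verinfo tuple without a remote read.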
7881+    def _make_new_mw(self, si, share, datalength=36):
7882+        # This is a file of size 36 bytes. Since it has a segment
7883+        # size of 6, we know that it has 6 byte segments, which will
7884+        # be split into blocks of 2 bytes because our FEC k
7885+        # parameter is 3.
7886+        mw = MDMFSlotWriteProxy(share, self.rref, si, self.secrets, 0, 3, 10,
7887+                                6, datalength)
7888+        return mw
7889+
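The parameters passed to MDMFSlotWriteProxy in _make_new_mw determine the segment and block geometry these tests rely on. A quick sketch of that arithmetic (an illustrative helper, not part of the patch), using ceiling division as described in the comment above:

```python
def block_layout(datalength, segment_size, k):
    # Ceiling division without floats: number of segments in the file,
    # the block size within a full segment, and the tail block size.
    num_segments = (datalength + segment_size - 1) // segment_size
    block_size = (segment_size + k - 1) // k
    tail = datalength - (num_segments - 1) * segment_size
    tail_block_size = (tail + k - 1) // k
    return num_segments, block_size, tail_block_size

# The 36-byte file used by most of these tests: six 6-byte segments,
# split into 2-byte blocks because k = 3.
print(block_layout(36, 6, 3))   # (6, 2, 2)

# The 33-byte file from test_write_rejected_with_invalid_blocksize:
# the tail segment is 3 bytes, giving 1-byte tail blocks.
print(block_layout(33, 6, 3))   # (6, 2, 1)
```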
7890+
7891+    def test_write_rejected_with_too_many_blocks(self):
7892+        mw = self._make_new_mw("si0", 0)
7893+
7894+        # Try writing too many blocks. We should not be able to write
7895+        # more than 6 blocks into each share.
7897+        d = defer.succeed(None)
7898+        for i in xrange(6):
7899+            d.addCallback(lambda ignored, i=i:
7900+                mw.put_block(self.block, i, self.salt))
7901+        d.addCallback(lambda ignored:
7902+            self.shouldFail(LayoutInvalid, "too many blocks",
7903+                            None,
7904+                            mw.put_block, self.block, 7, self.salt))
7905+        return d
7906+
7907+
7908+    def test_write_rejected_with_invalid_salt(self):
7909+        # Try writing an invalid salt. Salts are 16 bytes -- any more or
7910+        # less should cause an error.
7911+        mw = self._make_new_mw("si1", 0)
7912+        bad_salt = "a" * 17 # 17 bytes
7913+        d = defer.succeed(None)
7914+        d.addCallback(lambda ignored:
7915+            self.shouldFail(LayoutInvalid, "test_invalid_salt",
7916+                            None, mw.put_block, self.block, 7, bad_salt))
7917+        return d
7918+
7919+
7920+    def test_write_rejected_with_invalid_root_hash(self):
7921+        # Try writing an invalid root hash. This should be SHA256d, and
7922+        # 32 bytes long as a result.
7923+        mw = self._make_new_mw("si2", 0)
7924+        # 17 bytes != 32 bytes
7925+        invalid_root_hash = "a" * 17
7926+        d = defer.succeed(None)
7927+        # Before this test can work, we need to put some blocks + salts,
7928+        # a block hash tree, and a share hash tree. Otherwise, we'll see
7929+        # failures that match what we are looking for, but are caused by
7930+        # the constraints imposed on operation ordering.
7931+        for i in xrange(6):
7932+            d.addCallback(lambda ignored, i=i:
7933+                mw.put_block(self.block, i, self.salt))
7934+        d.addCallback(lambda ignored:
7935+            mw.put_encprivkey(self.encprivkey))
7936+        d.addCallback(lambda ignored:
7937+            mw.put_blockhashes(self.block_hash_tree))
7938+        d.addCallback(lambda ignored:
7939+            mw.put_sharehashes(self.share_hash_chain))
7940+        d.addCallback(lambda ignored:
7941+            self.shouldFail(LayoutInvalid, "invalid root hash",
7942+                            None, mw.put_root_hash, invalid_root_hash))
7943+        return d
7944+
7945+
7946+    def test_write_rejected_with_invalid_blocksize(self):
7947+        # The blocksize implied by the writer that we get from
7948+        # _make_new_mw is 2 bytes -- any more or any less than this
7949+        # should cause a failure, unless it is the tail segment, in
7950+        # which case the blocks may legitimately be smaller.
7951+        invalid_block = "a"
7952+        mw = self._make_new_mw("si3", 0, 33) # implies a tail segment with
7953+                                             # one-byte blocks
7954+        # 1 byte != 2 bytes
7955+        d = defer.succeed(None)
7956+        d.addCallback(lambda ignored, invalid_block=invalid_block:
7957+            self.shouldFail(LayoutInvalid, "test blocksize too small",
7958+                            None, mw.put_block, invalid_block, 0,
7959+                            self.salt))
7960+        invalid_block = invalid_block * 3
7961+        # 3 bytes != 2 bytes
7962+        d.addCallback(lambda ignored:
7963+            self.shouldFail(LayoutInvalid, "test blocksize too large",
7964+                            None,
7965+                            mw.put_block, invalid_block, 0, self.salt))
7966+        for i in xrange(5):
7967+            d.addCallback(lambda ignored, i=i:
7968+                mw.put_block(self.block, i, self.salt))
7969+        # Try to put an invalid tail segment
7970+        d.addCallback(lambda ignored:
7971+            self.shouldFail(LayoutInvalid, "test invalid tail segment",
7972+                            None,
7973+                            mw.put_block, self.block, 5, self.salt))
7974+        valid_block = "a"
7975+        d.addCallback(lambda ignored:
7976+            mw.put_block(valid_block, 5, self.salt))
7977+        return d
7978+
7979+
7980+    def test_write_enforces_order_constraints(self):
7981+        # We require that the MDMFSlotWriteProxy be interacted with in a
7982+        # specific way.
7983+        # That way is:
7984+        # 0: __init__
7985+        # 1: write blocks and salts
7986+        # 2: Write the encrypted private key
7987+        # 3: Write the block hashes
7988+        # 4: Write the share hashes
7989+        # 5: Write the root hash and salt hash
7990+        # 6: Write the signature and verification key
7991+        # 7: Write the file.
7992+        #
7993+        # Some of these can be performed out-of-order, and some can't.
7994+        # The dependencies that I want to test here are:
7995+        #  - Private key before block hashes
7996+        #  - share hashes and block hashes before root hash
7997+        #  - root hash before signature
7998+        #  - signature before verification key
7999+        mw0 = self._make_new_mw("si0", 0)
8000+        # Write some shares
8001+        d = defer.succeed(None)
8002+        for i in xrange(6):
8003+            d.addCallback(lambda ignored, i=i:
8004+                mw0.put_block(self.block, i, self.salt))
8005+
8006+        # Try to write the share hash chain without writing the
8007+        # encrypted private key
8008+        d.addCallback(lambda ignored:
8009+            self.shouldFail(LayoutInvalid, "share hash chain before "
8010+                                           "private key",
8011+                            None,
8012+                            mw0.put_sharehashes, self.share_hash_chain))
8013+        # Write the private key.
8014+        d.addCallback(lambda ignored:
8015+            mw0.put_encprivkey(self.encprivkey))
8016+
8017+        # Now write the block hashes and try again
8018+        d.addCallback(lambda ignored:
8019+            mw0.put_blockhashes(self.block_hash_tree))
8020+
8021+        # We haven't yet put the root hash on the share, so we shouldn't
8022+        # be able to sign it.
8023+        d.addCallback(lambda ignored:
8024+            self.shouldFail(LayoutInvalid, "signature before root hash",
8025+                            None, mw0.put_signature, self.signature))
8026+
8027+        d.addCallback(lambda ignored:
8028+            self.failUnlessRaises(LayoutInvalid, mw0.get_signable))
8029+
8030+        # ...and, since that fails, we also shouldn't be able to put the
8031+        # verification key.
8032+        d.addCallback(lambda ignored:
8033+            self.shouldFail(LayoutInvalid, "key before signature",
8034+                            None, mw0.put_verification_key,
8035+                            self.verification_key))
8036+
8037+        # Now write the share hashes.
8038+        d.addCallback(lambda ignored:
8039+            mw0.put_sharehashes(self.share_hash_chain))
8040+        # We should be able to write the root hash now too
8041+        d.addCallback(lambda ignored:
8042+            mw0.put_root_hash(self.root_hash))
8043+
8044+        # We should still be unable to put the verification key
8045+        d.addCallback(lambda ignored:
8046+            self.shouldFail(LayoutInvalid, "key before signature",
8047+                            None, mw0.put_verification_key,
8048+                            self.verification_key))
8049+
8050+        d.addCallback(lambda ignored:
8051+            mw0.put_signature(self.signature))
8052+
8053+        # We shouldn't be able to write the offsets to the remote server
8054+        # until the offset table is finished; IOW, until we have written
8055+        # the verification key.
8056+        d.addCallback(lambda ignored:
8057+            self.shouldFail(LayoutInvalid, "offsets before verification key",
8058+                            None,
8059+                            mw0.finish_publishing))
8060+
8061+        d.addCallback(lambda ignored:
8062+            mw0.put_verification_key(self.verification_key))
8063+        return d
8064+
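The ordering constraints exercised above can be pictured as a simple state machine. This is a strictly linear sketch under assumed names (OrderedWriter, ORDER) and is not the MDMFSlotWriteProxy implementation, which relaxes some of these orderings as the comments in the test note:

```python
class LayoutInvalid(Exception):
    # Stand-in for allmydata.mutable.layout.LayoutInvalid, for illustration.
    pass

# The linearized write order the test walks through.
ORDER = ["blocks", "encprivkey", "blockhashes", "sharehashes",
         "root_hash", "signature", "verification_key"]

class OrderedWriter(object):
    def __init__(self):
        self._done = set()

    def put(self, field):
        # Every field listed before this one must already be written.
        for prereq in ORDER[:ORDER.index(field)]:
            if prereq not in self._done:
                raise LayoutInvalid("%s written before %s" % (field, prereq))
        self._done.add(field)

w = OrderedWriter()
w.put("blocks")
try:
    w.put("sharehashes")   # share hash chain before the private key
except LayoutInvalid:
    print("rejected, as in the test")
```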
8065+
8066+    def test_end_to_end(self):
8067+        mw = self._make_new_mw("si1", 0)
8068+        # Write a share using the mutable writer, and make sure that the
8069+        # reader knows how to read everything back to us.
8070+        d = defer.succeed(None)
8071+        for i in xrange(6):
8072+            d.addCallback(lambda ignored, i=i:
8073+                mw.put_block(self.block, i, self.salt))
8074+        d.addCallback(lambda ignored:
8075+            mw.put_encprivkey(self.encprivkey))
8076+        d.addCallback(lambda ignored:
8077+            mw.put_blockhashes(self.block_hash_tree))
8078+        d.addCallback(lambda ignored:
8079+            mw.put_sharehashes(self.share_hash_chain))
8080+        d.addCallback(lambda ignored:
8081+            mw.put_root_hash(self.root_hash))
8082+        d.addCallback(lambda ignored:
8083+            mw.put_signature(self.signature))
8084+        d.addCallback(lambda ignored:
8085+            mw.put_verification_key(self.verification_key))
8086+        d.addCallback(lambda ignored:
8087+            mw.finish_publishing())
8088+
8089+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
8090+        def _check_block_and_salt((block, salt)):
8091+            self.failUnlessEqual(block, self.block)
8092+            self.failUnlessEqual(salt, self.salt)
8093+
8094+        for i in xrange(6):
8095+            d.addCallback(lambda ignored, i=i:
8096+                mr.get_block_and_salt(i))
8097+            d.addCallback(_check_block_and_salt)
8098+
8099+        d.addCallback(lambda ignored:
8100+            mr.get_encprivkey())
8101+        d.addCallback(lambda encprivkey:
8102+            self.failUnlessEqual(self.encprivkey, encprivkey))
8103+
8104+        d.addCallback(lambda ignored:
8105+            mr.get_blockhashes())
8106+        d.addCallback(lambda blockhashes:
8107+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
8108+
8109+        d.addCallback(lambda ignored:
8110+            mr.get_sharehashes())
8111+        d.addCallback(lambda sharehashes:
8112+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
8113+
8114+        d.addCallback(lambda ignored:
8115+            mr.get_signature())
8116+        d.addCallback(lambda signature:
8117+            self.failUnlessEqual(signature, self.signature))
8118+
8119+        d.addCallback(lambda ignored:
8120+            mr.get_verification_key())
8121+        d.addCallback(lambda verification_key:
8122+            self.failUnlessEqual(verification_key, self.verification_key))
8123+
8124+        d.addCallback(lambda ignored:
8125+            mr.get_seqnum())
8126+        d.addCallback(lambda seqnum:
8127+            self.failUnlessEqual(seqnum, 0))
8128+
8129+        d.addCallback(lambda ignored:
8130+            mr.get_root_hash())
8131+        d.addCallback(lambda root_hash:
8132+            self.failUnlessEqual(self.root_hash, root_hash))
8133+
8134+        d.addCallback(lambda ignored:
8135+            mr.get_encoding_parameters())
8136+        def _check_encoding_parameters((k, n, segsize, datalen)):
8137+            self.failUnlessEqual(k, 3)
8138+            self.failUnlessEqual(n, 10)
8139+            self.failUnlessEqual(segsize, 6)
8140+            self.failUnlessEqual(datalen, 36)
8141+        d.addCallback(_check_encoding_parameters)
8142+
8143+        d.addCallback(lambda ignored:
8144+            mr.get_checkstring())
8145+        d.addCallback(lambda checkstring:
8146+            self.failUnlessEqual(checkstring, mw.get_checkstring()))
8147+        return d
8148+
8149+
8150+    def test_is_sdmf(self):
8151+        # The MDMFSlotReadProxy should also know how to read SDMF files,
8152+        # since it will encounter them on the grid. Callers use the
8153+        # is_sdmf method to test this.
8154+        self.write_sdmf_share_to_server("si1")
8155+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
8156+        d = mr.is_sdmf()
8157+        d.addCallback(lambda issdmf:
8158+            self.failUnless(issdmf))
8159+        return d
8160+
8161+
8162+    def test_reads_sdmf(self):
8163+        # The slot read proxy should, naturally, know how to tell us
8164+        # about data in the SDMF format
8165+        self.write_sdmf_share_to_server("si1")
8166+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
8167+        d = defer.succeed(None)
8168+        d.addCallback(lambda ignored:
8169+            mr.is_sdmf())
8170+        d.addCallback(lambda issdmf:
8171+            self.failUnless(issdmf))
8172+
8173+        # What do we need to read?
8174+        #  - The sharedata
8175+        #  - The salt
8176+        d.addCallback(lambda ignored:
8177+            mr.get_block_and_salt(0))
8178+        def _check_block_and_salt(results):
8179+            block, salt = results
8180+            # Our original file is 36 bytes long, so each share is 12
8181+            # bytes in size. The share data is composed entirely of the
8182+            # letter a. self.block contains two a's, so 6 * self.block is
8183+            # what we are looking for.
8184+            self.failUnlessEqual(block, self.block * 6)
8185+            self.failUnlessEqual(salt, self.salt)
8186+        d.addCallback(_check_block_and_salt)
8187+
8188+        #  - The blockhashes
8189+        d.addCallback(lambda ignored:
8190+            mr.get_blockhashes())
8191+        d.addCallback(lambda blockhashes:
8192+            self.failUnlessEqual(self.block_hash_tree,
8193+                                 blockhashes,
8194+                                 blockhashes))
8195+        #  - The sharehashes
8196+        d.addCallback(lambda ignored:
8197+            mr.get_sharehashes())
8198+        d.addCallback(lambda sharehashes:
8199+            self.failUnlessEqual(self.share_hash_chain,
8200+                                 sharehashes))
8201+        #  - The keys
8202+        d.addCallback(lambda ignored:
8203+            mr.get_encprivkey())
8204+        d.addCallback(lambda encprivkey:
8205+            self.failUnlessEqual(encprivkey, self.encprivkey, encprivkey))
8206+        d.addCallback(lambda ignored:
8207+            mr.get_verification_key())
8208+        d.addCallback(lambda verification_key:
8209+            self.failUnlessEqual(verification_key,
8210+                                 self.verification_key,
8211+                                 verification_key))
8212+        #  - The signature
8213+        d.addCallback(lambda ignored:
8214+            mr.get_signature())
8215+        d.addCallback(lambda signature:
8216+            self.failUnlessEqual(signature, self.signature, signature))
8217+
8218+        #  - The sequence number
8219+        d.addCallback(lambda ignored:
8220+            mr.get_seqnum())
8221+        d.addCallback(lambda seqnum:
8222+            self.failUnlessEqual(seqnum, 0, seqnum))
8223+
8224+        #  - The root hash
8225+        d.addCallback(lambda ignored:
8226+            mr.get_root_hash())
8227+        d.addCallback(lambda root_hash:
8228+            self.failUnlessEqual(root_hash, self.root_hash, root_hash))
8229+        return d
8230+
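The 12-byte share size asserted in _check_block_and_salt follows directly from the encoding parameters: an SDMF file has exactly one segment, so with k = 3 each share carries datalen / k bytes. A quick sketch of that arithmetic:

```python
# SDMF files have exactly one segment, so the whole 36-byte file is one
# segment, split into k equal pieces of share data.
datalen = 36
k = 3
share_data_size = datalen // k   # bytes of data per share

block = "aa"                     # self.block in these tests is 2 bytes
share_data = block * 6           # what get_block_and_salt(0) should return
assert len(share_data) == share_data_size
print(share_data_size)           # 12
```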
8231+
8232+    def test_only_reads_one_segment_sdmf(self):
8233+        # SDMF shares have only one segment, so it doesn't make sense to
8234+        # read more segments than that. The reader should know this and
8235+        # complain if we try to do that.
8236+        self.write_sdmf_share_to_server("si1")
8237+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
8238+        d = defer.succeed(None)
8239+        d.addCallback(lambda ignored:
8240+            mr.is_sdmf())
8241+        d.addCallback(lambda issdmf:
8242+            self.failUnless(issdmf))
8243+        d.addCallback(lambda ignored:
8244+            self.shouldFail(LayoutInvalid, "test bad segment",
8245+                            None,
8246+                            mr.get_block_and_salt, 1))
8247+        return d
8248+
8249+
8250+    def test_read_with_prefetched_mdmf_data(self):
8251+        # The MDMFSlotReadProxy will prefill certain fields if you pass
8252+        # it data that you have already fetched. This is useful for
8253+        # cases like the Servermap, which prefetches ~2kb of data while
8254+        # finding out which shares are on the remote peer so that it
8255+        # doesn't waste round trips.
8256+        mdmf_data = self.build_test_mdmf_share()
8257+        self.write_test_share_to_server("si1")
8258+        def _make_mr(ignored, length):
8259+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:length])
8260+            return mr
8261+
8262+        d = defer.succeed(None)
8263+        # This should be enough to fill in both the encoding parameters
8264+        # and the table of offsets, which will complete the version
8265+        # information tuple.
8266+        d.addCallback(_make_mr, 123)
8267+        d.addCallback(lambda mr:
8268+            mr.get_verinfo())
8269+        def _check_verinfo(verinfo):
8270+            self.failUnless(verinfo)
8271+            self.failUnlessEqual(len(verinfo), 9)
8272+            (seqnum,
8273+             root_hash,
8274+             salt_hash,
8275+             segsize,
8276+             datalen,
8277+             k,
8278+             n,
8279+             prefix,
8280+             offsets) = verinfo
8281+            self.failUnlessEqual(seqnum, 0)
8282+            self.failUnlessEqual(root_hash, self.root_hash)
8283+            self.failUnlessEqual(segsize, 6)
8284+            self.failUnlessEqual(datalen, 36)
8285+            self.failUnlessEqual(k, 3)
8286+            self.failUnlessEqual(n, 10)
8287+            expected_prefix = struct.pack(MDMFSIGNABLEHEADER,
8288+                                          1,
8289+                                          seqnum,
8290+                                          root_hash,
8291+                                          k,
8292+                                          n,
8293+                                          segsize,
8294+                                          datalen)
8295+            self.failUnlessEqual(expected_prefix, prefix)
8296+            self.failUnlessEqual(self.rref.read_count, 0)
8297+        d.addCallback(_check_verinfo)
8298+        # This is not enough data to read a block and its salt, so the
8299+        # wrapper should fetch the rest from the remote server.
8300+        d.addCallback(_make_mr, 123)
8301+        d.addCallback(lambda mr:
8302+            mr.get_block_and_salt(0))
8303+        def _check_block_and_salt((block, salt)):
8304+            self.failUnlessEqual(block, self.block)
8305+            self.failUnlessEqual(salt, self.salt)
8306+            self.failUnlessEqual(self.rref.read_count, 1)
8307+        # This should be enough data to read one block.
8308+        d.addCallback(_make_mr, 123 + PRIVATE_KEY_SIZE + SIGNATURE_SIZE + VERIFICATION_KEY_SIZE + SHARE_HASH_CHAIN_SIZE + 140)
8309+        d.addCallback(lambda mr:
8310+            mr.get_block_and_salt(0))
8311+        d.addCallback(_check_block_and_salt)
8312+        return d
8313+
8314+
8315+    def test_read_with_prefetched_sdmf_data(self):
8316+        sdmf_data = self.build_test_sdmf_share()
8317+        self.write_sdmf_share_to_server("si1")
8318+        def _make_mr(ignored, length):
8319+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:length])
8320+            return mr
8321+
8322+        d = defer.succeed(None)
8323+        # This should be enough to get us the encoding parameters,
8324+        # offset table, and everything else we need to build a verinfo
8325+        # string.
8326+        d.addCallback(_make_mr, 123)
8327+        d.addCallback(lambda mr:
8328+            mr.get_verinfo())
8329+        def _check_verinfo(verinfo):
8330+            self.failUnless(verinfo)
8331+            self.failUnlessEqual(len(verinfo), 9)
8332+            (seqnum,
8333+             root_hash,
8334+             salt,
8335+             segsize,
8336+             datalen,
8337+             k,
8338+             n,
8339+             prefix,
8340+             offsets) = verinfo
8341+            self.failUnlessEqual(seqnum, 0)
8342+            self.failUnlessEqual(root_hash, self.root_hash)
8343+            self.failUnlessEqual(salt, self.salt)
8344+            self.failUnlessEqual(segsize, 36)
8345+            self.failUnlessEqual(datalen, 36)
8346+            self.failUnlessEqual(k, 3)
8347+            self.failUnlessEqual(n, 10)
8348+            expected_prefix = struct.pack(SIGNED_PREFIX,
8349+                                          0,
8350+                                          seqnum,
8351+                                          root_hash,
8352+                                          salt,
8353+                                          k,
8354+                                          n,
8355+                                          segsize,
8356+                                          datalen)
8357+            self.failUnlessEqual(expected_prefix, prefix)
8358+            self.failUnlessEqual(self.rref.read_count, 0)
8359+        d.addCallback(_check_verinfo)
8360+        # This shouldn't be enough to read any share data.
8361+        d.addCallback(_make_mr, 123)
8362+        d.addCallback(lambda mr:
8363+            mr.get_block_and_salt(0))
8364+        def _check_block_and_salt((block, salt)):
8365+            self.failUnlessEqual(block, self.block * 6)
8366+            self.failUnlessEqual(salt, self.salt)
8367+            # TODO: Fix the read routine so that it reads only the data
8368+            #       that it has cached if it can't read all of it.
8369+            self.failUnlessEqual(self.rref.read_count, 2)
8370+
8371+        # This should be enough to read share data.
8372+        d.addCallback(_make_mr, self.offsets['share_data'])
8373+        d.addCallback(lambda mr:
8374+            mr.get_block_and_salt(0))
8375+        d.addCallback(_check_block_and_salt)
8376+        return d
8377+
8378+
8379+    def test_read_with_empty_mdmf_file(self):
8380+        # Some tests upload an empty file to exercise behavior that is
8381+        # unrelated to the file's actual contents. The reader should
8382+        # handle these cases gracefully.
8383+        self.write_test_share_to_server("si1", empty=True)
8384+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
8385+        # We should be able to get the encoding parameters, and they
8386+        # should be correct.
8387+        d = defer.succeed(None)
8388+        d.addCallback(lambda ignored:
8389+            mr.get_encoding_parameters())
8390+        def _check_encoding_parameters(params):
8391+            self.failUnlessEqual(len(params), 4)
8392+            k, n, segsize, datalen = params
8393+            self.failUnlessEqual(k, 3)
8394+            self.failUnlessEqual(n, 10)
8395+            self.failUnlessEqual(segsize, 0)
8396+            self.failUnlessEqual(datalen, 0)
8397+        d.addCallback(_check_encoding_parameters)
8398+
8399+        # We should not be able to fetch a block, since there are no
8400+        # blocks to fetch
8401+        d.addCallback(lambda ignored:
8402+            self.shouldFail(LayoutInvalid, "get block on empty file",
8403+                            None,
8404+                            mr.get_block_and_salt, 0))
8405+        return d
8406+
8407+
8408+    def test_read_with_empty_sdmf_file(self):
8409+        self.write_sdmf_share_to_server("si1", empty=True)
8410+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
8411+        # We should be able to get the encoding parameters, and they
8412+        # should be correct
8413+        d = defer.succeed(None)
8414+        d.addCallback(lambda ignored:
8415+            mr.get_encoding_parameters())
8416+        def _check_encoding_parameters(params):
8417+            self.failUnlessEqual(len(params), 4)
8418+            k, n, segsize, datalen = params
8419+            self.failUnlessEqual(k, 3)
8420+            self.failUnlessEqual(n, 10)
8421+            self.failUnlessEqual(segsize, 0)
8422+            self.failUnlessEqual(datalen, 0)
8423+        d.addCallback(_check_encoding_parameters)
8424+
8425+        # It does not make sense to get a block in this format, so we
8426+        # should not be able to.
8427+        d.addCallback(lambda ignored:
8428+            self.shouldFail(LayoutInvalid, "get block on an empty file",
8429+                            None,
8430+                            mr.get_block_and_salt, 0))
8431+        return d
8432+
8433+
8434+    def test_verinfo_with_sdmf_file(self):
8435+        self.write_sdmf_share_to_server("si1")
8436+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
8437+        # We should be able to get the version information.
8438+        d = defer.succeed(None)
8439+        d.addCallback(lambda ignored:
8440+            mr.get_verinfo())
8441+        def _check_verinfo(verinfo):
8442+            self.failUnless(verinfo)
8443+            self.failUnlessEqual(len(verinfo), 9)
8444+            (seqnum,
8445+             root_hash,
8446+             salt,
8447+             segsize,
8448+             datalen,
8449+             k,
8450+             n,
8451+             prefix,
8452+             offsets) = verinfo
8453+            self.failUnlessEqual(seqnum, 0)
8454+            self.failUnlessEqual(root_hash, self.root_hash)
8455+            self.failUnlessEqual(salt, self.salt)
8456+            self.failUnlessEqual(segsize, 36)
8457+            self.failUnlessEqual(datalen, 36)
8458+            self.failUnlessEqual(k, 3)
8459+            self.failUnlessEqual(n, 10)
8460+            expected_prefix = struct.pack(">BQ32s16s BBQQ",
8461+                                          0,
8462+                                          seqnum,
8463+                                          root_hash,
8464+                                          salt,
8465+                                          k,
8466+                                          n,
8467+                                          segsize,
8468+                                          datalen)
8469+            self.failUnlessEqual(prefix, expected_prefix)
8470+            self.failUnlessEqual(offsets, self.offsets)
8471+        d.addCallback(_check_verinfo)
8472+        return d
8473+
8474+
8475+    def test_verinfo_with_mdmf_file(self):
8476+        self.write_test_share_to_server("si1")
8477+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
8478+        d = defer.succeed(None)
8479+        d.addCallback(lambda ignored:
8480+            mr.get_verinfo())
8481+        def _check_verinfo(verinfo):
8482+            self.failUnless(verinfo)
8483+            self.failUnlessEqual(len(verinfo), 9)
8484+            (seqnum,
8485+             root_hash,
8486+             IV,
8487+             segsize,
8488+             datalen,
8489+             k,
8490+             n,
8491+             prefix,
8492+             offsets) = verinfo
8493+            self.failUnlessEqual(seqnum, 0)
8494+            self.failUnlessEqual(root_hash, self.root_hash)
8495+            self.failIf(IV)
8496+            self.failUnlessEqual(segsize, 6)
8497+            self.failUnlessEqual(datalen, 36)
8498+            self.failUnlessEqual(k, 3)
8499+            self.failUnlessEqual(n, 10)
8500+            expected_prefix = struct.pack(">BQ32s BBQQ",
8501+                                          1,
8502+                                          seqnum,
8503+                                          root_hash,
8504+                                          k,
8505+                                          n,
8506+                                          segsize,
8507+                                          datalen)
8508+            self.failUnlessEqual(prefix, expected_prefix)
8509+            self.failUnlessEqual(offsets, self.offsets)
8510+        d.addCallback(_check_verinfo)
8511+        return d
8512+
8513+
8514+    def test_reader_queue(self):
8515+        self.write_test_share_to_server('si1')
8516+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
8517+        d1 = mr.get_block_and_salt(0, queue=True)
8518+        d2 = mr.get_blockhashes(queue=True)
8519+        d3 = mr.get_sharehashes(queue=True)
8520+        d4 = mr.get_signature(queue=True)
8521+        d5 = mr.get_verification_key(queue=True)
8522+        dl = defer.DeferredList([d1, d2, d3, d4, d5])
8523+        mr.flush()
8524+        def _print(results):
8525+            self.failUnlessEqual(len(results), 5)
8526+            # We have one read for version information and offsets, and
8527+            # one for everything else.
8528+            self.failUnlessEqual(self.rref.read_count, 2)
8529+            block, salt = results[0][1] # results[0][0] is a boolean
8530+                                           # that says whether or not
8531+                                           # the operation worked.
8532+            self.failUnlessEqual(self.block, block)
8533+            self.failUnlessEqual(self.salt, salt)
8534+
8535+            blockhashes = results[1][1]
8536+            self.failUnlessEqual(self.block_hash_tree, blockhashes)
8537+
8538+            sharehashes = results[2][1]
8539+            self.failUnlessEqual(self.share_hash_chain, sharehashes)
8540+
8541+            signature = results[3][1]
8542+            self.failUnlessEqual(self.signature, signature)
8543+
8544+            verification_key = results[4][1]
8545+            self.failUnlessEqual(self.verification_key, verification_key)
8546+        dl.addCallback(_print)
8547+        return dl
8548+
8549+
8550+    def test_sdmf_writer(self):
8551+        # Go through the motions of writing an SDMF share to the storage
8552+        # server. Then read the storage server to see that the share got
8553+        # written in the way that we think it should have.
8554+
8555+        # We do this first so that the necessary instance variables get
8556+        # set the way we want them for the tests below.
8557+        data = self.build_test_sdmf_share()
8558+        sdmfr = SDMFSlotWriteProxy(0,
8559+                                   self.rref,
8560+                                   "si1",
8561+                                   self.secrets,
8562+                                   0, 3, 10, 36, 36)
8563+        # Put the block and salt.
8564+        sdmfr.put_block(self.blockdata, 0, self.salt)
8565+
8566+        # Put the encprivkey
8567+        sdmfr.put_encprivkey(self.encprivkey)
8568+
8569+        # Put the block and share hash chains
8570+        sdmfr.put_blockhashes(self.block_hash_tree)
8571+        sdmfr.put_sharehashes(self.share_hash_chain)
8572+        sdmfr.put_root_hash(self.root_hash)
8573+
8574+        # Put the signature
8575+        sdmfr.put_signature(self.signature)
8576+
8577+        # Put the verification key
8578+        sdmfr.put_verification_key(self.verification_key)
8579+
8580+        # Now check to make sure that nothing has been written yet.
8581+        self.failUnlessEqual(self.rref.write_count, 0)
8582+
8583+        # Now finish publishing
8584+        d = sdmfr.finish_publishing()
8585+        def _then(ignored):
8586+            self.failUnlessEqual(self.rref.write_count, 1)
8587+            read = self.ss.remote_slot_readv
8588+            self.failUnlessEqual(read("si1", [0], [(0, len(data))]),
8589+                                 {0: [data]})
8590+        d.addCallback(_then)
8591+        return d
8592+
8593+
8594+    def test_sdmf_writer_preexisting_share(self):
8595+        data = self.build_test_sdmf_share()
8596+        self.write_sdmf_share_to_server("si1")
8597+
8598+        # Now there is a share on the storage server. To successfully
8599+        # write, we need to set the checkstring correctly. When we
8600+        # don't, no write should occur.
8601+        sdmfw = SDMFSlotWriteProxy(0,
8602+                                   self.rref,
8603+                                   "si1",
8604+                                   self.secrets,
8605+                                   1, 3, 10, 36, 36)
8606+        sdmfw.put_block(self.blockdata, 0, self.salt)
8607+
8608+        # Put the encprivkey
8609+        sdmfw.put_encprivkey(self.encprivkey)
8610+
8611+        # Put the block and share hash chains
8612+        sdmfw.put_blockhashes(self.block_hash_tree)
8613+        sdmfw.put_sharehashes(self.share_hash_chain)
8614+
8615+        # Put the root hash
8616+        sdmfw.put_root_hash(self.root_hash)
8617+
8618+        # Put the signature
8619+        sdmfw.put_signature(self.signature)
8620+
8621+        # Put the verification key
8622+        sdmfw.put_verification_key(self.verification_key)
8623+
8624+        # We shouldn't have a checkstring yet
8625+        self.failUnlessEqual(sdmfw.get_checkstring(), "")
8626+
8627+        d = sdmfw.finish_publishing()
8628+        def _then(results):
8629+            self.failIf(results[0])
8630+            # this is the correct checkstring
8631+            self._expected_checkstring = results[1][0][0]
8632+            return self._expected_checkstring
8633+
8634+        d.addCallback(_then)
8635+        d.addCallback(sdmfw.set_checkstring)
8636+        d.addCallback(lambda ignored:
8637+            sdmfw.get_checkstring())
8638+        d.addCallback(lambda checkstring:
8639+            self.failUnlessEqual(checkstring, self._expected_checkstring))
8640+        d.addCallback(lambda ignored:
8641+            sdmfw.finish_publishing())
8642+        def _then_again(results):
8643+            self.failUnless(results[0])
8644+            read = self.ss.remote_slot_readv
8645+            self.failUnlessEqual(read("si1", [0], [(1, 8)]),
8646+                                 {0: [struct.pack(">Q", 1)]})
8647+            self.failUnlessEqual(read("si1", [0], [(9, len(data) - 9)]),
8648+                                 {0: [data[9:]]})
8649+        d.addCallback(_then_again)
8650+        return d
8651+
8652+
8653 class Stats(unittest.TestCase):
8654 
8655     def setUp(self):
8656}
8657[frontends/sftpd: Resolve incompatibilities between SFTP frontend and MDMF changes
8658Kevan Carstensen <kevan@isnotajoke.com>**20110802021207
8659 Ignore-this: 5e0f6e961048f71d4eed6d30210ffd2e
8660] {
8661hunk ./src/allmydata/frontends/sftpd.py 33
8662 from allmydata.interfaces import IFileNode, IDirectoryNode, ExistingChildError, \
8663      NoSuchChildError, ChildOfWrongTypeError
8664 from allmydata.mutable.common import NotWriteableError
8665+from allmydata.mutable.publish import MutableFileHandle
8666 from allmydata.immutable.upload import FileHandle
8667 from allmydata.dirnode import update_metadata
8668 from allmydata.util.fileutil import EncryptedTemporaryFile
8669hunk ./src/allmydata/frontends/sftpd.py 667
8670         else:
8671             assert IFileNode.providedBy(filenode), filenode
8672 
8673-            if filenode.is_mutable():
8674-                self.async.addCallback(lambda ign: filenode.download_best_version())
8675-                def _downloaded(data):
8676-                    self.consumer = OverwriteableFileConsumer(len(data), tempfile_maker)
8677-                    self.consumer.write(data)
8678-                    self.consumer.finish()
8679-                    return None
8680-                self.async.addCallback(_downloaded)
8681-            else:
8682-                download_size = filenode.get_size()
8683-                assert download_size is not None, "download_size is None"
8684+            self.async.addCallback(lambda ignored: filenode.get_best_readable_version())
8685+
8686+            def _read(version):
8687+                if noisy: self.log("_read", level=NOISY)
8688+                download_size = version.get_size()
8689+                assert download_size is not None
8690+
8691                 self.consumer = OverwriteableFileConsumer(download_size, tempfile_maker)
8692hunk ./src/allmydata/frontends/sftpd.py 675
8693-                def _read(ign):
8694-                    if noisy: self.log("_read immutable", level=NOISY)
8695-                    filenode.read(self.consumer, 0, None)
8696-                self.async.addCallback(_read)
8697+
8698+                version.read(self.consumer, 0, None)
8699+            self.async.addCallback(_read)
8700 
8701         eventually(self.async.callback, None)
8702 
8703hunk ./src/allmydata/frontends/sftpd.py 821
8704                     assert parent and childname, (parent, childname, self.metadata)
8705                     d2.addCallback(lambda ign: parent.set_metadata_for(childname, self.metadata))
8706 
8707-                d2.addCallback(lambda ign: self.consumer.get_current_size())
8708-                d2.addCallback(lambda size: self.consumer.read(0, size))
8709-                d2.addCallback(lambda new_contents: self.filenode.overwrite(new_contents))
8710+                d2.addCallback(lambda ign: self.filenode.overwrite(MutableFileHandle(self.consumer.get_file())))
8711             else:
8712                 def _add_file(ign):
8713                     self.log("_add_file childname=%r" % (childname,), level=OPERATIONAL)
8714hunk ./src/allmydata/test/test_sftp.py 32
8715 
8716 from allmydata.util.consumer import download_to_data
8717 from allmydata.immutable import upload
8718+from allmydata.mutable import publish
8719 from allmydata.test.no_network import GridTestMixin
8720 from allmydata.test.common import ShouldFailMixin
8721 from allmydata.test.common_util import ReallyEqualMixin
8722hunk ./src/allmydata/test/test_sftp.py 80
8723         return d
8724 
8725     def _set_up_tree(self):
8726-        d = self.client.create_mutable_file("mutable file contents")
8727+        u = publish.MutableData("mutable file contents")
8728+        d = self.client.create_mutable_file(u)
8729         d.addCallback(lambda node: self.root.set_node(u"mutable", node))
8730         def _created_mutable(n):
8731             self.mutable = n
8732}
8733[cli: teach CLI how to create MDMF mutable files
8734Kevan Carstensen <kevan@isnotajoke.com>**20110802021613
8735 Ignore-this: 18d0ff98e75be231eed3c53319e76936
8736 
8737 Specifically, 'tahoe mkdir' and 'tahoe put' now take a --mutable-type
8738 argument.
8739] {
8740hunk ./src/allmydata/scripts/cli.py 53
8741 
8742 
8743 class MakeDirectoryOptions(VDriveOptions):
8744+    optParameters = [
8745+        ("mutable-type", None, False, "Create a mutable file in the given format. Valid formats are 'sdmf' for SDMF and 'mdmf' for MDMF"),
8746+        ]
8747+
8748     def parseArgs(self, where=""):
8749         self.where = argv_to_unicode(where)
8750 
8751hunk ./src/allmydata/scripts/cli.py 60
8752+        if self['mutable-type'] and self['mutable-type'] not in ("sdmf", "mdmf"):
8753+            raise usage.UsageError("%s is an invalid format" % self['mutable-type'])
8754+
8755     def getSynopsis(self):
8756         return "Usage:  %s mkdir [options] [REMOTE_DIR]" % (self.command_name,)
8757 
8758hunk ./src/allmydata/scripts/cli.py 174
8759     optFlags = [
8760         ("mutable", "m", "Create a mutable file instead of an immutable one."),
8761         ]
8762+    optParameters = [
8763+        ("mutable-type", None, False, "Create a mutable file in the given format. Valid formats are 'sdmf' for SDMF and 'mdmf' for MDMF"),
8764+        ]
8765 
8766     def parseArgs(self, arg1=None, arg2=None):
8767         # see Examples below
8768hunk ./src/allmydata/scripts/cli.py 193
8769         if self.from_file == u"-":
8770             self.from_file = None
8771 
8772+        if self['mutable-type'] and self['mutable-type'] not in ("sdmf", "mdmf"):
8773+            raise usage.UsageError("%s is an invalid format" % self['mutable-type'])
8774+
8775+
8776     def getSynopsis(self):
8777         return "Usage:  %s put [options] LOCAL_FILE REMOTE_FILE" % (self.command_name,)
8778 
8779hunk ./src/allmydata/scripts/tahoe_mkdir.py 25
8780     if not where or not path:
8781         # create a new unlinked directory
8782         url = nodeurl + "uri?t=mkdir"
8783+        if options["mutable-type"]:
8784+            url += "&mutable-type=%s" % urllib.quote(options['mutable-type'])
8785         resp = do_http("POST", url)
8786         rc = check_http_error(resp, stderr)
8787         if rc:
8788hunk ./src/allmydata/scripts/tahoe_mkdir.py 42
8789     # path must be "/".join([s.encode("utf-8") for s in segments])
8790     url = nodeurl + "uri/%s/%s?t=mkdir" % (urllib.quote(rootcap),
8791                                            urllib.quote(path))
8792+    if options['mutable-type']:
8793+        url += "&mutable-type=%s" % urllib.quote(options['mutable-type'])
8794+
8795     resp = do_http("POST", url)
8796     check_http_error(resp, stderr)
8797     new_uri = resp.read().strip()
8798hunk ./src/allmydata/scripts/tahoe_put.py 21
8799     from_file = options.from_file
8800     to_file = options.to_file
8801     mutable = options['mutable']
8802+    mutable_type = False
8803+
8804+    if mutable:
8805+        mutable_type = options['mutable-type']
8806     if options['quiet']:
8807         verbosity = 0
8808     else:
8809hunk ./src/allmydata/scripts/tahoe_put.py 49
8810         #  DIRCAP:./subdir/foo : DIRCAP/subdir/foo
8811         #  MUTABLE-FILE-WRITECAP : filecap
8812 
8813-        # FIXME: this shouldn't rely on a particular prefix.
8814-        if to_file.startswith("URI:SSK:"):
8815+        # FIXME: don't hardcode cap format.
8816+        if to_file.startswith("URI:MDMF:") or to_file.startswith("URI:SSK:"):
8817             url = nodeurl + "uri/%s" % urllib.quote(to_file)
8818         else:
8819             try:
8820hunk ./src/allmydata/scripts/tahoe_put.py 71
8821         url = nodeurl + "uri"
8822     if mutable:
8823         url += "?mutable=true"
8824+    if mutable_type:
8825+        assert mutable
8826+        url += "&mutable-type=%s" % mutable_type
8827+
8828     if from_file:
8829         infileobj = open(os.path.expanduser(from_file), "rb")
8830     else:
8831hunk ./src/allmydata/test/test_cli.py 31
8832 from allmydata.test.common_util import StallMixin, ReallyEqualMixin
8833 from allmydata.test.no_network import GridTestMixin
8834 from twisted.internet import threads # CLI tests use deferToThread
8835+from twisted.internet import defer # List uses a DeferredList in one place.
8836 from twisted.python import usage
8837 
8838 from allmydata.util.assertutil import precondition
8839hunk ./src/allmydata/test/test_cli.py 1012
8840         d.addCallback(lambda (rc,out,err): self.failUnlessReallyEqual(out, DATA2))
8841         return d
8842 
8843+    def _check_mdmf_json(self, (rc, json, err)):
8844+         self.failUnlessEqual(rc, 0)
8845+         self.failUnlessEqual(err, "")
8846+         self.failUnlessIn('"mutable-type": "mdmf"', json)
8847+         # We also want a valid MDMF cap to be in the json.
8848+         self.failUnlessIn("URI:MDMF", json)
8849+         self.failUnlessIn("URI:MDMF-RO", json)
8850+         self.failUnlessIn("URI:MDMF-Verifier", json)
8851+
8852+    def _check_sdmf_json(self, (rc, json, err)):
8853+        self.failUnlessEqual(rc, 0)
8854+        self.failUnlessEqual(err, "")
8855+        self.failUnlessIn('"mutable-type": "sdmf"', json)
8856+        # We also want to see the appropriate SDMF caps.
8857+        self.failUnlessIn("URI:SSK", json)
8858+        self.failUnlessIn("URI:SSK-RO", json)
8859+        self.failUnlessIn("URI:SSK-Verifier", json)
8860+
8861+    def test_mutable_type(self):
8862+        self.basedir = "cli/Put/mutable_type"
8863+        self.set_up_grid()
8864+        data = "data" * 100000
8865+        fn1 = os.path.join(self.basedir, "data")
8866+        fileutil.write(fn1, data)
8867+        d = self.do_cli("create-alias", "tahoe")
8868+        d.addCallback(lambda ignored:
8869+            self.do_cli("put", "--mutable", "--mutable-type=mdmf",
8870+                        fn1, "tahoe:uploaded.txt"))
8871+        d.addCallback(lambda ignored:
8872+            self.do_cli("ls", "--json", "tahoe:uploaded.txt"))
8873+        d.addCallback(self._check_mdmf_json)
8874+        d.addCallback(lambda ignored:
8875+            self.do_cli("put", "--mutable", "--mutable-type=sdmf",
8876+                        fn1, "tahoe:uploaded2.txt"))
8877+        d.addCallback(lambda ignored:
8878+            self.do_cli("ls", "--json", "tahoe:uploaded2.txt"))
8879+        d.addCallback(self._check_sdmf_json)
8880+        return d
8881+
8882+    def test_mutable_type_unlinked(self):
8883+        self.basedir = "cli/Put/mutable_type_unlinked"
8884+        self.set_up_grid()
8885+        data = "data" * 100000
8886+        fn1 = os.path.join(self.basedir, "data")
8887+        fileutil.write(fn1, data)
8888+        d = self.do_cli("put", "--mutable", "--mutable-type=mdmf", fn1)
8889+        d.addCallback(lambda (rc, cap, err):
8890+            self.do_cli("ls", "--json", cap))
8891+        d.addCallback(self._check_mdmf_json)
8892+        d.addCallback(lambda ignored:
8893+            self.do_cli("put", "--mutable", "--mutable-type=sdmf", fn1))
8894+        d.addCallback(lambda (rc, cap, err):
8895+            self.do_cli("ls", "--json", cap))
8896+        d.addCallback(self._check_sdmf_json)
8897+        return d
8898+
8899+    def test_put_to_mdmf_cap(self):
8900+        self.basedir = "cli/Put/put_to_mdmf_cap"
8901+        self.set_up_grid()
8902+        data = "data" * 100000
8903+        fn1 = os.path.join(self.basedir, "data")
8904+        fileutil.write(fn1, data)
8905+        d = self.do_cli("put", "--mutable", "--mutable-type=mdmf", fn1)
8906+        def _got_cap((rc, out, err)):
8907+            self.failUnlessEqual(rc, 0)
8908+            self.cap = out
8909+        d.addCallback(_got_cap)
8910+        # Now try to write something to the cap using put.
8911+        data2 = "data2" * 100000
8912+        fn2 = os.path.join(self.basedir, "data2")
8913+        fileutil.write(fn2, data2)
8914+        d.addCallback(lambda ignored:
8915+            self.do_cli("put", fn2, self.cap))
8916+        def _got_put((rc, out, err)):
8917+            self.failUnlessEqual(rc, 0)
8918+            self.failUnlessIn(self.cap, out)
8919+        d.addCallback(_got_put)
8920+        # Now get the cap. We should see the data we just put there.
8921+        d.addCallback(lambda ignored:
8922+            self.do_cli("get", self.cap))
8923+        def _got_data((rc, out, err)):
8924+            self.failUnlessEqual(rc, 0)
8925+            self.failUnlessEqual(out, data2)
8926+        d.addCallback(_got_data)
8927+        # Now strip the extension information off of the cap and try
8928+        # to put something to it.
8929+        def _make_bare_cap(ignored):
8930+            cap = self.cap.split(":")
8931+            cap = ":".join(cap[:len(cap) - 2])
8932+            self.cap = cap
8933+        d.addCallback(_make_bare_cap)
8934+        data3 = "data3" * 100000
8935+        fn3 = os.path.join(self.basedir, "data3")
8936+        fileutil.write(fn3, data3)
8937+        d.addCallback(lambda ignored:
8938+            self.do_cli("put", fn3, self.cap))
8939+        d.addCallback(lambda ignored:
8940+            self.do_cli("get", self.cap))
8941+        def _got_data3((rc, out, err)):
8942+            self.failUnlessEqual(rc, 0)
8943+            self.failUnlessEqual(out, data3)
8944+        d.addCallback(_got_data3)
8945+        return d
8946+
8947+    def test_put_to_sdmf_cap(self):
8948+        self.basedir = "cli/Put/put_to_sdmf_cap"
8949+        self.set_up_grid()
8950+        data = "data" * 100000
8951+        fn1 = os.path.join(self.basedir, "data")
8952+        fileutil.write(fn1, data)
8953+        d = self.do_cli("put", "--mutable", "--mutable-type=sdmf", fn1)
8954+        def _got_cap((rc, out, err)):
8955+            self.failUnlessEqual(rc, 0)
8956+            self.cap = out
8957+        d.addCallback(_got_cap)
8958+        # Now try to write something to the cap using put.
8959+        data2 = "data2" * 100000
8960+        fn2 = os.path.join(self.basedir, "data2")
8961+        fileutil.write(fn2, data2)
8962+        d.addCallback(lambda ignored:
8963+            self.do_cli("put", fn2, self.cap))
8964+        def _got_put((rc, out, err)):
8965+            self.failUnlessEqual(rc, 0)
8966+            self.failUnlessIn(self.cap, out)
8967+        d.addCallback(_got_put)
8968+        # Now get the cap. We should see the data we just put there.
8969+        d.addCallback(lambda ignored:
8970+            self.do_cli("get", self.cap))
8971+        def _got_data((rc, out, err)):
8972+            self.failUnlessEqual(rc, 0)
8973+            self.failUnlessEqual(out, data2)
8974+        d.addCallback(_got_data)
8975+        return d
8976+
8977+    def test_mutable_type_invalid_format(self):
8978+        o = cli.PutOptions()
8979+        self.failUnlessRaises(usage.UsageError,
8980+                              o.parseOptions,
8981+                              ["--mutable", "--mutable-type=ldmf"])
8982+
8983     def test_put_with_nonexistent_alias(self):
8984         # when invoked with an alias that doesn't exist, 'tahoe put'
8985         # should output a useful error message, not a stack trace
8986hunk ./src/allmydata/test/test_cli.py 3052
8987 
8988         return d
8989 
8990+    def test_mkdir_mutable_type(self):
8991+        self.basedir = os.path.dirname(self.mktemp())
8992+        self.set_up_grid()
8993+        d = self.do_cli("create-alias", "tahoe")
8994+        d.addCallback(lambda ignored:
8995+            self.do_cli("mkdir", "--mutable-type=sdmf", "tahoe:foo"))
8996+        def _check((rc, out, err), st):
8997+            self.failUnlessReallyEqual(rc, 0)
8998+            self.failUnlessReallyEqual(err, "")
8999+            self.failUnlessIn(st, out)
9000+            return out
9001+        def _stash_dircap(cap):
9002+            self._dircap = cap
9003+            u = uri.from_string(cap)
9004+            fn_uri = u.get_filenode_cap()
9005+            self._filecap = fn_uri.to_string()
9006+        d.addCallback(_check, "URI:DIR2")
9007+        d.addCallback(_stash_dircap)
9008+        d.addCallback(lambda ignored:
9009+            self.do_cli("ls", "--json", "tahoe:foo"))
9010+        d.addCallback(_check, "URI:DIR2")
9011+        d.addCallback(lambda ignored:
9012+            self.do_cli("ls", "--json", self._filecap))
9013+        d.addCallback(_check, '"mutable-type": "sdmf"')
9014+        d.addCallback(lambda ignored:
9015+            self.do_cli("mkdir", "--mutable-type=mdmf", "tahoe:bar"))
9016+        d.addCallback(_check, "URI:DIR2-MDMF")
9017+        d.addCallback(_stash_dircap)
9018+        d.addCallback(lambda ignored:
9019+            self.do_cli("ls", "--json", "tahoe:bar"))
9020+        d.addCallback(_check, "URI:DIR2-MDMF")
9021+        d.addCallback(lambda ignored:
9022+            self.do_cli("ls", "--json", self._filecap))
9023+        d.addCallback(_check, '"mutable-type": "mdmf"')
9024+        return d
9025+
9026+    def test_mkdir_mutable_type_unlinked(self):
9027+        self.basedir = os.path.dirname(self.mktemp())
9028+        self.set_up_grid()
9029+        d = self.do_cli("mkdir", "--mutable-type=sdmf")
9030+        def _check((rc, out, err), st):
9031+            self.failUnlessReallyEqual(rc, 0)
9032+            self.failUnlessReallyEqual(err, "")
9033+            self.failUnlessIn(st, out)
9034+            return out
9035+        d.addCallback(_check, "URI:DIR2")
9036+        def _stash_dircap(cap):
9037+            self._dircap = cap
9038+            # Now we're going to feed the cap into uri.from_string...
9039+            u = uri.from_string(cap)
9040+            # ...grab the underlying filenode uri.
9041+            fn_uri = u.get_filenode_cap()
9042+            # ...and stash that.
9043+            self._filecap = fn_uri.to_string()
9044+        d.addCallback(_stash_dircap)
9045+        d.addCallback(lambda res: self.do_cli("ls", "--json",
9046+                                              self._filecap))
9047+        d.addCallback(_check, '"mutable-type": "sdmf"')
9048+        d.addCallback(lambda res: self.do_cli("mkdir", "--mutable-type=mdmf"))
9049+        d.addCallback(_check, "URI:DIR2-MDMF")
9050+        d.addCallback(_stash_dircap)
9051+        d.addCallback(lambda res: self.do_cli("ls", "--json",
9052+                                              self._filecap))
9053+        d.addCallback(_check, '"mutable-type": "mdmf"')
9054+        return d
9055+
9056+    def test_mkdir_bad_mutable_type(self):
9057+        o = cli.MakeDirectoryOptions()
9058+        self.failUnlessRaises(usage.UsageError,
9059+                              o.parseOptions,
9060+                              ["--mutable", "--mutable-type=ldmf"])
9061+
9062     def test_mkdir_unicode(self):
9063         self.basedir = os.path.dirname(self.mktemp())
9064         self.set_up_grid()
9065}
9066[docs: amend configuration, webapi documentation to talk about MDMF
9067Kevan Carstensen <kevan@isnotajoke.com>**20110802022056
9068 Ignore-this: 4cab9b7e4ab79cc1efdabe2d457f27a6
9069] {
9070hunk ./docs/configuration.rst 328
9071     (Mutable files use a different share placement algorithm that does not
9072     currently consider this parameter.)
9073 
9074+``mutable.format = sdmf or mdmf``
9075+
9076+    This value tells Tahoe-LAFS what the default mutable file format should
9077+    be. If ``mutable.format=sdmf``, then newly created mutable files will be
9078+    in the old SDMF format. This is desirable for clients that operate on
9079+    grids where some peers run older versions of Tahoe-LAFS, as these older
9080+    versions cannot read the new MDMF mutable file format. If
9081+    ``mutable.format`` is ``mdmf``, then newly created mutable files will use
9082+    the new MDMF format, which supports efficient in-place modification and
9083+    streaming downloads. You can override this value using a special
9084+    mutable-type parameter in the webapi. If you do not specify a value here,
9085+    Tahoe-LAFS will use SDMF for all newly-created mutable files.
9086+
9087+    Note that this parameter only applies to mutable files. Mutable
9088+    directories, which are stored as mutable files, are not controlled by
9089+    this parameter and will always use SDMF. We may revisit this decision
9090+    in future versions of Tahoe-LAFS.
9091 
9092 Frontend Configuration
9093 ======================
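The ``mutable.format`` option documented in the hunk above is set in ``tahoe.cfg``; a minimal sketch of what that looks like, assuming the option lives in the ``[client]`` section alongside the other share-placement parameters:

```ini
[client]
# Use the new MDMF format for newly-created mutable files.
# Omit this line (or set it to sdmf) to stay readable by older peers.
mutable.format = mdmf
```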
9094hunk ./docs/frontends/webapi.rst 368
9095  To use the /uri/$FILECAP form, $FILECAP must be a write-cap for a mutable file.
9096 
9097  In the /uri/$DIRCAP/[SUBDIRS../]FILENAME form, if the target file is a
9098- writeable mutable file, that file's contents will be overwritten in-place. If
9099- it is a read-cap for a mutable file, an error will occur. If it is an
9100- immutable file, the old file will be discarded, and a new one will be put in
9101- its place.
9102+ writeable mutable file, that file's contents will be overwritten
9103+ in-place. If it is a read-cap for a mutable file, an error will occur.
9104+ If it is an immutable file, the old file will be discarded, and a new
9105+ one will be put in its place. If the target file is a writeable mutable
9106+ file, you may also specify an "offset" parameter -- a byte offset that
9107+ determines where in the mutable file the data from the HTTP request
9108+ body is placed. This operation is relatively efficient for MDMF mutable
9109+ files, and is relatively inefficient (but still supported) for SDMF
9110+ mutable files. If no offset parameter is specified, then the entire
9111+ file is replaced with the data from the HTTP request body. For an
9112+ immutable file, the "offset" parameter is not valid.
9113 
9114  When creating a new file, if "mutable=true" is in the query arguments, the
9115  operation will create a mutable file instead of an immutable one.
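The in-place write described above is driven entirely by the "offset" query parameter on the PUT URL. A rough sketch of assembling such a URL; the gateway address and the helper name are illustrative, not part of the webapi:

```python
def put_url(gateway, writecap, offset=None):
    """Build a webapi PUT URL; a non-None offset requests an
    in-place write at that byte position (efficient for MDMF)."""
    url = "%s/uri/%s" % (gateway, writecap)
    if offset is not None:
        url += "?offset=%d" % offset
    return url

# Replace the whole file:
print(put_url("http://127.0.0.1:3456", "URI:MDMF:example"))
# Overwrite starting at byte 1024:
print(put_url("http://127.0.0.1:3456", "URI:MDMF:example", offset=1024))
```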
9116hunk ./docs/frontends/webapi.rst 399
9117 
9118  If "mutable=true" is in the query arguments, the operation will create a
9119  mutable file, and return its write-cap in the HTTP response. The default is
9120- to create an immutable file, returning the read-cap as a response.
9121+ to create an immutable file, returning the read-cap as a response. If
9122+ you create a mutable file, you can also use the "mutable-type" query
9123+ parameter. If "mutable-type=sdmf", then the mutable file will be created
9124+ in the old SDMF mutable file format. This is desirable for files that
9125+ need to be read by old clients. If "mutable-type=mdmf", then the file
9126+ will be created in the new MDMF mutable file format. MDMF mutable files
9127+ can be downloaded more efficiently, and modified in-place efficiently,
9128+ but are not compatible with older versions of Tahoe-LAFS. If no
9129+ "mutable-type" argument is given, the file is created in whatever
9130+ format was configured in tahoe.cfg.
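The "mutable" and "mutable-type" query arguments described above compose in the obvious way; a sketch of how a client might build the creation URL (function name and gateway address are illustrative):

```python
def create_put_url(gateway, mutable=False, mutable_type=None):
    """Build a PUT /uri URL with the documented query arguments."""
    url = gateway + "/uri"
    params = []
    if mutable:
        params.append("mutable=true")
    if mutable_type is not None:
        # Only the two documented formats are accepted; anything
        # else (e.g. "ldmf") is rejected, mirroring the CLI tests.
        if mutable_type not in ("sdmf", "mdmf"):
            raise ValueError("mutable-type must be sdmf or mdmf")
        params.append("mutable-type=" + mutable_type)
    if params:
        url += "?" + "&".join(params)
    return url

print(create_put_url("http://127.0.0.1:3456", mutable=True,
                     mutable_type="mdmf"))
```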
9131 
9132 
9133 Creating A New Directory
9134hunk ./docs/frontends/webapi.rst 1101
9135  If a "mutable=true" argument is provided, the operation will create a
9136  mutable file, and the response body will contain the write-cap instead of
9137  the upload results page. The default is to create an immutable file,
9138- returning the upload results page as a response.
9139+ returning the upload results page as a response. If you create a
9140+ mutable file, you may choose to specify the format of that mutable file
9141+ with the "mutable-type" parameter. If "mutable-type=mdmf", then the
9142+ file will be created as an MDMF mutable file. If "mutable-type=sdmf",
9143+ then the file will be created as an SDMF mutable file. If no value is
9144+ specified, the file will be created in whatever format is specified in
9145+ tahoe.cfg.
9146 
9147 
9148 ``POST /uri/$DIRCAP/[SUBDIRS../]?t=upload``
9149}
9150[Fix some test failures caused by #393 patch.
9151david-sarah@jacaranda.org**20110802032810
9152 Ignore-this: 7f65e5adb5c859af289cea7011216fef
9153] {
9154hunk ./src/allmydata/test/test_immutable.py 291
9155         return d
9156 
9157     def test_download_to_data(self):
9158-        d = self.n.download_to_data()
9159+        d = self.startup("download_to_data")
9160+        d.addCallback(lambda ign: self.filenode.download_to_data())
9161         d.addCallback(lambda data:
9162             self.failUnlessEqual(data, common.TEST_DATA))
9163         return d
9164hunk ./src/allmydata/test/test_immutable.py 299
9165 
9166 
9167     def test_download_best_version(self):
9168-        d = self.n.download_best_version()
9169+        d = self.startup("download_best_version")
9170+        d.addCallback(lambda ign: self.filenode.download_best_version())
9171         d.addCallback(lambda data:
9172             self.failUnlessEqual(data, common.TEST_DATA))
9173         return d
9174hunk ./src/allmydata/test/test_immutable.py 307
9175 
9176 
9177     def test_get_best_readable_version(self):
9178-        d = self.n.get_best_readable_version()
9179+        d = self.startup("get_best_readable_version")
9180+        d.addCallback(lambda ign: self.filenode.get_best_readable_version())
9181         d.addCallback(lambda n2:
9182hunk ./src/allmydata/test/test_immutable.py 310
9183-            self.failUnlessEqual(n2, self.n))
9184+            self.failUnlessEqual(n2, self.filenode))
9185         return d
9186 
9187     def test_get_size_of_best_version(self):
9188hunk ./src/allmydata/test/test_immutable.py 314
9189-        d = self.n.get_size_of_best_version()
9190+        d = self.startup("get_size_of_best_version")
9191+        d.addCallback(lambda ign: self.filenode.get_size_of_best_version())
9192         d.addCallback(lambda size:
9193             self.failUnlessEqual(size, len(common.TEST_DATA)))
9194         return d
9195}
9196[dirnode: teach dirnode to make MDMF directories
9197Kevan Carstensen <kevan@isnotajoke.com>**20110807004224
9198 Ignore-this: 765ccd6a07ff752bf6057a3dab9e5abd
9199] {
9200hunk ./src/allmydata/dirnode.py 616
9201         d.addCallback(lambda res: deleter.old_child)
9202         return d
9203 
9204+    # XXX: Too many arguments? Worthwhile to break into mutable/immutable?
9205     def create_subdirectory(self, namex, initial_children={}, overwrite=True,
9206hunk ./src/allmydata/dirnode.py 618
9207-                            mutable=True, metadata=None):
9208+                            mutable=True, mutable_version=None, metadata=None):
9209         name = normalize(namex)
9210         if self.is_readonly():
9211             return defer.fail(NotWriteableError())
9212hunk ./src/allmydata/dirnode.py 623
9213         if mutable:
9214-            d = self._nodemaker.create_new_mutable_directory(initial_children)
9215+            if mutable_version:
9216+                d = self._nodemaker.create_new_mutable_directory(initial_children,
9217+                                                                 version=mutable_version)
9218+            else:
9219+                d = self._nodemaker.create_new_mutable_directory(initial_children)
9220         else:
9221hunk ./src/allmydata/dirnode.py 629
9222+            # mutable version doesn't make sense for immutable directories.
9223+            assert mutable_version is None
9224             d = self._nodemaker.create_immutable_directory(initial_children)
9225         def _created(child):
9226             entries = {name: (child, metadata)}
9227hunk ./src/allmydata/test/test_dirnode.py 14
9228 from allmydata.interfaces import IImmutableFileNode, IMutableFileNode, \
9229      ExistingChildError, NoSuchChildError, MustNotBeUnknownRWError, \
9230      MustBeDeepImmutableError, MustBeReadonlyError, \
9231-     IDeepCheckResults, IDeepCheckAndRepairResults
9232+     IDeepCheckResults, IDeepCheckAndRepairResults, \
9233+     MDMF_VERSION, SDMF_VERSION
9234 from allmydata.mutable.filenode import MutableFileNode
9235 from allmydata.mutable.common import UncoordinatedWriteError
9236 from allmydata.util import hashutil, base32
9237hunk ./src/allmydata/test/test_dirnode.py 61
9238               testutil.ReallyEqualMixin, testutil.ShouldFailMixin, testutil.StallMixin, ErrorMixin):
9239     timeout = 480 # It occasionally takes longer than 240 seconds on Francois's arm box.
9240 
9241-    def test_basic(self):
9242-        self.basedir = "dirnode/Dirnode/test_basic"
9243-        self.set_up_grid()
9244+    def _do_create_test(self, mdmf=False):
9245         c = self.g.clients[0]
9246hunk ./src/allmydata/test/test_dirnode.py 63
9247-        d = c.create_dirnode()
9248-        def _done(res):
9249-            self.failUnless(isinstance(res, dirnode.DirectoryNode))
9250-            self.failUnless(res.is_mutable())
9251-            self.failIf(res.is_readonly())
9252-            self.failIf(res.is_unknown())
9253-            self.failIf(res.is_allowed_in_immutable_directory())
9254-            res.raise_error()
9255-            rep = str(res)
9256-            self.failUnless("RW-MUT" in rep)
9257-        d.addCallback(_done)
9258+
9259+        self.expected_manifest = []
9260+        self.expected_verifycaps = set()
9261+        self.expected_storage_indexes = set()
9262+
9263+        d = None
9264+        if mdmf:
9265+            d = c.create_dirnode(version=MDMF_VERSION)
9266+        else:
9267+            d = c.create_dirnode()
9268+        def _then(n):
9269+            # /
9270+            self.rootnode = n
9271+            backing_node = n._node
9272+            if mdmf:
9273+                self.failUnlessEqual(backing_node.get_version(),
9274+                                     MDMF_VERSION)
9275+            else:
9276+                self.failUnlessEqual(backing_node.get_version(),
9277+                                     SDMF_VERSION)
9278+            self.failUnless(n.is_mutable())
9279+            u = n.get_uri()
9280+            self.failUnless(u)
9281+            cap_formats = []
9282+            if mdmf:
9283+                cap_formats = ["URI:DIR2-MDMF:",
9284+                               "URI:DIR2-MDMF-RO:",
9285+                               "URI:DIR2-MDMF-Verifier:"]
9286+            else:
9287+                cap_formats = ["URI:DIR2:",
9288+                               "URI:DIR2-RO",
9289+                               "URI:DIR2-Verifier:"]
9290+            rw, ro, v = cap_formats
9291+            self.failUnless(u.startswith(rw), u)
9292+            u_ro = n.get_readonly_uri()
9293+            self.failUnless(u_ro.startswith(ro), u_ro)
9294+            u_v = n.get_verify_cap().to_string()
9295+            self.failUnless(u_v.startswith(v), u_v)
9296+            u_r = n.get_repair_cap().to_string()
9297+            self.failUnlessReallyEqual(u_r, u)
9298+            self.expected_manifest.append( ((), u) )
9299+            self.expected_verifycaps.add(u_v)
9300+            si = n.get_storage_index()
9301+            self.expected_storage_indexes.add(base32.b2a(si))
9302+            expected_si = n._uri.get_storage_index()
9303+            self.failUnlessReallyEqual(si, expected_si)
9304+
9305+            d = n.list()
9306+            d.addCallback(lambda res: self.failUnlessEqual(res, {}))
9307+            d.addCallback(lambda res: n.has_child(u"missing"))
9308+            d.addCallback(lambda res: self.failIf(res))
9309+
9310+            fake_file_uri = make_mutable_file_uri()
9311+            other_file_uri = make_mutable_file_uri()
9312+            m = c.nodemaker.create_from_cap(fake_file_uri)
9313+            ffu_v = m.get_verify_cap().to_string()
9314+            self.expected_manifest.append( ((u"child",) , m.get_uri()) )
9315+            self.expected_verifycaps.add(ffu_v)
9316+            self.expected_storage_indexes.add(base32.b2a(m.get_storage_index()))
9317+            d.addCallback(lambda res: n.set_uri(u"child",
9318+                                                fake_file_uri, fake_file_uri))
9319+            d.addCallback(lambda res:
9320+                          self.shouldFail(ExistingChildError, "set_uri-no",
9321+                                          "child 'child' already exists",
9322+                                          n.set_uri, u"child",
9323+                                          other_file_uri, other_file_uri,
9324+                                          overwrite=False))
9325+            # /
9326+            # /child = mutable
9327+
9328+            d.addCallback(lambda res: n.create_subdirectory(u"subdir"))
9329+
9330+            # /
9331+            # /child = mutable
9332+            # /subdir = directory
9333+            def _created(subdir):
9334+                self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
9335+                self.subdir = subdir
9336+                new_v = subdir.get_verify_cap().to_string()
9337+                assert isinstance(new_v, str)
9338+                self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
9339+                self.expected_verifycaps.add(new_v)
9340+                si = subdir.get_storage_index()
9341+                self.expected_storage_indexes.add(base32.b2a(si))
9342+            d.addCallback(_created)
9343+
9344+            d.addCallback(lambda res:
9345+                          self.shouldFail(ExistingChildError, "mkdir-no",
9346+                                          "child 'subdir' already exists",
9347+                                          n.create_subdirectory, u"subdir",
9348+                                          overwrite=False))
9349+
9350+            d.addCallback(lambda res: n.list())
9351+            d.addCallback(lambda children:
9352+                          self.failUnlessReallyEqual(set(children.keys()),
9353+                                                     set([u"child", u"subdir"])))
9354+
9355+            d.addCallback(lambda res: n.start_deep_stats().when_done())
9356+            def _check_deepstats(stats):
9357+                self.failUnless(isinstance(stats, dict))
9358+                expected = {"count-immutable-files": 0,
9359+                            "count-mutable-files": 1,
9360+                            "count-literal-files": 0,
9361+                            "count-files": 1,
9362+                            "count-directories": 2,
9363+                            "size-immutable-files": 0,
9364+                            "size-literal-files": 0,
9365+                            #"size-directories": 616, # varies
9366+                            #"largest-directory": 616,
9367+                            "largest-directory-children": 2,
9368+                            "largest-immutable-file": 0,
9369+                            }
9370+                for k,v in expected.iteritems():
9371+                    self.failUnlessReallyEqual(stats[k], v,
9372+                                               "stats[%s] was %s, not %s" %
9373+                                               (k, stats[k], v))
9374+                self.failUnless(stats["size-directories"] > 500,
9375+                                stats["size-directories"])
9376+                self.failUnless(stats["largest-directory"] > 500,
9377+                                stats["largest-directory"])
9378+                self.failUnlessReallyEqual(stats["size-files-histogram"], [])
9379+            d.addCallback(_check_deepstats)
9380+
9381+            d.addCallback(lambda res: n.build_manifest().when_done())
9382+            def _check_manifest(res):
9383+                manifest = res["manifest"]
9384+                self.failUnlessReallyEqual(sorted(manifest),
9385+                                           sorted(self.expected_manifest))
9386+                stats = res["stats"]
9387+                _check_deepstats(stats)
9388+                self.failUnlessReallyEqual(self.expected_verifycaps,
9389+                                           res["verifycaps"])
9390+                self.failUnlessReallyEqual(self.expected_storage_indexes,
9391+                                           res["storage-index"])
9392+            d.addCallback(_check_manifest)
9393+
9394+            def _add_subsubdir(res):
9395+                return self.subdir.create_subdirectory(u"subsubdir")
9396+            d.addCallback(_add_subsubdir)
9397+            # /
9398+            # /child = mutable
9399+            # /subdir = directory
9400+            # /subdir/subsubdir = directory
9401+            d.addCallback(lambda res: n.get_child_at_path(u"subdir/subsubdir"))
9402+            d.addCallback(lambda subsubdir:
9403+                          self.failUnless(isinstance(subsubdir,
9404+                                                     dirnode.DirectoryNode)))
9405+            d.addCallback(lambda res: n.get_child_at_path(u""))
9406+            d.addCallback(lambda res: self.failUnlessReallyEqual(res.get_uri(),
9407+                                                                 n.get_uri()))
9408+
9409+            d.addCallback(lambda res: n.get_metadata_for(u"child"))
9410+            d.addCallback(lambda metadata:
9411+                          self.failUnlessEqual(set(metadata.keys()),
9412+                                               set(["tahoe"])))
9413+
9414+            d.addCallback(lambda res:
9415+                          self.shouldFail(NoSuchChildError, "gcamap-no",
9416+                                          "nope",
9417+                                          n.get_child_and_metadata_at_path,
9418+                                          u"subdir/nope"))
9419+            d.addCallback(lambda res:
9420+                          n.get_child_and_metadata_at_path(u""))
9421+            def _check_child_and_metadata1(res):
9422+                child, metadata = res
9423+                self.failUnless(isinstance(child, dirnode.DirectoryNode))
9424+                # edge-metadata needs at least one path segment
9425+                self.failUnlessEqual(set(metadata.keys()), set([]))
9426+            d.addCallback(_check_child_and_metadata1)
9427+            d.addCallback(lambda res:
9428+                          n.get_child_and_metadata_at_path(u"child"))
9429+
9430+            def _check_child_and_metadata2(res):
9431+                child, metadata = res
9432+                self.failUnlessReallyEqual(child.get_uri(),
9433+                                           fake_file_uri)
9434+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
9435+            d.addCallback(_check_child_and_metadata2)
9436+
9437+            d.addCallback(lambda res:
9438+                          n.get_child_and_metadata_at_path(u"subdir/subsubdir"))
9439+            def _check_child_and_metadata3(res):
9440+                child, metadata = res
9441+                self.failUnless(isinstance(child, dirnode.DirectoryNode))
9442+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
9443+            d.addCallback(_check_child_and_metadata3)
9444+
9445+            # set_uri + metadata
9446+            # it should be possible to add a child without any metadata
9447+            d.addCallback(lambda res: n.set_uri(u"c2",
9448+                                                fake_file_uri, fake_file_uri,
9449+                                                {}))
9450+            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
9451+            d.addCallback(lambda metadata:
9452+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
9453+
9454+            # You can't override the link timestamps.
9455+            d.addCallback(lambda res: n.set_uri(u"c2",
9456+                                                fake_file_uri, fake_file_uri,
9457+                                                { 'tahoe': {'linkcrtime': "bogus"}}))
9458+            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
9459+            def _has_good_linkcrtime(metadata):
9460+                self.failUnless(metadata.has_key('tahoe'))
9461+                self.failUnless(metadata['tahoe'].has_key('linkcrtime'))
9462+                self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
9463+            d.addCallback(_has_good_linkcrtime)
9464+
9465+            # if we don't set any defaults, the child should get timestamps
9466+            d.addCallback(lambda res: n.set_uri(u"c3",
9467+                                                fake_file_uri, fake_file_uri))
9468+            d.addCallback(lambda res: n.get_metadata_for(u"c3"))
9469+            d.addCallback(lambda metadata:
9470+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
9471+
9472+            # we can also add specific metadata at set_uri() time
9473+            d.addCallback(lambda res: n.set_uri(u"c4",
9474+                                                fake_file_uri, fake_file_uri,
9475+                                                {"key": "value"}))
9476+            d.addCallback(lambda res: n.get_metadata_for(u"c4"))
9477+            d.addCallback(lambda metadata:
9478+                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
9479+                                              (metadata['key'] == "value"), metadata))
9480+
9481+            d.addCallback(lambda res: n.delete(u"c2"))
9482+            d.addCallback(lambda res: n.delete(u"c3"))
9483+            d.addCallback(lambda res: n.delete(u"c4"))
9484+
9485+            # set_node + metadata
9486+            # it should be possible to add a child without any metadata except for timestamps
9487+            d.addCallback(lambda res: n.set_node(u"d2", n, {}))
9488+            d.addCallback(lambda res: c.create_dirnode())
9489+            d.addCallback(lambda n2:
9490+                          self.shouldFail(ExistingChildError, "set_node-no",
9491+                                          "child 'd2' already exists",
9492+                                          n.set_node, u"d2", n2,
9493+                                          overwrite=False))
9494+            d.addCallback(lambda res: n.get_metadata_for(u"d2"))
9495+            d.addCallback(lambda metadata:
9496+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
9497+
9498+            # if we don't set any defaults, the child should get timestamps
9499+            d.addCallback(lambda res: n.set_node(u"d3", n))
9500+            d.addCallback(lambda res: n.get_metadata_for(u"d3"))
9501+            d.addCallback(lambda metadata:
9502+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
9503+
9504+            # we can also add specific metadata at set_node() time
9505+            d.addCallback(lambda res: n.set_node(u"d4", n,
9506+                                                {"key": "value"}))
9507+            d.addCallback(lambda res: n.get_metadata_for(u"d4"))
9508+            d.addCallback(lambda metadata:
9509+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
9510+                                          (metadata["key"] == "value"), metadata))
9511+
9512+            d.addCallback(lambda res: n.delete(u"d2"))
9513+            d.addCallback(lambda res: n.delete(u"d3"))
9514+            d.addCallback(lambda res: n.delete(u"d4"))
9515+
9516+            # metadata through set_children()
9517+            d.addCallback(lambda res:
9518+                          n.set_children({
9519+                              u"e1": (fake_file_uri, fake_file_uri),
9520+                              u"e2": (fake_file_uri, fake_file_uri, {}),
9521+                              u"e3": (fake_file_uri, fake_file_uri,
9522+                                      {"key": "value"}),
9523+                              }))
9524+            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
9525+            d.addCallback(lambda res:
9526+                          self.shouldFail(ExistingChildError, "set_children-no",
9527+                                          "child 'e1' already exists",
9528+                                          n.set_children,
9529+                                          { u"e1": (other_file_uri,
9530+                                                    other_file_uri),
9531+                                            u"new": (other_file_uri,
9532+                                                     other_file_uri),
9533+                                            },
9534+                                          overwrite=False))
9535+            # and 'new' should not have been created
9536+            d.addCallback(lambda res: n.list())
9537+            d.addCallback(lambda children: self.failIf(u"new" in children))
9538+            d.addCallback(lambda res: n.get_metadata_for(u"e1"))
9539+            d.addCallback(lambda metadata:
9540+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
9541+            d.addCallback(lambda res: n.get_metadata_for(u"e2"))
9542+            d.addCallback(lambda metadata:
9543+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
9544+            d.addCallback(lambda res: n.get_metadata_for(u"e3"))
9545+            d.addCallback(lambda metadata:
9546+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
9547+                                          (metadata["key"] == "value"), metadata))
9548+
9549+            d.addCallback(lambda res: n.delete(u"e1"))
9550+            d.addCallback(lambda res: n.delete(u"e2"))
9551+            d.addCallback(lambda res: n.delete(u"e3"))
9552+
9553+            # metadata through set_nodes()
9554+            d.addCallback(lambda res:
9555+                          n.set_nodes({ u"f1": (n, None),
9556+                                        u"f2": (n, {}),
9557+                                        u"f3": (n, {"key": "value"}),
9558+                                        }))
9559+            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
9560+            d.addCallback(lambda res:
9561+                          self.shouldFail(ExistingChildError, "set_nodes-no",
9562+                                          "child 'f1' already exists",
9563+                                          n.set_nodes, { u"f1": (n, None),
9564+                                                         u"new": (n, None), },
9565+                                          overwrite=False))
9566+            # and 'new' should not have been created
9567+            d.addCallback(lambda res: n.list())
9568+            d.addCallback(lambda children: self.failIf(u"new" in children))
9569+            d.addCallback(lambda res: n.get_metadata_for(u"f1"))
9570+            d.addCallback(lambda metadata:
9571+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
9572+            d.addCallback(lambda res: n.get_metadata_for(u"f2"))
9573+            d.addCallback(lambda metadata:
9574+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
9575+            d.addCallback(lambda res: n.get_metadata_for(u"f3"))
9576+            d.addCallback(lambda metadata:
9577+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
9578+                                          (metadata["key"] == "value"), metadata))
9579+
9580+            d.addCallback(lambda res: n.delete(u"f1"))
9581+            d.addCallback(lambda res: n.delete(u"f2"))
9582+            d.addCallback(lambda res: n.delete(u"f3"))
9583+
9584+
9585+            d.addCallback(lambda res:
9586+                          n.set_metadata_for(u"child",
9587+                                             {"tags": ["web2.0-compatible"], "tahoe": {"bad": "mojo"}}))
9588+            d.addCallback(lambda n1: n1.get_metadata_for(u"child"))
9589+            d.addCallback(lambda metadata:
9590+                          self.failUnless((set(metadata.keys()) == set(["tags", "tahoe"])) and
9591+                                          metadata["tags"] == ["web2.0-compatible"] and
9592+                                          "bad" not in metadata["tahoe"], metadata))
9593+
9594+            d.addCallback(lambda res:
9595+                          self.shouldFail(NoSuchChildError, "set_metadata_for-nosuch", "",
9596+                                          n.set_metadata_for, u"nosuch", {}))
9597+
9598+
9599+            def _start(res):
9600+                self._start_timestamp = time.time()
9601+            d.addCallback(_start)
9602+            # simplejson-1.7.1 (as shipped on Ubuntu 'gutsy') rounds all
9603+            # floats to hundredths (it uses str(num) instead of repr(num)).
9604+            # simplejson-1.7.3 does not have this bug. To prevent this bug
9605+            # from causing the test to fail, stall for more than a few
9606+            # hundredths of a second.
9607+            d.addCallback(self.stall, 0.1)
9608+            d.addCallback(lambda res: n.add_file(u"timestamps",
9609+                                                 upload.Data("stamp me", convergence="some convergence string")))
9610+            d.addCallback(self.stall, 0.1)
9611+            def _stop(res):
9612+                self._stop_timestamp = time.time()
9613+            d.addCallback(_stop)
9614+
9615+            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
9616+            def _check_timestamp1(metadata):
9617+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
9618+                tahoe_md = metadata["tahoe"]
9619+                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
9620+
9621+                self.failUnlessGreaterOrEqualThan(tahoe_md["linkcrtime"],
9622+                                                  self._start_timestamp)
9623+                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
9624+                                                  tahoe_md["linkcrtime"])
9625+                self.failUnlessGreaterOrEqualThan(tahoe_md["linkmotime"],
9626+                                                  self._start_timestamp)
9627+                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
9628+                                                  tahoe_md["linkmotime"])
9629+                # Our current timestamp rules say that replacing an existing
9630+                # child should preserve the 'linkcrtime' but update the
9631+                # 'linkmotime'
9632+                self._old_linkcrtime = tahoe_md["linkcrtime"]
9633+                self._old_linkmotime = tahoe_md["linkmotime"]
9634+            d.addCallback(_check_timestamp1)
9635+            d.addCallback(self.stall, 2.0) # accommodate low-res timestamps
9636+            d.addCallback(lambda res: n.set_node(u"timestamps", n))
9637+            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
9638+            def _check_timestamp2(metadata):
9639+                self.failUnlessIn("tahoe", metadata)
9640+                tahoe_md = metadata["tahoe"]
9641+                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
9642+
9643+                self.failUnlessReallyEqual(tahoe_md["linkcrtime"], self._old_linkcrtime)
9644+                self.failUnlessGreaterThan(tahoe_md["linkmotime"], self._old_linkmotime)
9645+                return n.delete(u"timestamps")
9646+            d.addCallback(_check_timestamp2)
9647+
9648+            d.addCallback(lambda res: n.delete(u"subdir"))
9649+            d.addCallback(lambda old_child:
9650+                          self.failUnlessReallyEqual(old_child.get_uri(),
9651+                                                     self.subdir.get_uri()))
9652+
9653+            d.addCallback(lambda res: n.list())
9654+            d.addCallback(lambda children:
9655+                          self.failUnlessReallyEqual(set(children.keys()),
9656+                                                     set([u"child"])))
9657+
9658+            uploadable1 = upload.Data("some data", convergence="converge")
9659+            d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
9660+            d.addCallback(lambda newnode:
9661+                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
9662+            uploadable2 = upload.Data("some data", convergence="stuff")
9663+            d.addCallback(lambda res:
9664+                          self.shouldFail(ExistingChildError, "add_file-no",
9665+                                          "child 'newfile' already exists",
9666+                                          n.add_file, u"newfile",
9667+                                          uploadable2,
9668+                                          overwrite=False))
9669+            d.addCallback(lambda res: n.list())
9670+            d.addCallback(lambda children:
9671+                          self.failUnlessReallyEqual(set(children.keys()),
9672+                                                     set([u"child", u"newfile"])))
9673+            d.addCallback(lambda res: n.get_metadata_for(u"newfile"))
9674+            d.addCallback(lambda metadata:
9675+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
9676+
9677+            uploadable3 = upload.Data("some data", convergence="converge")
9678+            d.addCallback(lambda res: n.add_file(u"newfile-metadata",
9679+                                                 uploadable3,
9680+                                                 {"key": "value"}))
9681+            d.addCallback(lambda newnode:
9682+                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
9683+            d.addCallback(lambda res: n.get_metadata_for(u"newfile-metadata"))
9684+            d.addCallback(lambda metadata:
9685+                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
9686+                                              (metadata['key'] == "value"), metadata))
9687+            d.addCallback(lambda res: n.delete(u"newfile-metadata"))
9688+
9689+            d.addCallback(lambda res: n.create_subdirectory(u"subdir2"))
9690+            def _created2(subdir2):
9691+                self.subdir2 = subdir2
9692+                # put something in the way, to make sure it gets overwritten
9693+                return subdir2.add_file(u"child", upload.Data("overwrite me",
9694+                                                              "converge"))
9695+            d.addCallback(_created2)
9696+
9697+            d.addCallback(lambda res:
9698+                          n.move_child_to(u"child", self.subdir2))
9699+            d.addCallback(lambda res: n.list())
9700+            d.addCallback(lambda children:
9701+                          self.failUnlessReallyEqual(set(children.keys()),
9702+                                                     set([u"newfile", u"subdir2"])))
9703+            d.addCallback(lambda res: self.subdir2.list())
9704+            d.addCallback(lambda children:
9705+                          self.failUnlessReallyEqual(set(children.keys()),
9706+                                                     set([u"child"])))
9707+            d.addCallback(lambda res: self.subdir2.get(u"child"))
9708+            d.addCallback(lambda child:
9709+                          self.failUnlessReallyEqual(child.get_uri(),
9710+                                                     fake_file_uri))
9711+
9712+            # move it back, using new_child_name=
9713+            d.addCallback(lambda res:
9714+                          self.subdir2.move_child_to(u"child", n, u"newchild"))
9715+            d.addCallback(lambda res: n.list())
9716+            d.addCallback(lambda children:
9717+                          self.failUnlessReallyEqual(set(children.keys()),
9718+                                                     set([u"newchild", u"newfile",
9719+                                                          u"subdir2"])))
9720+            d.addCallback(lambda res: self.subdir2.list())
9721+            d.addCallback(lambda children:
9722+                          self.failUnlessReallyEqual(set(children.keys()), set([])))
9723+
9724+            # now make sure that we honor overwrite=False
9725+            d.addCallback(lambda res:
9726+                          self.subdir2.set_uri(u"newchild",
9727+                                               other_file_uri, other_file_uri))
9728+
9729+            d.addCallback(lambda res:
9730+                          self.shouldFail(ExistingChildError, "move_child_to-no",
9731+                                          "child 'newchild' already exists",
9732+                                          n.move_child_to, u"newchild",
9733+                                          self.subdir2,
9734+                                          overwrite=False))
9735+            d.addCallback(lambda res: self.subdir2.get(u"newchild"))
9736+            d.addCallback(lambda child:
9737+                          self.failUnlessReallyEqual(child.get_uri(),
9738+                                                     other_file_uri))
9739+
9740+
9741+            # Setting the no-write field should diminish a mutable cap to read-only
9742+            # (for both files and directories).
9743+
9744+            d.addCallback(lambda ign: n.set_uri(u"mutable", other_file_uri, other_file_uri))
9745+            d.addCallback(lambda ign: n.get(u"mutable"))
9746+            d.addCallback(lambda mutable: self.failIf(mutable.is_readonly(), mutable))
9747+            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
9748+            d.addCallback(lambda ign: n.get(u"mutable"))
9749+            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
9750+            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
9751+            d.addCallback(lambda ign: n.get(u"mutable"))
9752+            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
9753+
9754+            d.addCallback(lambda ign: n.get(u"subdir2"))
9755+            d.addCallback(lambda subdir2: self.failIf(subdir2.is_readonly()))
9756+            d.addCallback(lambda ign: n.set_metadata_for(u"subdir2", {"no-write": True}))
9757+            d.addCallback(lambda ign: n.get(u"subdir2"))
9758+            d.addCallback(lambda subdir2: self.failUnless(subdir2.is_readonly(), subdir2))
9759+
9760+            d.addCallback(lambda ign: n.set_uri(u"mutable_ro", other_file_uri, other_file_uri,
9761+                                                metadata={"no-write": True}))
9762+            d.addCallback(lambda ign: n.get(u"mutable_ro"))
9763+            d.addCallback(lambda mutable_ro: self.failUnless(mutable_ro.is_readonly(), mutable_ro))
9764+
9765+            d.addCallback(lambda ign: n.create_subdirectory(u"subdir_ro", metadata={"no-write": True}))
9766+            d.addCallback(lambda ign: n.get(u"subdir_ro"))
9767+            d.addCallback(lambda subdir_ro: self.failUnless(subdir_ro.is_readonly(), subdir_ro))
9768+
9769+            return d
9770+
9771+        d.addCallback(_then)
9772+
9773+        d.addErrback(self.explain_error)
9774         return d
9775 
9776hunk ./src/allmydata/test/test_dirnode.py 581
9777-    def test_initial_children(self):
9778-        self.basedir = "dirnode/Dirnode/test_initial_children"
9779-        self.set_up_grid()
9780+
9781+    def _do_initial_children_test(self, mdmf=False):
9782         c = self.g.clients[0]
9783         nm = c.nodemaker
9784 
9785hunk ./src/allmydata/test/test_dirnode.py 597
9786                 u"empty_litdir": (nm.create_from_cap(empty_litdir_uri), {}),
9787                 u"tiny_litdir": (nm.create_from_cap(tiny_litdir_uri), {}),
9788                 }
9789-        d = c.create_dirnode(kids)
9790-       
9791+        d = None
9792+        if mdmf:
9793+            d = c.create_dirnode(kids, version=MDMF_VERSION)
9794+        else:
9795+            d = c.create_dirnode(kids)
9796         def _created(dn):
9797             self.failUnless(isinstance(dn, dirnode.DirectoryNode))
9798hunk ./src/allmydata/test/test_dirnode.py 604
9799+            backing_node = dn._node
9800+            if mdmf:
9801+                self.failUnlessEqual(backing_node.get_version(),
9802+                                     MDMF_VERSION)
9803+            else:
9804+                self.failUnlessEqual(backing_node.get_version(),
9805+                                     SDMF_VERSION)
9806             self.failUnless(dn.is_mutable())
9807             self.failIf(dn.is_readonly())
9808             self.failIf(dn.is_unknown())
9809hunk ./src/allmydata/test/test_dirnode.py 619
9810             rep = str(dn)
9811             self.failUnless("RW-MUT" in rep)
9812             return dn.list()
9813-        d.addCallback(_created)
9814-       
9815+
9816         def _check_kids(children):
9817             self.failUnlessReallyEqual(set(children.keys()),
9818                                        set([one_nfc, u"two", u"mut", u"fut", u"fro",
9819hunk ./src/allmydata/test/test_dirnode.py 623
9820-                                            u"fut-unic", u"fro-unic", u"empty_litdir", u"tiny_litdir"]))
9821+                                        u"fut-unic", u"fro-unic", u"empty_litdir", u"tiny_litdir"]))
9822             one_node, one_metadata = children[one_nfc]
9823             two_node, two_metadata = children[u"two"]
9824             mut_node, mut_metadata = children[u"mut"]
9825hunk ./src/allmydata/test/test_dirnode.py 683
9826             d2.addCallback(lambda children: children[u"short"][0].read(MemAccum()))
9827             d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, "The end."))
9828             return d2
9829-
9830+        d.addCallback(_created)
9831         d.addCallback(_check_kids)
9832 
9833         d.addCallback(lambda ign: nm.create_new_mutable_directory(kids))
9834hunk ./src/allmydata/test/test_dirnode.py 707
9835                                       bad_kids2))
9836         return d
9837 
9838+    def _do_basic_test(self, mdmf=False):
9839+        c = self.g.clients[0]
9840+        d = None
9841+        if mdmf:
9842+            d = c.create_dirnode(version=MDMF_VERSION)
9843+        else:
9844+            d = c.create_dirnode()
9845+        def _done(res):
9846+            self.failUnless(isinstance(res, dirnode.DirectoryNode))
9847+            self.failUnless(res.is_mutable())
9848+            self.failIf(res.is_readonly())
9849+            self.failIf(res.is_unknown())
9850+            self.failIf(res.is_allowed_in_immutable_directory())
9851+            res.raise_error()
9852+            rep = str(res)
9853+            self.failUnless("RW-MUT" in rep)
9854+        d.addCallback(_done)
9855+        return d
9856+
9857+    def test_basic(self):
9858+        self.basedir = "dirnode/Dirnode/test_basic"
9859+        self.set_up_grid()
9860+        return self._do_basic_test()
9861+
9862+    def test_basic_mdmf(self):
9863+        self.basedir = "dirnode/Dirnode/test_basic_mdmf"
9864+        self.set_up_grid()
9865+        return self._do_basic_test(mdmf=True)
9866+
9867+    def test_initial_children(self):
9868+        self.basedir = "dirnode/Dirnode/test_initial_children"
9869+        self.set_up_grid()
9870+        return self._do_initial_children_test()
9871+
9872     def test_immutable(self):
9873         self.basedir = "dirnode/Dirnode/test_immutable"
9874         self.set_up_grid()
9875hunk ./src/allmydata/test/test_dirnode.py 1025
9876         d.addCallback(_done)
9877         return d
9878 
9879-    def _test_deepcheck_create(self):
9880+    def _test_deepcheck_create(self, version=SDMF_VERSION):
9881         # create a small tree with a loop, and some non-directories
9882         #  root/
9883         #  root/subdir/
9884hunk ./src/allmydata/test/test_dirnode.py 1033
9885         #  root/subdir/link -> root
9886         #  root/rodir
9887         c = self.g.clients[0]
9888-        d = c.create_dirnode()
9889+        d = c.create_dirnode(version=version)
9890         def _created_root(rootnode):
9891             self._rootnode = rootnode
9892hunk ./src/allmydata/test/test_dirnode.py 1036
9893+            self.failUnlessEqual(rootnode._node.get_version(), version)
9894             return rootnode.create_subdirectory(u"subdir")
9895         d.addCallback(_created_root)
9896         def _created_subdir(subdir):
9897hunk ./src/allmydata/test/test_dirnode.py 1075
9898         d.addCallback(_check_results)
9899         return d
9900 
9901+    def test_deepcheck_mdmf(self):
9902+        self.basedir = "dirnode/Dirnode/test_deepcheck_mdmf"
9903+        self.set_up_grid()
9904+        d = self._test_deepcheck_create(MDMF_VERSION)
9905+        d.addCallback(lambda rootnode: rootnode.start_deep_check().when_done())
9906+        def _check_results(r):
9907+            self.failUnless(IDeepCheckResults.providedBy(r))
9908+            c = r.get_counters()
9909+            self.failUnlessReallyEqual(c,
9910+                                       {"count-objects-checked": 4,
9911+                                        "count-objects-healthy": 4,
9912+                                        "count-objects-unhealthy": 0,
9913+                                        "count-objects-unrecoverable": 0,
9914+                                        "count-corrupt-shares": 0,
9915+                                        })
9916+            self.failIf(r.get_corrupt_shares())
9917+            self.failUnlessReallyEqual(len(r.get_all_results()), 4)
9918+        d.addCallback(_check_results)
9919+        return d
9920+
9921     def test_deepcheck_and_repair(self):
9922         self.basedir = "dirnode/Dirnode/test_deepcheck_and_repair"
9923         self.set_up_grid()
9924hunk ./src/allmydata/test/test_dirnode.py 1124
9925         d.addCallback(_check_results)
9926         return d
9927 
9928+    def test_deepcheck_and_repair_mdmf(self):
9929+        self.basedir = "dirnode/Dirnode/test_deepcheck_and_repair_mdmf"
9930+        self.set_up_grid()
9931+        d = self._test_deepcheck_create(version=MDMF_VERSION)
9932+        d.addCallback(lambda rootnode:
9933+                      rootnode.start_deep_check_and_repair().when_done())
9934+        def _check_results(r):
9935+            self.failUnless(IDeepCheckAndRepairResults.providedBy(r))
9936+            c = r.get_counters()
9937+            self.failUnlessReallyEqual(c,
9938+                                       {"count-objects-checked": 4,
9939+                                        "count-objects-healthy-pre-repair": 4,
9940+                                        "count-objects-unhealthy-pre-repair": 0,
9941+                                        "count-objects-unrecoverable-pre-repair": 0,
9942+                                        "count-corrupt-shares-pre-repair": 0,
9943+                                        "count-objects-healthy-post-repair": 4,
9944+                                        "count-objects-unhealthy-post-repair": 0,
9945+                                        "count-objects-unrecoverable-post-repair": 0,
9946+                                        "count-corrupt-shares-post-repair": 0,
9947+                                        "count-repairs-attempted": 0,
9948+                                        "count-repairs-successful": 0,
9949+                                        "count-repairs-unsuccessful": 0,
9950+                                        })
9951+            self.failIf(r.get_corrupt_shares())
9952+            self.failIf(r.get_remaining_corrupt_shares())
9953+            self.failUnlessReallyEqual(len(r.get_all_results()), 4)
9954+        d.addCallback(_check_results)
9955+        return d
9956+
9957     def _mark_file_bad(self, rootnode):
9958         self.delete_shares_numbered(rootnode.get_uri(), [0])
9959         return rootnode
9960hunk ./src/allmydata/test/test_dirnode.py 1176
9961         d.addCallback(_check_results)
9962         return d
9963 
9964-    def test_readonly(self):
9965-        self.basedir = "dirnode/Dirnode/test_readonly"
9966+    def test_deepcheck_problems_mdmf(self):
9967+        self.basedir = "dirnode/Dirnode/test_deepcheck_problems_mdmf"
9968         self.set_up_grid()
9969hunk ./src/allmydata/test/test_dirnode.py 1179
9970+        d = self._test_deepcheck_create(version=MDMF_VERSION)
9971+        d.addCallback(lambda rootnode: self._mark_file_bad(rootnode))
9972+        d.addCallback(lambda rootnode: rootnode.start_deep_check().when_done())
9973+        def _check_results(r):
9974+            c = r.get_counters()
9975+            self.failUnlessReallyEqual(c,
9976+                                       {"count-objects-checked": 4,
9977+                                        "count-objects-healthy": 3,
9978+                                        "count-objects-unhealthy": 1,
9979+                                        "count-objects-unrecoverable": 0,
9980+                                        "count-corrupt-shares": 0,
9981+                                        })
9982+            #self.failUnlessReallyEqual(len(r.get_problems()), 1) # TODO
9983+        d.addCallback(_check_results)
9984+        return d
9985+
9986+    def _do_readonly_test(self, version=SDMF_VERSION):
9987         c = self.g.clients[0]
9988         nm = c.nodemaker
9989         filecap = make_chk_file_uri(1234)
9990hunk ./src/allmydata/test/test_dirnode.py 1202
9991         filenode = nm.create_from_cap(filecap)
9992         uploadable = upload.Data("some data", convergence="some convergence string")
9993 
9994-        d = c.create_dirnode()
9995+        d = c.create_dirnode(version=version)
9996         def _created(rw_dn):
9997hunk ./src/allmydata/test/test_dirnode.py 1204
9998+            backing_node = rw_dn._node
9999+            self.failUnlessEqual(backing_node.get_version(), version)
10000             d2 = rw_dn.set_uri(u"child", filecap, filecap)
10001             d2.addCallback(lambda res: rw_dn)
10002             return d2
10003hunk ./src/allmydata/test/test_dirnode.py 1245
10004         d.addCallback(_listed)
10005         return d
10006 
10007+    def test_readonly(self):
10008+        self.basedir = "dirnode/Dirnode/test_readonly"
10009+        self.set_up_grid()
10010+        return self._do_readonly_test()
10011+
10012+    def test_readonly_mdmf(self):
10013+        self.basedir = "dirnode/Dirnode/test_readonly_mdmf"
10014+        self.set_up_grid()
10015+        return self._do_readonly_test(version=MDMF_VERSION)
10016+
10017     def failUnlessGreaterThan(self, a, b):
10018         self.failUnless(a > b, "%r should be > %r" % (a, b))
10019 
10020hunk ./src/allmydata/test/test_dirnode.py 1264
10021     def test_create(self):
10022         self.basedir = "dirnode/Dirnode/test_create"
10023         self.set_up_grid()
10024-        c = self.g.clients[0]
10025-
10026-        self.expected_manifest = []
10027-        self.expected_verifycaps = set()
10028-        self.expected_storage_indexes = set()
10029-
10030-        d = c.create_dirnode()
10031-        def _then(n):
10032-            # /
10033-            self.rootnode = n
10034-            self.failUnless(n.is_mutable())
10035-            u = n.get_uri()
10036-            self.failUnless(u)
10037-            self.failUnless(u.startswith("URI:DIR2:"), u)
10038-            u_ro = n.get_readonly_uri()
10039-            self.failUnless(u_ro.startswith("URI:DIR2-RO:"), u_ro)
10040-            u_v = n.get_verify_cap().to_string()
10041-            self.failUnless(u_v.startswith("URI:DIR2-Verifier:"), u_v)
10042-            u_r = n.get_repair_cap().to_string()
10043-            self.failUnlessReallyEqual(u_r, u)
10044-            self.expected_manifest.append( ((), u) )
10045-            self.expected_verifycaps.add(u_v)
10046-            si = n.get_storage_index()
10047-            self.expected_storage_indexes.add(base32.b2a(si))
10048-            expected_si = n._uri.get_storage_index()
10049-            self.failUnlessReallyEqual(si, expected_si)
10050-
10051-            d = n.list()
10052-            d.addCallback(lambda res: self.failUnlessEqual(res, {}))
10053-            d.addCallback(lambda res: n.has_child(u"missing"))
10054-            d.addCallback(lambda res: self.failIf(res))
10055-
10056-            fake_file_uri = make_mutable_file_uri()
10057-            other_file_uri = make_mutable_file_uri()
10058-            m = c.nodemaker.create_from_cap(fake_file_uri)
10059-            ffu_v = m.get_verify_cap().to_string()
10060-            self.expected_manifest.append( ((u"child",) , m.get_uri()) )
10061-            self.expected_verifycaps.add(ffu_v)
10062-            self.expected_storage_indexes.add(base32.b2a(m.get_storage_index()))
10063-            d.addCallback(lambda res: n.set_uri(u"child",
10064-                                                fake_file_uri, fake_file_uri))
10065-            d.addCallback(lambda res:
10066-                          self.shouldFail(ExistingChildError, "set_uri-no",
10067-                                          "child 'child' already exists",
10068-                                          n.set_uri, u"child",
10069-                                          other_file_uri, other_file_uri,
10070-                                          overwrite=False))
10071-            # /
10072-            # /child = mutable
10073-
10074-            d.addCallback(lambda res: n.create_subdirectory(u"subdir"))
10075-
10076-            # /
10077-            # /child = mutable
10078-            # /subdir = directory
10079-            def _created(subdir):
10080-                self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
10081-                self.subdir = subdir
10082-                new_v = subdir.get_verify_cap().to_string()
10083-                assert isinstance(new_v, str)
10084-                self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
10085-                self.expected_verifycaps.add(new_v)
10086-                si = subdir.get_storage_index()
10087-                self.expected_storage_indexes.add(base32.b2a(si))
10088-            d.addCallback(_created)
10089-
10090-            d.addCallback(lambda res:
10091-                          self.shouldFail(ExistingChildError, "mkdir-no",
10092-                                          "child 'subdir' already exists",
10093-                                          n.create_subdirectory, u"subdir",
10094-                                          overwrite=False))
10095-
10096-            d.addCallback(lambda res: n.list())
10097-            d.addCallback(lambda children:
10098-                          self.failUnlessReallyEqual(set(children.keys()),
10099-                                                     set([u"child", u"subdir"])))
10100-
10101-            d.addCallback(lambda res: n.start_deep_stats().when_done())
10102-            def _check_deepstats(stats):
10103-                self.failUnless(isinstance(stats, dict))
10104-                expected = {"count-immutable-files": 0,
10105-                            "count-mutable-files": 1,
10106-                            "count-literal-files": 0,
10107-                            "count-files": 1,
10108-                            "count-directories": 2,
10109-                            "size-immutable-files": 0,
10110-                            "size-literal-files": 0,
10111-                            #"size-directories": 616, # varies
10112-                            #"largest-directory": 616,
10113-                            "largest-directory-children": 2,
10114-                            "largest-immutable-file": 0,
10115-                            }
10116-                for k,v in expected.iteritems():
10117-                    self.failUnlessReallyEqual(stats[k], v,
10118-                                               "stats[%s] was %s, not %s" %
10119-                                               (k, stats[k], v))
10120-                self.failUnless(stats["size-directories"] > 500,
10121-                                stats["size-directories"])
10122-                self.failUnless(stats["largest-directory"] > 500,
10123-                                stats["largest-directory"])
10124-                self.failUnlessReallyEqual(stats["size-files-histogram"], [])
10125-            d.addCallback(_check_deepstats)
10126-
10127-            d.addCallback(lambda res: n.build_manifest().when_done())
10128-            def _check_manifest(res):
10129-                manifest = res["manifest"]
10130-                self.failUnlessReallyEqual(sorted(manifest),
10131-                                           sorted(self.expected_manifest))
10132-                stats = res["stats"]
10133-                _check_deepstats(stats)
10134-                self.failUnlessReallyEqual(self.expected_verifycaps,
10135-                                           res["verifycaps"])
10136-                self.failUnlessReallyEqual(self.expected_storage_indexes,
10137-                                           res["storage-index"])
10138-            d.addCallback(_check_manifest)
10139-
10140-            def _add_subsubdir(res):
10141-                return self.subdir.create_subdirectory(u"subsubdir")
10142-            d.addCallback(_add_subsubdir)
10143-            # /
10144-            # /child = mutable
10145-            # /subdir = directory
10146-            # /subdir/subsubdir = directory
10147-            d.addCallback(lambda res: n.get_child_at_path(u"subdir/subsubdir"))
10148-            d.addCallback(lambda subsubdir:
10149-                          self.failUnless(isinstance(subsubdir,
-                                                     dirnode.DirectoryNode)))
-            d.addCallback(lambda res: n.get_child_at_path(u""))
-            d.addCallback(lambda res: self.failUnlessReallyEqual(res.get_uri(),
-                                                                 n.get_uri()))
-
-            d.addCallback(lambda res: n.get_metadata_for(u"child"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()),
-                                               set(["tahoe"])))
-
-            d.addCallback(lambda res:
-                          self.shouldFail(NoSuchChildError, "gcamap-no",
-                                          "nope",
-                                          n.get_child_and_metadata_at_path,
-                                          u"subdir/nope"))
-            d.addCallback(lambda res:
-                          n.get_child_and_metadata_at_path(u""))
-            def _check_child_and_metadata1(res):
-                child, metadata = res
-                self.failUnless(isinstance(child, dirnode.DirectoryNode))
-                # edge-metadata needs at least one path segment
-                self.failUnlessEqual(set(metadata.keys()), set([]))
-            d.addCallback(_check_child_and_metadata1)
-            d.addCallback(lambda res:
-                          n.get_child_and_metadata_at_path(u"child"))
-
-            def _check_child_and_metadata2(res):
-                child, metadata = res
-                self.failUnlessReallyEqual(child.get_uri(),
-                                           fake_file_uri)
-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
-            d.addCallback(_check_child_and_metadata2)
-
-            d.addCallback(lambda res:
-                          n.get_child_and_metadata_at_path(u"subdir/subsubdir"))
-            def _check_child_and_metadata3(res):
-                child, metadata = res
-                self.failUnless(isinstance(child, dirnode.DirectoryNode))
-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
-            d.addCallback(_check_child_and_metadata3)
-
-            # set_uri + metadata
-            # it should be possible to add a child without any metadata
-            d.addCallback(lambda res: n.set_uri(u"c2",
-                                                fake_file_uri, fake_file_uri,
-                                                {}))
-            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            # You can't override the link timestamps.
-            d.addCallback(lambda res: n.set_uri(u"c2",
-                                                fake_file_uri, fake_file_uri,
-                                                { 'tahoe': {'linkcrtime': "bogus"}}))
-            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
-            def _has_good_linkcrtime(metadata):
-                self.failUnless(metadata.has_key('tahoe'))
-                self.failUnless(metadata['tahoe'].has_key('linkcrtime'))
-                self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
-            d.addCallback(_has_good_linkcrtime)
-
-            # if we don't set any defaults, the child should get timestamps
-            d.addCallback(lambda res: n.set_uri(u"c3",
-                                                fake_file_uri, fake_file_uri))
-            d.addCallback(lambda res: n.get_metadata_for(u"c3"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            # we can also add specific metadata at set_uri() time
-            d.addCallback(lambda res: n.set_uri(u"c4",
-                                                fake_file_uri, fake_file_uri,
-                                                {"key": "value"}))
-            d.addCallback(lambda res: n.get_metadata_for(u"c4"))
-            d.addCallback(lambda metadata:
-                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                              (metadata['key'] == "value"), metadata))
-
-            d.addCallback(lambda res: n.delete(u"c2"))
-            d.addCallback(lambda res: n.delete(u"c3"))
-            d.addCallback(lambda res: n.delete(u"c4"))
-
-            # set_node + metadata
-            # it should be possible to add a child without any metadata except for timestamps
-            d.addCallback(lambda res: n.set_node(u"d2", n, {}))
-            d.addCallback(lambda res: c.create_dirnode())
-            d.addCallback(lambda n2:
-                          self.shouldFail(ExistingChildError, "set_node-no",
-                                          "child 'd2' already exists",
-                                          n.set_node, u"d2", n2,
-                                          overwrite=False))
-            d.addCallback(lambda res: n.get_metadata_for(u"d2"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            # if we don't set any defaults, the child should get timestamps
-            d.addCallback(lambda res: n.set_node(u"d3", n))
-            d.addCallback(lambda res: n.get_metadata_for(u"d3"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            # we can also add specific metadata at set_node() time
-            d.addCallback(lambda res: n.set_node(u"d4", n,
-                                                {"key": "value"}))
-            d.addCallback(lambda res: n.get_metadata_for(u"d4"))
-            d.addCallback(lambda metadata:
-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                          (metadata["key"] == "value"), metadata))
-
-            d.addCallback(lambda res: n.delete(u"d2"))
-            d.addCallback(lambda res: n.delete(u"d3"))
-            d.addCallback(lambda res: n.delete(u"d4"))
-
-            # metadata through set_children()
-            d.addCallback(lambda res:
-                          n.set_children({
-                              u"e1": (fake_file_uri, fake_file_uri),
-                              u"e2": (fake_file_uri, fake_file_uri, {}),
-                              u"e3": (fake_file_uri, fake_file_uri,
-                                      {"key": "value"}),
-                              }))
-            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "set_children-no",
-                                          "child 'e1' already exists",
-                                          n.set_children,
-                                          { u"e1": (other_file_uri,
-                                                    other_file_uri),
-                                            u"new": (other_file_uri,
-                                                     other_file_uri),
-                                            },
-                                          overwrite=False))
-            # and 'new' should not have been created
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children: self.failIf(u"new" in children))
-            d.addCallback(lambda res: n.get_metadata_for(u"e1"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"e2"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"e3"))
-            d.addCallback(lambda metadata:
-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                          (metadata["key"] == "value"), metadata))
-
-            d.addCallback(lambda res: n.delete(u"e1"))
-            d.addCallback(lambda res: n.delete(u"e2"))
-            d.addCallback(lambda res: n.delete(u"e3"))
-
-            # metadata through set_nodes()
-            d.addCallback(lambda res:
-                          n.set_nodes({ u"f1": (n, None),
-                                        u"f2": (n, {}),
-                                        u"f3": (n, {"key": "value"}),
-                                        }))
-            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "set_nodes-no",
-                                          "child 'f1' already exists",
-                                          n.set_nodes, { u"f1": (n, None),
-                                                         u"new": (n, None), },
-                                          overwrite=False))
-            # and 'new' should not have been created
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children: self.failIf(u"new" in children))
-            d.addCallback(lambda res: n.get_metadata_for(u"f1"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"f2"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"f3"))
-            d.addCallback(lambda metadata:
-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                          (metadata["key"] == "value"), metadata))
-
-            d.addCallback(lambda res: n.delete(u"f1"))
-            d.addCallback(lambda res: n.delete(u"f2"))
-            d.addCallback(lambda res: n.delete(u"f3"))
-
-
-            d.addCallback(lambda res:
-                          n.set_metadata_for(u"child",
-                                             {"tags": ["web2.0-compatible"], "tahoe": {"bad": "mojo"}}))
-            d.addCallback(lambda n1: n1.get_metadata_for(u"child"))
-            d.addCallback(lambda metadata:
-                          self.failUnless((set(metadata.keys()) == set(["tags", "tahoe"])) and
-                                          metadata["tags"] == ["web2.0-compatible"] and
-                                          "bad" not in metadata["tahoe"], metadata))
-
-            d.addCallback(lambda res:
-                          self.shouldFail(NoSuchChildError, "set_metadata_for-nosuch", "",
-                                          n.set_metadata_for, u"nosuch", {}))
-
-
-            def _start(res):
-                self._start_timestamp = time.time()
-            d.addCallback(_start)
-            # simplejson-1.7.1 (as shipped on Ubuntu 'gutsy') rounds all
-            # floats to hundredeths (it uses str(num) instead of repr(num)).
-            # simplejson-1.7.3 does not have this bug. To prevent this bug
-            # from causing the test to fail, stall for more than a few
-            # hundrededths of a second.
-            d.addCallback(self.stall, 0.1)
-            d.addCallback(lambda res: n.add_file(u"timestamps",
-                                                 upload.Data("stamp me", convergence="some convergence string")))
-            d.addCallback(self.stall, 0.1)
-            def _stop(res):
-                self._stop_timestamp = time.time()
-            d.addCallback(_stop)
-
-            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
-            def _check_timestamp1(metadata):
-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
-                tahoe_md = metadata["tahoe"]
-                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
-
-                self.failUnlessGreaterOrEqualThan(tahoe_md["linkcrtime"],
-                                                  self._start_timestamp)
-                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
-                                                  tahoe_md["linkcrtime"])
-                self.failUnlessGreaterOrEqualThan(tahoe_md["linkmotime"],
-                                                  self._start_timestamp)
-                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
-                                                  tahoe_md["linkmotime"])
-                # Our current timestamp rules say that replacing an existing
-                # child should preserve the 'linkcrtime' but update the
-                # 'linkmotime'
-                self._old_linkcrtime = tahoe_md["linkcrtime"]
-                self._old_linkmotime = tahoe_md["linkmotime"]
-            d.addCallback(_check_timestamp1)
-            d.addCallback(self.stall, 2.0) # accomodate low-res timestamps
-            d.addCallback(lambda res: n.set_node(u"timestamps", n))
-            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
-            def _check_timestamp2(metadata):
-                self.failUnlessIn("tahoe", metadata)
-                tahoe_md = metadata["tahoe"]
-                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
-
-                self.failUnlessReallyEqual(tahoe_md["linkcrtime"], self._old_linkcrtime)
-                self.failUnlessGreaterThan(tahoe_md["linkmotime"], self._old_linkmotime)
-                return n.delete(u"timestamps")
-            d.addCallback(_check_timestamp2)
-
-            d.addCallback(lambda res: n.delete(u"subdir"))
-            d.addCallback(lambda old_child:
-                          self.failUnlessReallyEqual(old_child.get_uri(),
-                                                     self.subdir.get_uri()))
-
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"child"])))
-
-            uploadable1 = upload.Data("some data", convergence="converge")
-            d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
-            d.addCallback(lambda newnode:
-                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
-            uploadable2 = upload.Data("some data", convergence="stuff")
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "add_file-no",
-                                          "child 'newfile' already exists",
-                                          n.add_file, u"newfile",
-                                          uploadable2,
-                                          overwrite=False))
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"child", u"newfile"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"newfile"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            uploadable3 = upload.Data("some data", convergence="converge")
-            d.addCallback(lambda res: n.add_file(u"newfile-metadata",
-                                                 uploadable3,
-                                                 {"key": "value"}))
-            d.addCallback(lambda newnode:
-                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
-            d.addCallback(lambda res: n.get_metadata_for(u"newfile-metadata"))
-            d.addCallback(lambda metadata:
-                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                              (metadata['key'] == "value"), metadata))
-            d.addCallback(lambda res: n.delete(u"newfile-metadata"))
-
-            d.addCallback(lambda res: n.create_subdirectory(u"subdir2"))
-            def _created2(subdir2):
-                self.subdir2 = subdir2
-                # put something in the way, to make sure it gets overwritten
-                return subdir2.add_file(u"child", upload.Data("overwrite me",
-                                                              "converge"))
-            d.addCallback(_created2)
-
-            d.addCallback(lambda res:
-                          n.move_child_to(u"child", self.subdir2))
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"newfile", u"subdir2"])))
-            d.addCallback(lambda res: self.subdir2.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"child"])))
-            d.addCallback(lambda res: self.subdir2.get(u"child"))
-            d.addCallback(lambda child:
-                          self.failUnlessReallyEqual(child.get_uri(),
-                                                     fake_file_uri))
-
-            # move it back, using new_child_name=
-            d.addCallback(lambda res:
-                          self.subdir2.move_child_to(u"child", n, u"newchild"))
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"newchild", u"newfile",
-                                                          u"subdir2"])))
-            d.addCallback(lambda res: self.subdir2.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()), set([])))
-
-            # now make sure that we honor overwrite=False
-            d.addCallback(lambda res:
-                          self.subdir2.set_uri(u"newchild",
-                                               other_file_uri, other_file_uri))
-
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "move_child_to-no",
-                                          "child 'newchild' already exists",
-                                          n.move_child_to, u"newchild",
-                                          self.subdir2,
-                                          overwrite=False))
-            d.addCallback(lambda res: self.subdir2.get(u"newchild"))
-            d.addCallback(lambda child:
-                          self.failUnlessReallyEqual(child.get_uri(),
-                                                     other_file_uri))
-
-
-            # Setting the no-write field should diminish a mutable cap to read-only
-            # (for both files and directories).
-
-            d.addCallback(lambda ign: n.set_uri(u"mutable", other_file_uri, other_file_uri))
-            d.addCallback(lambda ign: n.get(u"mutable"))
-            d.addCallback(lambda mutable: self.failIf(mutable.is_readonly(), mutable))
-            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"mutable"))
-            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
-            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"mutable"))
-            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
-
-            d.addCallback(lambda ign: n.get(u"subdir2"))
-            d.addCallback(lambda subdir2: self.failIf(subdir2.is_readonly()))
-            d.addCallback(lambda ign: n.set_metadata_for(u"subdir2", {"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"subdir2"))
-            d.addCallback(lambda subdir2: self.failUnless(subdir2.is_readonly(), subdir2))
-
-            d.addCallback(lambda ign: n.set_uri(u"mutable_ro", other_file_uri, other_file_uri,
-                                                metadata={"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"mutable_ro"))
-            d.addCallback(lambda mutable_ro: self.failUnless(mutable_ro.is_readonly(), mutable_ro))
-
-            d.addCallback(lambda ign: n.create_subdirectory(u"subdir_ro", metadata={"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"subdir_ro"))
-            d.addCallback(lambda subdir_ro: self.failUnless(subdir_ro.is_readonly(), subdir_ro))
-
-            return d
-
-        d.addCallback(_then)
-
-        d.addErrback(self.explain_error)
-        return d
+        return self._do_create_test()
 
     def test_update_metadata(self):
         (t1, t2, t3) = (626644800.0, 634745640.0, 892226160.0)
hunk ./src/allmydata/test/test_dirnode.py 1283
         self.failUnlessEqual(md4, {"bool": True, "number": 42,
                                    "tahoe":{"linkcrtime": t1, "linkmotime": t1}})
 
-    def test_create_subdirectory(self):
-        self.basedir = "dirnode/Dirnode/test_create_subdirectory"
-        self.set_up_grid()
+    def _do_create_subdirectory_test(self, version=SDMF_VERSION):
        c = self.g.clients[0]
        nm = c.nodemaker
 
hunk ./src/allmydata/test/test_dirnode.py 1287
-        d = c.create_dirnode()
+        d = c.create_dirnode(version=version)
         def _then(n):
             # /
             self.rootnode = n
hunk ./src/allmydata/test/test_dirnode.py 1297
             kids = {u"kid1": (nm.create_from_cap(fake_file_uri), {}),
                     u"kid2": (nm.create_from_cap(other_file_uri), md),
                     }
-            d = n.create_subdirectory(u"subdir", kids)
+            d = n.create_subdirectory(u"subdir", kids,
+                                      mutable_version=version)
             def _check(sub):
                 d = n.get_child_at_path(u"subdir")
                 d.addCallback(lambda sub2: self.failUnlessReallyEqual(sub2.get_uri(),
hunk ./src/allmydata/test/test_dirnode.py 1314
         d.addCallback(_then)
         return d
 
+    def test_create_subdirectory(self):
+        self.basedir = "dirnode/Dirnode/test_create_subdirectory"
+        self.set_up_grid()
+        return self._do_create_subdirectory_test()
+
+    def test_create_subdirectory_mdmf(self):
+        self.basedir = "dirnode/Dirnode/test_create_subdirectory_mdmf"
+        self.set_up_grid()
+        return self._do_create_subdirectory_test(version=MDMF_VERSION)
+
+    def test_create_mdmf(self):
+        self.basedir = "dirnode/Dirnode/test_mdmf"
+        self.set_up_grid()
+        return self._do_create_test(mdmf=True)
+
+    def test_mdmf_initial_children(self):
+        self.basedir = "dirnode/Dirnode/test_mdmf"
+        self.set_up_grid()
+        return self._do_initial_children_test(mdmf=True)
+
 class MinimalFakeMutableFile:
     def get_writekey(self):
         return "writekey"
hunk ./src/allmydata/test/test_dirnode.py 1452
     implements(IMutableFileNode)
     counter = 0
     def __init__(self, initial_contents=""):
-        self.data = self._get_initial_contents(initial_contents)
+        data = self._get_initial_contents(initial_contents)
+        self.data = data.read(data.get_size())
+        self.data = "".join(self.data)
+
        counter = FakeMutableFile.counter
        FakeMutableFile.counter += 1
        writekey = hashutil.ssk_writekey_hash(str(counter))
hunk ./src/allmydata/test/test_dirnode.py 1502
         pass
 
     def modify(self, modifier):
-        self.data = modifier(self.data, None, True)
+        data = modifier(self.data, None, True)
+        self.data = data
         return defer.succeed(None)
 
 class FakeNodeMaker(NodeMaker):
hunk ./src/allmydata/test/test_dirnode.py 1507
-    def create_mutable_file(self, contents="", keysize=None):
+    def create_mutable_file(self, contents="", keysize=None, version=None):
         return defer.succeed(FakeMutableFile(contents))
 
 class FakeClient2(Client):
hunk ./src/allmydata/test/test_dirnode.py 1706
             self.failUnless(n.get_readonly_uri().startswith("imm."), i)
 
 
+
 class DeepStats(testutil.ReallyEqualMixin, unittest.TestCase):
     timeout = 240 # It takes longer than 120 seconds on Francois's arm box.
     def test_stats(self):
}
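The `FakeMutableFile.__init__` change in the patch above stops treating `initial_contents` as a plain string and instead treats it as an MDMF-style uploadable whose `read()` returns a *list* of byte chunks, which is why the new code must `"".join()` the result. The following sketch (not part of the patch; `FakeUploadable` is a hypothetical stand-in for the real `MutableData` uploadable) illustrates that read-then-join pattern under those assumptions:

```python
class FakeUploadable:
    """Illustrative stand-in for an IMutableUploadable-style object,
    assumed here to expose get_size() and a read() that returns a
    list of chunks rather than a single string."""
    def __init__(self, data):
        self._data = data
        self._offset = 0

    def get_size(self):
        return len(self._data)

    def read(self, length):
        # Return a list of chunks; callers "".join() the result, as the
        # patched FakeMutableFile.__init__ does.
        chunk = self._data[self._offset:self._offset + length]
        self._offset += len(chunk)
        return [chunk]

u = FakeUploadable("initial contents")
data = "".join(u.read(u.get_size()))
assert data == "initial contents"
```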
[mutable/servermap: Rework the servermap to work with MDMF mutable files
Kevan Carstensen <kevan@isnotajoke.com>**20110807004259
 Ignore-this: 154b987fa0af716c41185b88ff7ee2e1
] {
hunk ./src/allmydata/mutable/servermap.py 7
 from itertools import count
 from twisted.internet import defer
 from twisted.python import failure
-from foolscap.api import DeadReferenceError, RemoteException, eventually
-from allmydata.util import base32, hashutil, idlib, log
+from foolscap.api import DeadReferenceError, RemoteException, eventually, \
+                         fireEventually
+from allmydata.util import base32, hashutil, idlib, log, deferredutil
 from allmydata.util.dictutil import DictOfSets
 from allmydata.storage.server import si_b2a
 from allmydata.interfaces import IServermapUpdaterStatus
hunk ./src/allmydata/mutable/servermap.py 16
 from pycryptopp.publickey import rsa
 
 from allmydata.mutable.common import MODE_CHECK, MODE_ANYTHING, MODE_WRITE, MODE_READ, \
-     CorruptShareError, NeedMoreDataError
-from allmydata.mutable.layout import unpack_prefix_and_signature, unpack_header, unpack_share, \
-     SIGNED_PREFIX_LENGTH
+     CorruptShareError
+from allmydata.mutable.layout import SIGNED_PREFIX_LENGTH, MDMFSlotReadProxy
 
 class UpdateStatus:
     implements(IServermapUpdaterStatus)
hunk ./src/allmydata/mutable/servermap.py 124
         self.bad_shares = {} # maps (peerid,shnum) to old checkstring
         self.last_update_mode = None
         self.last_update_time = 0
+        self.update_data = {} # (verinfo,shnum) => data
 
     def copy(self):
         s = ServerMap()
hunk ./src/allmydata/mutable/servermap.py 255
         """Return a set of versionids, one for each version that is currently
         recoverable."""
         versionmap = self.make_versionmap()
-
         recoverable_versions = set()
         for (verinfo, shares) in versionmap.items():
             (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
hunk ./src/allmydata/mutable/servermap.py 340
         return False
 
 
+    def get_update_data_for_share_and_verinfo(self, shnum, verinfo):
+        """
+        I return the update data for the given shnum.
+        """
+        update_data = self.update_data[shnum]
+        update_datum = [i[1] for i in update_data if i[0] == verinfo][0]
+        return update_datum
+
+
+    def set_update_data_for_share_and_verinfo(self, shnum, verinfo, data):
+        """
+        I record the update data for the given shnum.
+        """
+        self.update_data.setdefault(shnum, []).append((verinfo, data))
+
+
 class ServermapUpdater:
     def __init__(self, filenode, storage_broker, monitor, servermap,
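The two accessors added to `ServerMap` above keep a per-share list of `(verinfo, data)` pairs and look entries up by verinfo. This standalone sketch (class and method names are illustrative, not the real `ServerMap`) shows the same bookkeeping pattern in isolation:

```python
class UpdateDataMap:
    """Simplified sketch of the update_data bookkeeping: each share
    number maps to a list of (verinfo, data) pairs."""
    def __init__(self):
        self._data = {}  # shnum -> [(verinfo, data), ...]

    def set_update_data(self, shnum, verinfo, data):
        # Mirrors set_update_data_for_share_and_verinfo: append, so one
        # share can carry data for several versions.
        self._data.setdefault(shnum, []).append((verinfo, data))

    def get_update_data(self, shnum, verinfo):
        # Mirrors get_update_data_for_share_and_verinfo: filter by
        # verinfo and take the first recorded datum.
        return [d for (v, d) in self._data[shnum] if v == verinfo][0]

m = UpdateDataMap()
m.set_update_data(0, ("v1",), "blockhashes-for-share-0")
assert m.get_update_data(0, ("v1",)) == "blockhashes-for-share-0"
```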
hunk ./src/allmydata/mutable/servermap.py 358
-                 mode=MODE_READ, add_lease=False):
+                 mode=MODE_READ, add_lease=False, update_range=None):
         """I update a servermap, locating a sufficient number of useful
         shares and remembering where they are located.
 
hunk ./src/allmydata/mutable/servermap.py 383
         self._servers_responded = set()
 
         # how much data should we read?
+        # SDMF:
         #  * if we only need the checkstring, then [0:75]
         #  * if we need to validate the checkstring sig, then [543ish:799ish]
         #  * if we need the verification key, then [107:436ish]
hunk ./src/allmydata/mutable/servermap.py 391
         #  * if we need the encrypted private key, we want [-1216ish:]
         #   * but we can't read from negative offsets
         #   * the offset table tells us the 'ish', also the positive offset
-        # A future version of the SMDF slot format should consider using
-        # fixed-size slots so we can retrieve less data. For now, we'll just
-        # read 4000 bytes, which also happens to read enough actual data to
-        # pre-fetch an 18-entry dirnode.
+        # MDMF:
+        #  * Checkstring? [0:72]
+        #  * If we want to validate the checkstring, then [0:72], [143:?] --
+        #    the offset table will tell us for sure.
+        #  * If we need the verification key, we have to consult the offset
+        #    table as well.
+        # At this point, we don't know which we are. Our filenode can
+        # tell us, but it might be lying -- in some cases, we're
+        # responsible for telling it which kind of file it is.
         self._read_size = 4000
         if mode == MODE_CHECK:
             # we use unpack_prefix_and_signature, so we need 1k
hunk ./src/allmydata/mutable/servermap.py 405
             self._read_size = 1000
         self._need_privkey = False
+
         if mode == MODE_WRITE and not self._node.get_privkey():
             self._need_privkey = True
         # check+repair: repair requires the privkey, so if we didn't happen
hunk ./src/allmydata/mutable/servermap.py 412
         # to ask for it during the check, we'll have problems doing the
         # publish.
 
+        self.fetch_update_data = False
+        if mode == MODE_WRITE and update_range:
+            # We're updating the servermap in preparation for an
+            # in-place file update, so we need to fetch some additional
+            # data from each share that we find.
+            assert len(update_range) == 2
+
+            self.start_segment = update_range[0]
+            self.end_segment = update_range[1]
+            self.fetch_update_data = True
+
         prefix = si_b2a(self._storage_index)[:5]
         self._log_number = log.msg(format="SharemapUpdater(%(si)s): starting (%(mode)s)",
                                    si=prefix, mode=mode)
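Per the hunk above, only a MODE_WRITE mapupdate that is given an explicit `(start, end)` segment range ends up fetching the extra update data. A hedged paraphrase of that decision as a pure function (the function name and tuple return shape are illustrative, not the patch's API):

```python
# Illustrative constants; the real code imports MODE_READ/MODE_WRITE
# from allmydata.mutable.common.
MODE_READ, MODE_WRITE = "MODE_READ", "MODE_WRITE"

def plan_update_fetch(mode, update_range):
    """Return (fetch_update_data, start_segment, end_segment), mirroring
    the update_range handling in ServermapUpdater.__init__."""
    if mode == MODE_WRITE and update_range:
        # An in-place update: remember which segments we'll touch so the
        # mapupdate can fetch the extra per-share data it needs.
        assert len(update_range) == 2
        start, end = update_range
        return (True, start, end)
    return (False, None, None)

assert plan_update_fetch(MODE_WRITE, (2, 5)) == (True, 2, 5)
assert plan_update_fetch(MODE_READ, (2, 5)) == (False, None, None)
```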
10741hunk ./src/allmydata/mutable/servermap.py 461
10742         self._queries_completed = 0
10743 
10744         sb = self._storage_broker
10745+        # All of the peers, permuted by the storage index, as usual.
10746         full_peerlist = [(s.get_serverid(), s.get_rref())
10747                          for s in sb.get_servers_for_psi(self._storage_index)]
10748         self.full_peerlist = full_peerlist # for use later, immutable
10749hunk ./src/allmydata/mutable/servermap.py 469
10750         self._good_peers = set() # peers who had some shares
10751         self._empty_peers = set() # peers who don't have any shares
10752         self._bad_peers = set() # peers to whom our queries failed
10753+        self._readers = {} # peerid -> dict(sharewriters), filled in
10754+                           # after responses come in.
10755 
10756         k = self._node.get_required_shares()
10757hunk ./src/allmydata/mutable/servermap.py 473
10758+        # k is None when the node doesn't know its encoding parameters yet.
10759         if k is None:
10760             # make a guess
10761             k = 3
10762hunk ./src/allmydata/mutable/servermap.py 486
10763         self.num_peers_to_query = k + self.EPSILON
10764 
10765         if self.mode == MODE_CHECK:
10766+            # We want to query all of the peers.
10767             initial_peers_to_query = dict(full_peerlist)
10768             must_query = set(initial_peers_to_query.keys())
10769             self.extra_peers = []
10770hunk ./src/allmydata/mutable/servermap.py 494
10771             # we're planning to replace all the shares, so we want a good
10772             # chance of finding them all. We will keep searching until we've
10773             # seen epsilon that don't have a share.
10774+            # We don't query all of the peers because that could take a while.
10775             self.num_peers_to_query = N + self.EPSILON
10776             initial_peers_to_query, must_query = self._build_initial_querylist()
10777             self.required_num_empty_peers = self.EPSILON
10778hunk ./src/allmydata/mutable/servermap.py 504
10779             # might also avoid the round trip required to read the encrypted
10780             # private key.
10781 
10782-        else:
10783+        else: # MODE_READ, MODE_ANYTHING
10784+            # 2k peers is good enough.
10785             initial_peers_to_query, must_query = self._build_initial_querylist()
10786 
10787         # this is a set of peers that we are required to get responses from:
10788hunk ./src/allmydata/mutable/servermap.py 520
10789         # before we can consider ourselves finished, and self.extra_peers
10790         # contains the overflow (peers that we should tap if we don't get
10791         # enough responses)
10792+        # self._must_query must be a subset of initial_peers_to_query,
10793+        # since every peer we must hear from gets an initial query.
10794+        assert set(must_query).issubset(set(initial_peers_to_query))
10795 
10796         self._send_initial_requests(initial_peers_to_query)
10797         self._status.timings["initial_queries"] = time.time() - self._started
10798hunk ./src/allmydata/mutable/servermap.py 579
10799         # errors that aren't handled by _query_failed (and errors caused by
10800         # _query_failed) get logged, but we still want to check for doneness.
10801         d.addErrback(log.err)
10802-        d.addBoth(self._check_for_done)
10803         d.addErrback(self._fatal_error)
10804hunk ./src/allmydata/mutable/servermap.py 580
10805+        d.addCallback(self._check_for_done)
10806         return d
10807 
10808     def _do_read(self, ss, peerid, storage_index, shnums, readv):
10809hunk ./src/allmydata/mutable/servermap.py 599
10810         d = ss.callRemote("slot_readv", storage_index, shnums, readv)
10811         return d
10812 
10813+
10814+    def _got_corrupt_share(self, e, shnum, peerid, data, lp):
10815+        """
10816+        I am called when a remote server returns a corrupt share in
10817+        response to one of our queries. By corrupt, I mean a share
10818+        without a valid signature. I then record the failure, notify the
10819+        server of the corruption, and record the share as bad.
10820+        """
10821+        f = failure.Failure(e)
10822+        self.log(format="bad share: %(f_value)s", f_value=str(f),
10823+                 failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
10824+        # Notify the server that its share is corrupt.
10825+        self.notify_server_corruption(peerid, shnum, str(e))
10826+        # By flagging this as a bad peer, we won't count any of
10827+        # the other shares on that peer as valid, though if we
10828+        # happen to find a valid version string amongst those
10829+        # shares, we'll keep track of it so that we don't need
10830+        # to validate the signature on those again.
10831+        self._bad_peers.add(peerid)
10832+        self._last_failure = f
10833+        # XXX: Use the reader for this?
10834+        checkstring = data[:SIGNED_PREFIX_LENGTH]
10835+        self._servermap.mark_bad_share(peerid, shnum, checkstring)
10836+        self._servermap.problems.append(f)
10837+
10838+
10839+    def _cache_good_sharedata(self, verinfo, shnum, now, data):
10840+        """
10841+        If one of my queries returns successfully (meaning that we
10842+        successfully validated the signature), I
10843+        cache the data that we initially fetched from the storage
10844+        server. This will help reduce the number of roundtrips that need
10845+        to occur when the file is downloaded, or when the file is
10846+        updated.
10847+        """
10848+        if verinfo:
10849+            self._node._add_to_cache(verinfo, shnum, 0, data)
10850+
10851+
10852     def _got_results(self, datavs, peerid, readsize, stuff, started):
10853         lp = self.log(format="got result from [%(peerid)s], %(numshares)d shares",
10854                       peerid=idlib.shortnodeid_b2a(peerid),
10855hunk ./src/allmydata/mutable/servermap.py 641
10856-                      numshares=len(datavs),
10857-                      level=log.NOISY)
10858+                      numshares=len(datavs))
10859         now = time.time()
10860         elapsed = now - started
10861hunk ./src/allmydata/mutable/servermap.py 644
10862-        self._queries_outstanding.discard(peerid)
10863-        self._servermap.reachable_peers.add(peerid)
10864-        self._must_query.discard(peerid)
10865-        self._queries_completed += 1
10866+        def _done_processing(ignored=None):
10867+            self._queries_outstanding.discard(peerid)
10868+            self._servermap.reachable_peers.add(peerid)
10869+            self._must_query.discard(peerid)
10870+            self._queries_completed += 1
10871         if not self._running:
10872hunk ./src/allmydata/mutable/servermap.py 650
10873-            self.log("but we're not running, so we'll ignore it", parent=lp,
10874-                     level=log.NOISY)
10875+            self.log("but we're not running, so we'll ignore it", parent=lp)
10876+            _done_processing()
10877             self._status.add_per_server_time(peerid, "late", started, elapsed)
10878             return
10879         self._status.add_per_server_time(peerid, "query", started, elapsed)
10880hunk ./src/allmydata/mutable/servermap.py 661
10881         else:
10882             self._empty_peers.add(peerid)
10883 
10884-        last_verinfo = None
10885-        last_shnum = None
10886+        ss, storage_index = stuff
10887+        ds = []
10888+
10889         for shnum,datav in datavs.items():
10890             data = datav[0]
10891hunk ./src/allmydata/mutable/servermap.py 666
10892-            try:
10893-                verinfo = self._got_results_one_share(shnum, data, peerid, lp)
10894-                last_verinfo = verinfo
10895-                last_shnum = shnum
10896-                self._node._add_to_cache(verinfo, shnum, 0, data)
10897-            except CorruptShareError, e:
10898-                # log it and give the other shares a chance to be processed
10899-                f = failure.Failure()
10900-                self.log(format="bad share: %(f_value)s", f_value=str(f.value),
10901-                         failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
10902-                self.notify_server_corruption(peerid, shnum, str(e))
10903-                self._bad_peers.add(peerid)
10904-                self._last_failure = f
10905-                checkstring = data[:SIGNED_PREFIX_LENGTH]
10906-                self._servermap.mark_bad_share(peerid, shnum, checkstring)
10907-                self._servermap.problems.append(f)
10908-                pass
10909+            reader = MDMFSlotReadProxy(ss,
10910+                                       storage_index,
10911+                                       shnum,
10912+                                       data)
10913+            self._readers.setdefault(peerid, dict())[shnum] = reader
10914+            # our goal, with each response, is to validate the version
10915+            # information and share data as best we can at this point --
10916+            # we do this by validating the signature. To do this, we
10917+            # need to do the following:
10918+            #   - If we don't already have the public key, fetch the
10919+            #     public key. We use this to validate the signature.
10920+            if not self._node.get_pubkey():
10921+                # fetch and set the public key.
10922+                d = reader.get_verification_key(queue=True)
10923+                d.addCallback(lambda results, shnum=shnum, peerid=peerid:
10924+                    self._try_to_set_pubkey(results, peerid, shnum, lp))
10925+                # XXX: Make self._pubkey_query_failed?
10926+                d.addErrback(lambda error, shnum=shnum, peerid=peerid:
10927+                    self._got_corrupt_share(error, shnum, peerid, data, lp))
10928+            else:
10929+                # we already have the public key.
10930+                d = defer.succeed(None)
10931 
10932hunk ./src/allmydata/mutable/servermap.py 689
10933-        self._status.timings["cumulative_verify"] += (time.time() - now)
10934+            # Neither of these two branches return anything of
10935+            # consequence, so the first entry in our deferredlist will
10936+            # be None.
10937 
10938hunk ./src/allmydata/mutable/servermap.py 693
10939-        if self._need_privkey and last_verinfo:
10940-            # send them a request for the privkey. We send one request per
10941-            # server.
10942-            lp2 = self.log("sending privkey request",
10943-                           parent=lp, level=log.NOISY)
10944-            (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
10945-             offsets_tuple) = last_verinfo
10946-            o = dict(offsets_tuple)
10947+            # - Next, we need the version information. We almost
10948+            #   certainly got this by reading the first thousand or so
10949+            #   bytes of the share on the storage server, so we
10950+            #   shouldn't need to fetch anything at this step.
10951+            d2 = reader.get_verinfo()
10952+            d2.addErrback(lambda error, shnum=shnum, peerid=peerid:
10953+                self._got_corrupt_share(error, shnum, peerid, data, lp))
10954+            # - Next, we need the signature. For an SDMF share, it is
10955+            #   likely that we fetched this when doing our initial fetch
10956+            #   to get the version information. In MDMF, this lives at
10957+            #   the end of the share, so unless the file is quite small,
10958+            #   we'll need to do a remote fetch to get it.
10959+            d3 = reader.get_signature(queue=True)
10960+            d3.addErrback(lambda error, shnum=shnum, peerid=peerid:
10961+                self._got_corrupt_share(error, shnum, peerid, data, lp))
10962+            #  Once we have all three of these responses, we can move on
10963+            #  to validating the signature
10964+
10965+            # Does the node already have a privkey? If not, we'll try to
10966+            # fetch it here.
10967+            if self._need_privkey:
10968+                d4 = reader.get_encprivkey(queue=True)
10969+                d4.addCallback(lambda results, shnum=shnum, peerid=peerid:
10970+                    self._try_to_validate_privkey(results, peerid, shnum, lp))
10971+                d4.addErrback(lambda error, shnum=shnum, peerid=peerid:
10972+                    self._privkey_query_failed(error, shnum, data, lp))
10973+            else:
10974+                d4 = defer.succeed(None)
10975 
10976hunk ./src/allmydata/mutable/servermap.py 722
10977-            self._queries_outstanding.add(peerid)
10978-            readv = [ (o['enc_privkey'], (o['EOF'] - o['enc_privkey'])) ]
10979-            ss = self._servermap.connections[peerid]
10980-            privkey_started = time.time()
10981-            d = self._do_read(ss, peerid, self._storage_index,
10982-                              [last_shnum], readv)
10983-            d.addCallback(self._got_privkey_results, peerid, last_shnum,
10984-                          privkey_started, lp2)
10985-            d.addErrback(self._privkey_query_failed, peerid, last_shnum, lp2)
10986-            d.addErrback(log.err)
10987-            d.addCallback(self._check_for_done)
10988-            d.addErrback(self._fatal_error)
10989 
10990hunk ./src/allmydata/mutable/servermap.py 723
10991+            if self.fetch_update_data:
10992+                # fetch the block hash tree and first + last segment, as
10993+                # configured earlier.
10994+                # Then set them in wherever we happen to want to set
10995+                # them.
10996+                update_ds = []
10997+                # XXX: We do this above, too. Is there a good way to
10998+                # make the two routines share the value without
10999+                # introducing more roundtrips?
11000+                update_ds.append(reader.get_verinfo())
11001+                update_ds.append(reader.get_blockhashes(queue=True))
11002+                update_ds.append(reader.get_block_and_salt(self.start_segment,
11003+                                                           queue=True))
11004+                update_ds.append(reader.get_block_and_salt(self.end_segment,
11005+                                                           queue=True))
11006+                d5 = deferredutil.gatherResults(update_ds)
11007+                d5.addCallback(self._got_update_results_one_share, shnum)
11008+            else:
11009+                d5 = defer.succeed(None)
11010+
11011+            dl = defer.DeferredList([d, d2, d3, d4, d5])
11012+            dl.addBoth(self._turn_barrier)
11013+            reader.flush()
11014+            dl.addCallback(lambda results, shnum=shnum, peerid=peerid:
11015+                self._got_signature_one_share(results, shnum, peerid, lp))
11016+            dl.addErrback(lambda error, shnum=shnum, data=data:
11017+               self._got_corrupt_share(error, shnum, peerid, data, lp))
11018+            dl.addCallback(lambda verinfo, shnum=shnum, peerid=peerid, data=data:
11019+                self._cache_good_sharedata(verinfo, shnum, now, data))
11020+            ds.append(dl)
11021+        # dl is a deferred list that will fire when all of the shares
11022+        # that we found on this peer are done processing. When dl fires,
11023+        # we know that processing is done, so we can run the bookkeeping
11024+        # in _done_processing that retires this peer's outstanding query.
11025+        dl = defer.DeferredList(ds, fireOnOneErrback=True)
11026+        # Are we done? Done means that there are no more queries to
11027+        # send, that there are no outstanding queries, and that we
11028+        # haven't received any queries that are still processing. If we
11029+        # are done, self._check_for_done will cause the done deferred
11030+        # that we returned to our caller to fire, which tells them that
11031+        # they have a complete servermap, and that we won't be touching
11032+        # the servermap anymore.
11033+        dl.addCallback(_done_processing)
11034+        dl.addCallback(self._check_for_done)
11035+        dl.addErrback(self._fatal_error)
11036         # all done!
11037         self.log("_got_results done", parent=lp, level=log.NOISY)
11038hunk ./src/allmydata/mutable/servermap.py 770
11039+        return dl
11040+
11041+
11042+    def _turn_barrier(self, result):
11043+        """
11044+        I help the servermap updater avoid the recursion limit issues
11045+        discussed in #237.
11046+        """
11047+        return fireEventually(result)
11048+
11049+
11050+    def _try_to_set_pubkey(self, pubkey_s, peerid, shnum, lp):
11051+        if self._node.get_pubkey():
11052+            return # don't go through this again if we don't have to
11053+        fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
11054+        assert len(fingerprint) == 32
11055+        if fingerprint != self._node.get_fingerprint():
11056+            raise CorruptShareError(peerid, shnum,
11057+                                "pubkey doesn't match fingerprint")
11058+        self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
11059+        assert self._node.get_pubkey()
11060+
11061 
11062     def notify_server_corruption(self, peerid, shnum, reason):
11063         ss = self._servermap.connections[peerid]
11064hunk ./src/allmydata/mutable/servermap.py 798
11065         ss.callRemoteOnly("advise_corrupt_share",
11066                           "mutable", self._storage_index, shnum, reason)
11067 
11068-    def _got_results_one_share(self, shnum, data, peerid, lp):
11069+
11070+    def _got_signature_one_share(self, results, shnum, peerid, lp):
11071+        # It is our job to give versioninfo to our caller. We need to
11072+        # raise CorruptShareError if the share is corrupt for any
11073+        # reason, something that our caller will handle.
11074         self.log(format="_got_results: got shnum #%(shnum)d from peerid %(peerid)s",
11075                  shnum=shnum,
11076                  peerid=idlib.shortnodeid_b2a(peerid),
11077hunk ./src/allmydata/mutable/servermap.py 808
11078                  level=log.NOISY,
11079                  parent=lp)
11080+        if not self._running:
11081+            # We can't process the results, since we can't touch the
11082+            # servermap anymore.
11083+            self.log("but we're not running anymore.")
11084+            return None
11085 
11086hunk ./src/allmydata/mutable/servermap.py 814
11087-        # this might raise NeedMoreDataError, if the pubkey and signature
11088-        # live at some weird offset. That shouldn't happen, so I'm going to
11089-        # treat it as a bad share.
11090-        (seqnum, root_hash, IV, k, N, segsize, datalength,
11091-         pubkey_s, signature, prefix) = unpack_prefix_and_signature(data)
11092-
11093-        if not self._node.get_pubkey():
11094-            fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
11095-            assert len(fingerprint) == 32
11096-            if fingerprint != self._node.get_fingerprint():
11097-                raise CorruptShareError(peerid, shnum,
11098-                                        "pubkey doesn't match fingerprint")
11099-            self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
11100-
11101-        if self._need_privkey:
11102-            self._try_to_extract_privkey(data, peerid, shnum, lp)
11103-
11104-        (ig_version, ig_seqnum, ig_root_hash, ig_IV, ig_k, ig_N,
11105-         ig_segsize, ig_datalen, offsets) = unpack_header(data)
11106+        _, verinfo, signature, __, ___ = results
11107+        (seqnum,
11108+         root_hash,
11109+         saltish,
11110+         segsize,
11111+         datalen,
11112+         k,
11113+         n,
11114+         prefix,
11115+         offsets) = verinfo[1]
11116         offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
11117 
11118hunk ./src/allmydata/mutable/servermap.py 826
11119-        verinfo = (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
11120+        # XXX: This should be done for us in the method, so
11121+        # presumably you can go in there and fix it.
11122+        verinfo = (seqnum,
11123+                   root_hash,
11124+                   saltish,
11125+                   segsize,
11126+                   datalen,
11127+                   k,
11128+                   n,
11129+                   prefix,
11130                    offsets_tuple)
11131hunk ./src/allmydata/mutable/servermap.py 837
11132+        # This tuple uniquely identifies a share on the grid; we use it
11133+        # to keep track of the ones that we've already seen.
11134 
11135         if verinfo not in self._valid_versions:
11136hunk ./src/allmydata/mutable/servermap.py 841
11137-            # it's a new pair. Verify the signature.
11138-            valid = self._node.get_pubkey().verify(prefix, signature)
11139+            # This is a new version tuple, and we need to validate it
11140+            # against the public key before keeping track of it.
11141+            assert self._node.get_pubkey()
11142+            valid = self._node.get_pubkey().verify(prefix, signature[1])
11143             if not valid:
11144hunk ./src/allmydata/mutable/servermap.py 846
11145-                raise CorruptShareError(peerid, shnum, "signature is invalid")
11146+                raise CorruptShareError(peerid, shnum,
11147+                                        "signature is invalid")
11148 
11149hunk ./src/allmydata/mutable/servermap.py 849
11150-            # ok, it's a valid verinfo. Add it to the list of validated
11151-            # versions.
11152-            self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
11153-                     % (seqnum, base32.b2a(root_hash)[:4],
11154-                        idlib.shortnodeid_b2a(peerid), shnum,
11155-                        k, N, segsize, datalength),
11156-                     parent=lp)
11157-            self._valid_versions.add(verinfo)
11158-        # We now know that this is a valid candidate verinfo.
11159+        # ok, it's a valid verinfo. Add it to the list of validated
11160+        # versions.
11161+        self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
11162+                 % (seqnum, base32.b2a(root_hash)[:4],
11163+                    idlib.shortnodeid_b2a(peerid), shnum,
11164+                    k, n, segsize, datalen),
11165+                    parent=lp)
11166+        self._valid_versions.add(verinfo)
11167+        # We now know that this is a valid candidate verinfo. Whether or
11168+        # not this instance of it is valid is a matter for the next
11169+        # statement; at this point, we just know that if we see this
11170+        # version info again, its signature checks out and we can
11171+        # skip the signature-checking step.
11172 
11173hunk ./src/allmydata/mutable/servermap.py 863
11174+        # (peerid, shnum) are bound in the method invocation.
11175         if (peerid, shnum) in self._servermap.bad_shares:
11176             # we've been told that the rest of the data in this share is
11177             # unusable, so don't add it to the servermap.
11178hunk ./src/allmydata/mutable/servermap.py 876
11179         self._servermap.add_new_share(peerid, shnum, verinfo, timestamp)
11180         # and the versionmap
11181         self.versionmap.add(verinfo, (shnum, peerid, timestamp))
11182+
11183         return verinfo
11184 
11185hunk ./src/allmydata/mutable/servermap.py 879
11186-    def _deserialize_pubkey(self, pubkey_s):
11187-        verifier = rsa.create_verifying_key_from_string(pubkey_s)
11188-        return verifier
11189 
11190hunk ./src/allmydata/mutable/servermap.py 880
11191-    def _try_to_extract_privkey(self, data, peerid, shnum, lp):
11192-        try:
11193-            r = unpack_share(data)
11194-        except NeedMoreDataError, e:
11195-            # this share won't help us. oh well.
11196-            offset = e.encprivkey_offset
11197-            length = e.encprivkey_length
11198-            self.log("shnum %d on peerid %s: share was too short (%dB) "
11199-                     "to get the encprivkey; [%d:%d] ought to hold it" %
11200-                     (shnum, idlib.shortnodeid_b2a(peerid), len(data),
11201-                      offset, offset+length),
11202-                     parent=lp)
11203-            # NOTE: if uncoordinated writes are taking place, someone might
11204-            # change the share (and most probably move the encprivkey) before
11205-            # we get a chance to do one of these reads and fetch it. This
11206-            # will cause us to see a NotEnoughSharesError(unable to fetch
11207-            # privkey) instead of an UncoordinatedWriteError . This is a
11208-            # nuisance, but it will go away when we move to DSA-based mutable
11209-            # files (since the privkey will be small enough to fit in the
11210-            # write cap).
11211+    def _got_update_results_one_share(self, results, share):
11212+        """
11213+        I record the update data extracted from results in the servermap.
11214+        """
11215+        assert len(results) == 4
11216+        verinfo, blockhashes, start, end = results
11217+        (seqnum,
11218+         root_hash,
11219+         saltish,
11220+         segsize,
11221+         datalen,
11222+         k,
11223+         n,
11224+         prefix,
11225+         offsets) = verinfo
11226+        offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
11227 
11228hunk ./src/allmydata/mutable/servermap.py 897
11229-            return
11230+        # XXX: This should be done for us in the method, so
11231+        # presumably you can go in there and fix it.
11232+        verinfo = (seqnum,
11233+                   root_hash,
11234+                   saltish,
11235+                   segsize,
11236+                   datalen,
11237+                   k,
11238+                   n,
11239+                   prefix,
11240+                   offsets_tuple)
11241 
11242hunk ./src/allmydata/mutable/servermap.py 909
11243-        (seqnum, root_hash, IV, k, N, segsize, datalen,
11244-         pubkey, signature, share_hash_chain, block_hash_tree,
11245-         share_data, enc_privkey) = r
11246+        update_data = (blockhashes, start, end)
11247+        self._servermap.set_update_data_for_share_and_verinfo(share,
11248+                                                              verinfo,
11249+                                                              update_data)
11250 
11251hunk ./src/allmydata/mutable/servermap.py 914
11252-        return self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp)
11253 
11254hunk ./src/allmydata/mutable/servermap.py 915
11255-    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
11256+    def _deserialize_pubkey(self, pubkey_s):
11257+        verifier = rsa.create_verifying_key_from_string(pubkey_s)
11258+        return verifier
11259 
11260hunk ./src/allmydata/mutable/servermap.py 919
11261+
11262+    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
11263+        """
11264+        Given an encrypted private key from a remote server, I decrypt
11265+        it, derive its writekey, and validate that against the writekey
11266+        stored in my node. If it matches, I set the privkey and encprivkey properties of the node.
11267+        """
11268         alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
11269         alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
11270         if alleged_writekey != self._node.get_writekey():
11271hunk ./src/allmydata/mutable/servermap.py 998
11272         self._queries_completed += 1
11273         self._last_failure = f
11274 
11275-    def _got_privkey_results(self, datavs, peerid, shnum, started, lp):
11276-        now = time.time()
11277-        elapsed = now - started
11278-        self._status.add_per_server_time(peerid, "privkey", started, elapsed)
11279-        self._queries_outstanding.discard(peerid)
11280-        if not self._need_privkey:
11281-            return
11282-        if shnum not in datavs:
11283-            self.log("privkey wasn't there when we asked it",
11284-                     level=log.WEIRD, umid="VA9uDQ")
11285-            return
11286-        datav = datavs[shnum]
11287-        enc_privkey = datav[0]
11288-        self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp)
11289 
11290     def _privkey_query_failed(self, f, peerid, shnum, lp):
11291         self._queries_outstanding.discard(peerid)
11292hunk ./src/allmydata/mutable/servermap.py 1012
11293         self._servermap.problems.append(f)
11294         self._last_failure = f
11295 
11296+
11297     def _check_for_done(self, res):
11298         # exit paths:
11299         #  return self._send_more_queries(outstanding) : send some more queries
11300hunk ./src/allmydata/mutable/servermap.py 1018
11301         #  return self._done() : all done
11302         #  return : keep waiting, no new queries
11303-
11304         lp = self.log(format=("_check_for_done, mode is '%(mode)s', "
11305                               "%(outstanding)d queries outstanding, "
11306                               "%(extra)d extra peers available, "
11307hunk ./src/allmydata/mutable/servermap.py 1209
11308 
11309     def _done(self):
11310         if not self._running:
11311+            self.log("not running; we're already done")
11312             return
11313         self._running = False
11314         now = time.time()
11315hunk ./src/allmydata/mutable/servermap.py 1224
11316         self._servermap.last_update_time = self._started
11317         # the servermap will not be touched after this
11318         self.log("servermap: %s" % self._servermap.summarize_versions())
11319+
11320         eventually(self._done_deferred.callback, self._servermap)
11321 
11322     def _fatal_error(self, f):
11323}
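Editor's note: the reworked `_got_signature_one_share` above verifies a signature at most once per distinct version, by flattening the offsets dict into a tuple so the whole verinfo is hashable and can be kept in a set of already-validated versions. The following is a minimal standalone sketch of that dedup bookkeeping only, not code from the patch; the class name, the `verify` callable, and the sample field values are hypothetical stand-ins.

```python
# Sketch of the servermap updater's "validate each verinfo once" idea.
# verify(prefix, signature) stands in for pubkey.verify() and is an
# assumption of this sketch, not Tahoe's API.

class SignatureCache:
    def __init__(self, verify):
        self._verify = verify          # callable(prefix, sig) -> bool
        self._valid_versions = set()   # verinfo tuples already validated
        self.checks = 0                # how many real verifications ran

    def validate(self, verinfo, signature):
        (seqnum, root_hash, salt, segsize, datalen,
         k, n, prefix, offsets) = verinfo
        # Flatten the offsets dict so the verinfo tuple is hashable.
        offsets_tuple = tuple(sorted(offsets.items()))
        key = (seqnum, root_hash, salt, segsize, datalen,
               k, n, prefix, offsets_tuple)
        if key not in self._valid_versions:
            # New version tuple: check the signature before trusting it.
            self.checks += 1
            if not self._verify(prefix, signature):
                raise ValueError("signature is invalid")
            self._valid_versions.add(key)
        # Seen-and-valid version: skip the expensive verification.
        return key
```

Seeing the same verinfo from ten shares then costs one signature check, which is the point of keeping `self._valid_versions` in the updater.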
11324[webapi changes for MDMF
11325Kevan Carstensen <kevan@isnotajoke.com>**20110807004348
11326 Ignore-this: d6d4dac680baa4c99b05882b3828796c
11327 
11328     - Learn how to create MDMF files and directories through the
11329       mutable-type argument.
11330     - Operate with the interface changes associated with MDMF and #993.
11331     - Learn how to do partial updates of mutable files.
11332] {
11333hunk ./src/allmydata/test/test_web.py 27
11334 from allmydata.util.netstring import split_netstring
11335 from allmydata.util.encodingutil import to_str
11336 from allmydata.test.common import FakeCHKFileNode, FakeMutableFileNode, \
11337-     create_chk_filenode, WebErrorMixin, ShouldFailMixin, make_mutable_file_uri
11338-from allmydata.interfaces import IMutableFileNode
11339+     create_chk_filenode, WebErrorMixin, ShouldFailMixin, \
11340+     make_mutable_file_uri, create_mutable_filenode
11341+from allmydata.interfaces import IMutableFileNode, SDMF_VERSION, MDMF_VERSION
11342 from allmydata.mutable import servermap, publish, retrieve
11343 import allmydata.test.common_util as testutil
11344 from allmydata.test.no_network import GridTestMixin
11345hunk ./src/allmydata/test/test_web.py 52
11346         return stats
11347 
11348 class FakeNodeMaker(NodeMaker):
11349+    encoding_params = {
11350+        'k': 3,
11351+        'n': 10,
11352+        'happy': 7,
11353+        'max_segment_size':128*1024 # 1024=KiB
11354+    }
11355     def _create_lit(self, cap):
11356         return FakeCHKFileNode(cap)
11357     def _create_immutable(self, cap):
11358hunk ./src/allmydata/test/test_web.py 63
11359         return FakeCHKFileNode(cap)
11360     def _create_mutable(self, cap):
11361-        return FakeMutableFileNode(None, None, None, None).init_from_cap(cap)
11362-    def create_mutable_file(self, contents="", keysize=None):
11363-        n = FakeMutableFileNode(None, None, None, None)
11364-        return n.create(contents)
11365+        return FakeMutableFileNode(None,
11366+                                   None,
11367+                                   self.encoding_params, None).init_from_cap(cap)
11368+    def create_mutable_file(self, contents="", keysize=None,
11369+                            version=SDMF_VERSION):
11370+        n = FakeMutableFileNode(None, None, self.encoding_params, None)
11371+        return n.create(contents, version=version)
11372 
11373 class FakeUploader(service.Service):
11374     name = "uploader"
11375hunk ./src/allmydata/test/test_web.py 177
11376         self.nodemaker = FakeNodeMaker(None, self._secret_holder, None,
11377                                        self.uploader, None,
11378                                        None, None)
11379+        self.mutable_file_default = SDMF_VERSION
11380 
11381     def startService(self):
11382         return service.MultiService.startService(self)
11383hunk ./src/allmydata/test/test_web.py 222
11384             foo.set_uri(u"bar.txt", self._bar_txt_uri, self._bar_txt_uri)
11385             self._bar_txt_verifycap = n.get_verify_cap().to_string()
11386 
11387+            # sdmf
11388+            # XXX: Do we ever use this?
11389+            self.BAZ_CONTENTS, n, self._baz_txt_uri, self._baz_txt_readonly_uri = self.makefile_mutable(0)
11390+
11391+            foo.set_uri(u"baz.txt", self._baz_txt_uri, self._baz_txt_readonly_uri)
11392+
11393+            # mdmf
11394+            self.QUUX_CONTENTS, n, self._quux_txt_uri, self._quux_txt_readonly_uri = self.makefile_mutable(0, mdmf=True)
11395+            assert self._quux_txt_uri.startswith("URI:MDMF")
11396+            foo.set_uri(u"quux.txt", self._quux_txt_uri, self._quux_txt_readonly_uri)
11397+
11398             foo.set_uri(u"empty", res[3][1].get_uri(),
11399                         res[3][1].get_readonly_uri())
11400             sub_uri = res[4][1].get_uri()
11401hunk ./src/allmydata/test/test_web.py 264
11402             # public/
11403             # public/foo/
11404             # public/foo/bar.txt
11405+            # public/foo/baz.txt
11406+            # public/foo/quux.txt
11407             # public/foo/blockingfile
11408             # public/foo/empty/
11409             # public/foo/sub/
11410hunk ./src/allmydata/test/test_web.py 286
11411         n = create_chk_filenode(contents)
11412         return contents, n, n.get_uri()
11413 
11414+    def makefile_mutable(self, number, mdmf=False):
11415+        contents = "contents of mutable file %s\n" % number
11416+        n = create_mutable_filenode(contents, mdmf)
11417+        return contents, n, n.get_uri(), n.get_readonly_uri()
11418+
11419     def tearDown(self):
11420         return self.s.stopService()
11421 
11422hunk ./src/allmydata/test/test_web.py 297
11423     def failUnlessIsBarDotTxt(self, res):
11424         self.failUnlessReallyEqual(res, self.BAR_CONTENTS, res)
11425 
11426+    def failUnlessIsQuuxDotTxt(self, res):
11427+        self.failUnlessReallyEqual(res, self.QUUX_CONTENTS, res)
11428+
11429+    def failUnlessIsBazDotTxt(self, res):
11430+        self.failUnlessReallyEqual(res, self.BAZ_CONTENTS, res)
11431+
11432     def failUnlessIsBarJSON(self, res):
11433         data = simplejson.loads(res)
11434         self.failUnless(isinstance(data, list))
11435hunk ./src/allmydata/test/test_web.py 314
11436         self.failUnlessReallyEqual(to_str(data[1]["verify_uri"]), self._bar_txt_verifycap)
11437         self.failUnlessReallyEqual(data[1]["size"], len(self.BAR_CONTENTS))
11438 
11439+    def failUnlessIsQuuxJSON(self, res, readonly=False):
11440+        data = simplejson.loads(res)
11441+        self.failUnless(isinstance(data, list))
11442+        self.failUnlessEqual(data[0], "filenode")
11443+        self.failUnless(isinstance(data[1], dict))
11444+        metadata = data[1]
11445+        return self.failUnlessIsQuuxDotTxtMetadata(metadata, readonly)
11446+
11447+    def failUnlessIsQuuxDotTxtMetadata(self, metadata, readonly):
11448+        self.failUnless(metadata['mutable'])
11449+        if readonly:
11450+            self.failIf("rw_uri" in metadata)
11451+        else:
11452+            self.failUnless("rw_uri" in metadata)
11453+            self.failUnlessEqual(metadata['rw_uri'], self._quux_txt_uri)
11454+        self.failUnless("ro_uri" in metadata)
11455+        self.failUnlessEqual(metadata['ro_uri'], self._quux_txt_readonly_uri)
11456+        self.failUnlessReallyEqual(metadata['size'], len(self.QUUX_CONTENTS))
11457+
11458     def failUnlessIsFooJSON(self, res):
11459         data = simplejson.loads(res)
11460         self.failUnless(isinstance(data, list))
11461hunk ./src/allmydata/test/test_web.py 346
11462 
11463         kidnames = sorted([unicode(n) for n in data[1]["children"]])
11464         self.failUnlessEqual(kidnames,
11465-                             [u"bar.txt", u"blockingfile", u"empty",
11466-                              u"n\u00fc.txt", u"sub"])
11467+                             [u"bar.txt", u"baz.txt", u"blockingfile",
11468+                              u"empty", u"n\u00fc.txt", u"quux.txt", u"sub"])
11469         kids = dict( [(unicode(name),value)
11470                       for (name,value)
11471                       in data[1]["children"].iteritems()] )
11472hunk ./src/allmydata/test/test_web.py 368
11473                                    self._bar_txt_metadata["tahoe"]["linkcrtime"])
11474         self.failUnlessReallyEqual(to_str(kids[u"n\u00fc.txt"][1]["ro_uri"]),
11475                                    self._bar_txt_uri)
11476+        self.failUnlessIn("quux.txt", kids)
11477+        self.failUnlessReallyEqual(kids[u"quux.txt"][1]["rw_uri"],
11478+                                   self._quux_txt_uri)
11479+        self.failUnlessReallyEqual(kids[u"quux.txt"][1]["ro_uri"],
11480+                                   self._quux_txt_readonly_uri)
11481 
11482     def GET(self, urlpath, followRedirect=False, return_response=False,
11483             **kwargs):
11484hunk ./src/allmydata/test/test_web.py 845
11485                              self.PUT, base + "/@@name=/blah.txt", "")
11486         return d
11487 
11488+
11489     def test_GET_DIRURL_named_bad(self):
11490         base = "/file/%s" % urllib.quote(self._foo_uri)
11491         d = self.shouldFail2(error.Error, "test_PUT_DIRURL_named_bad",
11492hunk ./src/allmydata/test/test_web.py 888
11493         d.addCallback(self.failUnlessIsBarDotTxt)
11494         return d
11495 
11496+    def test_GET_FILE_URI_mdmf(self):
11497+        base = "/uri/%s" % urllib.quote(self._quux_txt_uri)
11498+        d = self.GET(base)
11499+        d.addCallback(self.failUnlessIsQuuxDotTxt)
11500+        return d
11501+
11502+    def test_GET_FILE_URI_mdmf_extensions(self):
11503+        base = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
11504+        d = self.GET(base)
11505+        d.addCallback(self.failUnlessIsQuuxDotTxt)
11506+        return d
11507+
11508+    def test_GET_FILE_URI_mdmf_bare_cap(self):
11509+        cap_elements = self._quux_txt_uri.split(":")
11510+        # 6 == expected cap length with two extensions.
11511+        self.failUnlessEqual(len(cap_elements), 6)
11512+
11513+        # Now lop off the extension parameters and stitch everything
11514+        # back together
11515+        quux_uri = ":".join(cap_elements[:len(cap_elements) - 2])
11516+
11517+        # Now GET that. We should get back quux.
11518+        base = "/uri/%s" % urllib.quote(quux_uri)
11519+        d = self.GET(base)
11520+        d.addCallback(self.failUnlessIsQuuxDotTxt)
11521+        return d
11522+
11523+    def test_GET_FILE_URI_mdmf_readonly(self):
11524+        base = "/uri/%s" % urllib.quote(self._quux_txt_readonly_uri)
11525+        d = self.GET(base)
11526+        d.addCallback(self.failUnlessIsQuuxDotTxt)
11527+        return d
11528+
11529     def test_GET_FILE_URI_badchild(self):
11530         base = "/uri/%s/boguschild" % urllib.quote(self._bar_txt_uri)
11531         errmsg = "Files have no children, certainly not named 'boguschild'"
11532hunk ./src/allmydata/test/test_web.py 937
11533                              self.PUT, base, "")
11534         return d
11535 
11536+    def test_PUT_FILE_URI_mdmf(self):
11537+        base = "/uri/%s" % urllib.quote(self._quux_txt_uri)
11538+        self._quux_new_contents = "new_contents"
11539+        d = self.GET(base)
11540+        d.addCallback(lambda res:
11541+            self.failUnlessIsQuuxDotTxt(res))
11542+        d.addCallback(lambda ignored:
11543+            self.PUT(base, self._quux_new_contents))
11544+        d.addCallback(lambda ignored:
11545+            self.GET(base))
11546+        d.addCallback(lambda res:
11547+            self.failUnlessReallyEqual(res, self._quux_new_contents))
11548+        return d
11549+
11550+    def test_PUT_FILE_URI_mdmf_extensions(self):
11551+        base = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
11552+        self._quux_new_contents = "new_contents"
11553+        d = self.GET(base)
11554+        d.addCallback(lambda res: self.failUnlessIsQuuxDotTxt(res))
11555+        d.addCallback(lambda ignored: self.PUT(base, self._quux_new_contents))
11556+        d.addCallback(lambda ignored: self.GET(base))
11557+        d.addCallback(lambda res: self.failUnlessEqual(self._quux_new_contents,
11558+                                                       res))
11559+        return d
11560+
11561+    def test_PUT_FILE_URI_mdmf_bare_cap(self):
11562+        elements = self._quux_txt_uri.split(":")
11563+        self.failUnlessEqual(len(elements), 6)
11564+
11565+        quux_uri = ":".join(elements[:len(elements) - 2])
11566+        base = "/uri/%s" % urllib.quote(quux_uri)
11567+        self._quux_new_contents = "new_contents" * 50000
11568+
11569+        d = self.GET(base)
11570+        d.addCallback(self.failUnlessIsQuuxDotTxt)
11571+        d.addCallback(lambda ignored: self.PUT(base, self._quux_new_contents))
11572+        d.addCallback(lambda ignored: self.GET(base))
11573+        d.addCallback(lambda res:
11574+            self.failUnlessEqual(res, self._quux_new_contents))
11575+        return d
11576+
11577+    def test_PUT_FILE_URI_mdmf_readonly(self):
11578+        # We're not allowed to PUT things to a readonly cap.
11579+        base = "/uri/%s" % urllib.quote(self._quux_txt_readonly_uri)
11580+        d = self.GET(base)
11581+        d.addCallback(lambda res:
11582+            self.failUnlessIsQuuxDotTxt(res))
11583+        # A PUT to a read-only cap should fail with 400 Bad Request.
11584+        d.addCallback(lambda ignored:
11585+            self.shouldFail2(error.Error, "test_PUT_FILE_URI_mdmf_readonly",
11586+                             "400 Bad Request", "read-only cap",
11587+                             self.PUT, base, "new data"))
11588+        return d
11589+
11590+    def test_PUT_FILE_URI_sdmf_readonly(self):
11591+        # We're not allowed to PUT things to a readonly cap.
11592+        base = "/uri/%s" % urllib.quote(self._baz_txt_readonly_uri)
11593+        d = self.GET(base)
11594+        d.addCallback(lambda res:
11595+            self.failUnlessIsBazDotTxt(res))
11596+        d.addCallback(lambda ignored:
11597+            self.shouldFail2(error.Error, "test_PUT_FILE_URI_sdmf_readonly",
11598+                             "400 Bad Request", "read-only cap",
11599+                             self.PUT, base, "new_data"))
11600+        return d
11601+
11602     # TODO: version of this with a Unicode filename
11603     def test_GET_FILEURL_save(self):
11604         d = self.GET(self.public_url + "/foo/bar.txt?filename=bar.txt&save=true",
11605hunk ./src/allmydata/test/test_web.py 1019
11606         d.addBoth(self.should404, "test_GET_FILEURL_missing")
11607         return d
11608 
11609+    def test_GET_FILEURL_info_mdmf(self):
11610+        d = self.GET("/uri/%s?t=info" % self._quux_txt_uri)
11611+        def _got(res):
11612+            self.failUnlessIn("mutable file (mdmf)", res)
11613+            self.failUnlessIn(self._quux_txt_uri, res)
11614+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
11615+        d.addCallback(_got)
11616+        return d
11617+
11618+    def test_GET_FILEURL_info_mdmf_readonly(self):
11619+        d = self.GET("/uri/%s?t=info" % self._quux_txt_readonly_uri)
11620+        def _got(res):
11621+            self.failUnlessIn("mutable file (mdmf)", res)
11622+            self.failIfIn(self._quux_txt_uri, res)
11623+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
11624+        d.addCallback(_got)
11625+        return d
11626+
11627+    def test_GET_FILEURL_info_sdmf(self):
11628+        d = self.GET("/uri/%s?t=info" % self._baz_txt_uri)
11629+        def _got(res):
11630+            self.failUnlessIn("mutable file (sdmf)", res)
11631+            self.failUnlessIn(self._baz_txt_uri, res)
11632+        d.addCallback(_got)
11633+        return d
11634+
11635+    def test_GET_FILEURL_info_mdmf_extensions(self):
11636+        d = self.GET("/uri/%s:3:131073?t=info" % self._quux_txt_uri)
11637+        def _got(res):
11638+            self.failUnlessIn("mutable file (mdmf)", res)
11639+            self.failUnlessIn(self._quux_txt_uri, res)
11640+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
11641+        d.addCallback(_got)
11642+        return d
11643+
11644+    def test_GET_FILEURL_info_mdmf_bare_cap(self):
11645+        elements = self._quux_txt_uri.split(":")
11646+        self.failUnlessEqual(len(elements), 6)
11647+
11648+        quux_uri = ":".join(elements[:len(elements) - 2])
11649+        base = "/uri/%s?t=info" % urllib.quote(quux_uri)
11650+        d = self.GET(base)
11651+        def _got(res):
11652+            self.failUnlessIn("mutable file (mdmf)", res)
11653+            self.failUnlessIn(quux_uri, res)
11654+        d.addCallback(_got)
11655+        return d
11656+
11657     def test_PUT_overwrite_only_files(self):
11658         # create a directory, put a file in that directory.
11659         contents, n, filecap = self.makefile(8)
11660hunk ./src/allmydata/test/test_web.py 1108
11661                                                       self.NEWFILE_CONTENTS))
11662         return d
11663 
11664+    def test_PUT_NEWFILEURL_unlinked_mdmf(self):
11665+        # this should get us a few segments of an MDMF mutable file,
11666+        # which we can then test for.
11667+        contents = self.NEWFILE_CONTENTS * 300000
11668+        d = self.PUT("/uri?mutable=true&mutable-type=mdmf",
11669+                     contents)
11670+        def _got_filecap(filecap):
11671+            self.failUnless(filecap.startswith("URI:MDMF"))
11672+            return filecap
11673+        d.addCallback(_got_filecap)
11674+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
11675+        d.addCallback(lambda json: self.failUnlessIn("mdmf", json))
11676+        return d
11677+
11678+    def test_PUT_NEWFILEURL_unlinked_sdmf(self):
11679+        contents = self.NEWFILE_CONTENTS * 300000
11680+        d = self.PUT("/uri?mutable=true&mutable-type=sdmf",
11681+                     contents)
11682+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
11683+        d.addCallback(lambda json: self.failUnlessIn("sdmf", json))
11684+        return d
11685+
11686+    def test_PUT_NEWFILEURL_unlinked_bad_mutable_type(self):
11687+        contents = self.NEWFILE_CONTENTS * 300000
11688+        return self.shouldHTTPError("test bad mutable type",
11689+                                    400, "Bad Request", "Unknown type: foo",
11690+                                    self.PUT, "/uri?mutable=true&mutable-type=foo",
11691+                                    contents)
11692+
11693     def test_PUT_NEWFILEURL_range_bad(self):
11694         headers = {"content-range": "bytes 1-10/%d" % len(self.NEWFILE_CONTENTS)}
11695         target = self.public_url + "/foo/new.txt"
11696hunk ./src/allmydata/test/test_web.py 1169
11697         return d
11698 
11699     def test_PUT_NEWFILEURL_mutable_toobig(self):
11700-        d = self.shouldFail2(error.Error, "test_PUT_NEWFILEURL_mutable_toobig",
11701-                             "413 Request Entity Too Large",
11702-                             "SDMF is limited to one segment, and 10001 > 10000",
11703-                             self.PUT,
11704-                             self.public_url + "/foo/new.txt?mutable=true",
11705-                             "b" * (self.s.MUTABLE_SIZELIMIT+1))
11706+        # It is okay to upload large mutable files, so we should be able
11707+        # to do that.
11708+        d = self.PUT(self.public_url + "/foo/new.txt?mutable=true",
11709+                     "b" * (self.s.MUTABLE_SIZELIMIT + 1))
11710         return d
11711 
11712     def test_PUT_NEWFILEURL_replace(self):
11713hunk ./src/allmydata/test/test_web.py 1267
11714         d.addCallback(_check1)
11715         return d
11716 
11717+    def test_GET_FILEURL_json_mutable_type(self):
11718+        # The JSON should include mutable-type, which says whether the
11719+        # file is SDMF or MDMF
11720+        d = self.PUT("/uri?mutable=true&mutable-type=mdmf",
11721+                     self.NEWFILE_CONTENTS * 300000)
11722+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
11723+        def _got_json(json, version):
11724+            data = simplejson.loads(json)
11725+            assert "filenode" == data[0]
11726+            data = data[1]
11727+            assert isinstance(data, dict)
11728+
11729+            self.failUnlessIn("mutable-type", data)
11730+            self.failUnlessEqual(data['mutable-type'], version)
11731+
11732+        d.addCallback(_got_json, "mdmf")
11733+        # Now make an SDMF file and check that it is reported correctly.
11734+        d.addCallback(lambda ignored:
11735+            self.PUT("/uri?mutable=true&mutable-type=sdmf",
11736+                      self.NEWFILE_CONTENTS * 300000))
11737+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
11738+        d.addCallback(_got_json, "sdmf")
11739+        return d
11740+
11741+    def test_GET_FILEURL_json_mdmf_extensions(self):
11742+        # A GET invoked against a URL that includes an MDMF cap with
11743+        # extensions should fetch the same JSON information as a GET
11744+        # invoked against a bare cap.
11745+        self._quux_txt_uri = "%s:3:131073" % self._quux_txt_uri
11746+        self._quux_txt_readonly_uri = "%s:3:131073" % self._quux_txt_readonly_uri
11747+        d = self.GET("/uri/%s?t=json" % urllib.quote(self._quux_txt_uri))
11748+        d.addCallback(self.failUnlessIsQuuxJSON)
11749+        return d
11750+
11751+    def test_GET_FILEURL_json_mdmf_bare_cap(self):
11752+        elements = self._quux_txt_uri.split(":")
11753+        self.failUnlessEqual(len(elements), 6)
11754+
11755+        quux_uri = ":".join(elements[:len(elements) - 2])
11756+        # so failUnlessIsQuuxJSON will work.
11757+        self._quux_txt_uri = quux_uri
11758+
11759+        # we need to alter the readonly URI in the same way, again so
11760+        # failUnlessIsQuuxJSON will work
11761+        elements = self._quux_txt_readonly_uri.split(":")
11762+        self.failUnlessEqual(len(elements), 6)
11763+        quux_ro_uri = ":".join(elements[:len(elements) - 2])
11764+        self._quux_txt_readonly_uri = quux_ro_uri
11765+
11766+        base = "/uri/%s?t=json" % urllib.quote(quux_uri)
11767+        d = self.GET(base)
11768+        d.addCallback(self.failUnlessIsQuuxJSON)
11769+        return d
11770+
11771+    def test_GET_FILEURL_json_mdmf_bare_readonly_cap(self):
11772+        elements = self._quux_txt_readonly_uri.split(":")
11773+        self.failUnlessEqual(len(elements), 6)
11774+
11775+        quux_readonly_uri = ":".join(elements[:len(elements) - 2])
11776+        # so failUnlessIsQuuxJSON will work
11777+        self._quux_txt_readonly_uri = quux_readonly_uri
11778+        base = "/uri/%s?t=json" % quux_readonly_uri
11779+        d = self.GET(base)
11780+        # failUnlessIsQuuxJSON knows how to check the JSON of a
11781+        # read-only filenode when called with readonly=True, so use
11782+        # that here.
11783+        d.addCallback(self.failUnlessIsQuuxJSON, readonly=True)
11784+        return d
11785+
11786+    def test_GET_FILEURL_json_mdmf(self):
11787+        d = self.GET("/uri/%s?t=json" % urllib.quote(self._quux_txt_uri))
11788+        d.addCallback(self.failUnlessIsQuuxJSON)
11789+        return d
11790+
11791     def test_GET_FILEURL_json_missing(self):
11792         d = self.GET(self.public_url + "/foo/missing?json")
11793         d.addBoth(self.should404, "test_GET_FILEURL_json_missing")
11794hunk ./src/allmydata/test/test_web.py 1373
11795             self.failUnless(CSS_STYLE.search(res), res)
11796         d.addCallback(_check)
11797         return d
11798-   
11799+
11800     def test_GET_FILEURL_uri_missing(self):
11801         d = self.GET(self.public_url + "/foo/missing?t=uri")
11802         d.addBoth(self.should404, "test_GET_FILEURL_uri_missing")
11803hunk ./src/allmydata/test/test_web.py 1379
11804         return d
11805 
11806-    def test_GET_DIRECTORY_html_banner(self):
11807+    def test_GET_DIRECTORY_html(self):
11808         d = self.GET(self.public_url + "/foo", followRedirect=True)
11809         def _check(res):
11810             self.failUnlessIn('<div class="toolbar-item"><a href="../../..">Return to Welcome page</a></div>',res)
11811hunk ./src/allmydata/test/test_web.py 1383
11812+            # These are radio buttons that allow a user to toggle
11813+            # whether a particular mutable file is SDMF or MDMF.
11814+            self.failUnlessIn("mutable-type-mdmf", res)
11815+            self.failUnlessIn("mutable-type-sdmf", res)
11816+            # Similarly, these toggle whether a particular directory
11817+            # should be MDMF or SDMF.
11818+            self.failUnlessIn("mutable-directory-mdmf", res)
11819+            self.failUnlessIn("mutable-directory-sdmf", res)
11820+            self.failUnlessIn("quux", res)
11821         d.addCallback(_check)
11822         return d
11823 
11824hunk ./src/allmydata/test/test_web.py 1395
11825+    def test_GET_root_html(self):
11826+        # make sure that we have the option to upload an unlinked
11827+        # mutable file in SDMF and MDMF formats.
11828+        d = self.GET("/")
11829+        def _got_html(html):
11830+            # These are radio buttons that allow the user to toggle
11831+            # whether a particular mutable file is MDMF or SDMF.
11832+            self.failUnlessIn("mutable-type-mdmf", html)
11833+            self.failUnlessIn("mutable-type-sdmf", html)
11834+            # We should also have the ability to create a mutable directory.
11835+            self.failUnlessIn("mkdir", html)
11836+            # ...and we should have the ability to say whether that's an
11837+            # MDMF or SDMF directory
11838+            self.failUnlessIn("mutable-directory-mdmf", html)
11839+            self.failUnlessIn("mutable-directory-sdmf", html)
11840+        d.addCallback(_got_html)
11841+        return d
11842+
11843+    def test_mutable_type_defaults(self):
11844+        # The checked="checked" attribute of the inputs corresponding to
11845+        # the mutable-type parameter should change as expected with the
11846+        # value configured in tahoe.cfg.
11847+        #
11848+        # By default, the value configured with the client is
11849+        # SDMF_VERSION, so that should be checked.
11850+        assert self.s.mutable_file_default == SDMF_VERSION
11851+
11852+        d = self.GET("/")
11853+        def _got_html(html, value):
11854+            i = 'input checked="checked" type="radio" id="mutable-type-%s"'
11855+            self.failUnlessIn(i % value, html)
11856+        d.addCallback(_got_html, "sdmf")
11857+        d.addCallback(lambda ignored:
11858+            self.GET(self.public_url + "/foo", followRedirect=True))
11859+        d.addCallback(_got_html, "sdmf")
11860+        # Now switch the configuration value to MDMF. The MDMF radio
11861+        # buttons should now be checked on these pages.
11862+        def _swap_values(ignored):
11863+            self.s.mutable_file_default = MDMF_VERSION
11864+        d.addCallback(_swap_values)
11865+        d.addCallback(lambda ignored: self.GET("/"))
11866+        d.addCallback(_got_html, "mdmf")
11867+        d.addCallback(lambda ignored:
11868+            self.GET(self.public_url + "/foo", followRedirect=True))
11869+        d.addCallback(_got_html, "mdmf")
11870+        return d
11871+
11872     def test_GET_DIRURL(self):
11873         # the addSlash means we get a redirect here
11874         # from /uri/$URI/foo/ , we need ../../../ to get back to the root
11875hunk ./src/allmydata/test/test_web.py 1535
11876         d.addCallback(self.failUnlessIsFooJSON)
11877         return d
11878 
11879+    def test_GET_DIRURL_json_mutable_type(self):
11880+        d = self.PUT(self.public_url + \
11881+                     "/foo/sdmf.txt?mutable=true&mutable-type=sdmf",
11882+                     self.NEWFILE_CONTENTS * 300000)
11883+        d.addCallback(lambda ignored:
11884+            self.PUT(self.public_url + \
11885+                     "/foo/mdmf.txt?mutable=true&mutable-type=mdmf",
11886+                     self.NEWFILE_CONTENTS * 300000))
11887+        # Now we have an MDMF and SDMF file in the directory. If we GET
11888+        # its JSON, we should see their encodings.
11889+        d.addCallback(lambda ignored:
11890+            self.GET(self.public_url + "/foo?t=json"))
11891+        def _got_json(json):
11892+            data = simplejson.loads(json)
11893+            assert data[0] == "dirnode"
11894+
11895+            data = data[1]
11896+            kids = data['children']
11897+
11898+            mdmf_data = kids['mdmf.txt'][1]
11899+            self.failUnlessIn("mutable-type", mdmf_data)
11900+            self.failUnlessEqual(mdmf_data['mutable-type'], "mdmf")
11901+
11902+            sdmf_data = kids['sdmf.txt'][1]
11903+            self.failUnlessIn("mutable-type", sdmf_data)
11904+            self.failUnlessEqual(sdmf_data['mutable-type'], "sdmf")
11905+        d.addCallback(_got_json)
11906+        return d
11907+
11908 
11909     def test_POST_DIRURL_manifest_no_ophandle(self):
11910         d = self.shouldFail2(error.Error,
11911hunk ./src/allmydata/test/test_web.py 1659
11912         d.addCallback(self.get_operation_results, "127", "json")
11913         def _got_json(stats):
11914             expected = {"count-immutable-files": 3,
11915-                        "count-mutable-files": 0,
11916+                        "count-mutable-files": 2,
11917                         "count-literal-files": 0,
11918hunk ./src/allmydata/test/test_web.py 1661
11919-                        "count-files": 3,
11920+                        "count-files": 5,
11921                         "count-directories": 3,
11922                         "size-immutable-files": 57,
11923                         "size-literal-files": 0,
11924hunk ./src/allmydata/test/test_web.py 1667
11925                         #"size-directories": 1912, # varies
11926                         #"largest-directory": 1590,
11927-                        "largest-directory-children": 5,
11928+                        "largest-directory-children": 7,
11929                         "largest-immutable-file": 19,
11930                         }
11931             for k,v in expected.iteritems():
11932hunk ./src/allmydata/test/test_web.py 1684
11933         def _check(res):
11934             self.failUnless(res.endswith("\n"))
11935             units = [simplejson.loads(t) for t in res[:-1].split("\n")]
11936-            self.failUnlessReallyEqual(len(units), 7)
11937+            self.failUnlessReallyEqual(len(units), 9)
11938             self.failUnlessEqual(units[-1]["type"], "stats")
11939             first = units[0]
11940             self.failUnlessEqual(first["path"], [])
11941hunk ./src/allmydata/test/test_web.py 1695
11942             self.failIfEqual(baz["storage-index"], None)
11943             self.failIfEqual(baz["verifycap"], None)
11944             self.failIfEqual(baz["repaircap"], None)
11945+            # XXX: Add quux and baz to this test.
11946             return
11947         d.addCallback(_check)
11948         return d
11949hunk ./src/allmydata/test/test_web.py 1722
11950         d.addCallback(self.failUnlessNodeKeysAre, [])
11951         return d
11952 
11953+    def test_PUT_NEWDIRURL_mdmf(self):
11954+        d = self.PUT(self.public_url + "/foo/newdir?t=mkdir&mutable-type=mdmf", "")
11955+        d.addCallback(lambda res:
11956+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
11957+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
11958+        d.addCallback(lambda node:
11959+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
11960+        return d
11961+
11962+    def test_PUT_NEWDIRURL_sdmf(self):
11963+        d = self.PUT(self.public_url + "/foo/newdir?t=mkdir&mutable-type=sdmf",
11964+                     "")
11965+        d.addCallback(lambda res:
11966+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
11967+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
11968+        d.addCallback(lambda node:
11969+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
11970+        return d
11971+
11972+    def test_PUT_NEWDIRURL_bad_mutable_type(self):
11973+        return self.shouldHTTPError("test bad mutable type",
11974+                             400, "Bad Request", "Unknown type: foo",
11975+                             self.PUT, self.public_url + \
11976+                             "/foo/newdir?t=mkdir&mutable-type=foo", "")
11977+
11978     def test_POST_NEWDIRURL(self):
11979         d = self.POST2(self.public_url + "/foo/newdir?t=mkdir", "")
11980         d.addCallback(lambda res:
11981hunk ./src/allmydata/test/test_web.py 1755
11982         d.addCallback(self.failUnlessNodeKeysAre, [])
11983         return d
11984 
11985+    def test_POST_NEWDIRURL_mdmf(self):
11986+        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir&mutable-type=mdmf", "")
11987+        d.addCallback(lambda res:
11988+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
11989+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
11990+        d.addCallback(lambda node:
11991+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
11992+        return d
11993+
11994+    def test_POST_NEWDIRURL_sdmf(self):
11995+        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir&mutable-type=sdmf", "")
11996+        d.addCallback(lambda res:
11997+            self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
11998+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
11999+        d.addCallback(lambda node:
12000+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
12001+        return d
12002+
12003+    def test_POST_NEWDIRURL_bad_mutable_type(self):
12004+        return self.shouldHTTPError("test bad mutable type",
12005+                                    400, "Bad Request", "Unknown type: foo",
12006+                                    self.POST2, self.public_url + \
12007+                                    "/foo/newdir?t=mkdir&mutable-type=foo", "")
12008+
12009     def test_POST_NEWDIRURL_emptyname(self):
12010         # an empty pathname component (i.e. a double-slash) is disallowed
12011         d = self.shouldFail2(error.Error, "test_POST_NEWDIRURL_emptyname",
12012hunk ./src/allmydata/test/test_web.py 1787
12013                              self.POST, self.public_url + "//?t=mkdir")
12014         return d
12015 
12016-    def test_POST_NEWDIRURL_initial_children(self):
12017+    def _do_POST_NEWDIRURL_initial_children_test(self, version=None):
12018         (newkids, caps) = self._create_initial_children()
12019hunk ./src/allmydata/test/test_web.py 1789
12020-        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir-with-children",
12021+        query = "/foo/newdir?t=mkdir-with-children"
12022+        if version == MDMF_VERSION:
12023+            query += "&mutable-type=mdmf"
12024+        elif version == SDMF_VERSION:
12025+            query += "&mutable-type=sdmf"
12026+        else:
12027+            version = SDMF_VERSION # the default; checked in _check below
12028+        d = self.POST2(self.public_url + query,
12029                        simplejson.dumps(newkids))
12030         def _check(uri):
12031             n = self.s.create_node_from_uri(uri.strip())
12032hunk ./src/allmydata/test/test_web.py 1801
12033             d2 = self.failUnlessNodeKeysAre(n, newkids.keys())
12034+            self.failUnlessEqual(n._node.get_version(), version)
12035             d2.addCallback(lambda ign:
12036                            self.failUnlessROChildURIIs(n, u"child-imm",
12037                                                        caps['filecap1']))
12038hunk ./src/allmydata/test/test_web.py 1839
12039         d.addCallback(self.failUnlessROChildURIIs, u"child-imm", caps['filecap1'])
12040         return d
12041 
12042+    def test_POST_NEWDIRURL_initial_children(self):
12043+        return self._do_POST_NEWDIRURL_initial_children_test()
12044+
12045+    def test_POST_NEWDIRURL_initial_children_mdmf(self):
12046+        return self._do_POST_NEWDIRURL_initial_children_test(MDMF_VERSION)
12047+
12048+    def test_POST_NEWDIRURL_initial_children_sdmf(self):
12049+        return self._do_POST_NEWDIRURL_initial_children_test(SDMF_VERSION)
12050+
12051+    def test_POST_NEWDIRURL_initial_children_bad_mutable_type(self):
12052+        (newkids, caps) = self._create_initial_children()
12053+        return self.shouldHTTPError("test bad mutable type",
12054+                                    400, "Bad Request", "Unknown type: foo",
12055+                                    self.POST2, self.public_url + \
12056+                                    "/foo/newdir?t=mkdir-with-children&mutable-type=foo",
12057+                                    simplejson.dumps(newkids))
12058+
12059     def test_POST_NEWDIRURL_immutable(self):
12060         (newkids, caps) = self._create_immutable_children()
12061         d = self.POST2(self.public_url + "/foo/newdir?t=mkdir-immutable",
12062hunk ./src/allmydata/test/test_web.py 1956
12063         d.addCallback(self.failUnlessNodeKeysAre, [])
12064         return d
12065 
12066+    def test_PUT_NEWDIRURL_mkdirs_mdmf(self):
12067+        d = self.PUT(self.public_url + "/foo/subdir/newdir?t=mkdir&mutable-type=mdmf", "")
12068+        d.addCallback(lambda ignored:
12069+            self.failUnlessNodeHasChild(self._foo_node, u"subdir"))
12070+        d.addCallback(lambda ignored:
12071+            self.failIfNodeHasChild(self._foo_node, u"newdir"))
12072+        d.addCallback(lambda ignored:
12073+            self._foo_node.get_child_at_path(u"subdir"))
12074+        def _got_subdir(subdir):
12075+            # XXX: What we want?
12076+            #self.failUnlessEqual(subdir._node.get_version(), MDMF_VERSION)
12077+            self.failUnlessNodeHasChild(subdir, u"newdir")
12078+            return subdir.get_child_at_path(u"newdir")
12079+        d.addCallback(_got_subdir)
12080+        d.addCallback(lambda newdir:
12081+            self.failUnlessEqual(newdir._node.get_version(), MDMF_VERSION))
12082+        return d
12083+
12084+    def test_PUT_NEWDIRURL_mkdirs_sdmf(self):
12085+        d = self.PUT(self.public_url + "/foo/subdir/newdir?t=mkdir&mutable-type=sdmf", "")
12086+        d.addCallback(lambda ignored:
12087+            self.failUnlessNodeHasChild(self._foo_node, u"subdir"))
12088+        d.addCallback(lambda ignored:
12089+            self.failIfNodeHasChild(self._foo_node, u"newdir"))
12090+        d.addCallback(lambda ignored:
12091+            self._foo_node.get_child_at_path(u"subdir"))
12092+        def _got_subdir(subdir):
12093+            # XXX: What we want?
12094+            #self.failUnlessEqual(subdir._node.get_version(), MDMF_VERSION)
12095+            self.failUnlessNodeHasChild(subdir, u"newdir")
12096+            return subdir.get_child_at_path(u"newdir")
12097+        d.addCallback(_got_subdir)
12098+        d.addCallback(lambda newdir:
12099+            self.failUnlessEqual(newdir._node.get_version(), SDMF_VERSION))
12100+        return d
12101+
12102+    def test_PUT_NEWDIRURL_mkdirs_bad_mutable_type(self):
12103+        return self.shouldHTTPError("test bad mutable type",
12104+                                    400, "Bad Request", "Unknown type: foo",
12105+                                    self.PUT, self.public_url + \
12106+                                    "/foo/subdir/newdir?t=mkdir&mutable-type=foo",
12107+                                    "")
12108+
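The mkdir and upload tests above all exercise the same handling of the `mutable-type` query argument: `sdmf` and `mdmf` each select a version, an absent argument defaults to SDMF, and anything else yields "400 Bad Request, Unknown type: foo". A minimal sketch of that dispatch (a hypothetical helper with stand-in version constants, not the actual webapi code):

```python
# Hypothetical sketch of the mutable-type dispatch these tests assume;
# the real handler lives in the webapi code, not here.
SDMF_VERSION, MDMF_VERSION = 0, 1  # assumed stand-ins for the real constants

def parse_mutable_type(arg):
    # Absent or "sdmf" means the traditional single-segment format;
    # "mdmf" selects the multiple-segment format; anything else is a
    # client error (rendered as "400 Bad Request, Unknown type: <arg>").
    if arg is None or arg == "sdmf":
        return SDMF_VERSION
    if arg == "mdmf":
        return MDMF_VERSION
    raise ValueError("Unknown type: %s" % arg)
```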
12109     def test_DELETE_DIRURL(self):
12110         d = self.DELETE(self.public_url + "/foo")
12111         d.addCallback(lambda res:
12112hunk ./src/allmydata/test/test_web.py 2236
12113         return d
12114 
12115     def test_POST_upload_no_link_mutable_toobig(self):
12116-        d = self.shouldFail2(error.Error,
12117-                             "test_POST_upload_no_link_mutable_toobig",
12118-                             "413 Request Entity Too Large",
12119-                             "SDMF is limited to one segment, and 10001 > 10000",
12120-                             self.POST,
12121-                             "/uri", t="upload", mutable="true",
12122-                             file=("new.txt",
12123-                                   "b" * (self.s.MUTABLE_SIZELIMIT+1)) )
12124+        # The SDMF size limit is no longer in place, so we should be
12125+        # able to upload mutable files that are as large as we want them
12126+        # to be.
12127+        d = self.POST("/uri", t="upload", mutable="true",
12128+                      file=("new.txt", "b" * (self.s.MUTABLE_SIZELIMIT + 1)))
12129+        return d
12130+
12131+
12132+    def test_POST_upload_mutable_type_unlinked(self):
12133+        d = self.POST("/uri?t=upload&mutable=true&mutable-type=sdmf",
12134+                      file=("sdmf.txt", self.NEWFILE_CONTENTS * 300000))
12135+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
12136+        def _got_json(json, version):
12137+            data = simplejson.loads(json)
12138+            data = data[1]
12139+
12140+            self.failUnlessIn("mutable-type", data)
12141+            self.failUnlessEqual(data['mutable-type'], version)
12142+        d.addCallback(_got_json, "sdmf")
12143+        d.addCallback(lambda ignored:
12144+            self.POST("/uri?t=upload&mutable=true&mutable-type=mdmf",
12145+                      file=('mdmf.txt', self.NEWFILE_CONTENTS * 300000)))
12146+        def _got_filecap(filecap):
12147+            self.failUnless(filecap.startswith("URI:MDMF"))
12148+            return filecap
12149+        d.addCallback(_got_filecap)
12150+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
12151+        d.addCallback(_got_json, "mdmf")
12152+        return d
12153+
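The `_got_json` assertions above depend on the shape of the webapi's `?t=json` response for a file: a two-element list of node type and a metadata dict carrying `mutable-type` and the caps. A self-contained illustration of that shape (the sample values are made up):

```python
import json  # the tests use simplejson; stdlib json parses the same data

# Made-up sample of a ?t=json response body for an MDMF mutable file,
# matching the [nodetype, metadata] structure the assertions unpack.
sample = json.dumps(["filenode", {"mutable": True,
                                  "mutable-type": "mdmf",
                                  "rw_uri": "URI:MDMF:fake"}])
nodetype, data = json.loads(sample)
```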
12154+    def test_POST_upload_mutable_type_unlinked_bad_mutable_type(self):
12155+        return self.shouldHTTPError("test bad mutable type",
12156+                                    400, "Bad Request", "Unknown type: foo",
12157+                                    self.POST,
12158+                                    "/uri?t=upload&mutable=true&mutable-type=foo",
12159+                                    file=("foo.txt", self.NEWFILE_CONTENTS * 300000))
12160+
12161+    def test_POST_upload_mutable_type(self):
12162+        d = self.POST(self.public_url + \
12163+                      "/foo?t=upload&mutable=true&mutable-type=sdmf",
12164+                      file=("sdmf.txt", self.NEWFILE_CONTENTS * 300000))
12165+        fn = self._foo_node
12166+        def _got_cap(filecap, filename):
12167+            filenameu = unicode(filename)
12168+            self.failUnlessURIMatchesRWChild(filecap, fn, filenameu)
12169+            return self.GET(self.public_url + "/foo/%s?t=json" % filename)
12170+        def _got_mdmf_cap(filecap):
12171+            self.failUnless(filecap.startswith("URI:MDMF"))
12172+            return filecap
12173+        d.addCallback(_got_cap, "sdmf.txt")
12174+        def _got_json(json, version):
12175+            data = simplejson.loads(json)
12176+            data = data[1]
12177+
12178+            self.failUnlessIn("mutable-type", data)
12179+            self.failUnlessEqual(data['mutable-type'], version)
12180+        d.addCallback(_got_json, "sdmf")
12181+        d.addCallback(lambda ignored:
12182+            self.POST(self.public_url + \
12183+                      "/foo?t=upload&mutable=true&mutable-type=mdmf",
12184+                      file=("mdmf.txt", self.NEWFILE_CONTENTS * 300000)))
12185+        d.addCallback(_got_mdmf_cap)
12186+        d.addCallback(_got_cap, "mdmf.txt")
12187+        d.addCallback(_got_json, "mdmf")
12188         return d
12189 
12190hunk ./src/allmydata/test/test_web.py 2302
12191+    def test_POST_upload_bad_mutable_type(self):
12192+        return self.shouldHTTPError("test bad mutable type",
12193+                                    400, "Bad Request", "Unknown type: foo",
12194+                                    self.POST, self.public_url + \
12195+                                    "/foo?t=upload&mutable=true&mutable-type=foo",
12196+                                    file=("foo.txt", self.NEWFILE_CONTENTS * 300000))
12197+
12198     def test_POST_upload_mutable(self):
12199         # this creates a mutable file
12200         d = self.POST(self.public_url + "/foo", t="upload", mutable="true",
12201hunk ./src/allmydata/test/test_web.py 2433
12202             self.failUnlessReallyEqual(headers["content-type"], ["text/plain"])
12203         d.addCallback(_got_headers)
12204 
12205-        # make sure that size errors are displayed correctly for overwrite
12206-        d.addCallback(lambda res:
12207-                      self.shouldFail2(error.Error,
12208-                                       "test_POST_upload_mutable-toobig",
12209-                                       "413 Request Entity Too Large",
12210-                                       "SDMF is limited to one segment, and 10001 > 10000",
12211-                                       self.POST,
12212-                                       self.public_url + "/foo", t="upload",
12213-                                       mutable="true",
12214-                                       file=("new.txt",
12215-                                             "b" * (self.s.MUTABLE_SIZELIMIT+1)),
12216-                                       ))
12217-
12218+        # make sure that outdated size limits aren't enforced anymore.
12219+        d.addCallback(lambda ignored:
12220+            self.POST(self.public_url + "/foo", t="upload",
12221+                      mutable="true",
12222+                      file=("new.txt",
12223+                            "b" * (self.s.MUTABLE_SIZELIMIT+1))))
12224         d.addErrback(self.dump_error)
12225         return d
12226 
12227hunk ./src/allmydata/test/test_web.py 2443
12228     def test_POST_upload_mutable_toobig(self):
12229-        d = self.shouldFail2(error.Error,
12230-                             "test_POST_upload_mutable_toobig",
12231-                             "413 Request Entity Too Large",
12232-                             "SDMF is limited to one segment, and 10001 > 10000",
12233-                             self.POST,
12234-                             self.public_url + "/foo",
12235-                             t="upload", mutable="true",
12236-                             file=("new.txt",
12237-                                   "b" * (self.s.MUTABLE_SIZELIMIT+1)) )
12238+        # SDMF had a size limit that was removed a while ago. MDMF has
12239+        # never had a size limit. Test to make sure that we do not
12240+        # encounter errors when trying to upload large mutable files,
12241+        # since there should be no coded prohibitions regarding large
12242+        # mutable files.
12243+        d = self.POST(self.public_url + "/foo",
12244+                      t="upload", mutable="true",
12245+                      file=("new.txt", "b" * (self.s.MUTABLE_SIZELIMIT + 1)))
12246         return d
12247 
12248     def dump_error(self, f):
12249hunk ./src/allmydata/test/test_web.py 2538
12250         # make sure that nothing was added
12251         d.addCallback(lambda res:
12252                       self.failUnlessNodeKeysAre(self._foo_node,
12253-                                                 [u"bar.txt", u"blockingfile",
12254-                                                  u"empty", u"n\u00fc.txt",
12255+                                                 [u"bar.txt", u"baz.txt", u"blockingfile",
12256+                                                  u"empty", u"n\u00fc.txt", u"quux.txt",
12257                                                   u"sub"]))
12258         return d
12259 
12260hunk ./src/allmydata/test/test_web.py 2661
12261         d.addCallback(_check3)
12262         return d
12263 
12264+    def test_POST_FILEURL_mdmf_check(self):
12265+        quux_url = "/uri/%s" % urllib.quote(self._quux_txt_uri)
12266+        d = self.POST(quux_url, t="check")
12267+        def _check(res):
12268+            self.failUnlessIn("Healthy", res)
12269+        d.addCallback(_check)
12270+        quux_extension_url = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
12271+        d.addCallback(lambda ignored:
12272+            self.POST(quux_extension_url, t="check"))
12273+        d.addCallback(_check)
12274+        return d
12275+
12276+    def test_POST_FILEURL_mdmf_check_and_repair(self):
12277+        quux_url = "/uri/%s" % urllib.quote(self._quux_txt_uri)
12278+        d = self.POST(quux_url, t="check", repair="true")
12279+        def _check(res):
12280+            self.failUnlessIn("Healthy", res)
12281+        d.addCallback(_check)
12282+        quux_extension_url = "/uri/%s" %\
12283+            urllib.quote("%s:3:131073" % self._quux_txt_uri)
12284+        d.addCallback(lambda ignored:
12285+            self.POST(quux_extension_url, t="check", repair="true"))
12286+        d.addCallback(_check)
12287+        return d
12288+
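The check/repair tests above POST to both the bare quux cap and a variant with ":3:131073" appended. A sketch of building that URL fragment, assuming the trailing fields are integer hints (such as needed-shares and segment size) that may be appended to an MDMF cap:

```python
def with_extension_hints(cap, *hints):
    # Hypothetical helper mirroring the "%s:3:131073" string built in
    # the tests above: append ':'-separated integer hint fields to an
    # MDMF cap string. Whether the downloader uses or ignores the
    # hints, the annotated cap should still check as Healthy.
    return cap + "".join(":%d" % h for h in hints)
```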
12289     def wait_for_operation(self, ignored, ophandle):
12290         url = "/operations/" + ophandle
12291         url += "?t=status&output=JSON"
12292hunk ./src/allmydata/test/test_web.py 2731
12293         d.addCallback(self.wait_for_operation, "123")
12294         def _check_json(data):
12295             self.failUnlessReallyEqual(data["finished"], True)
12296-            self.failUnlessReallyEqual(data["count-objects-checked"], 8)
12297-            self.failUnlessReallyEqual(data["count-objects-healthy"], 8)
12298+            self.failUnlessReallyEqual(data["count-objects-checked"], 10)
12299+            self.failUnlessReallyEqual(data["count-objects-healthy"], 10)
12300         d.addCallback(_check_json)
12301         d.addCallback(self.get_operation_results, "123", "html")
12302         def _check_html(res):
12303hunk ./src/allmydata/test/test_web.py 2736
12304-            self.failUnless("Objects Checked: <span>8</span>" in res)
12305-            self.failUnless("Objects Healthy: <span>8</span>" in res)
12306+            self.failUnless("Objects Checked: <span>10</span>" in res)
12307+            self.failUnless("Objects Healthy: <span>10</span>" in res)
12308         d.addCallback(_check_html)
12309 
12310         d.addCallback(lambda res:
12311hunk ./src/allmydata/test/test_web.py 2766
12312         d.addCallback(self.wait_for_operation, "124")
12313         def _check_json(data):
12314             self.failUnlessReallyEqual(data["finished"], True)
12315-            self.failUnlessReallyEqual(data["count-objects-checked"], 8)
12316-            self.failUnlessReallyEqual(data["count-objects-healthy-pre-repair"], 8)
12317+            self.failUnlessReallyEqual(data["count-objects-checked"], 10)
12318+            self.failUnlessReallyEqual(data["count-objects-healthy-pre-repair"], 10)
12319             self.failUnlessReallyEqual(data["count-objects-unhealthy-pre-repair"], 0)
12320             self.failUnlessReallyEqual(data["count-corrupt-shares-pre-repair"], 0)
12321             self.failUnlessReallyEqual(data["count-repairs-attempted"], 0)
12322hunk ./src/allmydata/test/test_web.py 2773
12323             self.failUnlessReallyEqual(data["count-repairs-successful"], 0)
12324             self.failUnlessReallyEqual(data["count-repairs-unsuccessful"], 0)
12325-            self.failUnlessReallyEqual(data["count-objects-healthy-post-repair"], 8)
12326+            self.failUnlessReallyEqual(data["count-objects-healthy-post-repair"], 10)
12327             self.failUnlessReallyEqual(data["count-objects-unhealthy-post-repair"], 0)
12328             self.failUnlessReallyEqual(data["count-corrupt-shares-post-repair"], 0)
12329         d.addCallback(_check_json)
12330hunk ./src/allmydata/test/test_web.py 2779
12331         d.addCallback(self.get_operation_results, "124", "html")
12332         def _check_html(res):
12333-            self.failUnless("Objects Checked: <span>8</span>" in res)
12334+            self.failUnless("Objects Checked: <span>10</span>" in res)
12335 
12336hunk ./src/allmydata/test/test_web.py 2781
12337-            self.failUnless("Objects Healthy (before repair): <span>8</span>" in res)
12338+            self.failUnless("Objects Healthy (before repair): <span>10</span>" in res)
12339             self.failUnless("Objects Unhealthy (before repair): <span>0</span>" in res)
12340             self.failUnless("Corrupt Shares (before repair): <span>0</span>" in res)
12341 
12342hunk ./src/allmydata/test/test_web.py 2789
12343             self.failUnless("Repairs Successful: <span>0</span>" in res)
12344             self.failUnless("Repairs Unsuccessful: <span>0</span>" in res)
12345 
12346-            self.failUnless("Objects Healthy (after repair): <span>8</span>" in res)
12347+            self.failUnless("Objects Healthy (after repair): <span>10</span>" in res)
12348             self.failUnless("Objects Unhealthy (after repair): <span>0</span>" in res)
12349             self.failUnless("Corrupt Shares (after repair): <span>0</span>" in res)
12350         d.addCallback(_check_html)
12351hunk ./src/allmydata/test/test_web.py 2808
12352         d.addCallback(self.failUnlessNodeKeysAre, [])
12353         return d
12354 
12355+    def test_POST_mkdir_mdmf(self):
12356+        d = self.POST(self.public_url + "/foo?t=mkdir&name=newdir&mutable-type=mdmf")
12357+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
12358+        d.addCallback(lambda node:
12359+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
12360+        return d
12361+
12362+    def test_POST_mkdir_sdmf(self):
12363+        d = self.POST(self.public_url + "/foo?t=mkdir&name=newdir&mutable-type=sdmf")
12364+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
12365+        d.addCallback(lambda node:
12366+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
12367+        return d
12368+
12369+    def test_POST_mkdir_bad_mutable_type(self):
12370+        return self.shouldHTTPError("test bad mutable type",
12371+                                    400, "Bad Request", "Unknown type: foo",
12372+                                    self.POST, self.public_url + \
12373+                                    "/foo?t=mkdir&name=newdir&mutable-type=foo")
12374+
12375     def test_POST_mkdir_initial_children(self):
12376         (newkids, caps) = self._create_initial_children()
12377         d = self.POST2(self.public_url +
12378hunk ./src/allmydata/test/test_web.py 2841
12379         d.addCallback(self.failUnlessROChildURIIs, u"child-imm", caps['filecap1'])
12380         return d
12381 
12382+    def test_POST_mkdir_initial_children_mdmf(self):
12383+        (newkids, caps) = self._create_initial_children()
12384+        d = self.POST2(self.public_url +
12385+                       "/foo?t=mkdir-with-children&name=newdir&mutable-type=mdmf",
12386+                       simplejson.dumps(newkids))
12387+        d.addCallback(lambda res:
12388+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
12389+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
12390+        d.addCallback(lambda node:
12391+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
12392+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
12393+        d.addCallback(self.failUnlessROChildURIIs, u"child-imm",
12394+                       caps['filecap1'])
12395+        return d
12396+
12397+    # XXX: Duplication.
12398+    def test_POST_mkdir_initial_children_sdmf(self):
12399+        (newkids, caps) = self._create_initial_children()
12400+        d = self.POST2(self.public_url +
12401+                       "/foo?t=mkdir-with-children&name=newdir&mutable-type=sdmf",
12402+                       simplejson.dumps(newkids))
12403+        d.addCallback(lambda res:
12404+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
12405+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
12406+        d.addCallback(lambda node:
12407+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
12408+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
12409+        d.addCallback(self.failUnlessROChildURIIs, u"child-imm",
12410+                       caps['filecap1'])
12411+        return d
12412+
12413+    def test_POST_mkdir_initial_children_bad_mutable_type(self):
12414+        (newkids, caps) = self._create_initial_children()
12415+        return self.shouldHTTPError("test bad mutable type",
12416+                                    400, "Bad Request", "Unknown type: foo",
12417+                                    self.POST, self.public_url + \
12418+                                    "/foo?t=mkdir-with-children&name=newdir&mutable-type=foo",
12419+                                    simplejson.dumps(newkids))
12420+
12421     def test_POST_mkdir_immutable(self):
12422         (newkids, caps) = self._create_immutable_children()
12423         d = self.POST2(self.public_url +
12424hunk ./src/allmydata/test/test_web.py 2936
12425         d.addCallback(_after_mkdir)
12426         return d
12427 
12428+    def test_POST_mkdir_no_parentdir_noredirect_mdmf(self):
12429+        d = self.POST("/uri?t=mkdir&mutable-type=mdmf")
12430+        def _after_mkdir(res):
12431+            u = uri.from_string(res)
12432+            # Check that this is an MDMF writecap
12433+            self.failUnlessIsInstance(u, uri.MDMFDirectoryURI)
12434+        d.addCallback(_after_mkdir)
12435+        return d
12436+
12437+    def test_POST_mkdir_no_parentdir_noredirect_sdmf(self):
12438+        d = self.POST("/uri?t=mkdir&mutable-type=sdmf")
12439+        def _after_mkdir(res):
12440+            u = uri.from_string(res)
12441+            self.failUnlessIsInstance(u, uri.DirectoryURI)
12442+        d.addCallback(_after_mkdir)
12443+        return d
12444+
12445+    def test_POST_mkdir_no_parentdir_noredirect_bad_mutable_type(self):
12446+        return self.shouldHTTPError("test bad mutable type",
12447+                                    400, "Bad Request", "Unknown type: foo",
12448+                                    self.POST, self.public_url + \
12449+                                    "/uri?t=mkdir&mutable-type=foo")
12450+
12451     def test_POST_mkdir_no_parentdir_noredirect2(self):
12452         # make sure form-based arguments (as on the welcome page) still work
12453         d = self.POST("/uri", t="mkdir")
12454hunk ./src/allmydata/test/test_web.py 3001
12455         filecap3 = node3.get_readonly_uri()
12456         node4 = self.s.create_node_from_uri(make_mutable_file_uri())
12457         dircap = DirectoryNode(node4, None, None).get_uri()
12458+        mdmfcap = make_mutable_file_uri(mdmf=True)
12459         litdircap = "URI:DIR2-LIT:ge3dumj2mewdcotyfqydulbshj5x2lbm"
12460         emptydircap = "URI:DIR2-LIT:"
12461         newkids = {u"child-imm":        ["filenode", {"rw_uri": filecap1,
12462hunk ./src/allmydata/test/test_web.py 3018
12463                                                       "ro_uri": self._make_readonly(dircap)}],
12464                    u"dirchild-lit":     ["dirnode",  {"ro_uri": litdircap}],
12465                    u"dirchild-empty":   ["dirnode",  {"ro_uri": emptydircap}],
12466+                   u"child-mutable-mdmf": ["filenode", {"rw_uri": mdmfcap,
12467+                                                        "ro_uri": self._make_readonly(mdmfcap)}],
12468                    }
12469         return newkids, {'filecap1': filecap1,
12470                          'filecap2': filecap2,
12471hunk ./src/allmydata/test/test_web.py 3029
12472                          'unknown_immcap': unknown_immcap,
12473                          'dircap': dircap,
12474                          'litdircap': litdircap,
12475-                         'emptydircap': emptydircap}
12476+                         'emptydircap': emptydircap,
12477+                         'mdmfcap': mdmfcap}
12478 
12479     def _create_immutable_children(self):
12480         contents, n, filecap1 = self.makefile(12)
12481hunk ./src/allmydata/test/test_web.py 3571
12482                                                       contents))
12483         return d
12484 
12485+    def test_PUT_NEWFILEURL_mdmf(self):
12486+        new_contents = self.NEWFILE_CONTENTS * 300000
12487+        d = self.PUT(self.public_url + \
12488+                     "/foo/mdmf.txt?mutable=true&mutable-type=mdmf",
12489+                     new_contents)
12490+        d.addCallback(lambda ignored:
12491+            self.GET(self.public_url + "/foo/mdmf.txt?t=json"))
12492+        def _got_json(json):
12493+            data = simplejson.loads(json)
12494+            data = data[1]
12495+            self.failUnlessIn("mutable-type", data)
12496+            self.failUnlessEqual(data['mutable-type'], "mdmf")
12497+            self.failUnless(data['rw_uri'].startswith("URI:MDMF"))
12498+            self.failUnless(data['ro_uri'].startswith("URI:MDMF"))
12499+        d.addCallback(_got_json)
12500+        return d
12501+
12502+    def test_PUT_NEWFILEURL_sdmf(self):
12503+        new_contents = self.NEWFILE_CONTENTS * 300000
12504+        d = self.PUT(self.public_url + \
12505+                     "/foo/sdmf.txt?mutable=true&mutable-type=sdmf",
12506+                     new_contents)
12507+        d.addCallback(lambda ignored:
12508+            self.GET(self.public_url + "/foo/sdmf.txt?t=json"))
12509+        def _got_json(json):
12510+            data = simplejson.loads(json)
12511+            data = data[1]
12512+            self.failUnlessIn("mutable-type", data)
12513+            self.failUnlessEqual(data['mutable-type'], "sdmf")
12514+        d.addCallback(_got_json)
12515+        return d
12516+
12517+    def test_PUT_NEWFILEURL_bad_mutable_type(self):
12518+        new_contents = self.NEWFILE_CONTENTS * 300000
12519+        return self.shouldHTTPError("test bad mutable type",
12520+                                    400, "Bad Request", "Unknown type: foo",
12521+                                    self.PUT, self.public_url + \
12522+                                    "/foo/foo.txt?mutable=true&mutable-type=foo",
12523+                                    new_contents)
12524+
12525     def test_PUT_NEWFILEURL_uri_replace(self):
12526         contents, n, new_uri = self.makefile(8)
12527         d = self.PUT(self.public_url + "/foo/bar.txt?t=uri", new_uri)
12528hunk ./src/allmydata/test/test_web.py 3720
12529         d.addCallback(self.failUnlessIsEmptyJSON)
12530         return d
12531 
12532+    def test_PUT_mkdir_mdmf(self):
12533+        d = self.PUT("/uri?t=mkdir&mutable-type=mdmf", "")
12534+        def _got(res):
12535+            u = uri.from_string(res)
12536+            # Check that this is an MDMF writecap
12537+            self.failUnlessIsInstance(u, uri.MDMFDirectoryURI)
12538+        d.addCallback(_got)
12539+        return d
12540+
12541+    def test_PUT_mkdir_sdmf(self):
12542+        d = self.PUT("/uri?t=mkdir&mutable-type=sdmf", "")
12543+        def _got(res):
12544+            u = uri.from_string(res)
12545+            self.failUnlessIsInstance(u, uri.DirectoryURI)
12546+        d.addCallback(_got)
12547+        return d
12548+
12549+    def test_PUT_mkdir_bad_mutable_type(self):
12550+        return self.shouldHTTPError("bad mutable type",
12551+                                    400, "Bad Request", "Unknown type: foo",
12552+                                    self.PUT, "/uri?t=mkdir&mutable-type=foo",
12553+                                    "")
12554+
12555     def test_POST_check(self):
12556         d = self.POST(self.public_url + "/foo", t="check", name="bar.txt")
12557         def _done(res):
12558hunk ./src/allmydata/test/test_web.py 3755
12559         d.addCallback(_done)
12560         return d
12561 
12562+
12563+    def test_PUT_update_at_offset(self):
12564+        file_contents = "test file" * 100000 # 900,000 bytes (about 880 KiB)
12565+        d = self.PUT("/uri?mutable=true", file_contents)
12566+        def _then(filecap):
12567+            self.filecap = filecap
12568+            new_data = file_contents[:100]
12569+            new = "replaced and so on"
12570+            new_data += new
12571+            new_data += file_contents[len(new_data):]
12572+            assert len(new_data) == len(file_contents)
12573+            self.new_data = new_data
12574+        d.addCallback(_then)
12575+        d.addCallback(lambda ignored:
12576+            self.PUT("/uri/%s?replace=True&offset=100" % self.filecap,
12577+                     "replaced and so on"))
12578+        def _get_data(filecap):
12579+            n = self.s.create_node_from_uri(filecap)
12580+            return n.download_best_version()
12581+        d.addCallback(_get_data)
12582+        d.addCallback(lambda results:
12583+            self.failUnlessEqual(results, self.new_data))
12584+        # Now try appending things to the file
12585+        d.addCallback(lambda ignored:
12586+            self.PUT("/uri/%s?offset=%d" % (self.filecap, len(self.new_data)),
12587+                     "puppies" * 100))
12588+        d.addCallback(_get_data)
12589+        d.addCallback(lambda results:
12590+            self.failUnlessEqual(results, self.new_data + ("puppies" * 100)))
12591+        # and try replacing the beginning of the file
12592+        d.addCallback(lambda ignored:
12593+            self.PUT("/uri/%s?offset=0" % self.filecap, "begin"))
12594+        d.addCallback(_get_data)
12595+        d.addCallback(lambda results:
12596+            self.failUnlessEqual(results, "begin"+self.new_data[len("begin"):]+("puppies"*100)))
12597+        return d
12598+
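test_PUT_update_at_offset above exercises three in-place operations: replacing a span in the middle, appending past the end, and overwriting the beginning. The expected contents in each case match a simple splice of the new bytes over the old, which can be modeled as:

```python
def splice(old, offset, new):
    # Model of the ?offset= PUT semantics the assertions above expect:
    # overwrite `old` starting at `offset`, extending the file if the
    # write runs past the current end. Negative offsets are rejected
    # ("Invalid offset"), as the next test checks.
    if offset < 0:
        raise ValueError("Invalid offset")
    return old[:offset] + new + old[offset + len(new):]
```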
12599+    def test_PUT_update_at_invalid_offset(self):
12600+        file_contents = "test file" * 100000 # 900,000 bytes (about 880 KiB)
12601+        d = self.PUT("/uri?mutable=true", file_contents)
12602+        def _then(filecap):
12603+            self.filecap = filecap
12604+        d.addCallback(_then)
12605+        # Negative offsets should cause an error.
12606+        d.addCallback(lambda ignored:
12607+            self.shouldHTTPError("test mutable invalid offset negative",
12608+                                 400, "Bad Request",
12609+                                 "Invalid offset",
12610+                                 self.PUT,
12611+                                 "/uri/%s?offset=-1" % self.filecap,
12612+                                 "foo"))
12613+        return d
12614+
12615+    def test_PUT_update_at_offset_immutable(self):
12616+        file_contents = "Test file" * 100000
12617+        d = self.PUT("/uri", file_contents)
12618+        def _then(filecap):
12619+            self.filecap = filecap
12620+        d.addCallback(_then)
12621+        d.addCallback(lambda ignored:
12622+            self.shouldHTTPError("test immutable update",
12623+                                 400, "Bad Request",
12624+                                 "immutable",
12625+                                 self.PUT,
12626+                                 "/uri/%s?offset=50" % self.filecap,
12627+                                 "foo"))
12628+        return d
12629+
12630+
12631     def test_bad_method(self):
12632         url = self.webish_url + self.public_url + "/foo/bar.txt"
12633         d = self.shouldHTTPError("test_bad_method",
12634hunk ./src/allmydata/test/test_web.py 4093
12635         def _stash_mutable_uri(n, which):
12636             self.uris[which] = n.get_uri()
12637             assert isinstance(self.uris[which], str)
12638-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3"))
12639+        d.addCallback(lambda ign:
12640+            c0.create_mutable_file(publish.MutableData(DATA+"3")))
12641         d.addCallback(_stash_mutable_uri, "corrupt")
12642         d.addCallback(lambda ign:
12643                       c0.upload(upload.Data("literal", convergence="")))
12644hunk ./src/allmydata/test/test_web.py 4240
12645         def _stash_mutable_uri(n, which):
12646             self.uris[which] = n.get_uri()
12647             assert isinstance(self.uris[which], str)
12648-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3"))
12649+        d.addCallback(lambda ign:
12650+            c0.create_mutable_file(publish.MutableData(DATA+"3")))
12651         d.addCallback(_stash_mutable_uri, "corrupt")
12652 
12653         def _compute_fileurls(ignored):
12654hunk ./src/allmydata/test/test_web.py 4903
12655         def _stash_mutable_uri(n, which):
12656             self.uris[which] = n.get_uri()
12657             assert isinstance(self.uris[which], str)
12658-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"2"))
12659+        d.addCallback(lambda ign:
12660+            c0.create_mutable_file(publish.MutableData(DATA+"2")))
12661         d.addCallback(_stash_mutable_uri, "mutable")
12662 
12663         def _compute_fileurls(ignored):
12664hunk ./src/allmydata/test/test_web.py 5003
12665                                                         convergence="")))
12666         d.addCallback(_stash_uri, "small")
12667 
12668-        d.addCallback(lambda ign: c0.create_mutable_file("mutable"))
12669+        d.addCallback(lambda ign:
12670+            c0.create_mutable_file(publish.MutableData("mutable")))
12671         d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
12672         d.addCallback(_stash_uri, "mutable")
12673 
12674hunk ./src/allmydata/web/common.py 12
12675 from allmydata.interfaces import ExistingChildError, NoSuchChildError, \
12676      FileTooLargeError, NotEnoughSharesError, NoSharesError, \
12677      EmptyPathnameComponentError, MustBeDeepImmutableError, \
12678-     MustBeReadonlyError, MustNotBeUnknownRWError
12679+     MustBeReadonlyError, MustNotBeUnknownRWError, SDMF_VERSION, MDMF_VERSION
12680 from allmydata.mutable.common import UnrecoverableFileError
12681 from allmydata.util import abbreviate
12682 from allmydata.util.encodingutil import to_str, quote_output
12683hunk ./src/allmydata/web/common.py 35
12684     else:
12685         return boolean_of_arg(replace)
12686 
12687+
12688+def parse_mutable_type_arg(arg):
12689+    if not arg:
12690+        return None # interpreted by the caller as "let the nodemaker decide"
12691+
12692+    arg = arg.lower()
12693+    if arg == "mdmf":
12694+        return MDMF_VERSION
12695+    elif arg == "sdmf":
12696+        return SDMF_VERSION
12697+
12698+    return "invalid"
12699+
12700+
12701+def parse_offset_arg(offset):
12702+    # XXX: This will raise a ValueError when invoked on something that
12703+    # is not an integer. Is that okay? Or do we want a better error
12704+    # message? Since this call is going to be used by programmers and
12705+    # their tools rather than users (through the wui), it is not
12706+    # inconsistent to return that, I guess.
12707+    if offset is not None:
12708+        offset = int(offset)
12709+
12710+    return offset
12711+
12712+
12713 def get_root(ctx_or_req):
12714     req = IRequest(ctx_or_req)
12715     # the addSlash=True gives us one extra (empty) segment
12716hunk ./src/allmydata/web/directory.py 19
12717 from allmydata.uri import from_string_dirnode
12718 from allmydata.interfaces import IDirectoryNode, IFileNode, IFilesystemNode, \
12719      IImmutableFileNode, IMutableFileNode, ExistingChildError, \
12720-     NoSuchChildError, EmptyPathnameComponentError
12721+     NoSuchChildError, EmptyPathnameComponentError, SDMF_VERSION, MDMF_VERSION
12722 from allmydata.monitor import Monitor, OperationCancelledError
12723 from allmydata import dirnode
12724 from allmydata.web.common import text_plain, WebError, \
12725hunk ./src/allmydata/web/directory.py 26
12726      IOpHandleTable, NeedOperationHandleError, \
12727      boolean_of_arg, get_arg, get_root, parse_replace_arg, \
12728      should_create_intermediate_directories, \
12729-     getxmlfile, RenderMixin, humanize_failure, convert_children_json
12730+     getxmlfile, RenderMixin, humanize_failure, convert_children_json, \
12731+     parse_mutable_type_arg
12732 from allmydata.web.filenode import ReplaceMeMixin, \
12733      FileNodeHandler, PlaceHolderNodeHandler
12734 from allmydata.web.check_results import CheckResults, \
12735hunk ./src/allmydata/web/directory.py 112
12736                     mutable = True
12737                     if t == "mkdir-immutable":
12738                         mutable = False
12739+
12740+                    mt = None
12741+                    if mutable:
12742+                        arg = get_arg(req, "mutable-type", None)
12743+                        mt = parse_mutable_type_arg(arg)
12744+                        if mt == "invalid":
12745+                            raise WebError("Unknown type: %s" % arg,
12746+                                           http.BAD_REQUEST)
12747                     d = self.node.create_subdirectory(name, kids,
12748hunk ./src/allmydata/web/directory.py 121
12749-                                                      mutable=mutable)
12750+                                                      mutable=mutable,
12751+                                                      mutable_version=mt)
12752                     d.addCallback(make_handler_for,
12753                                   self.client, self.node, name)
12754                     return d
12755hunk ./src/allmydata/web/directory.py 163
12756         if not t:
12757             # render the directory as HTML, using the docFactory and Nevow's
12758             # whole templating thing.
12759-            return DirectoryAsHTML(self.node)
12760+            return DirectoryAsHTML(self.node,
12761+                                   self.client.mutable_file_default)
12762 
12763         if t == "json":
12764             return DirectoryJSONMetadata(ctx, self.node)
12765hunk ./src/allmydata/web/directory.py 253
12766         name = name.decode("utf-8")
12767         replace = boolean_of_arg(get_arg(req, "replace", "true"))
12768         kids = {}
12769-        d = self.node.create_subdirectory(name, kids, overwrite=replace)
12770+        arg = get_arg(req, "mutable-type", None)
12771+        mt = parse_mutable_type_arg(arg)
12772+        if mt is not None and mt != "invalid":
12773+            d = self.node.create_subdirectory(name, kids, overwrite=replace,
12774+                                          mutable_version=mt)
12775+        elif mt == "invalid":
12776+            raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
12777+        else:
12778+            d = self.node.create_subdirectory(name, kids, overwrite=replace)
12779         d.addCallback(lambda child: child.get_uri()) # TODO: urlencode
12780         return d
12781 
12782hunk ./src/allmydata/web/directory.py 277
12783         req.content.seek(0)
12784         kids_json = req.content.read()
12785         kids = convert_children_json(self.client.nodemaker, kids_json)
12786-        d = self.node.create_subdirectory(name, kids, overwrite=False)
12787+        arg = get_arg(req, "mutable-type", None)
12788+        mt = parse_mutable_type_arg(arg)
12789+        if mt is not None and mt != "invalid":
12790+            d = self.node.create_subdirectory(name, kids, overwrite=False,
12791+                                              mutable_version=mt)
12792+        elif mt == "invalid":
12793+            raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
12794+        else:
12795+            d = self.node.create_subdirectory(name, kids, overwrite=False)
12796         d.addCallback(lambda child: child.get_uri()) # TODO: urlencode
12797         return d
12798 
12799hunk ./src/allmydata/web/directory.py 582
12800     docFactory = getxmlfile("directory.xhtml")
12801     addSlash = True
12802 
12803-    def __init__(self, node):
12804+    def __init__(self, node, default_mutable_format):
12805         rend.Page.__init__(self)
12806         self.node = node
12807 
12808hunk ./src/allmydata/web/directory.py 586
12809+        assert default_mutable_format in (MDMF_VERSION, SDMF_VERSION)
12810+        self.default_mutable_format = default_mutable_format
12811+
12812     def beforeRender(self, ctx):
12813         # attempt to get the dirnode's children, stashing them (or the
12814         # failure that results) for later use
12815hunk ./src/allmydata/web/directory.py 786
12816 
12817         return ctx.tag
12818 
12819+    # XXX: Duplicated from root.py.
12820     def render_forms(self, ctx, data):
12821         forms = []
12822 
12823hunk ./src/allmydata/web/directory.py 795
12824         if self.dirnode_children is None:
12825             return T.div["No upload forms: directory is unreadable"]
12826 
12827+        mdmf_directory_input = T.input(type='radio', name='mutable-type',
12828+                                       id='mutable-directory-mdmf',
12829+                                       value='mdmf')
12830+        sdmf_directory_input = T.input(type='radio', name='mutable-type',
12831+                                       id='mutable-directory-sdmf',
12832+                                       value='sdmf', checked='checked')
12833         mkdir = T.form(action=".", method="post",
12834                        enctype="multipart/form-data")[
12835             T.fieldset[
12836hunk ./src/allmydata/web/directory.py 809
12837             T.legend(class_="freeform-form-label")["Create a new directory in this directory"],
12838             "New directory name: ",
12839             T.input(type="text", name="name"), " ",
12840+            T.label(for_='mutable-directory-sdmf')["SDMF"],
12841+            sdmf_directory_input,
12842+            T.label(for_='mutable-directory-mdmf')["MDMF"],
12843+            mdmf_directory_input,
12844             T.input(type="submit", value="Create"),
12845             ]]
12846         forms.append(T.div(class_="freeform-form")[mkdir])
12847hunk ./src/allmydata/web/directory.py 817
12848 
12849+        # Build input elements for mutable file type. We do this outside
12850+        # of the list so we can check the appropriate format, based on
12851+        # the default configured in the client (which reflects the
12852+        # default configured in tahoe.cfg)
12853+        if self.default_mutable_format == MDMF_VERSION:
12854+            mdmf_input = T.input(type='radio', name='mutable-type',
12855+                                 id='mutable-type-mdmf', value='mdmf',
12856+                                 checked='checked')
12857+        else:
12858+            mdmf_input = T.input(type='radio', name='mutable-type',
12859+                                 id='mutable-type-mdmf', value='mdmf')
12860+
12861+        if self.default_mutable_format == SDMF_VERSION:
12862+            sdmf_input = T.input(type='radio', name='mutable-type',
12863+                                 id='mutable-type-sdmf', value='sdmf',
12864+                                 checked="checked")
12865+        else:
12866+            sdmf_input = T.input(type='radio', name='mutable-type',
12867+                                 id='mutable-type-sdmf', value='sdmf')
12868+
12869         upload = T.form(action=".", method="post",
12870                         enctype="multipart/form-data")[
12871             T.fieldset[
12872hunk ./src/allmydata/web/directory.py 849
12873             T.input(type="submit", value="Upload"),
12874             " Mutable?:",
12875             T.input(type="checkbox", name="mutable"),
12876+            sdmf_input, T.label(for_="mutable-type-sdmf")["SDMF"],
12877+            mdmf_input,
12878+            T.label(for_="mutable-type-mdmf")["MDMF (experimental)"],
12879             ]]
12880         forms.append(T.div(class_="freeform-form")[upload])
12881 
12882hunk ./src/allmydata/web/directory.py 887
12883                 kiddata = ("filenode", {'size': childnode.get_size(),
12884                                         'mutable': childnode.is_mutable(),
12885                                         })
12886+                if childnode.is_mutable() and \
12887+                    childnode.get_version() is not None:
12888+                    mutable_type = childnode.get_version()
12889+                    assert mutable_type in (SDMF_VERSION, MDMF_VERSION)
12890+
12891+                    if mutable_type == MDMF_VERSION:
12892+                        mutable_type = "mdmf"
12893+                    else:
12894+                        mutable_type = "sdmf"
12895+                    kiddata[1]['mutable-type'] = mutable_type
12896+
12897             elif IDirectoryNode.providedBy(childnode):
12898                 kiddata = ("dirnode", {'mutable': childnode.is_mutable()})
12899             else:
12900hunk ./src/allmydata/web/filenode.py 9
12901 from nevow import url, rend
12902 from nevow.inevow import IRequest
12903 
12904-from allmydata.interfaces import ExistingChildError
12905+from allmydata.interfaces import ExistingChildError, SDMF_VERSION, MDMF_VERSION
12906 from allmydata.monitor import Monitor
12907 from allmydata.immutable.upload import FileHandle
12908hunk ./src/allmydata/web/filenode.py 12
12909+from allmydata.mutable.publish import MutableFileHandle
12910+from allmydata.mutable.common import MODE_READ
12911 from allmydata.util import log, base32
12912 
12913 from allmydata.web.common import text_plain, WebError, RenderMixin, \
12914hunk ./src/allmydata/web/filenode.py 18
12915      boolean_of_arg, get_arg, should_create_intermediate_directories, \
12916-     MyExceptionHandler, parse_replace_arg
12917+     MyExceptionHandler, parse_replace_arg, parse_offset_arg, \
12918+     parse_mutable_type_arg
12919 from allmydata.web.check_results import CheckResults, \
12920      CheckAndRepairResults, LiteralCheckResults
12921 from allmydata.web.info import MoreInfo
12922hunk ./src/allmydata/web/filenode.py 29
12923         # a new file is being uploaded in our place.
12924         mutable = boolean_of_arg(get_arg(req, "mutable", "false"))
12925         if mutable:
12926-            req.content.seek(0)
12927-            data = req.content.read()
12928-            d = client.create_mutable_file(data)
12929+            arg = get_arg(req, "mutable-type", None)
12930+            mutable_type = parse_mutable_type_arg(arg)
12931+            if mutable_type == "invalid":
12932+                raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
12933+
12934+            data = MutableFileHandle(req.content)
12935+            d = client.create_mutable_file(data, version=mutable_type)
12936             def _uploaded(newnode):
12937                 d2 = self.parentnode.set_node(self.name, newnode,
12938                                               overwrite=replace)
12939hunk ./src/allmydata/web/filenode.py 68
12940         d.addCallback(lambda res: childnode.get_uri())
12941         return d
12942 
12943-    def _read_data_from_formpost(self, req):
12944-        # SDMF: files are small, and we can only upload data, so we read
12945-        # the whole file into memory before uploading.
12946-        contents = req.fields["file"]
12947-        contents.file.seek(0)
12948-        data = contents.file.read()
12949-        return data
12950 
12951     def replace_me_with_a_formpost(self, req, client, replace):
12952         # create a new file, maybe mutable, maybe immutable
12953hunk ./src/allmydata/web/filenode.py 73
12954         mutable = boolean_of_arg(get_arg(req, "mutable", "false"))
12955 
12956+        # get the file contents from the form (used by both branches)
12957+        contents = req.fields["file"]
12958         if mutable:
12959hunk ./src/allmydata/web/filenode.py 76
12960-            data = self._read_data_from_formpost(req)
12961-            d = client.create_mutable_file(data)
12962+            arg = get_arg(req, "mutable-type", None)
12963+            mutable_type = parse_mutable_type_arg(arg)
12964+            if mutable_type == "invalid":
12965+                raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
12966+            uploadable = MutableFileHandle(contents.file)
12967+            d = client.create_mutable_file(uploadable, version=mutable_type)
12968             def _uploaded(newnode):
12969                 d2 = self.parentnode.set_node(self.name, newnode,
12970                                               overwrite=replace)
12971hunk ./src/allmydata/web/filenode.py 89
12972                 return d2
12973             d.addCallback(_uploaded)
12974             return d
12975-        # create an immutable file
12976-        contents = req.fields["file"]
12977+
12978         uploadable = FileHandle(contents.file, convergence=client.convergence)
12979         d = self.parentnode.add_file(self.name, uploadable, overwrite=replace)
12980         d.addCallback(lambda newnode: newnode.get_uri())
12981hunk ./src/allmydata/web/filenode.py 95
12982         return d
12983 
12984+
12985 class PlaceHolderNodeHandler(RenderMixin, rend.Page, ReplaceMeMixin):
12986     def __init__(self, client, parentnode, name):
12987         rend.Page.__init__(self)
12988hunk ./src/allmydata/web/filenode.py 178
12989             # properly. So we assume that at least the browser will agree
12990             # with itself, and echo back the same bytes that we were given.
12991             filename = get_arg(req, "filename", self.name) or "unknown"
12992-            if self.node.is_mutable():
12993-                # some day: d = self.node.get_best_version()
12994-                d = makeMutableDownloadable(self.node)
12995-            else:
12996-                d = defer.succeed(self.node)
12997+            d = self.node.get_best_readable_version()
12998             d.addCallback(lambda dn: FileDownloader(dn, filename))
12999             return d
13000         if t == "json":
13001hunk ./src/allmydata/web/filenode.py 182
13002-            if self.parentnode and self.name:
13003-                d = self.parentnode.get_metadata_for(self.name)
13004+            # We do this to make sure that fields like size and
13005+            # mutable-type (which depend on the file on the grid and not
13006+            # just on the cap) are filled in. The latter gets used in
13007+            # tests, in particular.
13008+            #
13009+            # TODO: Make it so that the servermap knows how to update in
13010+            # a mode specifically designed to fill in these fields, and
13011+            # then update it in that mode.
13012+            if self.node.is_mutable():
13013+                d = self.node.get_servermap(MODE_READ)
13014             else:
13015                 d = defer.succeed(None)
13016hunk ./src/allmydata/web/filenode.py 194
13017+            if self.parentnode and self.name:
13018+                d.addCallback(lambda ignored:
13019+                    self.parentnode.get_metadata_for(self.name))
13020+            else:
13021+                d.addCallback(lambda ignored: None)
13022             d.addCallback(lambda md: FileJSONMetadata(ctx, self.node, md))
13023             return d
13024         if t == "info":
13025hunk ./src/allmydata/web/filenode.py 215
13026         if t:
13027             raise WebError("GET file: bad t=%s" % t)
13028         filename = get_arg(req, "filename", self.name) or "unknown"
13029-        if self.node.is_mutable():
13030-            # some day: d = self.node.get_best_version()
13031-            d = makeMutableDownloadable(self.node)
13032-        else:
13033-            d = defer.succeed(self.node)
13034+        d = self.node.get_best_readable_version()
13035         d.addCallback(lambda dn: FileDownloader(dn, filename))
13036         return d
13037 
13038hunk ./src/allmydata/web/filenode.py 223
13039         req = IRequest(ctx)
13040         t = get_arg(req, "t", "").strip()
13041         replace = parse_replace_arg(get_arg(req, "replace", "true"))
13042+        offset = parse_offset_arg(get_arg(req, "offset", None))
13043 
13044         if not t:
13045hunk ./src/allmydata/web/filenode.py 226
13046-            if self.node.is_mutable():
13047-                return self.replace_my_contents(req)
13048             if not replace:
13049                 # this is the early trap: if someone else modifies the
13050                 # directory while we're uploading, the add_file(overwrite=)
13051hunk ./src/allmydata/web/filenode.py 231
13052                 # call in replace_me_with_a_child will do the late trap.
13053                 raise ExistingChildError()
13054-            assert self.parentnode and self.name
13055-            return self.replace_me_with_a_child(req, self.client, replace)
13056+
13057+            if self.node.is_mutable():
13058+                # Are we a readonly filenode? We shouldn't allow callers
13059+                # to try to replace us if we are.
13060+                if self.node.is_readonly():
13061+                    raise WebError("PUT to a mutable file: replace or update"
13062+                                   " requested with read-only cap")
13063+                if offset is None:
13064+                    return self.replace_my_contents(req)
13065+
13066+                if offset >= 0:
13067+                    return self.update_my_contents(req, offset)
13068+
13069+                raise WebError("PUT to a mutable file: Invalid offset")
13070+
13071+            else:
13072+                if offset is not None:
13073+                    raise WebError("PUT to a file: append operation invoked "
13074+                                   "on an immutable cap")
13075+
13076+                assert self.parentnode and self.name
13077+                return self.replace_me_with_a_child(req, self.client, replace)
13078+
13079         if t == "uri":
13080             if not replace:
13081                 raise ExistingChildError()
13082hunk ./src/allmydata/web/filenode.py 314
13083 
13084     def replace_my_contents(self, req):
13085         req.content.seek(0)
13086-        new_contents = req.content.read()
13087+        new_contents = MutableFileHandle(req.content)
13088         d = self.node.overwrite(new_contents)
13089         d.addCallback(lambda res: self.node.get_uri())
13090         return d
13091hunk ./src/allmydata/web/filenode.py 319
13092 
13093+
13094+    def update_my_contents(self, req, offset):
13095+        req.content.seek(0)
13096+        added_contents = MutableFileHandle(req.content)
13097+
13098+        d = self.node.get_best_mutable_version()
13099+        d.addCallback(lambda mv:
13100+            mv.update(added_contents, offset))
13101+        d.addCallback(lambda ignored:
13102+            self.node.get_uri())
13103+        return d
13104+
13105+
13106     def replace_my_contents_with_a_formpost(self, req):
13107         # we have a mutable file. Get the data from the formpost, and replace
13108         # the mutable file's contents with it.
13109hunk ./src/allmydata/web/filenode.py 335
13110-        new_contents = self._read_data_from_formpost(req)
13111+        new_contents = req.fields['file']
13112+        new_contents = MutableFileHandle(new_contents.file)
13113+
13114         d = self.node.overwrite(new_contents)
13115         d.addCallback(lambda res: self.node.get_uri())
13116         return d
13117hunk ./src/allmydata/web/filenode.py 342
13118 
13119-class MutableDownloadable:
13120-    #implements(IDownloadable)
13121-    def __init__(self, size, node):
13122-        self.size = size
13123-        self.node = node
13124-    def get_size(self):
13125-        return self.size
13126-    def is_mutable(self):
13127-        return True
13128-    def read(self, consumer, offset=0, size=None):
13129-        d = self.node.download_best_version()
13130-        d.addCallback(self._got_data, consumer, offset, size)
13131-        return d
13132-    def _got_data(self, contents, consumer, offset, size):
13133-        start = offset
13134-        if size is not None:
13135-            end = offset+size
13136-        else:
13137-            end = self.size
13138-        # SDMF: we can write the whole file in one big chunk
13139-        consumer.write(contents[start:end])
13140-        return consumer
13141-
13142-def makeMutableDownloadable(n):
13143-    d = defer.maybeDeferred(n.get_size_of_best_version)
13144-    d.addCallback(MutableDownloadable, n)
13145-    return d
13146 
13147 class FileDownloader(rend.Page):
13148     def __init__(self, filenode, filename):
13149hunk ./src/allmydata/web/filenode.py 516
13150     data[1]['mutable'] = filenode.is_mutable()
13151     if edge_metadata is not None:
13152         data[1]['metadata'] = edge_metadata
13153+
13154+    if filenode.is_mutable() and filenode.get_version() is not None:
13155+        mutable_type = filenode.get_version()
13156+        assert mutable_type in (MDMF_VERSION, SDMF_VERSION)
13157+        if mutable_type == MDMF_VERSION:
13158+            mutable_type = "mdmf"
13159+        else:
13160+            mutable_type = "sdmf"
13161+        data[1]['mutable-type'] = mutable_type
13162+
13163     return text_plain(simplejson.dumps(data, indent=1) + "\n", ctx)
13164 
13165 def FileURI(ctx, filenode):
13166hunk ./src/allmydata/web/info.py 8
13167 from nevow.inevow import IRequest
13168 
13169 from allmydata.util import base32
13170-from allmydata.interfaces import IDirectoryNode, IFileNode
13171+from allmydata.interfaces import IDirectoryNode, IFileNode, MDMF_VERSION
13172 from allmydata.web.common import getxmlfile
13173 from allmydata.mutable.common import UnrecoverableFileError # TODO: move
13174 
13175hunk ./src/allmydata/web/info.py 31
13176             si = node.get_storage_index()
13177             if si:
13178                 if node.is_mutable():
13179-                    return "mutable file"
13180+                    ret = "mutable file"
13181+                    if node.get_version() == MDMF_VERSION:
13182+                        ret += " (mdmf)"
13183+                    else:
13184+                        ret += " (sdmf)"
13185+                    return ret
13186                 return "immutable file"
13187             return "immutable LIT file"
13188         return "unknown"
13189hunk ./src/allmydata/web/root.py 15
13190 from allmydata import get_package_versions_string
13191 from allmydata import provisioning
13192 from allmydata.util import idlib, log
13193-from allmydata.interfaces import IFileNode
13194+from allmydata.interfaces import IFileNode, MDMF_VERSION, SDMF_VERSION
13195 from allmydata.web import filenode, directory, unlinked, status, operations
13196 from allmydata.web import reliability, storage
13197 from allmydata.web.common import abbreviate_size, getxmlfile, WebError, \
13198hunk ./src/allmydata/web/root.py 19
13199-     get_arg, RenderMixin, boolean_of_arg
13200+     get_arg, RenderMixin, boolean_of_arg, parse_mutable_type_arg
13201 
13202 
13203 class URIHandler(RenderMixin, rend.Page):
13204hunk ./src/allmydata/web/root.py 50
13205         if t == "":
13206             mutable = boolean_of_arg(get_arg(req, "mutable", "false").strip())
13207             if mutable:
13208-                return unlinked.PUTUnlinkedSSK(req, self.client)
13209+                arg = get_arg(req, "mutable-type", None)
13210+                version = parse_mutable_type_arg(arg)
13211+                if version == "invalid":
13212+                    errmsg = "Unknown type: %s" % arg
13213+                    raise WebError(errmsg, http.BAD_REQUEST)
13214+
13215+                return unlinked.PUTUnlinkedSSK(req, self.client, version)
13216             else:
13217                 return unlinked.PUTUnlinkedCHK(req, self.client)
13218         if t == "mkdir":
13219hunk ./src/allmydata/web/root.py 74
13220         if t in ("", "upload"):
13221             mutable = bool(get_arg(req, "mutable", "").strip())
13222             if mutable:
13223-                return unlinked.POSTUnlinkedSSK(req, self.client)
13224+                arg = get_arg(req, "mutable-type", None)
13225+                version = parse_mutable_type_arg(arg)
13226+                if version == "invalid":
13227+                    raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
13228+                return unlinked.POSTUnlinkedSSK(req, self.client, version)
13229             else:
13230                 return unlinked.POSTUnlinkedCHK(req, self.client)
13231         if t == "mkdir":
13232hunk ./src/allmydata/web/root.py 335
13233 
13234     def render_upload_form(self, ctx, data):
13235         # this is a form where users can upload unlinked files
13236+        #
13237+        # for mutable files, users can choose the format by selecting
13238+        # MDMF or SDMF from a radio button. They can also configure a
13239+        # default format in tahoe.cfg, which they rightly expect us to
13240+        # obey. we convey to them that we are obeying their choice by
13241+        # ensuring that the one that they've chosen is selected in the
13242+        # interface.
13243+        if self.client.mutable_file_default == MDMF_VERSION:
13244+            mdmf_input = T.input(type='radio', name='mutable-type',
13245+                                 value='mdmf', id='mutable-type-mdmf',
13246+                                 checked='checked')
13247+        else:
13248+            mdmf_input = T.input(type='radio', name='mutable-type',
13249+                                 value='mdmf', id='mutable-type-mdmf')
13250+
13251+        if self.client.mutable_file_default == SDMF_VERSION:
13252+            sdmf_input = T.input(type='radio', name='mutable-type',
13253+                                 value='sdmf', id='mutable-type-sdmf',
13254+                                 checked='checked')
13255+        else:
13256+            sdmf_input = T.input(type='radio', name='mutable-type',
13257+                                 value='sdmf', id='mutable-type-sdmf')
13258+
13259+
13260         form = T.form(action="uri", method="post",
13261                       enctype="multipart/form-data")[
13262             T.fieldset[
13263hunk ./src/allmydata/web/root.py 367
13264                   T.input(type="file", name="file", class_="freeform-input-file")],
13265             T.input(type="hidden", name="t", value="upload"),
13266             T.div[T.input(type="checkbox", name="mutable"), T.label(for_="mutable")["Create mutable file"],
13267+                  sdmf_input, T.label(for_="mutable-type-sdmf")["SDMF"],
13268+                  mdmf_input,
13269+                  T.label(for_='mutable-type-mdmf')['MDMF (experimental)'],
13270                   " ", T.input(type="submit", value="Upload!")],
13271             ]]
13272         return T.div[form]
13273hunk ./src/allmydata/web/root.py 376
13274 
13275     def render_mkdir_form(self, ctx, data):
13276         # this is a form where users can create new directories
13277+        mdmf_input = T.input(type='radio', name='mutable-type',
13278+                             value='mdmf', id='mutable-directory-mdmf')
13279+        sdmf_input = T.input(type='radio', name='mutable-type',
13280+                             value='sdmf', id='mutable-directory-sdmf',
13281+                             checked='checked')
13282         form = T.form(action="uri", method="post",
13283                       enctype="multipart/form-data")[
13284             T.fieldset[
13285hunk ./src/allmydata/web/root.py 385
13286             T.legend(class_="freeform-form-label")["Create a directory"],
13287+            T.label(for_='mutable-directory-sdmf')["SDMF"],
13288+            sdmf_input,
13289+            T.label(for_='mutable-directory-mdmf')["MDMF"],
13290+            mdmf_input,
13291             T.input(type="hidden", name="t", value="mkdir"),
13292             T.input(type="hidden", name="redirect_to_result", value="true"),
13293             T.input(type="submit", value="Create a directory"),
13294hunk ./src/allmydata/web/unlinked.py 7
13295 from twisted.internet import defer
13296 from nevow import rend, url, tags as T
13297 from allmydata.immutable.upload import FileHandle
13298+from allmydata.mutable.publish import MutableFileHandle
13299 from allmydata.web.common import getxmlfile, get_arg, boolean_of_arg, \
13300hunk ./src/allmydata/web/unlinked.py 9
13301-     convert_children_json, WebError
13302+     convert_children_json, WebError, parse_mutable_type_arg
13303 from allmydata.web import status
13304 
13305 def PUTUnlinkedCHK(req, client):
13306hunk ./src/allmydata/web/unlinked.py 20
13307     # that fires with the URI of the new file
13308     return d
13309 
13310-def PUTUnlinkedSSK(req, client):
13311+def PUTUnlinkedSSK(req, client, version):
13312     # SDMF: files are small, and we can only upload data
13313     req.content.seek(0)
13314hunk ./src/allmydata/web/unlinked.py 23
13315-    data = req.content.read()
13316-    d = client.create_mutable_file(data)
13317+    data = MutableFileHandle(req.content)
13318+    d = client.create_mutable_file(data, version=version)
13319     d.addCallback(lambda n: n.get_uri())
13320     return d
13321 
13322hunk ./src/allmydata/web/unlinked.py 30
13323 def PUTUnlinkedCreateDirectory(req, client):
13324     # "PUT /uri?t=mkdir", to create an unlinked directory.
13325-    d = client.create_dirnode()
13326+    arg = get_arg(req, "mutable-type", None)
13327+    mt = parse_mutable_type_arg(arg)
13328+    if mt is not None and mt != "invalid":
13329+        d = client.create_dirnode(version=mt)
13330+    elif mt == "invalid":
13331+        msg = "Unknown type: %s" % arg
13332+        raise WebError(msg, http.BAD_REQUEST)
13333+    else:
13334+        d = client.create_dirnode()
13335     d.addCallback(lambda dirnode: dirnode.get_uri())
13336     # XXX add redirect_to_result
13337     return d
13338hunk ./src/allmydata/web/unlinked.py 91
13339                       ["/uri/" + res.uri])
13340         return d
13341 
13342-def POSTUnlinkedSSK(req, client):
13343+def POSTUnlinkedSSK(req, client, version):
13344     # "POST /uri", to create an unlinked file.
13345     # SDMF: files are small, and we can only upload data
13346hunk ./src/allmydata/web/unlinked.py 94
13347-    contents = req.fields["file"]
13348-    contents.file.seek(0)
13349-    data = contents.file.read()
13350-    d = client.create_mutable_file(data)
13351+    contents = req.fields["file"].file
13352+    data = MutableFileHandle(contents)
13353+    d = client.create_mutable_file(data, version=version)
13354     d.addCallback(lambda n: n.get_uri())
13355     return d
13356 
13357hunk ./src/allmydata/web/unlinked.py 115
13358             raise WebError("t=mkdir does not accept children=, "
13359                            "try t=mkdir-with-children instead",
13360                            http.BAD_REQUEST)
13361-    d = client.create_dirnode()
13362+    arg = get_arg(req, "mutable-type", None)
13363+    mt = parse_mutable_type_arg(arg)
13364+    if mt is not None and mt != "invalid":
13365+        d = client.create_dirnode(version=mt)
13366+    elif mt == "invalid":
13367+        msg = "Unknown type: %s" % arg
13368+        raise WebError(msg, http.BAD_REQUEST)
13369+    else:
13370+        d = client.create_dirnode()
13371     redirect = get_arg(req, "redirect_to_result", "false")
13372     if boolean_of_arg(redirect):
13373         def _then_redir(res):
13374}
13375[test/test_mutable: tests for MDMF
13376Kevan Carstensen <kevan@isnotajoke.com>**20110807004414
13377 Ignore-this: 29f9c3a806d67df0ed09c4f0d857d347
13378 
13379 These tests are in their own patch because they cut across many of the
13380 changes made while implementing MDMF, which makes them difficult to
13381 split among the other patches.
13382] {
13383hunk ./src/allmydata/test/test_mutable.py 2
13384 
13385-import struct
13386+import os, re, base64
13387 from cStringIO import StringIO
13388 from twisted.trial import unittest
13389 from twisted.internet import defer, reactor
13390hunk ./src/allmydata/test/test_mutable.py 6
13391+from twisted.internet.interfaces import IConsumer
13392+from zope.interface import implements
13393 from allmydata import uri, client
13394 from allmydata.nodemaker import NodeMaker
13395hunk ./src/allmydata/test/test_mutable.py 10
13396-from allmydata.util import base32
13397+from allmydata.util import base32, consumer, fileutil
13398 from allmydata.util.hashutil import tagged_hash, ssk_writekey_hash, \
13399      ssk_pubkey_fingerprint_hash
13400hunk ./src/allmydata/test/test_mutable.py 13
13401+from allmydata.util.deferredutil import gatherResults
13402 from allmydata.interfaces import IRepairResults, ICheckAndRepairResults, \
13403hunk ./src/allmydata/test/test_mutable.py 15
13404-     NotEnoughSharesError
13405+     NotEnoughSharesError, SDMF_VERSION, MDMF_VERSION
13406 from allmydata.monitor import Monitor
13407 from allmydata.test.common import ShouldFailMixin
13408 from allmydata.test.no_network import GridTestMixin
13409hunk ./src/allmydata/test/test_mutable.py 22
13410 from foolscap.api import eventually, fireEventually
13411 from foolscap.logging import log
13412 from allmydata.storage_client import StorageFarmBroker
13413+from allmydata.storage.common import storage_index_to_dir
13414 
13415 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
13416 from allmydata.mutable.common import ResponseCache, \
13417hunk ./src/allmydata/test/test_mutable.py 30
13418      NeedMoreDataError, UnrecoverableFileError, UncoordinatedWriteError, \
13419      NotEnoughServersError, CorruptShareError
13420 from allmydata.mutable.retrieve import Retrieve
13421-from allmydata.mutable.publish import Publish
13422+from allmydata.mutable.publish import Publish, MutableFileHandle, \
13423+                                      MutableData, \
13424+                                      DEFAULT_MAX_SEGMENT_SIZE
13425 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
13426hunk ./src/allmydata/test/test_mutable.py 34
13427-from allmydata.mutable.layout import unpack_header, unpack_share
13428+from allmydata.mutable.layout import unpack_header, MDMFSlotReadProxy
13429 from allmydata.mutable.repairer import MustForceRepairError
13430 
13431 import allmydata.test.common_util as testutil
13432hunk ./src/allmydata/test/test_mutable.py 103
13433         self.storage = storage
13434         self.queries = 0
13435     def callRemote(self, methname, *args, **kwargs):
13436+        self.queries += 1
13437         def _call():
13438             meth = getattr(self, methname)
13439             return meth(*args, **kwargs)
13440hunk ./src/allmydata/test/test_mutable.py 110
13441         d = fireEventually()
13442         d.addCallback(lambda res: _call())
13443         return d
13444+
13445     def callRemoteOnly(self, methname, *args, **kwargs):
13446hunk ./src/allmydata/test/test_mutable.py 112
13447+        self.queries += 1
13448         d = self.callRemote(methname, *args, **kwargs)
13449         d.addBoth(lambda ignore: None)
13450         pass
13451hunk ./src/allmydata/test/test_mutable.py 160
13452             chr(ord(original[byte_offset]) ^ 0x01) +
13453             original[byte_offset+1:])
13454 
13455+def add_two(original, byte_offset):
13456+    # It isn't enough to simply flip a bit in the version number,
13457+    # because 1 is a valid version; XORing with 0x02 makes both 0 and 1 invalid.
13458+    return (original[:byte_offset] +
13459+            chr(ord(original[byte_offset]) ^ 0x02) +
13460+            original[byte_offset+1:])
13461+
13462 def corrupt(res, s, offset, shnums_to_corrupt=None, offset_offset=0):
13463     # if shnums_to_corrupt is None, corrupt all shares. Otherwise it is a
13464     # list of shnums to corrupt.
13465hunk ./src/allmydata/test/test_mutable.py 170
13466+    ds = []
13467     for peerid in s._peers:
13468         shares = s._peers[peerid]
13469         for shnum in shares:
13470hunk ./src/allmydata/test/test_mutable.py 178
13471                 and shnum not in shnums_to_corrupt):
13472                 continue
13473             data = shares[shnum]
13474-            (version,
13475-             seqnum,
13476-             root_hash,
13477-             IV,
13478-             k, N, segsize, datalen,
13479-             o) = unpack_header(data)
13480-            if isinstance(offset, tuple):
13481-                offset1, offset2 = offset
13482-            else:
13483-                offset1 = offset
13484-                offset2 = 0
13485-            if offset1 == "pubkey":
13486-                real_offset = 107
13487-            elif offset1 in o:
13488-                real_offset = o[offset1]
13489-            else:
13490-                real_offset = offset1
13491-            real_offset = int(real_offset) + offset2 + offset_offset
13492-            assert isinstance(real_offset, int), offset
13493-            shares[shnum] = flip_bit(data, real_offset)
13494-    return res
13495+            # We're feeding the reader all of the share data, so it
13496+            # won't need to use the rref that we didn't provide, nor the
13497+            # storage index that we didn't provide. We do this because
13498+            # the reader will work for both MDMF and SDMF.
13499+            reader = MDMFSlotReadProxy(None, None, shnum, data)
13500+            # We need to get the offsets for the next part.
13501+            d = reader.get_verinfo()
13502+            def _do_corruption(verinfo, data, shnum):
13503+                (seqnum,
13504+                 root_hash,
13505+                 IV,
13506+                 segsize,
13507+                 datalen,
13508+                 k, n, prefix, o) = verinfo
13509+                if isinstance(offset, tuple):
13510+                    offset1, offset2 = offset
13511+                else:
13512+                    offset1 = offset
13513+                    offset2 = 0
13514+                if offset1 == "pubkey" and IV:
13515+                    real_offset = 107
13516+                elif offset1 in o:
13517+                    real_offset = o[offset1]
13518+                else:
13519+                    real_offset = offset1
13520+                real_offset = int(real_offset) + offset2 + offset_offset
13521+                assert isinstance(real_offset, int), offset
13522+                if offset1 == 0: # verbyte
13523+                    f = add_two
13524+                else:
13525+                    f = flip_bit
13526+                shares[shnum] = f(data, real_offset)
13527+            d.addCallback(_do_corruption, data, shnum)
13528+            ds.append(d)
13529+    dl = defer.DeferredList(ds)
13530+    dl.addCallback(lambda ignored: res)
13531+    return dl
13532 
13533 def make_storagebroker(s=None, num_peers=10):
13534     if not s:
13535hunk ./src/allmydata/test/test_mutable.py 257
13536             self.failUnlessEqual(len(shnums), 1)
13537         d.addCallback(_created)
13538         return d
13539+    test_create.timeout = 15
13540+
13541+
13542+    def test_create_mdmf(self):
13543+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13544+        def _created(n):
13545+            self.failUnless(isinstance(n, MutableFileNode))
13546+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
13547+            sb = self.nodemaker.storage_broker
13548+            peer0 = sorted(sb.get_all_serverids())[0]
13549+            shnums = self._storage._peers[peer0].keys()
13550+            self.failUnlessEqual(len(shnums), 1)
13551+        d.addCallback(_created)
13552+        return d
13553+
13554+    def test_single_share(self):
13555+        # Make sure that we tolerate publishing a single share.
13556+        self.nodemaker.default_encoding_parameters['k'] = 1
13557+        self.nodemaker.default_encoding_parameters['happy'] = 1
13558+        self.nodemaker.default_encoding_parameters['n'] = 1
13559+        d = defer.succeed(None)
13560+        for v in (SDMF_VERSION, MDMF_VERSION):
13561+            d.addCallback(lambda ignored:
13562+                self.nodemaker.create_mutable_file(version=v))
13563+            def _created(n):
13564+                self.failUnless(isinstance(n, MutableFileNode))
13565+                self._node = n
13566+                return n
13567+            d.addCallback(_created)
13568+            d.addCallback(lambda n:
13569+                n.overwrite(MutableData("Contents" * 50000)))
13570+            d.addCallback(lambda ignored:
13571+                self._node.download_best_version())
13572+            d.addCallback(lambda contents:
13573+                self.failUnlessEqual(contents, "Contents" * 50000))
13574+        return d
13575+
13576+    def test_max_shares(self):
13577+        self.nodemaker.default_encoding_parameters['n'] = 255
13578+        d = self.nodemaker.create_mutable_file(version=SDMF_VERSION)
13579+        def _created(n):
13580+            self.failUnless(isinstance(n, MutableFileNode))
13581+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
13582+            sb = self.nodemaker.storage_broker
13583+            num_shares = sum([len(self._storage._peers[x].keys()) for x \
13584+                              in sb.get_all_serverids()])
13585+            self.failUnlessEqual(num_shares, 255)
13586+            self._node = n
13587+            return n
13588+        d.addCallback(_created)
13589+        # Now we upload some contents
13590+        d.addCallback(lambda n:
13591+            n.overwrite(MutableData("contents" * 50000)))
13592+        # ...then download contents
13593+        d.addCallback(lambda ignored:
13594+            self._node.download_best_version())
13595+        # ...and check to make sure everything went okay.
13596+        d.addCallback(lambda contents:
13597+            self.failUnlessEqual("contents" * 50000, contents))
13598+        return d
13599+
13600+    def test_max_shares_mdmf(self):
13601+        # Test how files behave when there are 255 shares.
13602+        self.nodemaker.default_encoding_parameters['n'] = 255
13603+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13604+        def _created(n):
13605+            self.failUnless(isinstance(n, MutableFileNode))
13606+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
13607+            sb = self.nodemaker.storage_broker
13608+            num_shares = sum([len(self._storage._peers[x].keys()) for x \
13609+                              in sb.get_all_serverids()])
13610+            self.failUnlessEqual(num_shares, 255)
13611+            self._node = n
13612+            return n
13613+        d.addCallback(_created)
13614+        d.addCallback(lambda n:
13615+            n.overwrite(MutableData("contents" * 50000)))
13616+        d.addCallback(lambda ignored:
13617+            self._node.download_best_version())
13618+        d.addCallback(lambda contents:
13619+            self.failUnlessEqual(contents, "contents" * 50000))
13620+        return d
13621+
13622+    def test_mdmf_filenode_cap(self):
13623+        # Test that an MDMF filenode, once created, returns an MDMF URI.
13624+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13625+        def _created(n):
13626+            self.failUnless(isinstance(n, MutableFileNode))
13627+            cap = n.get_cap()
13628+            self.failUnless(isinstance(cap, uri.WritableMDMFFileURI))
13629+            rcap = n.get_readcap()
13630+            self.failUnless(isinstance(rcap, uri.ReadonlyMDMFFileURI))
13631+            vcap = n.get_verify_cap()
13632+            self.failUnless(isinstance(vcap, uri.MDMFVerifierURI))
13633+        d.addCallback(_created)
13634+        return d
13635+
13636+
13637+    def test_create_from_mdmf_writecap(self):
13638+        # Test that the nodemaker is capable of creating an MDMF
13639+        # filenode given an MDMF cap.
13640+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13641+        def _created(n):
13642+            self.failUnless(isinstance(n, MutableFileNode))
13643+            s = n.get_uri()
13644+            self.failUnless(s.startswith("URI:MDMF"))
13645+            n2 = self.nodemaker.create_from_cap(s)
13646+            self.failUnless(isinstance(n2, MutableFileNode))
13647+            self.failUnlessEqual(n.get_storage_index(), n2.get_storage_index())
13648+            self.failUnlessEqual(n.get_uri(), n2.get_uri())
13649+        d.addCallback(_created)
13650+        return d
13651+
13652+
13653+    def test_create_from_mdmf_writecap_with_extensions(self):
13654+        # Test that the nodemaker is capable of creating an MDMF
13655+        # filenode when given a writecap with extension parameters in
13656+        # them.
13657+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13658+        def _created(n):
13659+            self.failUnless(isinstance(n, MutableFileNode))
13660+            s = n.get_uri()
13661+            # Check that the extension parameters are encoded in the
13662+            # cap before recreating a node from it.
13663+            self.failUnlessIn(":3:131073", s)
13664+            n2 = self.nodemaker.create_from_cap(s)
13665+
13666+            self.failUnlessEqual(n2.get_storage_index(), n.get_storage_index())
13667+            self.failUnlessEqual(n.get_writekey(), n2.get_writekey())
13668+            hints = n2._downloader_hints
13669+            self.failUnlessEqual(hints['k'], 3)
13670+            self.failUnlessEqual(hints['segsize'], 131073)
13671+        d.addCallback(_created)
13672+        return d
13673+
13674+
13675+    def test_create_from_mdmf_readcap(self):
13676+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13677+        def _created(n):
13678+            self.failUnless(isinstance(n, MutableFileNode))
13679+            s = n.get_readonly_uri()
13680+            n2 = self.nodemaker.create_from_cap(s)
13681+            self.failUnless(isinstance(n2, MutableFileNode))
13682+
13683+            # Check that it's a readonly node
13684+            self.failUnless(n2.is_readonly())
13685+        d.addCallback(_created)
13686+        return d
13687+
13688+
13689+    def test_create_from_mdmf_readcap_with_extensions(self):
13690+        # We should be able to create an MDMF filenode with the
13691+        # extension parameters without it breaking.
13692+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13693+        def _created(n):
13694+            self.failUnless(isinstance(n, MutableFileNode))
13695+            s = n.get_readonly_uri()
13696+            self.failUnlessIn(":3:131073", s)
13697+
13698+            n2 = self.nodemaker.create_from_cap(s)
13699+            self.failUnless(isinstance(n2, MutableFileNode))
13700+            self.failUnless(n2.is_readonly())
13701+            self.failUnlessEqual(n.get_storage_index(), n2.get_storage_index())
13702+            hints = n2._downloader_hints
13703+            self.failUnlessEqual(hints["k"], 3)
13704+            self.failUnlessEqual(hints["segsize"], 131073)
13705+        d.addCallback(_created)
13706+        return d
13707+
13708+
13709+    def test_internal_version_from_cap(self):
13710+        # MutableFileNodes and MutableFileVersions have an internal
13711+        # switch that tells them whether they're dealing with an SDMF or
13712+        # MDMF mutable file when they start doing stuff. We want to make
13713+        # sure that this is set appropriately given an MDMF cap.
13714+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13715+        def _created(n):
13716+            self.uri = n.get_uri()
13717+            self.failUnlessEqual(n._protocol_version, MDMF_VERSION)
13718+
13719+            n2 = self.nodemaker.create_from_cap(self.uri)
13720+            self.failUnlessEqual(n2._protocol_version, MDMF_VERSION)
13721+        d.addCallback(_created)
13722+        return d
13723+
13724 
13725     def test_serialize(self):
13726         n = MutableFileNode(None, None, {"k": 3, "n": 10}, None)
13727hunk ./src/allmydata/test/test_mutable.py 472
13728             d.addCallback(lambda smap: smap.dump(StringIO()))
13729             d.addCallback(lambda sio:
13730                           self.failUnless("3-of-10" in sio.getvalue()))
13731-            d.addCallback(lambda res: n.overwrite("contents 1"))
13732+            d.addCallback(lambda res: n.overwrite(MutableData("contents 1")))
13733             d.addCallback(lambda res: self.failUnlessIdentical(res, None))
13734             d.addCallback(lambda res: n.download_best_version())
13735             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
13736hunk ./src/allmydata/test/test_mutable.py 479
13737             d.addCallback(lambda res: n.get_size_of_best_version())
13738             d.addCallback(lambda size:
13739                           self.failUnlessEqual(size, len("contents 1")))
13740-            d.addCallback(lambda res: n.overwrite("contents 2"))
13741+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
13742             d.addCallback(lambda res: n.download_best_version())
13743             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
13744             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
13745hunk ./src/allmydata/test/test_mutable.py 483
13746-            d.addCallback(lambda smap: n.upload("contents 3", smap))
13747+            d.addCallback(lambda smap: n.upload(MutableData("contents 3"), smap))
13748             d.addCallback(lambda res: n.download_best_version())
13749             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3"))
13750             d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING))
13751hunk ./src/allmydata/test/test_mutable.py 495
13752             # mapupdate-to-retrieve data caching (i.e. make the shares larger
13753             # than the default readsize, which is 2000 bytes). A 15kB file
13754             # will have 5kB shares.
13755-            d.addCallback(lambda res: n.overwrite("large size file" * 1000))
13756+            d.addCallback(lambda res: n.overwrite(MutableData("large size file" * 1000)))
13757             d.addCallback(lambda res: n.download_best_version())
13758             d.addCallback(lambda res:
13759                           self.failUnlessEqual(res, "large size file" * 1000))
13760hunk ./src/allmydata/test/test_mutable.py 503
13761         d.addCallback(_created)
13762         return d
13763 
13764+
13765+    def test_upload_and_download_mdmf(self):
13766+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13767+        def _created(n):
13768+            d = defer.succeed(None)
13769+            d.addCallback(lambda ignored:
13770+                n.get_servermap(MODE_READ))
13771+            def _then(servermap):
13772+                dumped = servermap.dump(StringIO())
13773+                self.failUnlessIn("3-of-10", dumped.getvalue())
13774+            d.addCallback(_then)
13775+            # Now overwrite the contents with some new contents. We want
13776+            # to make them big enough to force the file to be uploaded
13777+            # in more than one segment.
13778+            big_contents = "contents1" * 100000 # about 900 KiB
13779+            big_contents_uploadable = MutableData(big_contents)
13780+            d.addCallback(lambda ignored:
13781+                n.overwrite(big_contents_uploadable))
13782+            d.addCallback(lambda ignored:
13783+                n.download_best_version())
13784+            d.addCallback(lambda data:
13785+                self.failUnlessEqual(data, big_contents))
13786+            # Overwrite the contents again with some new contents. As
13787+            # before, they need to be big enough to force multiple
13788+            # segments, so that we make the downloader deal with
13789+            # multiple segments.
13790+            bigger_contents = "contents2" * 1000000 # about 9MiB
13791+            bigger_contents_uploadable = MutableData(bigger_contents)
13792+            d.addCallback(lambda ignored:
13793+                n.overwrite(bigger_contents_uploadable))
13794+            d.addCallback(lambda ignored:
13795+                n.download_best_version())
13796+            d.addCallback(lambda data:
13797+                self.failUnlessEqual(data, bigger_contents))
13798+            return d
13799+        d.addCallback(_created)
13800+        return d
13801+
13802+
13803+    def test_retrieve_pause(self):
13804+        # We should make sure that the retriever is able to pause
13805+        # correctly.
13806+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13807+        def _created(node):
13808+            self.node = node
13809+
13810+            return node.overwrite(MutableData("contents1" * 100000))
13811+        d.addCallback(_created)
13812+        # Now we'll retrieve it into a pausing consumer.
13813+        d.addCallback(lambda ignored:
13814+            self.node.get_best_mutable_version())
13815+        def _got_version(version):
13816+            self.c = PausingConsumer()
13817+            return version.read(self.c)
13818+        d.addCallback(_got_version)
13819+        d.addCallback(lambda ignored:
13820+            self.failUnlessEqual(self.c.data, "contents1" * 100000))
13821+        return d
13822+    test_retrieve_pause.timeout = 25
13823+
13824+
13825+    def test_download_from_mdmf_cap(self):
13826+        # We should be able to download an MDMF file given its cap
13827+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
13828+        def _created(node):
13829+            self.uri = node.get_uri()
13830+
13831+            return node.overwrite(MutableData("contents1" * 100000))
13832+        def _then(ignored):
13833+            node = self.nodemaker.create_from_cap(self.uri)
13834+            return node.download_best_version()
13835+        def _downloaded(data):
13836+            self.failUnlessEqual(data, "contents1" * 100000)
13837+        d.addCallback(_created)
13838+        d.addCallback(_then)
13839+        d.addCallback(_downloaded)
13840+        return d
13841+
13842+
13843+    def test_create_and_download_from_bare_mdmf_cap(self):
13844+        # MDMF caps have extension parameters on them by default. We
13845+        # need to make sure that they work without extension parameters.
13846+        contents = MutableData("contents" * 100000)
13847+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION,
13848+                                               contents=contents)
13849+        def _created(node):
13850+            uri = node.get_uri()
13851+            self._created = node
13852+            self.failUnlessIn(":3:131073", uri)
13853+            # Now strip that off the end of the uri, then try creating
13854+            # and downloading the node again.
13855+            bare_uri = uri.replace(":3:131073", "")
13856+            assert ":3:131073" not in bare_uri
13857+
13858+            return self.nodemaker.create_from_cap(bare_uri)
13859+        d.addCallback(_created)
13860+        def _created_bare(node):
13861+            self.failUnlessEqual(node.get_writekey(),
13862+                                 self._created.get_writekey())
13863+            self.failUnlessEqual(node.get_readkey(),
13864+                                 self._created.get_readkey())
13865+            self.failUnlessEqual(node.get_storage_index(),
13866+                                 self._created.get_storage_index())
13867+            return node.download_best_version()
13868+        d.addCallback(_created_bare)
13869+        d.addCallback(lambda data:
13870+            self.failUnlessEqual(data, "contents" * 100000))
13871+        return d
13872+
13873+
13874+    def test_mdmf_write_count(self):
13875+        # Publishing an MDMF file should only cause one write for each
13876+        # share that is to be published. Otherwise, we introduce
13877+        # undesirable semantics that are a regression from SDMF
13878+        upload = MutableData("MDMF" * 100000) # about 400 KiB
13879+        d = self.nodemaker.create_mutable_file(upload,
13880+                                               version=MDMF_VERSION)
13881+        def _check_server_write_counts(ignored):
13882+            sb = self.nodemaker.storage_broker
13883+            for server in sb.servers.itervalues():
13884+                self.failUnlessEqual(server.get_rref().queries, 1)
13885+        d.addCallback(_check_server_write_counts)
13886+        return d
13887+
13888+
13889     def test_create_with_initial_contents(self):
13890hunk ./src/allmydata/test/test_mutable.py 629
13891-        d = self.nodemaker.create_mutable_file("contents 1")
13892+        upload1 = MutableData("contents 1")
13893+        d = self.nodemaker.create_mutable_file(upload1)
13894         def _created(n):
13895             d = n.download_best_version()
13896             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
13897hunk ./src/allmydata/test/test_mutable.py 634
13898-            d.addCallback(lambda res: n.overwrite("contents 2"))
13899+            upload2 = MutableData("contents 2")
13900+            d.addCallback(lambda res: n.overwrite(upload2))
13901             d.addCallback(lambda res: n.download_best_version())
13902             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
13903             return d
13904hunk ./src/allmydata/test/test_mutable.py 641
13905         d.addCallback(_created)
13906         return d
13907+    test_create_with_initial_contents.timeout = 15
13908+
13909+
13910+    def test_create_mdmf_with_initial_contents(self):
13911+        initial_contents = "foobarbaz" * 131072 # about 1.1 MiB
13912+        initial_contents_uploadable = MutableData(initial_contents)
13913+        d = self.nodemaker.create_mutable_file(initial_contents_uploadable,
13914+                                               version=MDMF_VERSION)
13915+        def _created(n):
13916+            d = n.download_best_version()
13917+            d.addCallback(lambda data:
13918+                self.failUnlessEqual(data, initial_contents))
13919+            uploadable2 = MutableData(initial_contents + "foobarbaz")
13920+            d.addCallback(lambda ignored:
13921+                n.overwrite(uploadable2))
13922+            d.addCallback(lambda ignored:
13923+                n.download_best_version())
13924+            d.addCallback(lambda data:
13925+                self.failUnlessEqual(data, initial_contents +
13926+                                           "foobarbaz"))
13927+            return d
13928+        d.addCallback(_created)
13929+        return d
13930+    test_create_mdmf_with_initial_contents.timeout = 20
13931+
13932 
13933     def test_response_cache_memory_leak(self):
13934         d = self.nodemaker.create_mutable_file("contents")
13935hunk ./src/allmydata/test/test_mutable.py 692
13936             key = n.get_writekey()
13937             self.failUnless(isinstance(key, str), key)
13938             self.failUnlessEqual(len(key), 16) # AES key size
13939-            return data
13940+            return MutableData(data)
13941         d = self.nodemaker.create_mutable_file(_make_contents)
13942         def _created(n):
13943             return n.download_best_version()
13944hunk ./src/allmydata/test/test_mutable.py 700
13945         d.addCallback(lambda data2: self.failUnlessEqual(data2, data))
13946         return d
13947 
13948+
13949+    def test_create_mdmf_with_initial_contents_function(self):
13950+        data = "initial contents" * 100000
13951+        def _make_contents(n):
13952+            self.failUnless(isinstance(n, MutableFileNode))
13953+            key = n.get_writekey()
13954+            self.failUnless(isinstance(key, str), key)
13955+            self.failUnlessEqual(len(key), 16)
13956+            return MutableData(data)
13957+        d = self.nodemaker.create_mutable_file(_make_contents,
13958+                                               version=MDMF_VERSION)
13959+        d.addCallback(lambda n:
13960+            n.download_best_version())
13961+        d.addCallback(lambda data2:
13962+            self.failUnlessEqual(data2, data))
13963+        return d
13964+
13965+
13966     def test_create_with_too_large_contents(self):
13967         BIG = "a" * (self.OLD_MAX_SEGMENT_SIZE + 1)
13968hunk ./src/allmydata/test/test_mutable.py 720
13969-        d = self.nodemaker.create_mutable_file(BIG)
13970+        BIG_uploadable = MutableData(BIG)
13971+        d = self.nodemaker.create_mutable_file(BIG_uploadable)
13972         def _created(n):
13973hunk ./src/allmydata/test/test_mutable.py 723
13974-            d = n.overwrite(BIG)
13975+            other_BIG_uploadable = MutableData(BIG)
13976+            d = n.overwrite(other_BIG_uploadable)
13977             return d
13978         d.addCallback(_created)
13979         return d
13980hunk ./src/allmydata/test/test_mutable.py 738
13981 
13982     def test_modify(self):
13983         def _modifier(old_contents, servermap, first_time):
13984-            return old_contents + "line2"
13985+            new_contents = old_contents + "line2"
13986+            return new_contents
13987         def _non_modifier(old_contents, servermap, first_time):
13988             return old_contents
13989         def _none_modifier(old_contents, servermap, first_time):
13990hunk ./src/allmydata/test/test_mutable.py 747
13991         def _error_modifier(old_contents, servermap, first_time):
13992             raise ValueError("oops")
13993         def _toobig_modifier(old_contents, servermap, first_time):
13994-            return "b" * (self.OLD_MAX_SEGMENT_SIZE+1)
13995+            new_content = "b" * (self.OLD_MAX_SEGMENT_SIZE + 1)
13996+            return new_content
13997         calls = []
13998         def _ucw_error_modifier(old_contents, servermap, first_time):
13999             # simulate an UncoordinatedWriteError once
14000hunk ./src/allmydata/test/test_mutable.py 755
14001             calls.append(1)
14002             if len(calls) <= 1:
14003                 raise UncoordinatedWriteError("simulated")
14004-            return old_contents + "line3"
14005+            new_contents = old_contents + "line3"
14006+            return new_contents
14007         def _ucw_error_non_modifier(old_contents, servermap, first_time):
14008             # simulate an UncoordinatedWriteError once, and don't actually
14009             # modify the contents on subsequent invocations
14010hunk ./src/allmydata/test/test_mutable.py 765
14011                 raise UncoordinatedWriteError("simulated")
14012             return old_contents
14013 
14014-        d = self.nodemaker.create_mutable_file("line1")
14015+        initial_contents = "line1"
14016+        d = self.nodemaker.create_mutable_file(MutableData(initial_contents))
14017         def _created(n):
14018             d = n.modify(_modifier)
14019             d.addCallback(lambda res: n.download_best_version())
14020hunk ./src/allmydata/test/test_mutable.py 823
14021             return d
14022         d.addCallback(_created)
14023         return d
14024+    test_modify.timeout = 15
14025+
14026 
14027     def test_modify_backoffer(self):
14028         def _modifier(old_contents, servermap, first_time):
14029hunk ./src/allmydata/test/test_mutable.py 850
14030         giveuper._delay = 0.1
14031         giveuper.factor = 1
14032 
14033-        d = self.nodemaker.create_mutable_file("line1")
14034+        d = self.nodemaker.create_mutable_file(MutableData("line1"))
14035         def _created(n):
14036             d = n.modify(_modifier)
14037             d.addCallback(lambda res: n.download_best_version())
14038hunk ./src/allmydata/test/test_mutable.py 900
14039             d.addCallback(lambda smap: smap.dump(StringIO()))
14040             d.addCallback(lambda sio:
14041                           self.failUnless("3-of-10" in sio.getvalue()))
14042-            d.addCallback(lambda res: n.overwrite("contents 1"))
14043+            d.addCallback(lambda res: n.overwrite(MutableData("contents 1")))
14044             d.addCallback(lambda res: self.failUnlessIdentical(res, None))
14045             d.addCallback(lambda res: n.download_best_version())
14046             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
14047hunk ./src/allmydata/test/test_mutable.py 904
14048-            d.addCallback(lambda res: n.overwrite("contents 2"))
14049+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
14050             d.addCallback(lambda res: n.download_best_version())
14051             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
14052             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
14053hunk ./src/allmydata/test/test_mutable.py 908
14054-            d.addCallback(lambda smap: n.upload("contents 3", smap))
14055+            d.addCallback(lambda smap: n.upload(MutableData("contents 3"), smap))
14056             d.addCallback(lambda res: n.download_best_version())
14057             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3"))
14058             d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING))
14059hunk ./src/allmydata/test/test_mutable.py 921
14060         return d
14061 
14062 
14063-class MakeShares(unittest.TestCase):
14064-    def test_encrypt(self):
14065-        nm = make_nodemaker()
14066-        CONTENTS = "some initial contents"
14067-        d = nm.create_mutable_file(CONTENTS)
14068-        def _created(fn):
14069-            p = Publish(fn, nm.storage_broker, None)
14070-            p.salt = "SALT" * 4
14071-            p.readkey = "\x00" * 16
14072-            p.newdata = CONTENTS
14073-            p.required_shares = 3
14074-            p.total_shares = 10
14075-            p.setup_encoding_parameters()
14076-            return p._encrypt_and_encode()
14077+    def test_size_after_servermap_update(self):
14078+        # a mutable file node should have something to say about how big
14079+        # it is after a servermap update is performed, since this tells
14080+        # us how large the best version of that mutable file is.
14081+        d = self.nodemaker.create_mutable_file()
14082+        def _created(n):
14083+            self.n = n
14084+            return n.get_servermap(MODE_READ)
14085         d.addCallback(_created)
14086hunk ./src/allmydata/test/test_mutable.py 930
14087-        def _done(shares_and_shareids):
14088-            (shares, share_ids) = shares_and_shareids
14089-            self.failUnlessEqual(len(shares), 10)
14090-            for sh in shares:
14091-                self.failUnless(isinstance(sh, str))
14092-                self.failUnlessEqual(len(sh), 7)
14093-            self.failUnlessEqual(len(share_ids), 10)
14094-        d.addCallback(_done)
14095-        return d
14096-
14097-    def test_generate(self):
14098-        nm = make_nodemaker()
14099-        CONTENTS = "some initial contents"
14100-        d = nm.create_mutable_file(CONTENTS)
14101-        def _created(fn):
14102-            self._fn = fn
14103-            p = Publish(fn, nm.storage_broker, None)
14104-            self._p = p
14105-            p.newdata = CONTENTS
14106-            p.required_shares = 3
14107-            p.total_shares = 10
14108-            p.setup_encoding_parameters()
14109-            p._new_seqnum = 3
14110-            p.salt = "SALT" * 4
14111-            # make some fake shares
14112-            shares_and_ids = ( ["%07d" % i for i in range(10)], range(10) )
14113-            p._privkey = fn.get_privkey()
14114-            p._encprivkey = fn.get_encprivkey()
14115-            p._pubkey = fn.get_pubkey()
14116-            return p._generate_shares(shares_and_ids)
14117+        d.addCallback(lambda ignored:
14118+            self.failUnlessEqual(self.n.get_size(), 0))
14119+        d.addCallback(lambda ignored:
14120+            self.n.overwrite(MutableData("foobarbaz")))
14121+        d.addCallback(lambda ignored:
14122+            self.failUnlessEqual(self.n.get_size(), 9))
14123+        d.addCallback(lambda ignored:
14124+            self.nodemaker.create_mutable_file(MutableData("foobarbaz")))
14125         d.addCallback(_created)
14126hunk ./src/allmydata/test/test_mutable.py 939
14127-        def _generated(res):
14128-            p = self._p
14129-            final_shares = p.shares
14130-            root_hash = p.root_hash
14131-            self.failUnlessEqual(len(root_hash), 32)
14132-            self.failUnless(isinstance(final_shares, dict))
14133-            self.failUnlessEqual(len(final_shares), 10)
14134-            self.failUnlessEqual(sorted(final_shares.keys()), range(10))
14135-            for i,sh in final_shares.items():
14136-                self.failUnless(isinstance(sh, str))
14137-                # feed the share through the unpacker as a sanity-check
14138-                pieces = unpack_share(sh)
14139-                (u_seqnum, u_root_hash, IV, k, N, segsize, datalen,
14140-                 pubkey, signature, share_hash_chain, block_hash_tree,
14141-                 share_data, enc_privkey) = pieces
14142-                self.failUnlessEqual(u_seqnum, 3)
14143-                self.failUnlessEqual(u_root_hash, root_hash)
14144-                self.failUnlessEqual(k, 3)
14145-                self.failUnlessEqual(N, 10)
14146-                self.failUnlessEqual(segsize, 21)
14147-                self.failUnlessEqual(datalen, len(CONTENTS))
14148-                self.failUnlessEqual(pubkey, p._pubkey.serialize())
14149-                sig_material = struct.pack(">BQ32s16s BBQQ",
14150-                                           0, p._new_seqnum, root_hash, IV,
14151-                                           k, N, segsize, datalen)
14152-                self.failUnless(p._pubkey.verify(sig_material, signature))
14153-                #self.failUnlessEqual(signature, p._privkey.sign(sig_material))
14154-                self.failUnless(isinstance(share_hash_chain, dict))
14155-                self.failUnlessEqual(len(share_hash_chain), 4) # ln2(10)++
14156-                for shnum,share_hash in share_hash_chain.items():
14157-                    self.failUnless(isinstance(shnum, int))
14158-                    self.failUnless(isinstance(share_hash, str))
14159-                    self.failUnlessEqual(len(share_hash), 32)
14160-                self.failUnless(isinstance(block_hash_tree, list))
14161-                self.failUnlessEqual(len(block_hash_tree), 1) # very small tree
14162-                self.failUnlessEqual(IV, "SALT"*4)
14163-                self.failUnlessEqual(len(share_data), len("%07d" % 1))
14164-                self.failUnlessEqual(enc_privkey, self._fn.get_encprivkey())
14165-        d.addCallback(_generated)
14166+        d.addCallback(lambda ignored:
14167+            self.failUnlessEqual(self.n.get_size(), 9))
14168         return d
14169 
14170hunk ./src/allmydata/test/test_mutable.py 943
14171-    # TODO: when we publish to 20 peers, we should get one share per peer on 10
14172-    # when we publish to 3 peers, we should get either 3 or 4 shares per peer
14173-    # when we publish to zero peers, we should get a NotEnoughSharesError
14174 
14175 class PublishMixin:
14176     def publish_one(self):
14177hunk ./src/allmydata/test/test_mutable.py 949
14178         # publish a file and create shares, which can then be manipulated
14179         # later.
14180         self.CONTENTS = "New contents go here" * 1000
14181+        self.uploadable = MutableData(self.CONTENTS)
14182+        self._storage = FakeStorage()
14183+        self._nodemaker = make_nodemaker(self._storage)
14184+        self._storage_broker = self._nodemaker.storage_broker
14185+        d = self._nodemaker.create_mutable_file(self.uploadable)
14186+        def _created(node):
14187+            self._fn = node
14188+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
14189+        d.addCallback(_created)
14190+        return d
14191+
14192+    def publish_mdmf(self):
14193+        # like publish_one, except that the result is guaranteed to be
14194+        # an MDMF file.
14195+        # self.CONTENTS should have more than one segment.
14196+        self.CONTENTS = "This is an MDMF file" * 100000
14197+        self.uploadable = MutableData(self.CONTENTS)
14198+        self._storage = FakeStorage()
14199+        self._nodemaker = make_nodemaker(self._storage)
14200+        self._storage_broker = self._nodemaker.storage_broker
14201+        d = self._nodemaker.create_mutable_file(self.uploadable, version=MDMF_VERSION)
14202+        def _created(node):
14203+            self._fn = node
14204+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
14205+        d.addCallback(_created)
14206+        return d
14207+
14208+
14209+    def publish_sdmf(self):
14210+        # like publish_one, except that the result is guaranteed to be
14211+        # an SDMF file
14212+        self.CONTENTS = "This is an SDMF file" * 1000
14213+        self.uploadable = MutableData(self.CONTENTS)
14214         self._storage = FakeStorage()
14215         self._nodemaker = make_nodemaker(self._storage)
14216         self._storage_broker = self._nodemaker.storage_broker
14217hunk ./src/allmydata/test/test_mutable.py 985
14218-        d = self._nodemaker.create_mutable_file(self.CONTENTS)
14219+        d = self._nodemaker.create_mutable_file(self.uploadable, version=SDMF_VERSION)
14220         def _created(node):
14221             self._fn = node
14222             self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
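[Editorial sketch, not part of the patch.] The publish helpers above now wrap their raw contents strings in MutableData before handing them to create_mutable_file or overwrite. A rough stand-in for that uploadable shape (assumption: the real class implements Tahoe-LAFS's IMutableUploadable with get_size() and a chunked read(); the class name and details here are illustrative, not the project's implementation) might look like:

```python
class MutableDataSketch:
    """Wrap an in-memory string so a publisher can read it in chunks."""
    def __init__(self, data):
        assert isinstance(data, str)
        self._data = data
        self._offset = 0

    def get_size(self):
        # The publisher uses the total size to pick SDMF vs. multi-segment
        # encoding parameters.
        return len(self._data)

    def read(self, length):
        # Return a list of chunks totalling at most `length` bytes,
        # advancing an internal read pointer, like a file object.
        chunk = self._data[self._offset:self._offset + length]
        self._offset += len(chunk)
        return [chunk]

u = MutableDataSketch("contents 1")
assert u.get_size() == 10
assert "".join(u.read(8)) == "contents"
assert "".join(u.read(8)) == " 1"
```

This is why the tests replace bare strings like `n.overwrite("contents 2")` with `n.overwrite(MutableData("contents 2"))`: the publisher no longer assumes it holds the whole plaintext at once.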
14223hunk ./src/allmydata/test/test_mutable.py 992
14224         d.addCallback(_created)
14225         return d
14226 
14227-    def publish_multiple(self):
14228+
14229+    def publish_multiple(self, version=0):
14230         self.CONTENTS = ["Contents 0",
14231                          "Contents 1",
14232                          "Contents 2",
14233hunk ./src/allmydata/test/test_mutable.py 999
14234                          "Contents 3a",
14235                          "Contents 3b"]
14236+        self.uploadables = [MutableData(d) for d in self.CONTENTS]
14237         self._copied_shares = {}
14238         self._storage = FakeStorage()
14239         self._nodemaker = make_nodemaker(self._storage)
14240hunk ./src/allmydata/test/test_mutable.py 1003
14241-        d = self._nodemaker.create_mutable_file(self.CONTENTS[0]) # seqnum=1
14242+        d = self._nodemaker.create_mutable_file(self.uploadables[0], version=version) # seqnum=1
14243         def _created(node):
14244             self._fn = node
14245             # now create multiple versions of the same file, and accumulate
14246hunk ./src/allmydata/test/test_mutable.py 1010
14247             # their shares, so we can mix and match them later.
14248             d = defer.succeed(None)
14249             d.addCallback(self._copy_shares, 0)
14250-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[1])) #s2
14251+            d.addCallback(lambda res: node.overwrite(self.uploadables[1])) #s2
14252             d.addCallback(self._copy_shares, 1)
14253hunk ./src/allmydata/test/test_mutable.py 1012
14254-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[2])) #s3
14255+            d.addCallback(lambda res: node.overwrite(self.uploadables[2])) #s3
14256             d.addCallback(self._copy_shares, 2)
14257hunk ./src/allmydata/test/test_mutable.py 1014
14258-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[3])) #s4a
14259+            d.addCallback(lambda res: node.overwrite(self.uploadables[3])) #s4a
14260             d.addCallback(self._copy_shares, 3)
14261             # now we replace all the shares with version s3, and upload a new
14262             # version to get s4b.
14263hunk ./src/allmydata/test/test_mutable.py 1020
14264             rollback = dict([(i,2) for i in range(10)])
14265             d.addCallback(lambda res: self._set_versions(rollback))
14266-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[4])) #s4b
14267+            d.addCallback(lambda res: node.overwrite(self.uploadables[4])) #s4b
14268             d.addCallback(self._copy_shares, 4)
14269             # we leave the storage in state 4
14270             return d
14271hunk ./src/allmydata/test/test_mutable.py 1027
14272         d.addCallback(_created)
14273         return d
14274 
14275+
14276     def _copy_shares(self, ignored, index):
14277         shares = self._storage._peers
14278         # we need a deep copy
14279hunk ./src/allmydata/test/test_mutable.py 1050
14280                     index = versionmap[shnum]
14281                     shares[peerid][shnum] = oldshares[index][peerid][shnum]
14282 
14283+class PausingConsumer:
14284+    implements(IConsumer)
14285+    def __init__(self):
14286+        self.data = ""
14287+        self.already_paused = False
14288+
14289+    def registerProducer(self, producer, streaming):
14290+        self.producer = producer
14291+        self.producer.resumeProducing()
14292+
14293+    def unregisterProducer(self):
14294+        self.producer = None
14295+
14296+    def _unpause(self, ignored):
14297+        self.producer.resumeProducing()
14298+
14299+    def write(self, data):
14300+        self.data += data
14301+        if not self.already_paused:
13302+            self.producer.pauseProducing()
13303+            self.already_paused = True
13304+            reactor.callLater(15, self._unpause, None)
14305+
14306 
14307 class Servermap(unittest.TestCase, PublishMixin):
14308     def setUp(self):
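[Editorial sketch, not part of the patch.] The PausingConsumer added above exercises the downloader's pause/resume handling. A toy synchronous version of that handshake (assumption: this mirrors Twisted's IPushProducer/IConsumer contract of pauseProducing/resumeProducing, but replaces the reactor-driven unpause with an explicit call) is:

```python
class ToyProducer:
    def __init__(self, chunks, consumer):
        self.chunks = list(chunks)
        self.consumer = consumer
        self.paused = True
        consumer.registerProducer(self)

    def pauseProducing(self):
        self.paused = True

    def resumeProducing(self):
        # Push chunks until paused or exhausted.
        self.paused = False
        while not self.paused and self.chunks:
            self.consumer.write(self.chunks.pop(0))

class PausingConsumerSketch:
    def __init__(self):
        self.data = ""
        self.already_paused = False
        self.producer = None

    def registerProducer(self, producer):
        self.producer = producer
        producer.resumeProducing()

    def write(self, data):
        self.data += data
        if not self.already_paused:
            # Pause once after the first chunk, then resume later,
            # as the test above does with reactor.callLater.
            self.already_paused = True
            self.producer.pauseProducing()

c = PausingConsumerSketch()
p = ToyProducer(["seg1-", "seg2-", "seg3"], c)
assert c.data == "seg1-"       # producer stopped after the first write
p.resumeProducing()            # the deferred unpause
assert c.data == "seg1-seg2-seg3"
```

The point of the test is that a multi-segment download must survive being paused mid-stream and deliver the remaining segments intact once resumed.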
14309hunk ./src/allmydata/test/test_mutable.py 1078
14310         return self.publish_one()
14311 
14312-    def make_servermap(self, mode=MODE_CHECK, fn=None, sb=None):
14313+    def make_servermap(self, mode=MODE_CHECK, fn=None, sb=None,
14314+                       update_range=None):
14315         if fn is None:
14316             fn = self._fn
14317         if sb is None:
14318hunk ./src/allmydata/test/test_mutable.py 1085
14319             sb = self._storage_broker
14320         smu = ServermapUpdater(fn, sb, Monitor(),
14321-                               ServerMap(), mode)
14322+                               ServerMap(), mode, update_range=update_range)
14323         d = smu.update()
14324         return d
14325 
14326hunk ./src/allmydata/test/test_mutable.py 1151
14327         # create a new file, which is large enough to knock the privkey out
14328         # of the early part of the file
14329         LARGE = "These are Larger contents" * 200 # about 5KB
14330-        d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE))
14331+        LARGE_uploadable = MutableData(LARGE)
14332+        d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE_uploadable))
14333         def _created(large_fn):
14334             large_fn2 = self._nodemaker.create_from_cap(large_fn.get_uri())
14335             return self.make_servermap(MODE_WRITE, large_fn2)
14336hunk ./src/allmydata/test/test_mutable.py 1160
14337         d.addCallback(lambda sm: self.failUnlessOneRecoverable(sm, 10))
14338         return d
14339 
14340+
14341     def test_mark_bad(self):
14342         d = defer.succeed(None)
14343         ms = self.make_servermap
14344hunk ./src/allmydata/test/test_mutable.py 1206
14345         self._storage._peers = {} # delete all shares
14346         ms = self.make_servermap
14347         d = defer.succeed(None)
14348-
14349+#
14350         d.addCallback(lambda res: ms(mode=MODE_CHECK))
14351         d.addCallback(lambda sm: self.failUnlessNoneRecoverable(sm))
14352 
14353hunk ./src/allmydata/test/test_mutable.py 1258
14354         return d
14355 
14356 
14357+    def test_servermapupdater_finds_mdmf_files(self):
14358+        # setUp already published an MDMF file for us. We just need to
14359+        # make sure that when we run the ServermapUpdater, the file is
14360+        # reported to have one recoverable version.
14361+        d = defer.succeed(None)
14362+        d.addCallback(lambda ignored:
14363+            self.publish_mdmf())
14364+        d.addCallback(lambda ignored:
14365+            self.make_servermap(mode=MODE_CHECK))
14366+        # Calling make_servermap also updates the servermap in the mode
14367+        # that we specify, so we just need to see what it says.
14368+        def _check_servermap(sm):
14369+            self.failUnlessEqual(len(sm.recoverable_versions()), 1)
14370+        d.addCallback(_check_servermap)
14371+        return d
14372+
14373+
14374+    def test_fetch_update(self):
14375+        d = defer.succeed(None)
14376+        d.addCallback(lambda ignored:
14377+            self.publish_mdmf())
14378+        d.addCallback(lambda ignored:
14379+            self.make_servermap(mode=MODE_WRITE, update_range=(1, 2)))
14380+        def _check_servermap(sm):
14381+            # 10 shares
14382+            self.failUnlessEqual(len(sm.update_data), 10)
14383+            # one version
14384+            for data in sm.update_data.itervalues():
14385+                self.failUnlessEqual(len(data), 1)
14386+        d.addCallback(_check_servermap)
14387+        return d
14388+
14389+
14390+    def test_servermapupdater_finds_sdmf_files(self):
14391+        d = defer.succeed(None)
14392+        d.addCallback(lambda ignored:
14393+            self.publish_sdmf())
14394+        d.addCallback(lambda ignored:
14395+            self.make_servermap(mode=MODE_CHECK))
14396+        d.addCallback(lambda servermap:
14397+            self.failUnlessEqual(len(servermap.recoverable_versions()), 1))
14398+        return d
14399+
14400 
14401 class Roundtrip(unittest.TestCase, testutil.ShouldFailMixin, PublishMixin):
14402     def setUp(self):
14403hunk ./src/allmydata/test/test_mutable.py 1341
14404         if version is None:
14405             version = servermap.best_recoverable_version()
14406         r = Retrieve(self._fn, servermap, version)
14407-        return r.download()
14408+        c = consumer.MemoryConsumer()
14409+        d = r.download(consumer=c)
14410+        d.addCallback(lambda mc: "".join(mc.chunks))
14411+        return d
14412+
14413 
14414     def test_basic(self):
14415         d = self.make_servermap()
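[Editorial sketch, not part of the patch.] do_download above reflects the reworked Retrieve API: download() now writes into a consumer instead of returning the plaintext, so tests collect chunks and join them. A toy of that pattern (assumption: Twisted's MemoryConsumer behaves roughly like this stand-in, and fake_download merely imitates the consumer-facing half of Retrieve.download):

```python
class MemoryConsumerSketch:
    def __init__(self):
        self.chunks = []
        self.done = False

    def registerProducer(self, producer, streaming):
        self.producer = producer

    def unregisterProducer(self):
        self.done = True

    def write(self, data):
        self.chunks.append(data)

def fake_download(consumer, segments):
    # Stand-in for Retrieve.download(consumer=c): deliver each decoded
    # segment to the consumer as it becomes available, then return it.
    consumer.registerProducer(None, True)
    for seg in segments:
        consumer.write(seg)
    consumer.unregisterProducer()
    return consumer

mc = fake_download(MemoryConsumerSketch(), ["New contents ", "go here"])
assert "".join(mc.chunks) == "New contents go here"
assert mc.done
```

Streaming through a consumer is what lets MDMF downloads hand back segments incrementally rather than buffering the whole file.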
14416hunk ./src/allmydata/test/test_mutable.py 1422
14417         return d
14418     test_no_servers_download.timeout = 15
14419 
14420+
14421     def _test_corrupt_all(self, offset, substring,
14422hunk ./src/allmydata/test/test_mutable.py 1424
14423-                          should_succeed=False, corrupt_early=True,
14424-                          failure_checker=None):
14425+                          should_succeed=False,
14426+                          corrupt_early=True,
14427+                          failure_checker=None,
14428+                          fetch_privkey=False):
14429         d = defer.succeed(None)
14430         if corrupt_early:
14431             d.addCallback(corrupt, self._storage, offset)
14432hunk ./src/allmydata/test/test_mutable.py 1444
14433                     self.failUnlessIn(substring, "".join(allproblems))
14434                 return servermap
14435             if should_succeed:
14436-                d1 = self._fn.download_version(servermap, ver)
14437+                d1 = self._fn.download_version(servermap, ver,
14438+                                               fetch_privkey)
14439                 d1.addCallback(lambda new_contents:
14440                                self.failUnlessEqual(new_contents, self.CONTENTS))
14441             else:
14442hunk ./src/allmydata/test/test_mutable.py 1452
14443                 d1 = self.shouldFail(NotEnoughSharesError,
14444                                      "_corrupt_all(offset=%s)" % (offset,),
14445                                      substring,
14446-                                     self._fn.download_version, servermap, ver)
14447+                                     self._fn.download_version, servermap,
14448+                                                                ver,
14449+                                                                fetch_privkey)
14450             if failure_checker:
14451                 d1.addCallback(failure_checker)
14452             d1.addCallback(lambda res: servermap)
14453hunk ./src/allmydata/test/test_mutable.py 1463
14454         return d
14455 
14456     def test_corrupt_all_verbyte(self):
14457-        # when the version byte is not 0, we hit an UnknownVersionError error
14458-        # in unpack_share().
13459+        # when the version byte is not 0 or 1, we hit an
13460+        # UnknownVersionError in unpack_share().
14461         d = self._test_corrupt_all(0, "UnknownVersionError")
14462         def _check_servermap(servermap):
14463             # and the dump should mention the problems
14464hunk ./src/allmydata/test/test_mutable.py 1470
14465             s = StringIO()
14466             dump = servermap.dump(s).getvalue()
14467-            self.failUnless("10 PROBLEMS" in dump, dump)
14468+            self.failUnless("30 PROBLEMS" in dump, dump)
14469         d.addCallback(_check_servermap)
14470         return d
14471 
14472hunk ./src/allmydata/test/test_mutable.py 1540
14473         return self._test_corrupt_all("enc_privkey", None, should_succeed=True)
14474 
14475 
14476+    def test_corrupt_all_encprivkey_late(self):
14477+        # this should work for the same reason as above, but we corrupt
14478+        # after the servermap update to exercise the error handling
14479+        # code.
14480+        # We need to remove the privkey from the node, or the retrieve
14481+        # process won't know to update it.
14482+        self._fn._privkey = None
14483+        return self._test_corrupt_all("enc_privkey",
14484+                                      None, # this shouldn't fail
14485+                                      should_succeed=True,
14486+                                      corrupt_early=False,
14487+                                      fetch_privkey=True)
14488+
14489+
14490     def test_corrupt_all_seqnum_late(self):
14491         # corrupting the seqnum between mapupdate and retrieve should result
14492         # in NotEnoughSharesError, since each share will look invalid
14493hunk ./src/allmydata/test/test_mutable.py 1560
14494         def _check(res):
14495             f = res[0]
14496             self.failUnless(f.check(NotEnoughSharesError))
14497-            self.failUnless("someone wrote to the data since we read the servermap" in str(f))
14498+            self.failUnless("uncoordinated write" in str(f))
14499         return self._test_corrupt_all(1, "ran out of peers",
14500                                       corrupt_early=False,
14501                                       failure_checker=_check)
14502hunk ./src/allmydata/test/test_mutable.py 1604
14503                             in str(servermap.problems[0]))
14504             ver = servermap.best_recoverable_version()
14505             r = Retrieve(self._fn, servermap, ver)
14506-            return r.download()
14507+            c = consumer.MemoryConsumer()
14508+            return r.download(c)
14509         d.addCallback(_do_retrieve)
14510hunk ./src/allmydata/test/test_mutable.py 1607
14511+        d.addCallback(lambda mc: "".join(mc.chunks))
14512         d.addCallback(lambda new_contents:
14513                       self.failUnlessEqual(new_contents, self.CONTENTS))
14514         return d
14515hunk ./src/allmydata/test/test_mutable.py 1612
14516 
14517-    def test_corrupt_some(self):
14518-        # corrupt the data of first five shares (so the servermap thinks
14519-        # they're good but retrieve marks them as bad), so that the
14520-        # MODE_READ set of 6 will be insufficient, forcing node.download to
14521-        # retry with more servers.
14522-        corrupt(None, self._storage, "share_data", range(5))
14523-        d = self.make_servermap()
14524+
14525+    def _test_corrupt_some(self, offset, mdmf=False):
14526+        if mdmf:
14527+            d = self.publish_mdmf()
14528+        else:
14529+            d = defer.succeed(None)
14530+        d.addCallback(lambda ignored:
14531+            corrupt(None, self._storage, offset, range(5)))
14532+        d.addCallback(lambda ignored:
14533+            self.make_servermap())
14534         def _do_retrieve(servermap):
14535             ver = servermap.best_recoverable_version()
14536             self.failUnless(ver)
14537hunk ./src/allmydata/test/test_mutable.py 1628
14538             return self._fn.download_best_version()
14539         d.addCallback(_do_retrieve)
14540         d.addCallback(lambda new_contents:
14541-                      self.failUnlessEqual(new_contents, self.CONTENTS))
14542+            self.failUnlessEqual(new_contents, self.CONTENTS))
14543         return d
14544 
14545hunk ./src/allmydata/test/test_mutable.py 1631
14546+
14547+    def test_corrupt_some(self):
14548+        # corrupt the data of first five shares (so the servermap thinks
14549+        # they're good but retrieve marks them as bad), so that the
14550+        # MODE_READ set of 6 will be insufficient, forcing node.download to
14551+        # retry with more servers.
14552+        return self._test_corrupt_some("share_data")
14553+
14554+
14555     def test_download_fails(self):
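[Editorial sketch, not part of the patch.] _test_corrupt_some above relies on the k-of-n invariant: corrupting 5 of 10 shares leaves 5 intact, still at least k=3, so the download must succeed after discarding the bad shares. This toy illustrates that invariant with whole-copy "shares" and checksums instead of real erasure coding (an assumption made purely for brevity; Tahoe's shares are erasure-coded, not replicas):

```python
import hashlib

def make_shares(contents, n=10):
    digest = hashlib.sha256(contents.encode()).hexdigest()
    return [(contents, digest) for _ in range(n)]

def corrupt(shares, which):
    # Flip the first byte of the chosen shares, leaving the stored
    # checksum stale, like corrupt(..., offset, range(5)) in the tests.
    for i in which:
        data, digest = shares[i]
        shares[i] = ("X" + data[1:], digest)

def download(shares, k=3):
    # Discard shares whose checksum no longer matches; succeed only
    # if at least k good shares survive.
    good = [d for d, h in shares
            if hashlib.sha256(d.encode()).hexdigest() == h]
    if len(good) < k:
        raise RuntimeError("not enough good shares")
    return good[0]

shares = make_shares("contents 1")
corrupt(shares, range(5))          # 5 bad, 5 good: still recoverable
assert download(shares) == "contents 1"
corrupt(shares, range(5, 10))      # now all 10 are bad
try:
    download(shares)
    assert False, "should have failed"
except RuntimeError:
    pass
```

The tests above exercise both arms: the recoverable case (test_corrupt_some) and the unrecoverable one (test_download_fails).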
14556hunk ./src/allmydata/test/test_mutable.py 1641
14557-        corrupt(None, self._storage, "signature")
14558-        d = self.shouldFail(UnrecoverableFileError, "test_download_anyway",
14559+        d = corrupt(None, self._storage, "signature")
14560+        d.addCallback(lambda ignored:
14561+            self.shouldFail(UnrecoverableFileError, "test_download_anyway",
14562                             "no recoverable versions",
14563hunk ./src/allmydata/test/test_mutable.py 1645
14564-                            self._fn.download_best_version)
14565+                            self._fn.download_best_version))
14566+        return d
14567+
14568+
14569+
14570+    def test_corrupt_mdmf_block_hash_tree(self):
14571+        d = self.publish_mdmf()
14572+        d.addCallback(lambda ignored:
14573+            self._test_corrupt_all(("block_hash_tree", 12 * 32),
14574+                                   "block hash tree failure",
14575+                                   corrupt_early=False,
14576+                                   should_succeed=False))
14577         return d
14578 
14579 
14580hunk ./src/allmydata/test/test_mutable.py 1660
14581+    def test_corrupt_mdmf_block_hash_tree_late(self):
14582+        d = self.publish_mdmf()
14583+        d.addCallback(lambda ignored:
14584+            self._test_corrupt_all(("block_hash_tree", 12 * 32),
14585+                                   "block hash tree failure",
14586+                                   corrupt_early=True,
14587+                                   should_succeed=False))
14588+        return d
14589+
14590+
14591+    def test_corrupt_mdmf_share_data(self):
14592+        d = self.publish_mdmf()
14593+        d.addCallback(lambda ignored:
14594+            # TODO: Find out what the block size is and corrupt a
14595+            # specific block, rather than just guessing.
14596+            self._test_corrupt_all(("share_data", 12 * 40),
14597+                                    "block hash tree failure",
14598+                                    corrupt_early=True,
14599+                                    should_succeed=False))
14600+        return d
14601+
14602+
14603+    def test_corrupt_some_mdmf(self):
14604+        return self._test_corrupt_some(("share_data", 12 * 40),
14605+                                       mdmf=True)
14606+
14607+
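The corruption tests above all follow the same conversion: `corrupt()` now returns a Deferred instead of completing synchronously, so each subsequent check is chained with `addCallback` rather than run inline. That pattern can be sketched in isolation with a toy stand-in Deferred (the names below are illustrative only, not Tahoe's or Twisted's actual classes):

```python
class MiniDeferred:
    """Toy stand-in for twisted.internet.defer.Deferred, for illustration."""
    def __init__(self, result=None):
        self.result = result

    def addCallback(self, fn, *args):
        # Each callback receives the previous result and replaces it,
        # mirroring how the tests chain corrupt() -> check().
        self.result = fn(self.result, *args)
        return self

def corrupt(storage, field):
    # Pretend to corrupt a share field "asynchronously"; return a Deferred
    # so callers must sequence their checks after it, as the tests now do.
    storage[field] = "corrupted"
    return MiniDeferred(None)

def check(_ignored, storage, field):
    # Runs only after the corruption callback has fired.
    return storage[field] == "corrupted"

storage = {"signature": "good"}
d = corrupt(storage, "signature")
d.addCallback(check, storage, "signature")
```

The point of the rewrite in the patch is exactly this sequencing: with a synchronous `corrupt()`, the check could observe the shares before corruption finished.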
14608 class CheckerMixin:
14609     def check_good(self, r, where):
14610         self.failUnless(r.is_healthy(), where)
14611hunk ./src/allmydata/test/test_mutable.py 1717
14612         d.addCallback(self.check_good, "test_check_good")
14613         return d
14614 
14615+    def test_check_mdmf_good(self):
14616+        d = self.publish_mdmf()
14617+        d.addCallback(lambda ignored:
14618+            self._fn.check(Monitor()))
14619+        d.addCallback(self.check_good, "test_check_mdmf_good")
14620+        return d
14621+
14622     def test_check_no_shares(self):
14623         for shares in self._storage._peers.values():
14624             shares.clear()
14625hunk ./src/allmydata/test/test_mutable.py 1731
14626         d.addCallback(self.check_bad, "test_check_no_shares")
14627         return d
14628 
14629+    def test_check_mdmf_no_shares(self):
14630+        d = self.publish_mdmf()
14631+        def _then(ignored):
14632+            for shares in self._storage._peers.values():
14633+                shares.clear()
14634+        d.addCallback(_then)
14635+        d.addCallback(lambda ignored:
14636+            self._fn.check(Monitor()))
14637+        d.addCallback(self.check_bad, "test_check_mdmf_no_shares")
14638+        return d
14639+
14640     def test_check_not_enough_shares(self):
14641         for shares in self._storage._peers.values():
14642             for shnum in shares.keys():
14643hunk ./src/allmydata/test/test_mutable.py 1751
14644         d.addCallback(self.check_bad, "test_check_not_enough_shares")
14645         return d
14646 
14647+    def test_check_mdmf_not_enough_shares(self):
14648+        d = self.publish_mdmf()
14649+        def _then(ignored):
14650+            for shares in self._storage._peers.values():
14651+                for shnum in shares.keys():
14652+                    if shnum > 0:
14653+                        del shares[shnum]
14654+        d.addCallback(_then)
14655+        d.addCallback(lambda ignored:
14656+            self._fn.check(Monitor()))
14657+        d.addCallback(self.check_bad, "test_check_mdmf_not_enough_shares")
14658+        return d
14659+
14660+
14661     def test_check_all_bad_sig(self):
14662hunk ./src/allmydata/test/test_mutable.py 1766
14663-        corrupt(None, self._storage, 1) # bad sig
14664-        d = self._fn.check(Monitor())
14665+        d = corrupt(None, self._storage, 1) # bad sig
14666+        d.addCallback(lambda ignored:
14667+            self._fn.check(Monitor()))
14668         d.addCallback(self.check_bad, "test_check_all_bad_sig")
14669         return d
14670 
14671hunk ./src/allmydata/test/test_mutable.py 1772
14672+    def test_check_mdmf_all_bad_sig(self):
14673+        d = self.publish_mdmf()
14674+        d.addCallback(lambda ignored:
14675+            corrupt(None, self._storage, 1))
14676+        d.addCallback(lambda ignored:
14677+            self._fn.check(Monitor()))
14678+        d.addCallback(self.check_bad, "test_check_mdmf_all_bad_sig")
14679+        return d
14680+
14681     def test_check_all_bad_blocks(self):
14682hunk ./src/allmydata/test/test_mutable.py 1782
14683-        corrupt(None, self._storage, "share_data", [9]) # bad blocks
14684+        d = corrupt(None, self._storage, "share_data", [9]) # bad blocks
14685         # the Checker won't notice this.. it doesn't look at actual data
14686hunk ./src/allmydata/test/test_mutable.py 1784
14687-        d = self._fn.check(Monitor())
14688+        d.addCallback(lambda ignored:
14689+            self._fn.check(Monitor()))
14690         d.addCallback(self.check_good, "test_check_all_bad_blocks")
14691         return d
14692 
14693hunk ./src/allmydata/test/test_mutable.py 1789
14694+
14695+    def test_check_mdmf_all_bad_blocks(self):
14696+        d = self.publish_mdmf()
14697+        d.addCallback(lambda ignored:
14698+            corrupt(None, self._storage, "share_data"))
14699+        d.addCallback(lambda ignored:
14700+            self._fn.check(Monitor()))
14701+        d.addCallback(self.check_good, "test_check_mdmf_all_bad_blocks")
14702+        return d
14703+
14704     def test_verify_good(self):
14705         d = self._fn.check(Monitor(), verify=True)
14706         d.addCallback(self.check_good, "test_verify_good")
14707hunk ./src/allmydata/test/test_mutable.py 1803
14708         return d
14709+    test_verify_good.timeout = 15
14710 
14711     def test_verify_all_bad_sig(self):
14712hunk ./src/allmydata/test/test_mutable.py 1806
14713-        corrupt(None, self._storage, 1) # bad sig
14714-        d = self._fn.check(Monitor(), verify=True)
14715+        d = corrupt(None, self._storage, 1) # bad sig
14716+        d.addCallback(lambda ignored:
14717+            self._fn.check(Monitor(), verify=True))
14718         d.addCallback(self.check_bad, "test_verify_all_bad_sig")
14719         return d
14720 
14721hunk ./src/allmydata/test/test_mutable.py 1813
14722     def test_verify_one_bad_sig(self):
14723-        corrupt(None, self._storage, 1, [9]) # bad sig
14724-        d = self._fn.check(Monitor(), verify=True)
14725+        d = corrupt(None, self._storage, 1, [9]) # bad sig
14726+        d.addCallback(lambda ignored:
14727+            self._fn.check(Monitor(), verify=True))
14728         d.addCallback(self.check_bad, "test_verify_one_bad_sig")
14729         return d
14730 
14731hunk ./src/allmydata/test/test_mutable.py 1820
14732     def test_verify_one_bad_block(self):
14733-        corrupt(None, self._storage, "share_data", [9]) # bad blocks
14734+        d = corrupt(None, self._storage, "share_data", [9]) # bad blocks
14735         # the Verifier *will* notice this, since it examines every byte
14736hunk ./src/allmydata/test/test_mutable.py 1822
14737-        d = self._fn.check(Monitor(), verify=True)
14738+        d.addCallback(lambda ignored:
14739+            self._fn.check(Monitor(), verify=True))
14740         d.addCallback(self.check_bad, "test_verify_one_bad_block")
14741         d.addCallback(self.check_expected_failure,
14742                       CorruptShareError, "block hash tree failure",
14743hunk ./src/allmydata/test/test_mutable.py 1831
14744         return d
14745 
14746     def test_verify_one_bad_sharehash(self):
14747-        corrupt(None, self._storage, "share_hash_chain", [9], 5)
14748-        d = self._fn.check(Monitor(), verify=True)
14749+        d = corrupt(None, self._storage, "share_hash_chain", [9], 5)
14750+        d.addCallback(lambda ignored:
14751+            self._fn.check(Monitor(), verify=True))
14752         d.addCallback(self.check_bad, "test_verify_one_bad_sharehash")
14753         d.addCallback(self.check_expected_failure,
14754                       CorruptShareError, "corrupt hashes",
14755hunk ./src/allmydata/test/test_mutable.py 1841
14756         return d
14757 
14758     def test_verify_one_bad_encprivkey(self):
14759-        corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
14760-        d = self._fn.check(Monitor(), verify=True)
14761+        d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
14762+        d.addCallback(lambda ignored:
14763+            self._fn.check(Monitor(), verify=True))
14764         d.addCallback(self.check_bad, "test_verify_one_bad_encprivkey")
14765         d.addCallback(self.check_expected_failure,
14766                       CorruptShareError, "invalid privkey",
14767hunk ./src/allmydata/test/test_mutable.py 1851
14768         return d
14769 
14770     def test_verify_one_bad_encprivkey_uncheckable(self):
14771-        corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
14772+        d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
14773         readonly_fn = self._fn.get_readonly()
14774         # a read-only node has no way to validate the privkey
14775hunk ./src/allmydata/test/test_mutable.py 1854
14776-        d = readonly_fn.check(Monitor(), verify=True)
14777+        d.addCallback(lambda ignored:
14778+            readonly_fn.check(Monitor(), verify=True))
14779         d.addCallback(self.check_good,
14780                       "test_verify_one_bad_encprivkey_uncheckable")
14781         return d
14782hunk ./src/allmydata/test/test_mutable.py 1860
14783 
14784+
14785+    def test_verify_mdmf_good(self):
14786+        d = self.publish_mdmf()
14787+        d.addCallback(lambda ignored:
14788+            self._fn.check(Monitor(), verify=True))
14789+        d.addCallback(self.check_good, "test_verify_mdmf_good")
14790+        return d
14791+
14792+
14793+    def test_verify_mdmf_one_bad_block(self):
14794+        d = self.publish_mdmf()
14795+        d.addCallback(lambda ignored:
14796+            corrupt(None, self._storage, "share_data", [1]))
14797+        d.addCallback(lambda ignored:
14798+            self._fn.check(Monitor(), verify=True))
14799+        # We should find one bad block here
14800+        d.addCallback(self.check_bad, "test_verify_mdmf_one_bad_block")
14801+        d.addCallback(self.check_expected_failure,
14802+                      CorruptShareError, "block hash tree failure",
14803+                      "test_verify_mdmf_one_bad_block")
14804+        return d
14805+
14806+
14807+    def test_verify_mdmf_bad_encprivkey(self):
14808+        d = self.publish_mdmf()
14809+        d.addCallback(lambda ignored:
14810+            corrupt(None, self._storage, "enc_privkey", [0]))
14811+        d.addCallback(lambda ignored:
14812+            self._fn.check(Monitor(), verify=True))
14813+        d.addCallback(self.check_bad, "test_verify_mdmf_bad_encprivkey")
14814+        d.addCallback(self.check_expected_failure,
14815+                      CorruptShareError, "privkey",
14816+                      "test_verify_mdmf_bad_encprivkey")
14817+        return d
14818+
14819+
14820+    def test_verify_mdmf_bad_sig(self):
14821+        d = self.publish_mdmf()
14822+        d.addCallback(lambda ignored:
14823+            corrupt(None, self._storage, 1, [1]))
14824+        d.addCallback(lambda ignored:
14825+            self._fn.check(Monitor(), verify=True))
14826+        d.addCallback(self.check_bad, "test_verify_mdmf_bad_sig")
14827+        return d
14828+
14829+
14830+    def test_verify_mdmf_bad_encprivkey_uncheckable(self):
14831+        d = self.publish_mdmf()
14832+        d.addCallback(lambda ignored:
14833+            corrupt(None, self._storage, "enc_privkey", [1]))
14834+        d.addCallback(lambda ignored:
14835+            self._fn.get_readonly())
14836+        d.addCallback(lambda fn:
14837+            fn.check(Monitor(), verify=True))
14838+        d.addCallback(self.check_good,
14839+                      "test_verify_mdmf_bad_encprivkey_uncheckable")
14840+        return d
14841+
14842+
14843 class Repair(unittest.TestCase, PublishMixin, ShouldFailMixin):
14844 
14845     def get_shares(self, s):
14846hunk ./src/allmydata/test/test_mutable.py 1984
14847         current_shares = self.old_shares[-1]
14848         self.failUnlessEqual(old_shares, current_shares)
14849 
14850+
14851     def test_unrepairable_0shares(self):
14852         d = self.publish_one()
14853         def _delete_all_shares(ign):
14854hunk ./src/allmydata/test/test_mutable.py 1999
14855         d.addCallback(_check)
14856         return d
14857 
14858+    def test_mdmf_unrepairable_0shares(self):
14859+        d = self.publish_mdmf()
14860+        def _delete_all_shares(ign):
14861+            shares = self._storage._peers
14862+            for peerid in shares:
14863+                shares[peerid] = {}
14864+        d.addCallback(_delete_all_shares)
14865+        d.addCallback(lambda ign: self._fn.check(Monitor()))
14866+        d.addCallback(lambda check_results: self._fn.repair(check_results))
14867+        d.addCallback(lambda crr: self.failIf(crr.get_successful()))
14868+        return d
14869+
14870+
14871     def test_unrepairable_1share(self):
14872         d = self.publish_one()
14873         def _delete_all_shares(ign):
14874hunk ./src/allmydata/test/test_mutable.py 2028
14875         d.addCallback(_check)
14876         return d
14877 
14878+    def test_mdmf_unrepairable_1share(self):
14879+        d = self.publish_mdmf()
14880+        def _delete_all_shares(ign):
14881+            shares = self._storage._peers
14882+            for peerid in shares:
14883+                for shnum in list(shares[peerid]):
14884+                    if shnum > 0:
14885+                        del shares[peerid][shnum]
14886+        d.addCallback(_delete_all_shares)
14887+        d.addCallback(lambda ign: self._fn.check(Monitor()))
14888+        d.addCallback(lambda check_results: self._fn.repair(check_results))
14889+        def _check(crr):
14890+            self.failUnlessEqual(crr.get_successful(), False)
14891+        d.addCallback(_check)
14892+        return d
14893+
14894+    def test_repairable_5shares(self):
14895+        d = self.publish_sdmf()
14896+        def _delete_some_shares(ign):
14897+            shares = self._storage._peers
14898+            for peerid in shares:
14899+                for shnum in list(shares[peerid]):
14900+                    if shnum > 4:
14901+                        del shares[peerid][shnum]
14902+        d.addCallback(_delete_some_shares)
14903+        d.addCallback(lambda ign: self._fn.check(Monitor()))
14904+        d.addCallback(lambda check_results: self._fn.repair(check_results))
14905+        def _check(crr):
14906+            self.failUnlessEqual(crr.get_successful(), True)
14907+        d.addCallback(_check)
14908+        return d
14909+
14910+    def test_mdmf_repairable_5shares(self):
14911+        d = self.publish_mdmf()
14912+        def _delete_some_shares(ign):
14913+            shares = self._storage._peers
14914+            for peerid in shares:
14915+                for shnum in list(shares[peerid]):
14916+                    if shnum > 5:
14917+                        del shares[peerid][shnum]
14918+        d.addCallback(_delete_some_shares)
14919+        d.addCallback(lambda ign: self._fn.check(Monitor()))
14920+        def _check(cr):
14921+            self.failIf(cr.is_healthy())
14922+            self.failUnless(cr.is_recoverable())
14923+            return cr
14924+        d.addCallback(_check)
14925+        d.addCallback(lambda check_results: self._fn.repair(check_results))
14926+        def _check1(crr):
14927+            self.failUnlessEqual(crr.get_successful(), True)
14928+        d.addCallback(_check1)
14929+        return d
14930+
14931+
14932     def test_merge(self):
14933         self.old_shares = []
14934         d = self.publish_multiple()
14935hunk ./src/allmydata/test/test_mutable.py 2196
14936 class MultipleEncodings(unittest.TestCase):
14937     def setUp(self):
14938         self.CONTENTS = "New contents go here"
14939+        self.uploadable = MutableData(self.CONTENTS)
14940         self._storage = FakeStorage()
14941         self._nodemaker = make_nodemaker(self._storage, num_peers=20)
14942         self._storage_broker = self._nodemaker.storage_broker
14943hunk ./src/allmydata/test/test_mutable.py 2200
14944-        d = self._nodemaker.create_mutable_file(self.CONTENTS)
14945+        d = self._nodemaker.create_mutable_file(self.uploadable)
14946         def _created(node):
14947             self._fn = node
14948         d.addCallback(_created)
14949hunk ./src/allmydata/test/test_mutable.py 2206
14950         return d
14951 
14952-    def _encode(self, k, n, data):
14953+    def _encode(self, k, n, data, version=SDMF_VERSION):
14954         # encode 'data' into a peerid->shares dict.
14955 
14956         fn = self._fn
14957hunk ./src/allmydata/test/test_mutable.py 2226
14958         s = self._storage
14959         s._peers = {} # clear existing storage
14960         p2 = Publish(fn2, self._storage_broker, None)
14961-        d = p2.publish(data)
14962+        uploadable = MutableData(data)
14963+        d = p2.publish(uploadable)
14964         def _published(res):
14965             shares = s._peers
14966             s._peers = {}
14967hunk ./src/allmydata/test/test_mutable.py 2494
14968         self.basedir = "mutable/Problems/test_publish_surprise"
14969         self.set_up_grid()
14970         nm = self.g.clients[0].nodemaker
14971-        d = nm.create_mutable_file("contents 1")
14972+        d = nm.create_mutable_file(MutableData("contents 1"))
14973         def _created(n):
14974             d = defer.succeed(None)
14975             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
14976hunk ./src/allmydata/test/test_mutable.py 2504
14977             d.addCallback(_got_smap1)
14978             # then modify the file, leaving the old map untouched
14979             d.addCallback(lambda res: log.msg("starting winning write"))
14980-            d.addCallback(lambda res: n.overwrite("contents 2"))
14981+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
14982             # now attempt to modify the file with the old servermap. This
14983             # will look just like an uncoordinated write, in which every
14984             # single share got updated between our mapupdate and our publish
14985hunk ./src/allmydata/test/test_mutable.py 2513
14986                           self.shouldFail(UncoordinatedWriteError,
14987                                           "test_publish_surprise", None,
14988                                           n.upload,
14989-                                          "contents 2a", self.old_map))
14990+                                          MutableData("contents 2a"), self.old_map))
14991             return d
14992         d.addCallback(_created)
14993         return d
14994hunk ./src/allmydata/test/test_mutable.py 2522
14995         self.basedir = "mutable/Problems/test_retrieve_surprise"
14996         self.set_up_grid()
14997         nm = self.g.clients[0].nodemaker
14998-        d = nm.create_mutable_file("contents 1")
14999+        d = nm.create_mutable_file(MutableData("contents 1"))
15000         def _created(n):
15001             d = defer.succeed(None)
15002             d.addCallback(lambda res: n.get_servermap(MODE_READ))
15003hunk ./src/allmydata/test/test_mutable.py 2532
15004             d.addCallback(_got_smap1)
15005             # then modify the file, leaving the old map untouched
15006             d.addCallback(lambda res: log.msg("starting winning write"))
15007-            d.addCallback(lambda res: n.overwrite("contents 2"))
15008+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
15009             # now attempt to retrieve the old version with the old servermap.
15010             # This will look like someone has changed the file since we
15011             # updated the servermap.
15012hunk ./src/allmydata/test/test_mutable.py 2541
15013             d.addCallback(lambda res:
15014                           self.shouldFail(NotEnoughSharesError,
15015                                           "test_retrieve_surprise",
15016-                                          "ran out of peers: have 0 shares (k=3)",
15017+                                          "ran out of peers: have 0 of 1",
15018                                           n.download_version,
15019                                           self.old_map,
15020                                           self.old_map.best_recoverable_version(),
15021hunk ./src/allmydata/test/test_mutable.py 2550
15022         d.addCallback(_created)
15023         return d
15024 
15025+
15026     def test_unexpected_shares(self):
15027         # upload the file, take a servermap, shut down one of the servers,
15028         # upload it again (causing shares to appear on a new server), then
15029hunk ./src/allmydata/test/test_mutable.py 2560
15030         self.basedir = "mutable/Problems/test_unexpected_shares"
15031         self.set_up_grid()
15032         nm = self.g.clients[0].nodemaker
15033-        d = nm.create_mutable_file("contents 1")
15034+        d = nm.create_mutable_file(MutableData("contents 1"))
15035         def _created(n):
15036             d = defer.succeed(None)
15037             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
15038hunk ./src/allmydata/test/test_mutable.py 2572
15039                 self.g.remove_server(peer0)
15040                 # then modify the file, leaving the old map untouched
15041                 log.msg("starting winning write")
15042-                return n.overwrite("contents 2")
15043+                return n.overwrite(MutableData("contents 2"))
15044             d.addCallback(_got_smap1)
15045             # now attempt to modify the file with the old servermap. This
15046             # will look just like an uncoordinated write, in which every
15047hunk ./src/allmydata/test/test_mutable.py 2582
15048                           self.shouldFail(UncoordinatedWriteError,
15049                                           "test_surprise", None,
15050                                           n.upload,
15051-                                          "contents 2a", self.old_map))
15052+                                          MutableData("contents 2a"), self.old_map))
15053             return d
15054         d.addCallback(_created)
15055         return d
15056hunk ./src/allmydata/test/test_mutable.py 2586
15057+    test_unexpected_shares.timeout = 15
15058 
15059     def test_bad_server(self):
15060         # Break one server, then create the file: the initial publish should
15061hunk ./src/allmydata/test/test_mutable.py 2620
15062         d.addCallback(_break_peer0)
15063         # now "create" the file, using the pre-established key, and let the
15064         # initial publish finally happen
15065-        d.addCallback(lambda res: nm.create_mutable_file("contents 1"))
15066+        d.addCallback(lambda res: nm.create_mutable_file(MutableData("contents 1")))
15067         # that ought to work
15068         def _got_node(n):
15069             d = n.download_best_version()
15070hunk ./src/allmydata/test/test_mutable.py 2629
15071             def _break_peer1(res):
15072                 self.g.break_server(self.server1.get_serverid())
15073             d.addCallback(_break_peer1)
15074-            d.addCallback(lambda res: n.overwrite("contents 2"))
15075+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
15076             # that ought to work too
15077             d.addCallback(lambda res: n.download_best_version())
15078             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
15079hunk ./src/allmydata/test/test_mutable.py 2661
15080         peerids = [s.get_serverid() for s in sb.get_connected_servers()]
15081         self.g.break_server(peerids[0])
15082 
15083-        d = nm.create_mutable_file("contents 1")
15084+        d = nm.create_mutable_file(MutableData("contents 1"))
15085         def _created(n):
15086             d = n.download_best_version()
15087             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
15088hunk ./src/allmydata/test/test_mutable.py 2669
15089             def _break_second_server(res):
15090                 self.g.break_server(peerids[1])
15091             d.addCallback(_break_second_server)
15092-            d.addCallback(lambda res: n.overwrite("contents 2"))
15093+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
15094             # that ought to work too
15095             d.addCallback(lambda res: n.download_best_version())
15096             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
15097hunk ./src/allmydata/test/test_mutable.py 2687
15098 
15099         d = self.shouldFail(NotEnoughServersError,
15100                             "test_publish_all_servers_bad",
15101-                            "Ran out of non-bad servers",
15102-                            nm.create_mutable_file, "contents")
15103+                            "ran out of good servers",
15104+                            nm.create_mutable_file, MutableData("contents"))
15105         return d
15106 
15107     def test_publish_no_servers(self):
15108hunk ./src/allmydata/test/test_mutable.py 2700
15109         d = self.shouldFail(NotEnoughServersError,
15110                             "test_publish_no_servers",
15111                             "Ran out of non-bad servers",
15112-                            nm.create_mutable_file, "contents")
15113+                            nm.create_mutable_file, MutableData("contents"))
15114         return d
15115     test_publish_no_servers.timeout = 30
15116 
15117hunk ./src/allmydata/test/test_mutable.py 2718
15118         # we need some contents that are large enough to push the privkey out
15119         # of the early part of the file
15120         LARGE = "These are Larger contents" * 2000 # about 50KB
15121-        d = nm.create_mutable_file(LARGE)
15122+        LARGE_uploadable = MutableData(LARGE)
15123+        d = nm.create_mutable_file(LARGE_uploadable)
15124         def _created(n):
15125             self.uri = n.get_uri()
15126             self.n2 = nm.create_from_cap(self.uri)
15127hunk ./src/allmydata/test/test_mutable.py 2754
15128         self.basedir = "mutable/Problems/test_privkey_query_missing"
15129         self.set_up_grid(num_servers=20)
15130         nm = self.g.clients[0].nodemaker
15131-        LARGE = "These are Larger contents" * 2000 # about 50KB
15132+        LARGE = "These are Larger contents" * 2000 # about 50KiB
15133+        LARGE_uploadable = MutableData(LARGE)
15134         nm._node_cache = DevNullDictionary() # disable the nodecache
15135 
15136hunk ./src/allmydata/test/test_mutable.py 2758
15137-        d = nm.create_mutable_file(LARGE)
15138+        d = nm.create_mutable_file(LARGE_uploadable)
15139         def _created(n):
15140             self.uri = n.get_uri()
15141             self.n2 = nm.create_from_cap(self.uri)
15142hunk ./src/allmydata/test/test_mutable.py 2768
15143         d.addCallback(_created)
15144         d.addCallback(lambda res: self.n2.get_servermap(MODE_WRITE))
15145         return d
15146+
15147+
15148+    def test_block_and_hash_query_error(self):
15149+        # This tests for what happens when a query to a remote server
15150+        # fails in either the hash validation step or the block getting
15151+        # step (because of batching, this is the same actual query).
15152+        # We need to have the storage server persist up until the point
15153+        # that its prefix is validated, then suddenly die. This
15154+        # exercises some exception handling code in Retrieve.
15155+        self.basedir = "mutable/Problems/test_block_and_hash_query_error"
15156+        self.set_up_grid(num_servers=20)
15157+        nm = self.g.clients[0].nodemaker
15158+        CONTENTS = "contents" * 2000
15159+        CONTENTS_uploadable = MutableData(CONTENTS)
15160+        d = nm.create_mutable_file(CONTENTS_uploadable)
15161+        def _created(node):
15162+            self._node = node
15163+        d.addCallback(_created)
15164+        d.addCallback(lambda ignored:
15165+            self._node.get_servermap(MODE_READ))
15166+        def _then(servermap):
15167+            # we have our servermap. Now we set up the servers like the
15168+            # tests above -- the first one that gets a read call should
15169+            # start throwing errors, but only after returning its prefix
15170+            # for validation. Since we'll download without fetching the
15171+            # private key, the next query to the remote server will be
15172+            # for either a block and salt or for hashes, either of which
15173+            # will exercise the error handling code.
15174+            killer = FirstServerGetsKilled()
15175+            for s in nm.storage_broker.get_connected_servers():
15176+                s.get_rref().post_call_notifier = killer.notify
15177+            ver = servermap.best_recoverable_version()
15178+            assert ver
15179+            return self._node.download_version(servermap, ver)
15180+        d.addCallback(_then)
15181+        d.addCallback(lambda data:
15182+            self.failUnlessEqual(data, CONTENTS))
15183+        return d
15184+
15185+
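The fault-injection device used above — `FirstServerGetsKilled`, hooked in as a `post_call_notifier` so the first server answers exactly one query (enough to validate its prefix) and then fails — can be approximated with this illustrative sketch (the real helper lives in the test support code; these names and the dict-based "server" are assumptions of the sketch, not Tahoe's API):

```python
class FirstServerGetsKilledSketch:
    """Illustrative approximation: after the first remote call completes,
    mark that server broken so every later call to it raises."""
    def __init__(self):
        self.killed = False

    def notify(self, result, server):
        if not self.killed:
            server["broken"] = True   # subsequent calls will fail
            self.killed = True
        return result                  # pass the first result through intact

def remote_call(server, payload):
    # Stand-in for a storage-server read; dies once the server is "killed".
    if server.get("broken"):
        raise IOError("server dead")
    return "prefix:" + payload

server = {}
killer = FirstServerGetsKilledSketch()
# First query succeeds (prefix validation)...
out = killer.notify(remote_call(server, "v1"), server)
# ...then the very next query to the same server fails, which is the
# error path the block-and-hash fetch in Retrieve must survive.
failed = False
try:
    remote_call(server, "v2")
except IOError:
    failed = True
```

The test then asserts that the download still completes correctly despite that mid-retrieval failure, by falling back to the remaining servers.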
15186+class FileHandle(unittest.TestCase):
15187+    def setUp(self):
15188+        self.test_data = "Test Data" * 50000
15189+        self.sio = StringIO(self.test_data)
15190+        self.uploadable = MutableFileHandle(self.sio)
15191+
15192+
15193+    def test_filehandle_read(self):
15194+        self.basedir = "mutable/FileHandle/test_filehandle_read"
15195+        chunk_size = 10
15196+        for i in xrange(0, len(self.test_data), chunk_size):
15197+            data = self.uploadable.read(chunk_size)
15198+            data = "".join(data)
15199+            start = i
15200+            end = i + chunk_size
15201+            self.failUnlessEqual(data, self.test_data[start:end])
15202+
15203+
15204+    def test_filehandle_get_size(self):
15205+        self.basedir = "mutable/FileHandle/test_filehandle_get_size"
15206+        actual_size = len(self.test_data)
15207+        size = self.uploadable.get_size()
15208+        self.failUnlessEqual(size, actual_size)
15209+
15210+
15211+    def test_filehandle_get_size_out_of_order(self):
15212+        # We should be able to call get_size whenever we want without
15213+        # disturbing the location of the seek pointer.
15214+        chunk_size = 100
15215+        data = self.uploadable.read(chunk_size)
15216+        self.failUnlessEqual("".join(data), self.test_data[:chunk_size])
15217+
15218+        # Now get the size.
15219+        size = self.uploadable.get_size()
15220+        self.failUnlessEqual(size, len(self.test_data))
15221+
15222+        # Now get more data. We should be right where we left off.
15223+        more_data = self.uploadable.read(chunk_size)
15224+        start = chunk_size
15225+        end = chunk_size * 2
15226+        self.failUnlessEqual("".join(more_data), self.test_data[start:end])
15227+
15228+
15229+    def test_filehandle_file(self):
15230+        # Make sure that the MutableFileHandle works on a file as well
15231+        # as a StringIO object, since in some cases it will be asked to
15232+        # deal with files.
15233+        self.basedir = self.mktemp()
15234+        # mktemp() returns a path but does not create the directory itself
15235+        os.mkdir(self.basedir)
15236+        f_path = os.path.join(self.basedir, "test_file")
15237+        f = open(f_path, "w")
15238+        f.write(self.test_data)
15239+        f.close()
15240+        f = open(f_path, "r")
15241+
15242+        uploadable = MutableFileHandle(f)
15243+
15244+        data = uploadable.read(len(self.test_data))
15245+        self.failUnlessEqual("".join(data), self.test_data)
15246+        size = uploadable.get_size()
15247+        self.failUnlessEqual(size, len(self.test_data))
15248+
15249+
15250+    def test_close(self):
15251+        # Make sure that the MutableFileHandle closes its handle when
15252+        # told to do so.
15253+        self.uploadable.close()
15254+        self.failUnless(self.sio.closed)
15255+
15256+
15257+class DataHandle(unittest.TestCase):
15258+    def setUp(self):
15259+        self.test_data = "Test Data" * 50000
15260+        self.uploadable = MutableData(self.test_data)
15261+
15262+
15263+    def test_datahandle_read(self):
15264+        chunk_size = 10
15265+        for i in xrange(0, len(self.test_data), chunk_size):
15266+            data = self.uploadable.read(chunk_size)
15267+            data = "".join(data)
15268+            start = i
15269+            end = i + chunk_size
15270+            self.failUnlessEqual(data, self.test_data[start:end])
15271+
15272+
15273+    def test_datahandle_get_size(self):
15274+        actual_size = len(self.test_data)
15275+        size = self.uploadable.get_size()
15276+        self.failUnlessEqual(size, actual_size)
15277+
15278+
15279+    def test_datahandle_get_size_out_of_order(self):
15280+        # We should be able to call get_size whenever we want without
15281+        # disturbing the location of the seek pointer.
15282+        chunk_size = 100
15283+        data = self.uploadable.read(chunk_size)
15284+        self.failUnlessEqual("".join(data), self.test_data[:chunk_size])
15285+
15286+        # Now get the size.
15287+        size = self.uploadable.get_size()
15288+        self.failUnlessEqual(size, len(self.test_data))
15289+
15290+        # Now get more data. We should be right where we left off.
15291+        more_data = self.uploadable.read(chunk_size)
15292+        start = chunk_size
15293+        end = chunk_size * 2
15294+        self.failUnlessEqual("".join(more_data), self.test_data[start:end])
15295+
15296+
15297+class Version(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin, \
15298+              PublishMixin):
15299+    def setUp(self):
15300+        GridTestMixin.setUp(self)
15301+        self.basedir = self.mktemp()
15302+        self.set_up_grid()
15303+        self.c = self.g.clients[0]
15304+        self.nm = self.c.nodemaker
15305+        self.data = "test data" * 100000 # about 900 KiB; MDMF
15306+        self.small_data = "test data" * 10 # about 90 B; SDMF
15307+        return self.do_upload()
15308+
15309+
15310+    def do_upload(self):
15311+        d1 = self.nm.create_mutable_file(MutableData(self.data),
15312+                                         version=MDMF_VERSION)
15313+        d2 = self.nm.create_mutable_file(MutableData(self.small_data))
15314+        dl = gatherResults([d1, d2])
15315+        def _then((n1, n2)):
15316+            assert isinstance(n1, MutableFileNode)
15317+            assert isinstance(n2, MutableFileNode)
15318+
15319+            self.mdmf_node = n1
15320+            self.sdmf_node = n2
15321+        dl.addCallback(_then)
15322+        return dl
15323+
15324+
15325+    def test_get_readonly_mutable_version(self):
15326+        # Attempting to get a mutable version of a mutable file from a
15327+        # filenode initialized with a readcap should return a readonly
15328+        # version of that same node.
15329+        ro = self.mdmf_node.get_readonly()
15330+        d = ro.get_best_mutable_version()
15331+        d.addCallback(lambda version:
15332+            self.failUnless(version.is_readonly()))
15333+        d.addCallback(lambda ignored:
15334+            self.sdmf_node.get_readonly())
15335+        d.addCallback(lambda version:
15336+            self.failUnless(version.is_readonly()))
15337+        return d
15338+
15339+
15340+    def test_get_sequence_number(self):
15341+        d = self.mdmf_node.get_best_readable_version()
15342+        d.addCallback(lambda bv:
15343+            self.failUnlessEqual(bv.get_sequence_number(), 1))
15344+        d.addCallback(lambda ignored:
15345+            self.sdmf_node.get_best_readable_version())
15346+        d.addCallback(lambda bv:
15347+            self.failUnlessEqual(bv.get_sequence_number(), 1))
15348+        # Now update. After the update, the sequence number in both
15349+        # cases should be 2.
15350+        def _do_update(ignored):
15351+            new_data = MutableData("foo bar baz" * 100000)
15352+            new_small_data = MutableData("foo bar baz" * 10)
15353+            d1 = self.mdmf_node.overwrite(new_data)
15354+            d2 = self.sdmf_node.overwrite(new_small_data)
15355+            dl = gatherResults([d1, d2])
15356+            return dl
15357+        d.addCallback(_do_update)
15358+        d.addCallback(lambda ignored:
15359+            self.mdmf_node.get_best_readable_version())
15360+        d.addCallback(lambda bv:
15361+            self.failUnlessEqual(bv.get_sequence_number(), 2))
15362+        d.addCallback(lambda ignored:
15363+            self.sdmf_node.get_best_readable_version())
15364+        d.addCallback(lambda bv:
15365+            self.failUnlessEqual(bv.get_sequence_number(), 2))
15366+        return d
15367+
15368+
15369+    def test_version_extension_api(self):
15370+        # We need to define an API by which an uploader can set the
15371+        # extension parameters, and by which a downloader can retrieve
15372+        # extensions.
15373+        d = self.mdmf_node.get_best_mutable_version()
15374+        def _got_version(version):
15375+            hints = version.get_downloader_hints()
15376+            # The hints should reflect the defaults used at upload time.
15377+            self.failUnlessIn("k", hints)
15378+            self.failUnlessEqual(hints['k'], 3)
15379+            self.failUnlessIn('segsize', hints)
15380+            self.failUnlessEqual(hints['segsize'], 131073)
15381+        d.addCallback(_got_version)
15382+        return d
15383+
15384+
15385+    def test_extensions_from_cap(self):
15386+        # If we initialize a mutable file with a cap that has extension
15387+        # parameters in it and then grab the extension parameters using
15388+        # our API, we should see that they're set correctly.
15389+        mdmf_uri = self.mdmf_node.get_uri()
15390+        new_node = self.nm.create_from_cap(mdmf_uri)
15391+        d = new_node.get_best_mutable_version()
15392+        def _got_version(version):
15393+            hints = version.get_downloader_hints()
15394+            self.failUnlessIn("k", hints)
15395+            self.failUnlessEqual(hints["k"], 3)
15396+            self.failUnlessIn("segsize", hints)
15397+            self.failUnlessEqual(hints["segsize"], 131073)
15398+        d.addCallback(_got_version)
15399+        return d
15400+
15401+
15402+    def test_extensions_from_upload(self):
15403+        # If we create a new mutable file with some contents, we should
15404+        # get back an MDMF cap with the right hints in place.
15405+        contents = "foo bar baz" * 100000
15406+        d = self.nm.create_mutable_file(contents, version=MDMF_VERSION)
15407+        def _got_mutable_file(n):
15408+            rw_uri = n.get_uri()
15409+            expected_k = str(self.c.DEFAULT_ENCODING_PARAMETERS['k'])
15410+            self.failUnlessIn(expected_k, rw_uri)
15411+            # XXX: Get this more intelligently.
15412+            self.failUnlessIn("131073", rw_uri)
15413+
15414+            ro_uri = n.get_readonly_uri()
15415+            self.failUnlessIn(expected_k, ro_uri)
15416+            self.failUnlessIn("131073", ro_uri)
15417+        d.addCallback(_got_mutable_file)
15418+        return d
15419+
15420+
15421+    def test_cap_after_upload(self):
15422+        # If we create a new mutable file and upload things to it, and
15423+        # it's an MDMF file, we should get an MDMF cap back from that
15424+        # file and should be able to use that.
15425+        # That's essentially what self.mdmf_node is, so just check it.
15426+        mdmf_uri = self.mdmf_node.get_uri()
15427+        cap = uri.from_string(mdmf_uri)
15428+        self.failUnless(isinstance(cap, uri.WritableMDMFFileURI))
15429+        readonly_mdmf_uri = self.mdmf_node.get_readonly_uri()
15430+        cap = uri.from_string(readonly_mdmf_uri)
15431+        self.failUnless(isinstance(cap, uri.ReadonlyMDMFFileURI))
15432+
15433+
15434+    def test_get_writekey(self):
15435+        d = self.mdmf_node.get_best_mutable_version()
15436+        d.addCallback(lambda bv:
15437+            self.failUnlessEqual(bv.get_writekey(),
15438+                                 self.mdmf_node.get_writekey()))
15439+        d.addCallback(lambda ignored:
15440+            self.sdmf_node.get_best_mutable_version())
15441+        d.addCallback(lambda bv:
15442+            self.failUnlessEqual(bv.get_writekey(),
15443+                                 self.sdmf_node.get_writekey()))
15444+        return d
15445+
15446+
15447+    def test_get_storage_index(self):
15448+        d = self.mdmf_node.get_best_mutable_version()
15449+        d.addCallback(lambda bv:
15450+            self.failUnlessEqual(bv.get_storage_index(),
15451+                                 self.mdmf_node.get_storage_index()))
15452+        d.addCallback(lambda ignored:
15453+            self.sdmf_node.get_best_mutable_version())
15454+        d.addCallback(lambda bv:
15455+            self.failUnlessEqual(bv.get_storage_index(),
15456+                                 self.sdmf_node.get_storage_index()))
15457+        return d
15458+
15459+
15460+    def test_get_readonly_version(self):
15461+        d = self.mdmf_node.get_best_readable_version()
15462+        d.addCallback(lambda bv:
15463+            self.failUnless(bv.is_readonly()))
15464+        d.addCallback(lambda ignored:
15465+            self.sdmf_node.get_best_readable_version())
15466+        d.addCallback(lambda bv:
15467+            self.failUnless(bv.is_readonly()))
15468+        return d
15469+
15470+
15471+    def test_get_mutable_version(self):
15472+        d = self.mdmf_node.get_best_mutable_version()
15473+        d.addCallback(lambda bv:
15474+            self.failIf(bv.is_readonly()))
15475+        d.addCallback(lambda ignored:
15476+            self.sdmf_node.get_best_mutable_version())
15477+        d.addCallback(lambda bv:
15478+            self.failIf(bv.is_readonly()))
15479+        return d
15480+
15481+
15482+    def test_toplevel_overwrite(self):
15483+        new_data = MutableData("foo bar baz" * 100000)
15484+        new_small_data = MutableData("foo bar baz" * 10)
15485+        d = self.mdmf_node.overwrite(new_data)
15486+        d.addCallback(lambda ignored:
15487+            self.mdmf_node.download_best_version())
15488+        d.addCallback(lambda data:
15489+            self.failUnlessEqual(data, "foo bar baz" * 100000))
15490+        d.addCallback(lambda ignored:
15491+            self.sdmf_node.overwrite(new_small_data))
15492+        d.addCallback(lambda ignored:
15493+            self.sdmf_node.download_best_version())
15494+        d.addCallback(lambda data:
15495+            self.failUnlessEqual(data, "foo bar baz" * 10))
15496+        return d
15497+
15498+
15499+    def test_toplevel_modify(self):
15500+        def modifier(old_contents, servermap, first_time):
15501+            return old_contents + "modified"
15502+        d = self.mdmf_node.modify(modifier)
15503+        d.addCallback(lambda ignored:
15504+            self.mdmf_node.download_best_version())
15505+        d.addCallback(lambda data:
15506+            self.failUnlessIn("modified", data))
15507+        d.addCallback(lambda ignored:
15508+            self.sdmf_node.modify(modifier))
15509+        d.addCallback(lambda ignored:
15510+            self.sdmf_node.download_best_version())
15511+        d.addCallback(lambda data:
15512+            self.failUnlessIn("modified", data))
15513+        return d
15514+
15515+
15516+    def test_version_modify(self):
15517+        # TODO: When we can publish multiple versions, alter this test
15518+        # to modify a version other than the best usable version, then
15519+        # test to see that the best recoverable version is that.
15520+        def modifier(old_contents, servermap, first_time):
15521+            return old_contents + "modified"
15522+        d = self.mdmf_node.modify(modifier)
15523+        d.addCallback(lambda ignored:
15524+            self.mdmf_node.download_best_version())
15525+        d.addCallback(lambda data:
15526+            self.failUnlessIn("modified", data))
15527+        d.addCallback(lambda ignored:
15528+            self.sdmf_node.modify(modifier))
15529+        d.addCallback(lambda ignored:
15530+            self.sdmf_node.download_best_version())
15531+        d.addCallback(lambda data:
15532+            self.failUnlessIn("modified", data))
15533+        return d
15534+
15535+
15536+    def test_download_version(self):
15537+        d = self.publish_multiple()
15538+        # We want to have two recoverable versions on the grid.
15539+        d.addCallback(lambda res:
15540+                      self._set_versions({0:0,2:0,4:0,6:0,8:0,
15541+                                          1:1,3:1,5:1,7:1,9:1}))
15542+        # Now try to download each version. We should get the plaintext
15543+        # associated with that version.
15544+        d.addCallback(lambda ignored:
15545+            self._fn.get_servermap(mode=MODE_READ))
15546+        def _got_servermap(smap):
15547+            versions = smap.recoverable_versions()
15548+            assert len(versions) == 2
15549+
15550+            self.servermap = smap
15551+            self.version1, self.version2 = versions
15552+            assert self.version1 != self.version2
15553+
15554+            self.version1_seqnum = self.version1[0]
15555+            self.version2_seqnum = self.version2[0]
15556+            self.version1_index = self.version1_seqnum - 1
15557+            self.version2_index = self.version2_seqnum - 1
15558+
15559+        d.addCallback(_got_servermap)
15560+        d.addCallback(lambda ignored:
15561+            self._fn.download_version(self.servermap, self.version1))
15562+        d.addCallback(lambda results:
15563+            self.failUnlessEqual(self.CONTENTS[self.version1_index],
15564+                                 results))
15565+        d.addCallback(lambda ignored:
15566+            self._fn.download_version(self.servermap, self.version2))
15567+        d.addCallback(lambda results:
15568+            self.failUnlessEqual(self.CONTENTS[self.version2_index],
15569+                                 results))
15570+        return d
15571+
15572+
15573+    def test_download_nonexistent_version(self):
15574+        d = self.mdmf_node.get_servermap(mode=MODE_WRITE)
15575+        def _set_servermap(servermap):
15576+            self.servermap = servermap
15577+        d.addCallback(_set_servermap)
15578+        d.addCallback(lambda ignored:
15579+           self.shouldFail(UnrecoverableFileError, "nonexistent version",
15580+                           None,
15581+                           self.mdmf_node.download_version, self.servermap,
15582+                           "not a version"))
15583+        return d
15584+
15585+
15586+    def test_partial_read(self):
15587+        # read the file in several chunks, and check that the results
15588+        # are what we expect.
15589+        d = self.mdmf_node.get_best_readable_version()
15590+        def _read_data(version):
15591+            c = consumer.MemoryConsumer()
15592+            d2 = defer.succeed(None)
15593+            for i in xrange(0, len(self.data), 10000):
15594+                d2.addCallback(lambda ignored, i=i: version.read(c, i, 10000))
15595+            d2.addCallback(lambda ignored:
15596+                self.failUnlessEqual(self.data, "".join(c.chunks)))
15597+            return d2
15598+        d.addCallback(_read_data)
15599+        return d
15600+
15601+
15602+    def test_read(self):
15603+        d = self.mdmf_node.get_best_readable_version()
15604+        def _read_data(version):
15605+            c = consumer.MemoryConsumer()
15606+            d2 = defer.succeed(None)
15607+            d2.addCallback(lambda ignored: version.read(c))
15608+            d2.addCallback(lambda ignored:
15609+                self.failUnlessEqual("".join(c.chunks), self.data))
15610+            return d2
15611+        d.addCallback(_read_data)
15612+        return d
15613+
15614+
15615+    def test_download_best_version(self):
15616+        d = self.mdmf_node.download_best_version()
15617+        d.addCallback(lambda data:
15618+            self.failUnlessEqual(data, self.data))
15619+        d.addCallback(lambda ignored:
15620+            self.sdmf_node.download_best_version())
15621+        d.addCallback(lambda data:
15622+            self.failUnlessEqual(data, self.small_data))
15623+        return d
15624+
15625+
15626+class Update(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
15627+    def setUp(self):
15628+        GridTestMixin.setUp(self)
15629+        self.basedir = self.mktemp()
15630+        self.set_up_grid()
15631+        self.c = self.g.clients[0]
15632+        self.nm = self.c.nodemaker
15633+        self.data = "testdata " * 100000 # about 900 KiB; MDMF
15634+        self.small_data = "test data" * 10 # about 90 B; SDMF
15635+        return self.do_upload()
15636+
15637+
15638+    def do_upload(self):
15639+        d1 = self.nm.create_mutable_file(MutableData(self.data),
15640+                                         version=MDMF_VERSION)
15641+        d2 = self.nm.create_mutable_file(MutableData(self.small_data))
15642+        dl = gatherResults([d1, d2])
15643+        def _then((n1, n2)):
15644+            assert isinstance(n1, MutableFileNode)
15645+            assert isinstance(n2, MutableFileNode)
15646+
15647+            self.mdmf_node = n1
15648+            self.sdmf_node = n2
15649+        dl.addCallback(_then)
15650+        # Make SDMF and MDMF mutable file nodes that have 255 shares.
15651+        def _make_max_shares(ign):
15652+            self.nm.default_encoding_parameters['n'] = 255
15653+            self.nm.default_encoding_parameters['k'] = 127
15654+            d1 = self.nm.create_mutable_file(MutableData(self.data),
15655+                                             version=MDMF_VERSION)
15656+            d2 = \
15657+                self.nm.create_mutable_file(MutableData(self.small_data))
15658+            return gatherResults([d1, d2])
15659+        dl.addCallback(_make_max_shares)
15660+        def _stash((n1, n2)):
15661+            assert isinstance(n1, MutableFileNode)
15662+            assert isinstance(n2, MutableFileNode)
15663+
15664+            self.mdmf_max_shares_node = n1
15665+            self.sdmf_max_shares_node = n2
15666+        dl.addCallback(_stash)
15667+        return dl
15668+
15669+    def test_append(self):
15670+        # We should be able to append data to the end of a mutable
15671+        # file and get what we expect.
15672+        new_data = self.data + "appended"
15673+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
15674+            d = node.get_best_mutable_version()
15675+            d.addCallback(lambda mv:
15676+                mv.update(MutableData("appended"), len(self.data)))
15677+            d.addCallback(lambda ignored, node=node:
15678+                node.download_best_version())
15679+            d.addCallback(lambda results:
15680+                self.failUnlessEqual(results, new_data))
15681+        return d
15682+
15683+    def test_replace(self):
15684+        # We should be able to replace data in the middle of a mutable
15685+        # file and get what we expect back.
15686+        new_data = self.data[:100]
15687+        new_data += "appended"
15688+        new_data += self.data[108:]
15689+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
15690+            d = node.get_best_mutable_version()
15691+            d.addCallback(lambda mv:
15692+                mv.update(MutableData("appended"), 100))
15693+            d.addCallback(lambda ignored, node=node:
15694+                node.download_best_version())
15695+            d.addCallback(lambda results:
15696+                self.failUnlessEqual(results, new_data))
15697+        return d
15698+
15699+    def test_replace_beginning(self):
15700+        # We should be able to replace data at the beginning of the file
15701+        # without truncating the file
15702+        B = "beginning"
15703+        new_data = B + self.data[len(B):]
15704+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
15705+            d = node.get_best_mutable_version()
15706+            d.addCallback(lambda mv: mv.update(MutableData(B), 0))
15707+            d.addCallback(lambda ignored, node=node:
15708+                node.download_best_version())
15709+            d.addCallback(lambda results: self.failUnlessEqual(results, new_data))
15710+        return d
15711+
15712+    def test_replace_segstart1(self):
15713+        offset = 128*1024+1
15714+        new_data = "NNNN"
15715+        expected = self.data[:offset]+new_data+self.data[offset+4:]
15716+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
15717+            d = node.get_best_mutable_version()
15718+            d.addCallback(lambda mv:
15719+                mv.update(MutableData(new_data), offset))
15720+            # close over 'node'.
15721+            d.addCallback(lambda ignored, node=node:
15722+                node.download_best_version())
15723+            def _check(results):
15724+                if results != expected:
15725+                    print
15726+                    print "got: %s ... %s" % (results[:20], results[-20:])
15727+                    print "exp: %s ... %s" % (expected[:20], expected[-20:])
15728+                    self.fail("results != expected")
15729+            d.addCallback(_check)
15730+        return d
15731+
15732+    def _check_differences(self, got, expected):
15733+        # displaying arbitrary file corruption is tricky for a
15734+        # 1MB file of repeating data, so look for likely places
15735+        # with problems and display them separately
15736+        gotmods = [mo.span() for mo in re.finditer('([A-Z]+)', got)]
15737+        expmods = [mo.span() for mo in re.finditer('([A-Z]+)', expected)]
15738+        gotspans = ["%d:%d=%s" % (start,end,got[start:end])
15739+                    for (start,end) in gotmods]
15740+        expspans = ["%d:%d=%s" % (start,end,expected[start:end])
15741+                    for (start,end) in expmods]
15742+        #print "expecting: %s" % expspans
15743+
15744+        SEGSIZE = 128*1024
15745+        if got != expected:
15746+            print "differences:"
15747+            for segnum in range(len(expected)//SEGSIZE):
15748+                start = segnum * SEGSIZE
15749+                end = (segnum+1) * SEGSIZE
15750+                got_ends = "%s .. %s" % (got[start:start+20], got[end-20:end])
15751+                exp_ends = "%s .. %s" % (expected[start:start+20], expected[end-20:end])
15752+                if got_ends != exp_ends:
15753+                    print "expected[%d]: %s" % (start, exp_ends)
15754+                    print "got     [%d]: %s" % (start, got_ends)
15755+            if expspans != gotspans:
15756+                print "expected: %s" % expspans
15757+                print "got     : %s" % gotspans
15758+            open("EXPECTED","wb").write(expected)
15759+            open("GOT","wb").write(got)
15760+            print "wrote data to EXPECTED and GOT"
15761+            self.fail("didn't get expected data")
15762+
15763+
15764+    def test_replace_locations(self):
15765+        # exercise fencepost conditions
15766+        expected = self.data
15767+        SEGSIZE = 128*1024
15768+        suspects = range(SEGSIZE-3, SEGSIZE+1)+range(2*SEGSIZE-3, 2*SEGSIZE+1)
15769+        letters = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
15770+        d = defer.succeed(None)
15771+        for offset in suspects:
15772+            new_data = letters.next()*2 # "AA", then "BB", etc
15773+            expected = expected[:offset]+new_data+expected[offset+2:]
15774+            d.addCallback(lambda ign:
15775+                          self.mdmf_node.get_best_mutable_version())
15776+            def _modify(mv, offset=offset, new_data=new_data):
15777+                # close over 'offset','new_data'
15778+                md = MutableData(new_data)
15779+                return mv.update(md, offset)
15780+            d.addCallback(_modify)
15781+            d.addCallback(lambda ignored:
15782+                          self.mdmf_node.download_best_version())
15783+            d.addCallback(self._check_differences, expected)
15784+        return d
15785+
15786+    def test_replace_locations_max_shares(self):
15787+        # exercise fencepost conditions
15788+        expected = self.data
15789+        SEGSIZE = 128*1024
15790+        suspects = range(SEGSIZE-3, SEGSIZE+1)+range(2*SEGSIZE-3, 2*SEGSIZE+1)
15791+        letters = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
15792+        d = defer.succeed(None)
15793+        for offset in suspects:
15794+            new_data = letters.next()*2 # "AA", then "BB", etc
15795+            expected = expected[:offset]+new_data+expected[offset+2:]
15796+            d.addCallback(lambda ign:
15797+                          self.mdmf_max_shares_node.get_best_mutable_version())
15798+            def _modify(mv, offset=offset, new_data=new_data):
15799+                # close over 'offset','new_data'
15800+                md = MutableData(new_data)
15801+                return mv.update(md, offset)
15802+            d.addCallback(_modify)
15803+            d.addCallback(lambda ignored:
15804+                          self.mdmf_max_shares_node.download_best_version())
15805+            d.addCallback(self._check_differences, expected)
15806+        return d
15807+
15808+    def test_replace_and_extend(self):
15809+        # We should be able to replace data in the middle of a mutable
15810+        # file and extend that mutable file and get what we expect.
15811+        new_data = self.data[:100]
15812+        new_data += "modified " * 100000
15813+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
15814+            d = node.get_best_mutable_version()
15815+            d.addCallback(lambda mv:
15816+                mv.update(MutableData("modified " * 100000), 100))
15817+            d.addCallback(lambda ignored, node=node:
15818+                node.download_best_version())
15819+            d.addCallback(lambda results:
15820+                self.failUnlessEqual(results, new_data))
15821+        return d
15822+
15823+
15824+    def test_append_power_of_two(self):
15825+        # If we attempt to extend a mutable file so that its segment
15826+        # count crosses a power-of-two boundary, the update operation
15827+        # should know how to reencode the file.
15828+
15829+        # Note that the data populating self.mdmf_node is about 900 KiB
15830+        # long -- this is 7 segments at the default segment size. So we
15831+        # need to add 2 segments worth of data to push it over a
15832+        # power-of-two boundary.
15833+        segment = "a" * DEFAULT_MAX_SEGMENT_SIZE
15834+        new_data = self.data + (segment * 2)
15835+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
15836+            d = node.get_best_mutable_version()
15837+            d.addCallback(lambda mv:
15838+                mv.update(MutableData(segment * 2), len(self.data)))
15839+            d.addCallback(lambda ignored, node=node:
15840+                node.download_best_version())
15841+            d.addCallback(lambda results:
15842+                self.failUnlessEqual(results, new_data))
15843+        return d
15844+    test_append_power_of_two.timeout = 15
15845+
15846+
15847+    def test_update_sdmf(self):
15848+        # Running update on a single-segment file should still work.
15849+        new_data = self.small_data + "appended"
15850+        for node in (self.sdmf_node, self.sdmf_max_shares_node):
15851+            d = node.get_best_mutable_version()
15852+            d.addCallback(lambda mv:
15853+                mv.update(MutableData("appended"), len(self.small_data)))
15854+            d.addCallback(lambda ignored, node=node:
15855+                node.download_best_version())
15856+            d.addCallback(lambda results:
15857+                self.failUnlessEqual(results, new_data))
15858+        return d
15859+
15860+    def test_replace_in_last_segment(self):
15861+        # The wrapper should know how to handle the tail segment
15862+        # appropriately.
15863+        replace_offset = len(self.data) - 100
15864+        new_data = self.data[:replace_offset] + "replaced"
15865+        rest_offset = replace_offset + len("replaced")
15866+        new_data += self.data[rest_offset:]
15867+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
15868+            d = node.get_best_mutable_version()
15869+            d.addCallback(lambda mv:
15870+                mv.update(MutableData("replaced"), replace_offset))
15871+            d.addCallback(lambda ignored, node=node:
15872+                node.download_best_version())
15873+            d.addCallback(lambda results:
15874+                self.failUnlessEqual(results, new_data))
15875+        return d
15876+
15877+
15878+    def test_multiple_segment_replace(self):
15879+        replace_offset = 2 * DEFAULT_MAX_SEGMENT_SIZE
15880+        new_data = self.data[:replace_offset]
15881+        new_segment = "a" * DEFAULT_MAX_SEGMENT_SIZE
15882+        new_data += 2 * new_segment
15883+        new_data += "replaced"
15884+        rest_offset = len(new_data)
15885+        new_data += self.data[rest_offset:]
15886+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
15887+            d = node.get_best_mutable_version()
15888+            d.addCallback(lambda mv:
15889+                mv.update(MutableData((2 * new_segment) + "replaced"),
15890+                          replace_offset))
15891+            d.addCallback(lambda ignored, node=node:
15892+                node.download_best_version())
15893+            d.addCallback(lambda results:
15894+                self.failUnlessEqual(results, new_data))
15895+        return d
15896+
15897+class Interoperability(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
15898+    sdmf_old_shares = {}
15899+    sdmf_old_shares[0] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAhgcAb/adrQFrhlrRNoRpvjDuxmFebA4F0qCyqWssm61AAQ/EX4eC/1+hGOQ/h4EiKUkqxdsfzdcPlDvd11SGWZ0VHsUclZChTzuBAU2zLTXm+cG8IFhO50ly6Ey/DB44NtMKVaVzO0nU8DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15900+    sdmf_old_shares[1] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAhgcAb/adrQFrhlrRNoRpvjDuxmFebA4F0qCyqWssm61AAP7FHJWQoU87gQFNsy015vnBvCBYTudJcuhMvwweODbTD8Rfh4L/X6EY5D+HgSIpSSrF2x/N1w+UO93XVIZZnRUeePDXEwhqYDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15901+    sdmf_old_shares[2] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAd8jdiCodW233N1acXhZGnulDKR3hiNsMdEIsijRPemewASoSCFpVj4utEE+eVFM146xfgC6DX39GaQ2zT3YKsWX3GiLwKtGffwqV7IlZIcBEVqMfTXSTZsY+dZm1MxxCZH0Zd33VY0yggDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15902+    sdmf_old_shares[3] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAd8jdiCodW233N1acXhZGnulDKR3hiNsMdEIsijRPemewARoi8CrRn38KleyJWSHARFajH010k2bGPnWZtTMcQmR9GhIIWlWPi60QT55UUzXjrF+ALoNff0ZpDbNPdgqxZfcSNSplrHqtsDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15903+    sdmf_old_shares[4] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAoIM8M4XulprmLd4gGMobS2Bv9CmwB5LpK/ySHE1QWjdwAUMA7/aVz7Mb1em0eks+biC8ZuVUhuAEkTVOAF4YulIjE8JlfW0dS1XKk62u0586QxiN38NTsluUDx8EAPTL66yRsfb1f3rRIDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15904+    sdmf_old_shares[5] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAoIM8M4XulprmLd4gGMobS2Bv9CmwB5LpK/ySHE1QWjdwATPCZX1tHUtVypOtrtOfOkMYjd/DU7JblA8fBAD0y+uskwDv9pXPsxvV6bR6Sz5uILxm5VSG4ASRNU4AXhi6UiMUKZHBmcmEgDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15905+    sdmf_old_shares[6] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAlyHZU7RfTJjbHu1gjabWZsTu+7nAeRVG6/ZSd4iMQ1ZgAWDSFSPvKzcFzRcuRlVgKUf0HBce1MCF8SwpUbPPEyfVJty4xLZ7DvNU/Eh/R6BarsVAagVXdp+GtEu0+fok7nilT4LchmHo8DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15906+    sdmf_old_shares[7] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAlyHZU7RfTJjbHu1gjabWZsTu+7nAeRVG6/ZSd4iMQ1ZgAVbcuMS2ew7zVPxIf0egWq7FQGoFV3afhrRLtPn6JO54oNIVI+8rNwXNFy5GVWApR/QcFx7UwIXxLClRs88TJ9UtLnNF4/mM0DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15907+    sdmf_old_shares[8] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgABUSzNKiMx0E91q51/WH6ASL0fDEOLef9oxuyBX5F5cpoABojmWkDX3k3FKfgNHIeptE3lxB8HHzxDfSD250psyfNCAAwGsKbMxbmI2NpdTozZ3SICrySwgGkatA1gsDOJmOnTzgAYmqKY7A9vQChuYa17fYSyKerIb3682jxiIneQvCMWCK5WcuI4PMeIsUAj8yxdxHvV+a9vtSCEsDVvymrrooDKX1GK98t37yoDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15908+    sdmf_old_shares[9] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgABUSzNKiMx0E91q51/WH6ASL0fDEOLef9oxuyBX5F5cpoABojmWkDX3k3FKfgNHIeptE3lxB8HHzxDfSD250psyfNCAAwGsKbMxbmI2NpdTozZ3SICrySwgGkatA1gsDOJmOnTzgAXVnLiODzHiLFAI/MsXcR71fmvb7UghLA1b8pq66KAyl+aopjsD29AKG5hrXt9hLIp6shvfrzaPGIid5C8IxYIrjgBj1YohGgDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
15909+    sdmf_old_cap = "URI:SSK:gmjgofw6gan57gwpsow6gtrz3e:5adm6fayxmu3e4lkmfvt6lkkfix34ai2wop2ioqr4bgvvhiol3kq"
15910+    sdmf_old_contents = "This is a test file.\n"
15911+    def copy_sdmf_shares(self):
15912+        # We'll basically be short-circuiting the upload process.
15913+        servernums = self.g.servers_by_number.keys()
15914+        assert len(servernums) == 10
15915+
15916+        assignments = zip(self.sdmf_old_shares.keys(), servernums)
15917+        # Get the storage index.
15918+        cap = uri.from_string(self.sdmf_old_cap)
15919+        si = cap.get_storage_index()
15920+
15921+        # Now execute each assignment by writing the storage.
15922+        for (share, servernum) in assignments:
15923+            sharedata = base64.b64decode(self.sdmf_old_shares[share])
15924+            storedir = self.get_serverdir(servernum)
15925+            storage_path = os.path.join(storedir, "shares",
15926+                                        storage_index_to_dir(si))
15927+            fileutil.make_dirs(storage_path)
15928+            fileutil.write(os.path.join(storage_path, "%d" % share),
15929+                           sharedata)
15930+        # ...and verify that the shares are there.
15931+        shares = self.find_uri_shares(self.sdmf_old_cap)
15932+        assert len(shares) == 10
15933+
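The share-placement loop in `copy_sdmf_shares` above can be sketched in isolation. This is an illustrative stand-in, not the real test helper: `place_shares`, its argument shapes, and the flat `si_dir` path component are all hypothetical.

```python
import base64
import os

def place_shares(shares, serverdirs, si_dir):
    # Hypothetical helper mirroring copy_sdmf_shares: pair each share
    # number with a server, then write the base64-decoded share data
    # into that server's shares/<storage-index>/ directory, named by
    # share number.
    written = []
    for sharenum, serverdir in zip(sorted(shares), sorted(serverdirs)):
        storage_path = os.path.join(serverdir, "shares", si_dir)
        os.makedirs(storage_path, exist_ok=True)
        sharefile = os.path.join(storage_path, "%d" % sharenum)
        with open(sharefile, "wb") as f:
            f.write(base64.b64decode(shares[sharenum]))
        written.append(sharefile)
    return written
```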
15934+    def test_new_downloader_can_read_old_shares(self):
15935+        self.basedir = "mutable/Interoperability/new_downloader_can_read_old_shares"
15936+        self.set_up_grid()
15937+        self.copy_sdmf_shares()
15938+        nm = self.g.clients[0].nodemaker
15939+        n = nm.create_from_cap(self.sdmf_old_cap)
15940+        d = n.download_best_version()
15941+        d.addCallback(self.failUnlessEqual, self.sdmf_old_contents)
15942+        return d
15943}
15944[uri: add MDMF and MDMF directory caps, add extension hint support
15945Kevan Carstensen <kevan@isnotajoke.com>**20110807004436
15946 Ignore-this: 6486b7d4dc0e849c6b1e9cdfb6318eac
15947] {
15948hunk ./src/allmydata/test/test_cli.py 1379
15949         d.addCallback(_check)
15950         return d
15951 
15952+    def _create_directory_structure(self):
15953+        # Create a simple directory structure that we can use for MDMF,
15954+        # SDMF, and immutable testing.
15955+        assert self.g
15956+
15957+        client = self.g.clients[0]
15958+        # Create a dirnode
15959+        d = client.create_dirnode()
15960+        def _got_rootnode(n):
15961+            # Add a few nodes.
15962+            self._dircap = n.get_uri()
15963+            nm = n._nodemaker
15964+            # The uploaders may run at the same time, so we need two
15965+            # MutableData instances or they'll fight over offsets &c and
15966+            # break.
15967+            mutable_data = MutableData("data" * 100000)
15968+            mutable_data2 = MutableData("data" * 100000)
15969+            # Add both kinds of mutable node.
15970+            d1 = nm.create_mutable_file(mutable_data,
15971+                                        version=MDMF_VERSION)
15972+            d2 = nm.create_mutable_file(mutable_data2,
15973+                                        version=SDMF_VERSION)
15974+            # Add an immutable node. We do this through the directory,
15975+            # with add_file.
15976+            immutable_data = upload.Data("immutable data" * 100000,
15977+                                         convergence="")
15978+            d3 = n.add_file(u"immutable", immutable_data)
15979+            ds = [d1, d2, d3]
15980+            dl = defer.DeferredList(ds)
15981+            def _made_files((r1, r2, r3)):
15982+                self.failUnless(r1[0])
15983+                self.failUnless(r2[0])
15984+                self.failUnless(r3[0])
15985+
15986+                # r1, r2, and r3 contain nodes.
15987+                mdmf_node = r1[1]
15988+                sdmf_node = r2[1]
15989+                imm_node = r3[1]
15990+
15991+                self._mdmf_uri = mdmf_node.get_uri()
15992+                self._mdmf_readonly_uri = mdmf_node.get_readonly_uri()
15993+                self._sdmf_uri = sdmf_node.get_uri()
15994+                self._sdmf_readonly_uri = sdmf_node.get_readonly_uri()
15995+                self._imm_uri = imm_node.get_uri()
15996+
15997+                d1 = n.set_node(u"mdmf", mdmf_node)
15998+                d2 = n.set_node(u"sdmf", sdmf_node)
15999+                return defer.DeferredList([d1, d2])
16000+            # We can now list the directory by listing self._dircap.
16001+            dl.addCallback(_made_files)
16002+            return dl
16003+        d.addCallback(_got_rootnode)
16004+        return d
16005+
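The fan-out-then-join shape used in `_create_directory_structure` (a Twisted `DeferredList` over d1, d2, d3, then checking each `(success, value)` pair in `_made_files`) can be sketched with the standard library. `gather` here is a hypothetical analogue for illustration, not Twisted's API.

```python
from concurrent.futures import ThreadPoolExecutor

def gather(callables):
    # Stdlib analogue of the DeferredList pattern: run several
    # operations concurrently, then collect (success, result-or-error)
    # pairs in submission order, mirroring the r1[0]/r1[1] checks
    # performed in _made_files above.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(c) for c in callables]
        results = []
        for fut in futures:
            try:
                results.append((True, fut.result()))
            except Exception as e:
                results.append((False, e))
    return results
```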
16006+    def test_list_mdmf(self):
16007+        # 'tahoe ls' should include MDMF files.
16008+        self.basedir = "cli/List/list_mdmf"
16009+        self.set_up_grid()
16010+        d = self._create_directory_structure()
16011+        d.addCallback(lambda ignored:
16012+            self.do_cli("ls", self._dircap))
16013+        def _got_ls((rc, out, err)):
16014+            self.failUnlessEqual(rc, 0)
16015+            self.failUnlessEqual(err, "")
16016+            self.failUnlessIn("immutable", out)
16017+            self.failUnlessIn("mdmf", out)
16018+            self.failUnlessIn("sdmf", out)
16019+        d.addCallback(_got_ls)
16020+        return d
16021+
16022+    def test_list_mdmf_json(self):
16023+        # 'tahoe ls' should include MDMF caps when invoked with MDMF
16024+        # 'tahoe ls --json' should include MDMF caps in its
16025+        # output.
16026+        self.set_up_grid()
16027+        d = self._create_directory_structure()
16028+        d.addCallback(lambda ignored:
16029+            self.do_cli("ls", "--json", self._dircap))
16030+        def _got_json((rc, out, err)):
16031+            self.failUnlessEqual(rc, 0)
16032+            self.failUnlessEqual(err, "")
16033+            self.failUnlessIn(self._mdmf_uri, out)
16034+            self.failUnlessIn(self._mdmf_readonly_uri, out)
16035+            self.failUnlessIn(self._sdmf_uri, out)
16036+            self.failUnlessIn(self._sdmf_readonly_uri, out)
16037+            self.failUnlessIn(self._imm_uri, out)
16038+            self.failUnlessIn('"mutable-type": "sdmf"', out)
16039+            self.failUnlessIn('"mutable-type": "mdmf"', out)
16040+        d.addCallback(_got_json)
16041+        return d
16042+
16043 
16044 class Mv(GridTestMixin, CLITestMixin, unittest.TestCase):
16045     def test_mv_behavior(self):
16046hunk ./src/allmydata/test/test_uri.py 2
16047 
16048+import re
16049 from twisted.trial import unittest
16050 from allmydata import uri
16051 from allmydata.util import hashutil, base32
16052hunk ./src/allmydata/test/test_uri.py 259
16053         uri.CHKFileURI.init_from_string(fileURI)
16054 
16055 class Mutable(testutil.ReallyEqualMixin, unittest.TestCase):
16056-    def test_pack(self):
16057-        writekey = "\x01" * 16
16058-        fingerprint = "\x02" * 32
16059+    def setUp(self):
16060+        self.writekey = "\x01" * 16
16061+        self.fingerprint = "\x02" * 32
16062+        self.readkey = hashutil.ssk_readkey_hash(self.writekey)
16063+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
16064 
16065hunk ./src/allmydata/test/test_uri.py 265
16066-        u = uri.WriteableSSKFileURI(writekey, fingerprint)
16067-        self.failUnlessReallyEqual(u.writekey, writekey)
16068-        self.failUnlessReallyEqual(u.fingerprint, fingerprint)
16069+    def test_pack(self):
16070+        u = uri.WriteableSSKFileURI(self.writekey, self.fingerprint)
16071+        self.failUnlessReallyEqual(u.writekey, self.writekey)
16072+        self.failUnlessReallyEqual(u.fingerprint, self.fingerprint)
16073         self.failIf(u.is_readonly())
16074         self.failUnless(u.is_mutable())
16075         self.failUnless(IURI.providedBy(u))
16076hunk ./src/allmydata/test/test_uri.py 281
16077         self.failUnlessReallyEqual(u, u_h)
16078 
16079         u2 = uri.from_string(u.to_string())
16080-        self.failUnlessReallyEqual(u2.writekey, writekey)
16081-        self.failUnlessReallyEqual(u2.fingerprint, fingerprint)
16082+        self.failUnlessReallyEqual(u2.writekey, self.writekey)
16083+        self.failUnlessReallyEqual(u2.fingerprint, self.fingerprint)
16084         self.failIf(u2.is_readonly())
16085         self.failUnless(u2.is_mutable())
16086         self.failUnless(IURI.providedBy(u2))
16087hunk ./src/allmydata/test/test_uri.py 297
16088         self.failUnless(isinstance(u2imm, uri.UnknownURI), u2imm)
16089 
16090         u3 = u2.get_readonly()
16091-        readkey = hashutil.ssk_readkey_hash(writekey)
16092-        self.failUnlessReallyEqual(u3.fingerprint, fingerprint)
16093+        readkey = hashutil.ssk_readkey_hash(self.writekey)
16094+        self.failUnlessReallyEqual(u3.fingerprint, self.fingerprint)
16095         self.failUnlessReallyEqual(u3.readkey, readkey)
16096         self.failUnless(u3.is_readonly())
16097         self.failUnless(u3.is_mutable())
16098hunk ./src/allmydata/test/test_uri.py 317
16099         u3_h = uri.ReadonlySSKFileURI.init_from_human_encoding(he)
16100         self.failUnlessReallyEqual(u3, u3_h)
16101 
16102-        u4 = uri.ReadonlySSKFileURI(readkey, fingerprint)
16103-        self.failUnlessReallyEqual(u4.fingerprint, fingerprint)
16104+        u4 = uri.ReadonlySSKFileURI(readkey, self.fingerprint)
16105+        self.failUnlessReallyEqual(u4.fingerprint, self.fingerprint)
16106         self.failUnlessReallyEqual(u4.readkey, readkey)
16107         self.failUnless(u4.is_readonly())
16108         self.failUnless(u4.is_mutable())
16109hunk ./src/allmydata/test/test_uri.py 350
16110         self.failUnlessReallyEqual(u5, u5_h)
16111 
16112 
16113+    def test_writable_mdmf_cap(self):
16114+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
16115+        cap = u1.to_string()
16116+        u = uri.WritableMDMFFileURI.init_from_string(cap)
16117+
16118+        self.failUnless(IMutableFileURI.providedBy(u))
16119+        self.failUnlessReallyEqual(u.fingerprint, self.fingerprint)
16120+        self.failUnlessReallyEqual(u.writekey, self.writekey)
16121+        self.failUnless(u.is_mutable())
16122+        self.failIf(u.is_readonly())
16123+        self.failUnlessEqual(cap, u.to_string())
16124+
16125+        # Now get a readonly cap from the writable cap, and test that it
16126+        # degrades gracefully.
16127+        ru = u.get_readonly()
16128+        self.failUnlessReallyEqual(self.readkey, ru.readkey)
16129+        self.failUnlessReallyEqual(self.fingerprint, ru.fingerprint)
16130+        self.failUnless(ru.is_mutable())
16131+        self.failUnless(ru.is_readonly())
16132+
16133+        # Now get a verifier cap.
16134+        vu = ru.get_verify_cap()
16135+        self.failUnlessReallyEqual(self.storage_index, vu.storage_index)
16136+        self.failUnlessReallyEqual(self.fingerprint, vu.fingerprint)
16137+        self.failUnless(IVerifierURI.providedBy(vu))
16138+
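The write -> read -> verify derivation chain exercised above can be sketched as a one-way hash ladder. This is a minimal illustration only: the tag strings and hash construction below are placeholders, not the real `allmydata.util.hashutil` tagged hashes.

```python
import hashlib

def tagged_hash(tag, data, truncate_to=16):
    # Illustrative stand-in for hashutil's tagged SHA-256 hashes; the
    # real tag strings and construction differ.
    return hashlib.sha256(tag + b":" + data).digest()[:truncate_to]

def attenuate(writekey):
    # Each weaker capability's key is a one-way hash of the stronger
    # one, so a readcap cannot be upgraded to a writecap, nor a
    # verifycap to a readcap.
    readkey = tagged_hash(b"ssk_readkey", writekey)
    storage_index = tagged_hash(b"ssk_storage_index", readkey)
    return readkey, storage_index
```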
16139+    def test_readonly_mdmf_cap(self):
16140+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
16141+        cap = u1.to_string()
16142+        u2 = uri.ReadonlyMDMFFileURI.init_from_string(cap)
16143+
16144+        self.failUnlessReallyEqual(u2.fingerprint, self.fingerprint)
16145+        self.failUnlessReallyEqual(u2.readkey, self.readkey)
16146+        self.failUnless(u2.is_readonly())
16147+        self.failUnless(u2.is_mutable())
16148+
16149+        vu = u2.get_verify_cap()
16150+        self.failUnlessEqual(vu.storage_index, self.storage_index)
16151+        self.failUnlessEqual(vu.fingerprint, self.fingerprint)
16152+
16153+    def test_create_writable_mdmf_cap_from_readcap(self):
16154+        # we shouldn't be able to create a writable MDMF cap given only a
16155+        # readcap.
16156+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
16157+        cap = u1.to_string()
16158+        self.failUnlessRaises(uri.BadURIError,
16159+                              uri.WritableMDMFFileURI.init_from_string,
16160+                              cap)
16161+
16162+    def test_create_writable_mdmf_cap_from_verifycap(self):
16163+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
16164+        cap = u1.to_string()
16165+        self.failUnlessRaises(uri.BadURIError,
16166+                              uri.WritableMDMFFileURI.init_from_string,
16167+                              cap)
16168+
16169+    def test_create_readonly_mdmf_cap_from_verifycap(self):
16170+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
16171+        cap = u1.to_string()
16172+        self.failUnlessRaises(uri.BadURIError,
16173+                              uri.ReadonlyMDMFFileURI.init_from_string,
16174+                              cap)
16175+
16176+    def test_mdmf_verifier_cap(self):
16177+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
16178+        self.failUnless(u1.is_readonly())
16179+        self.failIf(u1.is_mutable())
16180+        self.failUnlessReallyEqual(self.storage_index, u1.storage_index)
16181+        self.failUnlessReallyEqual(self.fingerprint, u1.fingerprint)
16182+
16183+        cap = u1.to_string()
16184+        u2 = uri.MDMFVerifierURI.init_from_string(cap)
16185+        self.failUnless(u2.is_readonly())
16186+        self.failIf(u2.is_mutable())
16187+        self.failUnlessReallyEqual(self.storage_index, u2.storage_index)
16188+        self.failUnlessReallyEqual(self.fingerprint, u2.fingerprint)
16189+
16190+        u3 = u2.get_readonly()
16191+        self.failUnlessReallyEqual(u3, u2)
16192+
16193+        u4 = u2.get_verify_cap()
16194+        self.failUnlessReallyEqual(u4, u2)
16195+
16196+    def test_mdmf_cap_extra_information(self):
16197+        # MDMF caps can be arbitrarily extended after the fingerprint
16198+        # and key/storage index fields.
16199+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
16200+        self.failUnlessEqual([], u1.get_extension_params())
16201+
16202+        cap = u1.to_string()
16203+        # Now let's append some fields. Say, 131073 (the segment size)
16204+        # and 3 (the "k" encoding parameter).
16205+        expected_extensions = []
16206+        for e in ('131073', '3'):
16207+            cap += (":%s" % e)
16208+            expected_extensions.append(e)
16209+
16210+            u2 = uri.WritableMDMFFileURI.init_from_string(cap)
16211+            self.failUnlessReallyEqual(self.writekey, u2.writekey)
16212+            self.failUnlessReallyEqual(self.fingerprint, u2.fingerprint)
16213+            self.failIf(u2.is_readonly())
16214+            self.failUnless(u2.is_mutable())
16215+
16216+            c2 = u2.to_string()
16217+            u2n = uri.WritableMDMFFileURI.init_from_string(c2)
16218+            self.failUnlessReallyEqual(u2, u2n)
16219+
16220+            # We should get the extra back when we ask for it.
16221+            self.failUnlessEqual(expected_extensions, u2.get_extension_params())
16222+
16223+            # These should be preserved through cap attenuation, too.
16224+            u3 = u2.get_readonly()
16225+            self.failUnlessReallyEqual(self.readkey, u3.readkey)
16226+            self.failUnlessReallyEqual(self.fingerprint, u3.fingerprint)
16227+            self.failUnless(u3.is_readonly())
16228+            self.failUnless(u3.is_mutable())
16229+            self.failUnlessEqual(expected_extensions, u3.get_extension_params())
16230+
16231+            c3 = u3.to_string()
16232+            u3n = uri.ReadonlyMDMFFileURI.init_from_string(c3)
16233+            self.failUnlessReallyEqual(u3, u3n)
16234+
16235+            u4 = u3.get_verify_cap()
16236+            self.failUnlessReallyEqual(self.storage_index, u4.storage_index)
16237+            self.failUnlessReallyEqual(self.fingerprint, u4.fingerprint)
16238+            self.failUnless(u4.is_readonly())
16239+            self.failIf(u4.is_mutable())
16240+
16241+            c4 = u4.to_string()
16242+            u4n = uri.MDMFVerifierURI.init_from_string(c4)
16243+            self.failUnlessReallyEqual(u4n, u4)
16244+
16245+            self.failUnlessEqual(expected_extensions, u4.get_extension_params())
16246+
16247+
16248+    def test_sdmf_cap_extra_information(self):
16249+        # For interface consistency, we define a method to get
16250+        # extensions for SDMF files as well. This method must always
16251+        # return no extensions, since SDMF files were not created with
16252+        # extensions and cannot be modified to include extensions
16253+        # without breaking older clients.
16254+        u1 = uri.WriteableSSKFileURI(self.writekey, self.fingerprint)
16255+        cap = u1.to_string()
16256+        u2 = uri.WriteableSSKFileURI.init_from_string(cap)
16257+        self.failUnlessEqual([], u2.get_extension_params())
16258+
16259+    def test_extension_character_range(self):
16260+        # As written now, we shouldn't put things other than numbers in
16261+        # the extension fields.
16262+        writecap = uri.WritableMDMFFileURI(self.writekey, self.fingerprint).to_string()
16263+        readcap  = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint).to_string()
16264+        vcap     = uri.MDMFVerifierURI(self.storage_index, self.fingerprint).to_string()
16265+        self.failUnlessRaises(uri.BadURIError,
16266+                              uri.WritableMDMFFileURI.init_from_string,
16267+                              ("%s:invalid" % writecap))
16268+        self.failUnlessRaises(uri.BadURIError,
16269+                              uri.ReadonlyMDMFFileURI.init_from_string,
16270+                              ("%s:invalid" % readcap))
16271+        self.failUnlessRaises(uri.BadURIError,
16272+                              uri.MDMFVerifierURI.init_from_string,
16273+                              ("%s:invalid" % vcap))
16274+
16275+
16276+    def test_mdmf_valid_human_encoding(self):
16277+        # What's a human encoding? Well, it's of the form:
16278+        base = "https://127.0.0.1:3456/uri/"
16279+        # With a cap on the end. For each of the cap types, we need to
16280+        # test that a valid cap (with and without the traditional
16281+        # separators) is recognized and accepted by the classes.
16282+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
16283+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
16284+                                     ['131073', '3'])
16285+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
16286+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
16287+                                     ['131073', '3'])
16288+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
16289+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
16290+                                 ['131073', '3'])
16291+
16292+        # These will yield six different caps.
16293+        for o in (w1, w2, r1, r2, v1, v2):
16294+            url = base + o.to_string()
16295+            o1 = o.__class__.init_from_human_encoding(url)
16296+            self.failUnlessReallyEqual(o1, o)
16297+
16298+            # Note that our cap will, by default, have : as separators.
16299+            # But it's expected that users from, e.g., the WUI, will
16300+            # have %3A as a separator. We need to make sure that the
16301+            # initialization routine handles that, too.
16302+            cap = o.to_string()
16303+            cap = re.sub(":", "%3A", cap)
16304+            url = base + cap
16305+            o2 = o.__class__.init_from_human_encoding(url)
16306+            self.failUnlessReallyEqual(o2, o)
16307+
16308+
16309+    def test_mdmf_human_encoding_invalid_base(self):
16310+        # What's a human encoding? Well, it's of the form:
16311+        base = "https://127.0.0.1:3456/foo/bar/bazuri/"
16312+        # With a cap on the end. For each of the cap types, we need to
16313+        # test that a cap with an invalid base URL is rejected by the
16314+        # classes.
16315+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
16316+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
16317+                                     ['131073', '3'])
16318+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
16319+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
16320+                                     ['131073', '3'])
16321+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
16322+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
16323+                                 ['131073', '3'])
16324+
16325+        # These will yield six different caps.
16326+        for o in (w1, w2, r1, r2, v1, v2):
16327+            url = base + o.to_string()
16328+            self.failUnlessRaises(uri.BadURIError,
16329+                                  o.__class__.init_from_human_encoding,
16330+                                  url)
16331+
16332+    def test_mdmf_human_encoding_invalid_cap(self):
16333+        base = "https://127.0.0.1:3456/uri/"
16334+        # With a cap on the end. For each of the cap types, we need to
16335+        # test that a corrupted or malformed cap is rejected by the
16336+        # classes.
16337+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
16338+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
16339+                                     ['131073', '3'])
16340+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
16341+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
16342+                                     ['131073', '3'])
16343+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
16344+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
16345+                                 ['131073', '3'])
16346+
16347+        # These will yield six different caps.
16348+        for o in (w1, w2, r1, r2, v1, v2):
16349+            # not exhaustive, obviously...
16350+            url = base + o.to_string() + "foobarbaz"
16351+            url2 = base + "foobarbaz" + o.to_string()
16352+            url3 = base + o.to_string()[:25] + "foo" + o.to_string()[25:]
16353+            for u in (url, url2, url3):
16354+                self.failUnlessRaises(uri.BadURIError,
16355+                                      o.__class__.init_from_human_encoding,
16356+                                      u)
16357+
16358+    def test_mdmf_from_string(self):
16359+        # Make sure that the from_string utility function works with
16360+        # MDMF caps.
16361+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
16362+        cap = u1.to_string()
16363+        self.failUnless(uri.is_uri(cap))
16364+        u2 = uri.from_string(cap)
16365+        self.failUnlessReallyEqual(u1, u2)
16366+        u3 = uri.from_string_mutable_filenode(cap)
16367+        self.failUnlessEqual(u3, u1)
16368+
16369+        # XXX: We should refactor the extension field into setUp
16370+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
16371+                                     ['131073', '3'])
16372+        cap = u1.to_string()
16373+        self.failUnless(uri.is_uri(cap))
16374+        u2 = uri.from_string(cap)
16375+        self.failUnlessReallyEqual(u1, u2)
16376+        u3 = uri.from_string_mutable_filenode(cap)
16377+        self.failUnlessEqual(u3, u1)
16378+
16379+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
16380+        cap = u1.to_string()
16381+        self.failUnless(uri.is_uri(cap))
16382+        u2 = uri.from_string(cap)
16383+        self.failUnlessReallyEqual(u1, u2)
16384+        u3 = uri.from_string_mutable_filenode(cap)
16385+        self.failUnlessEqual(u3, u1)
16386+
16387+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
16388+                                     ['131073', '3'])
16389+        cap = u1.to_string()
16390+        self.failUnless(uri.is_uri(cap))
16391+        u2 = uri.from_string(cap)
16392+        self.failUnlessReallyEqual(u1, u2)
16393+        u3 = uri.from_string_mutable_filenode(cap)
16394+        self.failUnlessEqual(u3, u1)
16395+
16396+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
16397+        cap = u1.to_string()
16398+        self.failUnless(uri.is_uri(cap))
16399+        u2 = uri.from_string(cap)
16400+        self.failUnlessReallyEqual(u1, u2)
16401+        u3 = uri.from_string_verifier(cap)
16402+        self.failUnlessEqual(u3, u1)
16403+
16404+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
16405+                                 ['131073', '3'])
16406+        cap = u1.to_string()
16407+        self.failUnless(uri.is_uri(cap))
16408+        u2 = uri.from_string(cap)
16409+        self.failUnlessReallyEqual(u1, u2)
16410+        u3 = uri.from_string_verifier(cap)
16411+        self.failUnlessEqual(u3, u1)
16412+
16413+
16414 class Dirnode(testutil.ReallyEqualMixin, unittest.TestCase):
16415     def test_pack(self):
16416         writekey = "\x01" * 16
16417hunk ./src/allmydata/test/test_uri.py 794
16418         self.failUnlessReallyEqual(u1.get_verify_cap(), None)
16419         self.failUnlessReallyEqual(u1.get_storage_index(), None)
16420         self.failUnlessReallyEqual(u1.abbrev_si(), "<LIT>")
16421+
16422+    def test_mdmf(self):
16423+        writekey = "\x01" * 16
16424+        fingerprint = "\x02" * 32
16425+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
16426+        d1 = uri.MDMFDirectoryURI(uri1)
16427+        self.failIf(d1.is_readonly())
16428+        self.failUnless(d1.is_mutable())
16429+        self.failUnless(IURI.providedBy(d1))
16430+        self.failUnless(IDirnodeURI.providedBy(d1))
16431+        d1_uri = d1.to_string()
16432+
16433+        d2 = uri.from_string(d1_uri)
16434+        self.failUnlessIsInstance(d2, uri.MDMFDirectoryURI)
16435+        self.failIf(d2.is_readonly())
16436+        self.failUnless(d2.is_mutable())
16437+        self.failUnless(IURI.providedBy(d2))
16438+        self.failUnless(IDirnodeURI.providedBy(d2))
16439+
16440+        # It doesn't make sense to ask for a deep immutable URI for a
16441+        # mutable directory, and we should get back a result to that
16442+        # effect.
16443+        d3 = uri.from_string(d2.to_string(), deep_immutable=True)
16444+        self.failUnlessIsInstance(d3, uri.UnknownURI)
16445+
16446+    def test_mdmf_with_extensions(self):
16447+        writekey = "\x01" * 16
16448+        fingerprint = "\x02" * 32
16449+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
16450+        d1 = uri.MDMFDirectoryURI(uri1)
16451+        d1_uri = d1.to_string()
16452+        # Add some extensions, verify that the URI is interpreted
16453+        # correctly.
16454+        d1_uri += ":3:131073"
16455+        uri2 = uri.from_string(d1_uri)
16456+        self.failUnlessIsInstance(uri2, uri.MDMFDirectoryURI)
16457+        self.failUnless(IURI.providedBy(uri2))
16458+        self.failUnless(IDirnodeURI.providedBy(uri2))
16459+        self.failUnless(uri2.is_mutable())
16460+        self.failIf(uri2.is_readonly())
16461+
16462+        d2_uri = uri2.to_string()
16463+        self.failUnlessIn(":3:131073", d2_uri)
16464+
16465+        # Now attenuate, verify that the extensions persist
16466+        ro_uri = uri2.get_readonly()
16467+        self.failUnlessIsInstance(ro_uri, uri.ReadonlyMDMFDirectoryURI)
16468+        self.failUnless(ro_uri.is_mutable())
16469+        self.failUnless(ro_uri.is_readonly())
16470+        self.failUnless(IURI.providedBy(ro_uri))
16471+        self.failUnless(IDirnodeURI.providedBy(ro_uri))
16472+        ro_uri_str = ro_uri.to_string()
16473+        self.failUnlessIn(":3:131073", ro_uri_str)
16474+
16475+    def test_mdmf_attenuation(self):
16476+        writekey = "\x01" * 16
16477+        fingerprint = "\x02" * 32
16478+
16479+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
16480+        d1 = uri.MDMFDirectoryURI(uri1)
16481+        self.failUnless(d1.is_mutable())
16482+        self.failIf(d1.is_readonly())
16483+        self.failUnless(IURI.providedBy(d1))
16484+        self.failUnless(IDirnodeURI.providedBy(d1))
16485+
16486+        d1_uri = d1.to_string()
16487+        d1_uri_from_fn = uri.MDMFDirectoryURI(d1.get_filenode_cap()).to_string()
16488+        self.failUnlessEqual(d1_uri_from_fn, d1_uri)
16489+
16490+        uri2 = uri.from_string(d1_uri)
16491+        self.failUnlessIsInstance(uri2, uri.MDMFDirectoryURI)
16492+        self.failUnless(IURI.providedBy(uri2))
16493+        self.failUnless(IDirnodeURI.providedBy(uri2))
16494+        self.failUnless(uri2.is_mutable())
16495+        self.failIf(uri2.is_readonly())
16496+
16497+        ro = uri2.get_readonly()
16498+        self.failUnlessIsInstance(ro, uri.ReadonlyMDMFDirectoryURI)
16499+        self.failUnless(ro.is_mutable())
16500+        self.failUnless(ro.is_readonly())
16501+        self.failUnless(IURI.providedBy(ro))
16502+        self.failUnless(IDirnodeURI.providedBy(ro))
16503+
16504+        ro_uri = ro.to_string()
16505+        n = uri.from_string(ro_uri, deep_immutable=True)
16506+        self.failUnlessIsInstance(n, uri.UnknownURI)
16507+
16508+        fn_cap = ro.get_filenode_cap()
16509+        fn_ro_cap = fn_cap.get_readonly()
16510+        d3 = uri.ReadonlyMDMFDirectoryURI(fn_ro_cap)
16511+        self.failUnlessEqual(ro.to_string(), d3.to_string())
16512+        self.failUnless(ro.is_mutable())
16513+        self.failUnless(ro.is_readonly())
16514+
16515+    def test_mdmf_verifier(self):
16516+        # Check that verify caps are derived and attenuated correctly.
16517+        writekey = "\x01" * 16
16518+        fingerprint = "\x02" * 32
16519+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
16520+        d1 = uri.MDMFDirectoryURI(uri1)
16521+        v1 = d1.get_verify_cap()
16522+        self.failUnlessIsInstance(v1, uri.MDMFDirectoryURIVerifier)
16523+        self.failIf(v1.is_mutable())
16524+
16525+        d2 = uri.from_string(d1.to_string())
16526+        v2 = d2.get_verify_cap()
16527+        self.failUnlessIsInstance(v2, uri.MDMFDirectoryURIVerifier)
16528+        self.failIf(v2.is_mutable())
16529+        self.failUnlessEqual(v2.to_string(), v1.to_string())
16530+
16531+        # Now attenuate and make sure that works correctly.
16532+        r3 = d2.get_readonly()
16533+        v3 = r3.get_verify_cap()
16534+        self.failUnlessIsInstance(v3, uri.MDMFDirectoryURIVerifier)
16535+        self.failIf(v3.is_mutable())
16536+        self.failUnlessEqual(v3.to_string(), v1.to_string())
16537+        r4 = uri.from_string(r3.to_string())
16538+        v4 = r4.get_verify_cap()
16539+        self.failUnlessIsInstance(v4, uri.MDMFDirectoryURIVerifier)
16540+        self.failIf(v4.is_mutable())
16541+        self.failUnlessEqual(v4.to_string(), v3.to_string())
16542hunk ./src/allmydata/uri.py 31
16543 SEP='(?::|%3A)'
16544 NUMBER='([0-9]+)'
16545 NUMBER_IGNORE='(?:[0-9]+)'
16546+OPTIONAL_EXTENSION_FIELD = '(' + SEP + '[0-9' + SEP + ']+|)'
16547 
16548 # "human-encoded" URIs are allowed to come with a leading
16549 # 'http://127.0.0.1:(8123|3456)/uri/' that will be ignored.
16550hunk ./src/allmydata/uri.py 297
16551     def get_verify_cap(self):
16552         return SSKVerifierURI(self.storage_index, self.fingerprint)
16553 
16554+    def get_extension_params(self):
16555+        return []
16556+
16557+    def set_extension_params(self, params):
16558+        pass
16559 
16560 class ReadonlySSKFileURI(_BaseURI):
16561     implements(IURI, IMutableFileURI)
16562hunk ./src/allmydata/uri.py 357
16563     def get_verify_cap(self):
16564         return SSKVerifierURI(self.storage_index, self.fingerprint)
16565 
16566+    def get_extension_params(self):
16567+        return []
16568+
16569+    def set_extension_params(self, params):
16570+        pass
16571 
16572 class SSKVerifierURI(_BaseURI):
16573     implements(IVerifierURI)
16574hunk ./src/allmydata/uri.py 407
16575     def get_verify_cap(self):
16576         return self
16577 
16578+    def get_extension_params(self):
16579+        return []
16580+
16581+    def set_extension_params(self, params):
16582+        pass
16583+
16584+class WritableMDMFFileURI(_BaseURI):
16585+    implements(IURI, IMutableFileURI)
16586+
16587+    BASE_STRING='URI:MDMF:'
16588+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
16589+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
16590+
16591+    def __init__(self, writekey, fingerprint, params=[]):
16592+        self.writekey = writekey
16593+        self.readkey = hashutil.ssk_readkey_hash(writekey)
16594+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
16595+        assert len(self.storage_index) == 16
16596+        self.fingerprint = fingerprint
16597+        self.extension = params
16598+
16599+    @classmethod
16600+    def init_from_human_encoding(cls, uri):
16601+        mo = cls.HUMAN_RE.search(uri)
16602+        if not mo:
16603+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
16604+        params = filter(lambda x: x != '', re.split(SEP, mo.group(3)))
16605+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
16606+
16607+    @classmethod
16608+    def init_from_string(cls, uri):
16609+        mo = cls.STRING_RE.search(uri)
16610+        if not mo:
16611+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
16612+        params = mo.group(3)
16613+        params = filter(lambda x: x != '', params.split(":"))
16614+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
16615+
16616+    def to_string(self):
16617+        assert isinstance(self.writekey, str)
16618+        assert isinstance(self.fingerprint, str)
16619+        ret = 'URI:MDMF:%s:%s' % (base32.b2a(self.writekey),
16620+                                  base32.b2a(self.fingerprint))
16621+        if self.extension:
16622+            ret += ":"
16623+            ret += ":".join(self.extension)
16624+
16625+        return ret
16626+
16627+    def __repr__(self):
16628+        return "<%s %s>" % (self.__class__.__name__, self.abbrev())
16629+
16630+    def abbrev(self):
16631+        return base32.b2a(self.writekey[:5])
16632+
16633+    def abbrev_si(self):
16634+        return base32.b2a(self.storage_index)[:5]
16635+
16636+    def is_readonly(self):
16637+        return False
16638+
16639+    def is_mutable(self):
16640+        return True
16641+
16642+    def get_readonly(self):
16643+        return ReadonlyMDMFFileURI(self.readkey, self.fingerprint, self.extension)
16644+
16645+    def get_verify_cap(self):
16646+        return MDMFVerifierURI(self.storage_index, self.fingerprint, self.extension)
16647+
16648+    def get_extension_params(self):
16649+        return self.extension
16650+
16651+    def set_extension_params(self, params):
16652+        params = map(str, params)
16653+        self.extension = params
16654+
16655+class ReadonlyMDMFFileURI(_BaseURI):
16656+    implements(IURI, IMutableFileURI)
16657+
16658+    BASE_STRING='URI:MDMF-RO:'
16659+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
16660+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF-RO'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
16661+
16662+    def __init__(self, readkey, fingerprint, params=[]):
16663+        self.readkey = readkey
16664+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
16665+        assert len(self.storage_index) == 16
16666+        self.fingerprint = fingerprint
16667+        self.extension = params
16668+
16669+    @classmethod
16670+    def init_from_human_encoding(cls, uri):
16671+        mo = cls.HUMAN_RE.search(uri)
16672+        if not mo:
16673+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
16674+        params = mo.group(3)
16675+        params = filter(lambda x: x != '', re.split(SEP, params))
16676+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
16677+
16678+    @classmethod
16679+    def init_from_string(cls, uri):
16680+        mo = cls.STRING_RE.search(uri)
16681+        if not mo:
16682+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
16683+
16684+        params = mo.group(3)
16685+        params = filter(lambda x: x != '', params.split(":"))
16686+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
16687+
16688+    def to_string(self):
16689+        assert isinstance(self.readkey, str)
16690+        assert isinstance(self.fingerprint, str)
16691+        ret = 'URI:MDMF-RO:%s:%s' % (base32.b2a(self.readkey),
16692+                                     base32.b2a(self.fingerprint))
16693+        if self.extension:
16694+            ret += ":"
16695+            ret += ":".join(self.extension)
16696+
16697+        return ret
16698+
16699+    def __repr__(self):
16700+        return "<%s %s>" % (self.__class__.__name__, self.abbrev())
16701+
16702+    def abbrev(self):
16703+        return base32.b2a(self.readkey[:5])
16704+
16705+    def abbrev_si(self):
16706+        return base32.b2a(self.storage_index)[:5]
16707+
16708+    def is_readonly(self):
16709+        return True
16710+
16711+    def is_mutable(self):
16712+        return True
16713+
16714+    def get_readonly(self):
16715+        return self
16716+
16717+    def get_verify_cap(self):
16718+        return MDMFVerifierURI(self.storage_index, self.fingerprint, self.extension)
16719+
16720+    def get_extension_params(self):
16721+        return self.extension
16722+
16723+    def set_extension_params(self, params):
16724+        params = map(str, params)
16725+        self.extension = params
16726+
16727+class MDMFVerifierURI(_BaseURI):
16728+    implements(IVerifierURI)
16729+
16730+    BASE_STRING='URI:MDMF-Verifier:'
16731+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
16732+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF-Verifier'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
16733+
16734+    def __init__(self, storage_index, fingerprint, params=[]):
16735+        assert len(storage_index) == 16
16736+        self.storage_index = storage_index
16737+        self.fingerprint = fingerprint
16738+        self.extension = params
16739+
16740+    @classmethod
16741+    def init_from_human_encoding(cls, uri):
16742+        mo = cls.HUMAN_RE.search(uri)
16743+        if not mo:
16744+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
16745+        params = mo.group(3)
16746+        params = filter(lambda x: x != '', re.split(SEP, params))
16747+        return cls(si_a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
16748+
16749+    @classmethod
16750+    def init_from_string(cls, uri):
16751+        mo = cls.STRING_RE.search(uri)
16752+        if not mo:
16753+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
16754+        params = mo.group(3)
16755+        params = filter(lambda x: x != '', params.split(":"))
16756+        return cls(si_a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
16757+
16758+    def to_string(self):
16759+        assert isinstance(self.storage_index, str)
16760+        assert isinstance(self.fingerprint, str)
16761+        ret = 'URI:MDMF-Verifier:%s:%s' % (si_b2a(self.storage_index),
16762+                                           base32.b2a(self.fingerprint))
16763+        if self.extension:
16764+            ret += ':'
16765+            ret += ":".join(self.extension)
16766+
16767+        return ret
16768+
16769+    def is_readonly(self):
16770+        return True
16771+
16772+    def is_mutable(self):
16773+        return False
16774+
16775+    def get_readonly(self):
16776+        return self
16777+
16778+    def get_verify_cap(self):
16779+        return self
16780+
16781+    def get_extension_params(self):
16782+        return self.extension
16783+
16784 class _DirectoryBaseURI(_BaseURI):
16785     implements(IURI, IDirnodeURI)
16786     def __init__(self, filenode_uri=None):
16787hunk ./src/allmydata/uri.py 750
16788         return None
16789 
16790 
16791+class MDMFDirectoryURI(_DirectoryBaseURI):
16792+    implements(IDirectoryURI)
16793+
16794+    BASE_STRING='URI:DIR2-MDMF:'
16795+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
16796+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF'+SEP)
16797+    INNER_URI_CLASS=WritableMDMFFileURI
16798+
16799+    def __init__(self, filenode_uri=None):
16800+        if filenode_uri:
16801+            assert not filenode_uri.is_readonly()
16802+        _DirectoryBaseURI.__init__(self, filenode_uri)
16803+
16804+    def is_readonly(self):
16805+        return False
16806+
16807+    def get_readonly(self):
16808+        return ReadonlyMDMFDirectoryURI(self._filenode_uri.get_readonly())
16809+
16810+    def get_verify_cap(self):
16811+        return MDMFDirectoryURIVerifier(self._filenode_uri.get_verify_cap())
16812+
16813+
16814+class ReadonlyMDMFDirectoryURI(_DirectoryBaseURI):
16815+    implements(IReadonlyDirectoryURI)
16816+
16817+    BASE_STRING='URI:DIR2-MDMF-RO:'
16818+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
16819+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF-RO'+SEP)
16820+    INNER_URI_CLASS=ReadonlyMDMFFileURI
16821+
16822+    def __init__(self, filenode_uri=None):
16823+        if filenode_uri:
16824+            assert filenode_uri.is_readonly()
16825+        _DirectoryBaseURI.__init__(self, filenode_uri)
16826+
16827+    def is_readonly(self):
16828+        return True
16829+
16830+    def get_readonly(self):
16831+        return self
16832+
16833+    def get_verify_cap(self):
16834+        return MDMFDirectoryURIVerifier(self._filenode_uri.get_verify_cap())
16835+
16836 def wrap_dirnode_cap(filecap):
16837     if isinstance(filecap, WriteableSSKFileURI):
16838         return DirectoryURI(filecap)
16839hunk ./src/allmydata/uri.py 804
16840         return ImmutableDirectoryURI(filecap)
16841     if isinstance(filecap, LiteralFileURI):
16842         return LiteralDirectoryURI(filecap)
16843+    if isinstance(filecap, WritableMDMFFileURI):
16844+        return MDMFDirectoryURI(filecap)
16845+    if isinstance(filecap, ReadonlyMDMFFileURI):
16846+        return ReadonlyMDMFDirectoryURI(filecap)
16847     assert False, "cannot interpret as a directory cap: %s" % filecap.__class__
16848 
16849hunk ./src/allmydata/uri.py 810
16850+class MDMFDirectoryURIVerifier(_DirectoryBaseURI):
16851+    implements(IVerifierURI)
16852+
16853+    BASE_STRING='URI:DIR2-MDMF-Verifier:'
16854+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
16855+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF-Verifier'+SEP)
16856+    INNER_URI_CLASS=MDMFVerifierURI
16857+
16858+    def __init__(self, filenode_uri=None):
16859+        if filenode_uri:
16860+            assert IVerifierURI.providedBy(filenode_uri)
16861+        self._filenode_uri = filenode_uri
16862+
16863+    def get_filenode_cap(self):
16864+        return self._filenode_uri
16865+
16866+    def is_mutable(self):
16867+        return False
16868 
16869 class DirectoryURIVerifier(_DirectoryBaseURI):
16870     implements(IVerifierURI)
16871hunk ./src/allmydata/uri.py 915
16872             kind = "URI:SSK-RO readcap to a mutable file"
16873         elif s.startswith('URI:SSK-Verifier:'):
16874             return SSKVerifierURI.init_from_string(s)
16875+        elif s.startswith('URI:MDMF:'):
16876+            return WritableMDMFFileURI.init_from_string(s)
16877+        elif s.startswith('URI:MDMF-RO:'):
16878+            return ReadonlyMDMFFileURI.init_from_string(s)
16879+        elif s.startswith('URI:MDMF-Verifier:'):
16880+            return MDMFVerifierURI.init_from_string(s)
16881         elif s.startswith('URI:DIR2:'):
16882             if can_be_writeable:
16883                 return DirectoryURI.init_from_string(s)
16884hunk ./src/allmydata/uri.py 935
16885             return ImmutableDirectoryURI.init_from_string(s)
16886         elif s.startswith('URI:DIR2-LIT:'):
16887             return LiteralDirectoryURI.init_from_string(s)
16888+        elif s.startswith('URI:DIR2-MDMF:'):
16889+            if can_be_writeable:
16890+                return MDMFDirectoryURI.init_from_string(s)
16891+            kind = "URI:DIR2-MDMF directory writecap"
16892+        elif s.startswith('URI:DIR2-MDMF-RO:'):
16893+            if can_be_mutable:
16894+                return ReadonlyMDMFDirectoryURI.init_from_string(s)
16895+            kind = "URI:DIR2-MDMF-RO readcap to a mutable directory"
16896         elif s.startswith('x-tahoe-future-test-writeable:') and not can_be_writeable:
16897             # For testing how future writeable caps would behave in read-only contexts.
16898             kind = "x-tahoe-future-test-writeable: testing cap"
16899}
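Editorial note: the MDMF cap classes added in the uri.py hunks above all parse an optional, colon-separated extension field via OPTIONAL_EXTENSION_FIELD, then filter out empty strings to recover the parameter list. A standalone sketch of that round trip follows; the regex fragments are copied from the hunks above, while `parse_extension`, `serialize`, and the sample base32 keys are hypothetical names introduced here for illustration only.

```python
import re

# Regex fragments mirroring the patch's grammar in src/allmydata/uri.py.
SEP = '(?::|%3A)'
BASE32CHAR = '[a-z2-7]'
BASE32STR_128bits = '(%s{26})' % BASE32CHAR
BASE32STR_256bits = '(%s{52})' % BASE32CHAR
OPTIONAL_EXTENSION_FIELD = '(' + SEP + '[0-9' + SEP + ']+|)'

MDMF_RE = re.compile('^URI:MDMF:' + BASE32STR_128bits + ':' +
                     BASE32STR_256bits + OPTIONAL_EXTENSION_FIELD + '$')

def parse_extension(cap):
    """Return the list of extension params trailing the fingerprint."""
    mo = MDMF_RE.search(cap)
    if not mo:
        raise ValueError("doesn't look like an MDMF writecap: %r" % cap)
    # group(3) is e.g. ':131073:3', or '' when no params are present
    return [p for p in mo.group(3).split(':') if p != '']

def serialize(writekey_b32, fingerprint_b32, params):
    """Rebuild the cap string the way to_string() does in the patch."""
    ret = 'URI:MDMF:%s:%s' % (writekey_b32, fingerprint_b32)
    if params:
        ret += ':' + ':'.join(params)
    return ret

writekey = 'a' * 26      # placeholder base32 writekey
fingerprint = 'b' * 52   # placeholder base32 fingerprint
cap = serialize(writekey, fingerprint, ['131073', '3'])
print(parse_extension(cap))                              # ['131073', '3']
print(parse_extension(serialize(writekey, fingerprint, [])))  # []
```

The empty-alternative in OPTIONAL_EXTENSION_FIELD is what lets pre-extension caps (no trailing field) keep matching, which is why the filter for `''` is needed after splitting.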
16900[test: fix assorted tests broken by MDMF changes
16901Kevan Carstensen <kevan@isnotajoke.com>**20110807004459
16902 Ignore-this: 9a0dc7e5c74bfe840a9fce278619a103
16903] {
16904hunk ./src/allmydata/test/common.py 20
16905 from allmydata.mutable.common import CorruptShareError
16906 from allmydata.mutable.layout import unpack_header
16907 from allmydata.mutable.publish import MutableData
16908-from allmydata.storage.server import storage_index_to_dir
16909 from allmydata.storage.mutable import MutableShareFile
16910 from allmydata.util import hashutil, log, fileutil, pollmixin
16911 from allmydata.util.assertutil import precondition
16912hunk ./src/allmydata/test/common.py 23
16913+from allmydata.util.consumer import download_to_data
16914 from allmydata.stats import StatsGathererService
16915 from allmydata.key_generator import KeyGeneratorService
16916 import allmydata.test.common_util as testutil
16917hunk ./src/allmydata/test/test_checker.py 12
16918 from allmydata.test.no_network import GridTestMixin
16919 from allmydata.immutable.upload import Data
16920 from allmydata.test.common_web import WebRenderingMixin
16921+from allmydata.mutable.publish import MutableData
16922 
16923 class FakeClient:
16924     def get_storage_broker(self):
16925hunk ./src/allmydata/test/test_checker.py 292
16926         def _stash_immutable(ur):
16927             self.imm = c0.create_node_from_uri(ur.uri)
16928         d.addCallback(_stash_immutable)
16929-        d.addCallback(lambda ign: c0.create_mutable_file("contents"))
16930+        d.addCallback(lambda ign:
16931+            c0.create_mutable_file(MutableData("contents")))
16932         def _stash_mutable(node):
16933             self.mut = node
16934         d.addCallback(_stash_mutable)
16935hunk ./src/allmydata/test/test_cli.py 13
16936 from allmydata.util import fileutil, hashutil, base32
16937 from allmydata import uri
16938 from allmydata.immutable import upload
16939+from allmydata.interfaces import MDMF_VERSION, SDMF_VERSION
16940+from allmydata.mutable.publish import MutableData
16941 from allmydata.dirnode import normalize
16942 
16943 # Test that the scripts can be imported.
16944hunk ./src/allmydata/test/test_cli.py 2150
16945             self.do_cli("cp", replacement_file_path, "tahoe:test_file.txt"))
16946         def _check_error_message((rc, out, err)):
16947             self.failUnlessEqual(rc, 1)
16948-            self.failUnlessIn("need write capability to publish", err)
16949+            self.failUnlessIn("replace or update requested with read-only cap", err)
16950         d.addCallback(_check_error_message)
16951         # Make extra sure that that didn't work.
16952         d.addCallback(lambda ignored:
16953hunk ./src/allmydata/test/test_cli.py 2712
16954         self.set_up_grid()
16955         c0 = self.g.clients[0]
16956         DATA = "data" * 100
16957-        d = c0.create_mutable_file(DATA)
16958+        DATA_uploadable = MutableData(DATA)
16959+        d = c0.create_mutable_file(DATA_uploadable)
16960         def _stash_uri(n):
16961             self.uri = n.get_uri()
16962         d.addCallback(_stash_uri)
16963hunk ./src/allmydata/test/test_cli.py 2814
16964                                            upload.Data("literal",
16965                                                         convergence="")))
16966         d.addCallback(_stash_uri, "small")
16967-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"1"))
16968+        d.addCallback(lambda ign:
16969+            c0.create_mutable_file(MutableData(DATA+"1")))
16970         d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
16971         d.addCallback(_stash_uri, "mutable")
16972 
16973hunk ./src/allmydata/test/test_deepcheck.py 9
16974 from twisted.internet import threads # CLI tests use deferToThread
16975 from allmydata.immutable import upload
16976 from allmydata.mutable.common import UnrecoverableFileError
16977+from allmydata.mutable.publish import MutableData
16978 from allmydata.util import idlib
16979 from allmydata.util import base32
16980 from allmydata.scripts import runner
16981hunk ./src/allmydata/test/test_deepcheck.py 38
16982         self.basedir = "deepcheck/MutableChecker/good"
16983         self.set_up_grid()
16984         CONTENTS = "a little bit of data"
16985-        d = self.g.clients[0].create_mutable_file(CONTENTS)
16986+        CONTENTS_uploadable = MutableData(CONTENTS)
16987+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
16988         def _created(node):
16989             self.node = node
16990             self.fileurl = "uri/" + urllib.quote(node.get_uri())
16991hunk ./src/allmydata/test/test_deepcheck.py 61
16992         self.basedir = "deepcheck/MutableChecker/corrupt"
16993         self.set_up_grid()
16994         CONTENTS = "a little bit of data"
16995-        d = self.g.clients[0].create_mutable_file(CONTENTS)
16996+        CONTENTS_uploadable = MutableData(CONTENTS)
16997+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
16998         def _stash_and_corrupt(node):
16999             self.node = node
17000             self.fileurl = "uri/" + urllib.quote(node.get_uri())
17001hunk ./src/allmydata/test/test_deepcheck.py 99
17002         self.basedir = "deepcheck/MutableChecker/delete_share"
17003         self.set_up_grid()
17004         CONTENTS = "a little bit of data"
17005-        d = self.g.clients[0].create_mutable_file(CONTENTS)
17006+        CONTENTS_uploadable = MutableData(CONTENTS)
17007+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
17008         def _stash_and_delete(node):
17009             self.node = node
17010             self.fileurl = "uri/" + urllib.quote(node.get_uri())
17011hunk ./src/allmydata/test/test_deepcheck.py 223
17012             self.root = n
17013             self.root_uri = n.get_uri()
17014         d.addCallback(_created_root)
17015-        d.addCallback(lambda ign: c0.create_mutable_file("mutable file contents"))
17016+        d.addCallback(lambda ign:
17017+            c0.create_mutable_file(MutableData("mutable file contents")))
17018         d.addCallback(lambda n: self.root.set_node(u"mutable", n))
17019         def _created_mutable(n):
17020             self.mutable = n
17021hunk ./src/allmydata/test/test_deepcheck.py 965
17022     def create_mangled(self, ignored, name):
17023         nodetype, mangletype = name.split("-", 1)
17024         if nodetype == "mutable":
17025-            d = self.g.clients[0].create_mutable_file("mutable file contents")
17026+            mutable_uploadable = MutableData("mutable file contents")
17027+            d = self.g.clients[0].create_mutable_file(mutable_uploadable)
17028             d.addCallback(lambda n: self.root.set_node(unicode(name), n))
17029         elif nodetype == "large":
17030             large = upload.Data("Lots of data\n" * 1000 + name + "\n", None)
17031hunk ./src/allmydata/test/test_hung_server.py 10
17032 from allmydata.util.consumer import download_to_data
17033 from allmydata.immutable import upload
17034 from allmydata.mutable.common import UnrecoverableFileError
17035+from allmydata.mutable.publish import MutableData
17036 from allmydata.storage.common import storage_index_to_dir
17037 from allmydata.test.no_network import GridTestMixin
17038 from allmydata.test.common import ShouldFailMixin
17039hunk ./src/allmydata/test/test_hung_server.py 110
17040         self.servers = self.servers[5:] + self.servers[:5]
17041 
17042         if mutable:
17043-            d = nm.create_mutable_file(mutable_plaintext)
17044+            uploadable = MutableData(mutable_plaintext)
17045+            d = nm.create_mutable_file(uploadable)
17046             def _uploaded_mutable(node):
17047                 self.uri = node.get_uri()
17048                 self.shares = self.find_uri_shares(self.uri)
17049hunk ./src/allmydata/test/test_system.py 26
17050 from allmydata.monitor import Monitor
17051 from allmydata.mutable.common import NotWriteableError
17052 from allmydata.mutable import layout as mutable_layout
17053+from allmydata.mutable.publish import MutableData
17054 from foolscap.api import DeadReferenceError
17055 from twisted.python.failure import Failure
17056 from twisted.web.client import getPage
17057hunk ./src/allmydata/test/test_system.py 467
17058     def test_mutable(self):
17059         self.basedir = "system/SystemTest/test_mutable"
17060         DATA = "initial contents go here."  # 25 bytes % 3 != 0
17061+        DATA_uploadable = MutableData(DATA)
17062         NEWDATA = "new contents yay"
17063hunk ./src/allmydata/test/test_system.py 469
17064+        NEWDATA_uploadable = MutableData(NEWDATA)
17065         NEWERDATA = "this is getting old"
17066hunk ./src/allmydata/test/test_system.py 471
17067+        NEWERDATA_uploadable = MutableData(NEWERDATA)
17068 
17069         d = self.set_up_nodes(use_key_generator=True)
17070 
17071hunk ./src/allmydata/test/test_system.py 478
17072         def _create_mutable(res):
17073             c = self.clients[0]
17074             log.msg("starting create_mutable_file")
17075-            d1 = c.create_mutable_file(DATA)
17076+            d1 = c.create_mutable_file(DATA_uploadable)
17077             def _done(res):
17078                 log.msg("DONE: %s" % (res,))
17079                 self._mutable_node_1 = res
17080hunk ./src/allmydata/test/test_system.py 565
17081             self.failUnlessEqual(res, DATA)
17082             # replace the data
17083             log.msg("starting replace1")
17084-            d1 = newnode.overwrite(NEWDATA)
17085+            d1 = newnode.overwrite(NEWDATA_uploadable)
17086             d1.addCallback(lambda res: newnode.download_best_version())
17087             return d1
17088         d.addCallback(_check_download_3)
17089hunk ./src/allmydata/test/test_system.py 579
17090             newnode2 = self.clients[3].create_node_from_uri(uri)
17091             self._newnode3 = self.clients[3].create_node_from_uri(uri)
17092             log.msg("starting replace2")
17093-            d1 = newnode1.overwrite(NEWERDATA)
17094+            d1 = newnode1.overwrite(NEWERDATA_uploadable)
17095             d1.addCallback(lambda res: newnode2.download_best_version())
17096             return d1
17097         d.addCallback(_check_download_4)
17098hunk ./src/allmydata/test/test_system.py 649
17099         def _check_empty_file(res):
17100             # make sure we can create empty files, this usually screws up the
17101             # segsize math
17102-            d1 = self.clients[2].create_mutable_file("")
17103+            d1 = self.clients[2].create_mutable_file(MutableData(""))
17104             d1.addCallback(lambda newnode: newnode.download_best_version())
17105             d1.addCallback(lambda res: self.failUnlessEqual("", res))
17106             return d1
17107hunk ./src/allmydata/test/test_system.py 680
17108                                  self.key_generator_svc.key_generator.pool_size + size_delta)
17109 
17110         d.addCallback(check_kg_poolsize, 0)
17111-        d.addCallback(lambda junk: self.clients[3].create_mutable_file('hello, world'))
17112+        d.addCallback(lambda junk:
17113+            self.clients[3].create_mutable_file(MutableData('hello, world')))
17114         d.addCallback(check_kg_poolsize, -1)
17115         d.addCallback(lambda junk: self.clients[3].create_dirnode())
17116         d.addCallback(check_kg_poolsize, -2)
17117}
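Editorial note: the MDMF URI classes in the earlier uri.py hunks declare `params=[]` as a default argument. That shared-list default is harmless as written, because the classes only ever rebind `self.extension` (in `set_extension_params`) rather than mutating the list in place. Should in-place mutation ever be added, the usual `None`-sentinel idiom avoids cross-instance leakage. A minimal standalone sketch; `CapWithExtensions` and `add_param` are hypothetical names, not part of the patch:

```python
class CapWithExtensions(object):
    def __init__(self, params=None):
        # Copy under a None sentinel, so no two instances (and no caller)
        # ever share the same underlying list.
        self.extension = list(params) if params is not None else []

    def add_param(self, p):
        # In-place mutation is now safe: self.extension is always private.
        self.extension.append(str(p))

a = CapWithExtensions()
b = CapWithExtensions()
a.add_param(131073)
print(a.extension)  # ['131073']
print(b.extension)  # [] -- b is unaffected by mutations of a
```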
17118[immutable/filenode: fix pyflakes warnings
17119Kevan Carstensen <kevan@isnotajoke.com>**20110807004514
17120 Ignore-this: e8d875bf8b1c5571e31b0eff42ecf64c
17121] hunk ./src/allmydata/immutable/filenode.py 11
17122 
17123 from allmydata import uri
17124 from twisted.internet.interfaces import IConsumer
17125-from twisted.protocols import basic
17126-from foolscap.api import eventually
17127-from allmydata.interfaces import IImmutableFileNode, ICheckable, \
17128-     IDownloadTarget, IUploadResults
17129-from allmydata.util import dictutil, log, base32, consumer
17130-from allmydata.immutable.checker import Checker
17131+from allmydata.interfaces import IImmutableFileNode, IUploadResults
17132+from allmydata.util import consumer
17133 from allmydata.check_results import CheckResults, CheckAndRepairResults
17134 from allmydata.util.dictutil import DictOfSets
17135 from pycryptopp.cipher.aes import AES
17136
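Editorial note: the final patch above prunes imports that pyflakes flagged as unused in immutable/filenode.py (`basic`, `eventually`, `Checker`, and others). The core of such a check can be sketched with nothing but the stdlib `ast` module; this is a simplified illustration, not pyflakes itself, and `unused_imports` is a hypothetical helper:

```python
import ast

def unused_imports(source):
    """Report top-level imported names never referenced in the module."""
    tree = ast.parse(source)
    imported, used = [], set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # 'import a.b' binds the name 'a'
                imported.append(alias.asname or alias.name.split('.')[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.append(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return [name for name in imported if name not in used]

src = ("from twisted.protocols import basic\n"
       "import re\n"
       "print(re.escape('x'))\n")
print(unused_imports(src))  # ['basic']
```

Real pyflakes additionally handles scoping, `__all__`, string annotations, and re-exports, which is why the patch relies on it rather than a one-off script.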
17137Context:
17138
17139[test_runner.py: fix a bug in CreateNode.do_create introduced in changeset [5114] when the tahoe.cfg file has been written with CRLF line endings. refs #1385
17140david-sarah@jacaranda.org**20110804003032
17141 Ignore-this: 7b7afdcf99da6671afac2d42828883eb
17142] 
17143[test_client.py: repair Basic.test_error_on_old_config_files. refs #1385
17144david-sarah@jacaranda.org**20110803235036
17145 Ignore-this: 31e2a9c3febe55948de7e144353663e
17146] 
17147[test_checker.py: increase timeout for TooParallel.test_immutable again. The ARM buildslave took 38 seconds, so 40 seconds is too close to the edge; make it 80.
17148david-sarah@jacaranda.org**20110803214042
17149 Ignore-this: 2d8026a6b25534e01738f78d6c7495cb
17150] 
17151[test_runner.py: fix RunNode.test_introducer to not rely on the mtime of introducer.furl to detect when the node has restarted. Instead we detect when node.url has been written. refs #1385
17152david-sarah@jacaranda.org**20110803180917
17153 Ignore-this: 11ddc43b107beca42cb78af88c5c394c
17154] 
17155[Further improve error message about old config files. refs #1385
17156david-sarah@jacaranda.org**20110803174546
17157 Ignore-this: 9d6cc3c288d9863dce58faafb3855917
17158] 
17159[Slightly improve error message about old config files (avoid unnecessary Unicode escaping). refs #1385
17160david-sarah@jacaranda.org**20110803163848
17161 Ignore-this: a3e3930fba7ccf90b8db3d2ed5829df4
17162] 
17163[test_checker.py: increase timeout for TooParallel.test_immutable (was consistently failing on ARM buildslave).
17164david-sarah@jacaranda.org**20110803163213
17165 Ignore-this: d0efceaf12628e8791862b80c85b5d56
17166] 
17167[Fix the bug that prevents an introducer from starting when introducer.furl already exists. Also remove some dead code that used to read old config files, and rename 'warn_about_old_config_files' to reflect that it's not a warning. refs #1385
17168david-sarah@jacaranda.org**20110803013212
17169 Ignore-this: 2d6cd14bd06a7493b26f2027aff78f4d
17170] 
17171[test_runner.py: modify RunNode.test_introducer to test that starting an introducer works when the introducer.furl file already exists. refs #1385
17172david-sarah@jacaranda.org**20110803012704
17173 Ignore-this: 8cf7f27ac4bfbb5ad8ca4a974106d437
17174] 
17175[verifier: correct a bug introduced in changeset [5106] that caused us to only verify the first block of a file. refs #1395
17176david-sarah@jacaranda.org**20110802172437
17177 Ignore-this: 87fb77854a839ff217dce73544775b11
17178] 
17179[test_repairer: add a deterministic test of share data corruption that always flips the bits of the last byte of the share data. refs #1395
17180david-sarah@jacaranda.org**20110802175841
17181 Ignore-this: 72f54603785007e88220c8d979e08be7
17182] 
17183[verifier: serialize the fetching of blocks within a share so that we don't use too much RAM
17184zooko@zooko.com**20110802063703
17185 Ignore-this: debd9bac07dcbb6803f835a9e2eabaa1
17186 
17187 Shares are still verified in parallel, but within a share, don't request a
17188 block until the previous block has been verified and the memory we used to hold
17189 it has been freed up.
17190 
17191 Patch originally due to Brian. This version has a mockery-patchery-style test
17192 which is "low tech" (it implements the patching inline in the test code instead
17193 of using an extension of the mock.patch() function from the mock library) and
17194 which unpatches in case of exception.
17195 
17196 fixes #1395
17197] 
17198[add docs about timing-channel attacks
17199Brian Warner <warner@lothar.com>**20110802044541
17200 Ignore-this: 73114d5f5ed9ce252597b707dba3a194
17201] 
17202['test-coverage' now needs PYTHONPATH=. to find TOP/twisted/plugins/
17203Brian Warner <warner@lothar.com>**20110802041952
17204 Ignore-this: d40f1f4cb426ea1c362fc961baedde2
17205] 
17206[remove nodeid from WriteBucketProxy classes and customers
17207warner@lothar.com**20110801224317
17208 Ignore-this: e55334bb0095de11711eeb3af827e8e8
17209 refs #1363
17210] 
17211[remove get_serverid() from ReadBucketProxy and customers, including Checker
17212warner@lothar.com**20110801224307
17213 Ignore-this: 837aba457bc853e4fd413ab1a94519cb
17214 and debug.py dump-share commands
17215 refs #1363
17216] 
17217[reject old-style (pre-Tahoe-LAFS-v1.3) configuration files
17218zooko@zooko.com**20110801232423
17219 Ignore-this: b58218fcc064cc75ad8f05ed0c38902b
17220 Check for the existence of any of them and if any are found raise exception which will abort the startup of the node.
 This is a backwards-incompatible change for anyone who is still using old-style configuration files.
 fixes #1385
] 
[whitespace-cleanup
zooko@zooko.com**20110725015546
 Ignore-this: 442970d0545183b97adc7bd66657876c
] 
[tests: use fileutil.write() instead of open() to ensure timely close even without CPython-style reference counting
zooko@zooko.com**20110331145427
 Ignore-this: 75aae4ab8e5fa0ad698f998aaa1888ce
 Some of these already had an explicit close() but I went ahead and replaced them with fileutil.write() as well for the sake of uniformity.
] 
[Address Kevan's comment in #776 about Options classes missed when adding 'self.command_name'. refs #776, #1359
david-sarah@jacaranda.org**20110801221317
 Ignore-this: 8881d42cf7e6a1d15468291b0cb8fab9
] 
[docs/frontends/webapi.rst: change some more instances of 'delete' or 'remove' to 'unlink', change some section titles, and use two blank lines between all sections. refs #776, #1104
david-sarah@jacaranda.org**20110801220919
 Ignore-this: 572327591137bb05c24c44812d4b163f
] 
[cleanup: implement rm as a synonym for unlink rather than vice-versa. refs #776
david-sarah@jacaranda.org**20110801220108
 Ignore-this: 598dcbed870f4f6bb9df62de9111b343
] 
[docs/webapi.rst: address Kevan's comments about use of 'delete' on ref #1104
david-sarah@jacaranda.org**20110801205356
 Ignore-this: 4fbf03864934753c951ddeff64392491
] 
[docs: some changes of 'delete' or 'rm' to 'unlink'. refs #1104
david-sarah@jacaranda.org**20110713002722
 Ignore-this: 304d2a330d5e6e77d5f1feed7814b21c
] 
[WUI: change the label of the button to unlink a file from 'del' to 'unlink'. Also change some internal names to 'unlink', and allow 't=unlink' as a synonym for 't=delete' in the web-API interface. Incidentally, improve a test to check for the rename button as well as the unlink button. fixes #1104
david-sarah@jacaranda.org**20110713001218
 Ignore-this: 3eef6b3f81b94a9c0020a38eb20aa069
] 
[src/allmydata/web/filenode.py: delete a stale comment that was made incorrect by changeset [3133].
david-sarah@jacaranda.org**20110801203009
 Ignore-this: b3912e95a874647027efdc97822dd10e
] 
[fix typo introduced during rebasing of 'remove get_serverid from
Brian Warner <warner@lothar.com>**20110801200341
 Ignore-this: 4235b0f585c0533892193941dbbd89a8
 DownloadStatus.add_dyhb_request and customers' patch, to fix test failure.
] 
[remove get_serverid from DownloadStatus.add_dyhb_request and customers
zooko@zooko.com**20110801185401
 Ignore-this: db188c18566d2d0ab39a80c9dc8f6be6
 This patch is a rebase of a patch originally written by Brian. I didn't change any of the intent of Brian's patch, just ported it to current trunk.
 refs #1363
] 
[remove get_serverid from DownloadStatus.add_block_request and customers
zooko@zooko.com**20110801185344
 Ignore-this: 8bfa8201d6147f69b0fbe31beea9c1e
 This is a rebase of a patch Brian originally wrote. I haven't changed the intent of that patch, just ported it to trunk.
 refs #1363
] 
[apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts
warner@lothar.com**20110801174452
 Ignore-this: 2aa13ea6cbed4e9084bd604bf8633692
 refs #1363
] 
[test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s
warner@lothar.com**20110801174444
 Ignore-this: 54f30b5d7461d2b3514e2a0172f3a98c
 remove now-unused ShareManglingMixin
 refs #1363
] 
[DownloadStatus.add_known_share wants to be used by Finder, web.status
warner@lothar.com**20110801174436
 Ignore-this: 1433bcd73099a579abe449f697f35f9
 refs #1363
] 
[replace IServer.name() with get_name(), and get_longname()
warner@lothar.com**20110801174428
 Ignore-this: e5a6f7f6687fd7732ddf41cfdd7c491b
 
 This patch was originally written by Brian, but was re-recorded by Zooko to use
 darcs replace instead of hunks for any file in which it would result in fewer
 total hunks.
 refs #1363
] 
[upload.py: apply David-Sarah's advice: rename (un)contacted(2) trackers to first_pass/second_pass/next_pass
zooko@zooko.com**20110801174143
 Ignore-this: e36e1420bba0620a0107bd90032a5198
 This patch was written by Brian but was re-recorded by Zooko (with David-Sarah looking on) to use darcs replace instead of editing to rename the three variables to their new names.
 refs #1363
] 
[Coalesce multiple Share.loop() calls, make downloads faster. Closes #1268.
Brian Warner <warner@lothar.com>**20110801151834
 Ignore-this: 48530fce36c01c0ff708f61c2de7e67a
] 
[src/allmydata/_auto_deps.py: 'i686' is another way of spelling x86.
david-sarah@jacaranda.org**20110801034035
 Ignore-this: 6971e0621db2fba794d86395b4d51038
] 
[tahoe_rm.py: better error message when there is no path. refs #1292
david-sarah@jacaranda.org**20110122064212
 Ignore-this: ff3bb2c9f376250e5fd77eb009e09018
] 
[test_cli.py: Test for error message when 'tahoe rm' is invoked without a path. refs #1292
david-sarah@jacaranda.org**20110104105108
 Ignore-this: 29ec2f2e0251e446db96db002ad5dd7d
] 
[src/allmydata/__init__.py: suppress a spurious warning from 'bin/tahoe --version[-and-path]' about twisted-web and twisted-core packages.
david-sarah@jacaranda.org**20110801005209
 Ignore-this: 50e7cd53cca57b1870d9df0361c7c709
] 
[test_cli.py: use to_str on fields loaded using simplejson.loads in new tests. refs #1304
david-sarah@jacaranda.org**20110730032521
 Ignore-this: d1d6dfaefd1b4e733181bf127c79c00b
] 
[cli: make 'tahoe cp' overwrite mutable files in-place
Kevan Carstensen <kevan@isnotajoke.com>**20110729202039
 Ignore-this: b2ad21a19439722f05c49bfd35b01855
] 
[SFTP: write an error message to standard error for unrecognized shell commands. Change the existing message for shell sessions to be written to standard error, and refactor some duplicated code. Also change the lines of the error messages to end in CRLF, and take into account Kevan's review comments. fixes #1442, #1446
david-sarah@jacaranda.org**20110729233102
 Ignore-this: d2f2bb4664f25007d1602bf7333e2cdd
] 
[src/allmydata/scripts/cli.py: fix pyflakes warning.
david-sarah@jacaranda.org**20110728021402
 Ignore-this: 94050140ddb99865295973f49927c509
] 
[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
david-sarah@jacaranda.org**20110724225440
 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
] 
[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
david-sarah@jacaranda.org**20110629185356
 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
] 
[docs/man/tahoe.1: add man page. fixes #1420
david-sarah@jacaranda.org**20110724171728
 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
] 
[Update the dependency on zope.interface to fix an incompatibility between Nevow and zope.interface 3.6.4. fixes #1435
david-sarah@jacaranda.org**20110721234941
 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
] 
[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
david-sarah@jacaranda.org**20110722000320
 Ignore-this: 55cd558b791526113db3f83c00ec328a
] 
[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
david-sarah@jacaranda.org**20110721233658
 Ignore-this: 81b41745477163c9b39c0b59db91cc62
] 
[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
david-sarah@jacaranda.org**20110722035402
 Ignore-this: 5d03f544c4154f088e26c7107494bf39
] 
[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
david-sarah@jacaranda.org**20110722024907
 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
] 
[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
david-sarah@jacaranda.org**20110718005949
 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
] 
[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
david-sarah@jacaranda.org**20110717194315
 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
] 
[README.txt: say that quickstart.rst is in the docs directory.
david-sarah@jacaranda.org**20110717192400
 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
] 
[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
zooko@zooko.com**20110717114226
 Ignore-this: df222120d41447ce4102616921626c82
 fixes #1383
] 
[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
david-sarah@jacaranda.org**20110716181813
 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
] 
[docs: add missing link in NEWS.rst
zooko@zooko.com**20110712153307
 Ignore-this: be7b7eb81c03700b739daa1027d72b35
] 
[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
zooko@zooko.com**20110712153229
 Ignore-this: 723c4f9e2211027c79d711715d972c5
 Also remove a couple of vestigial references to figleaf, which is long gone.
 fixes #1409 (remove contrib/fuse)
] 
[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
 
 provide status overlap info on the webapi t=json output, add decode/decrypt
 rate tooltips, add zoomin/zoomout buttons
] 
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
] 
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
 
 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
] 
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
] 
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
] 
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
 Ignore-this: abb864427a1b91bd10d5132b4589fd90
] 
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
 Ignore-this: c63e23146c39195de52fb17c7c49b2da
] 
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
] 
[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently neither of the two authors (stercor, terrell), nor any of the three reviewers (warner, davidsarah, terrell), nor the committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
] 
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
] 
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
] 
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
] 
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
] 
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
] 
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
] 
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
] 
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
] 
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
] 
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
] 
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
] 
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
] 
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
] 
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
] 
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
] 
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
] 
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
] 
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
] 
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
] 
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
] 
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
] 
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
] 
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
] 
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
] 
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
] 
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
] 
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might be spreading out the total number of shares requested at
 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
 segment, so the effect doesn't last very long).
] 
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
] 
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
] 
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
] 
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
] 
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
] 
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
] 
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
] 
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
] 
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
] 
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
] 
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
] 
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
] 
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
] 
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
] 
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
] 
[test_client.py, upload.py: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
] 
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
] 
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
] 
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
] 
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
] 
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
] 
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
] 
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101] 
Patch bundle hash:
65272e922c9ca76128254f0f1d71ebe623cc5d89