Ticket #999: work-in-progress-2011-07-15_19_15.darcs.patch

File work-in-progress-2011-07-15_19_15.darcs.patch, 249.5 KB (added by zooko, at 2011-07-15T19:16:16Z)
26 patches for repository /home/zooko/playground/tahoe-lafs/pristine:

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
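The pattern these tests use — patching the builtin open so the code under test never touches a real filesystem — can be sketched as follows. This is an illustrative Python 3 sketch (the patch itself targets Python 2, where the patch target is '__builtin__.open' rather than 'builtins.open'); read_state is a stand-in for the code under test, not an actual Tahoe-LAFS function.

```python
import unittest
from io import StringIO
from unittest import mock

def read_state(path):
    # Stand-in for the code under test: this would hit the real
    # filesystem if open() were not mocked.
    f = open(path)
    try:
        return f.read()
    finally:
        f.close()

class TestWithMockedOpen(unittest.TestCase):
    # The patch under review uses the external 'mock' package and the
    # Python 2 target '__builtin__.open'; 'builtins.open' is the
    # Python 3 equivalent.
    @mock.patch('builtins.open')
    def test_read_state(self, mockopen):
        def call_open(fname, mode='r'):
            if fname == 'testdir/bucket_counter.state':
                return StringIO('contents')
            raise IOError(2, "No such file or directory: %r" % fname)
        mockopen.side_effect = call_open

        self.assertEqual(read_state('testdir/bucket_counter.state'),
                         'contents')
```

Every path the code opens must be anticipated by the side_effect function, which is why the tests below enumerate bucket_counter.state, lease_checker.state, and lease_checker.history explicitly.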

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
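The null backend mentioned here appears later in this bundle as NullBackend and NullBucketWriter in storage/server.py and storage/immutable.py. Stripped of the Twisted/Foolscap machinery (Referenceable, implements), the idea is roughly this simplified sketch, not the patch's exact classes:

```python
class NullBucketWriter(object):
    """Accepts remote_write calls and discards the data."""
    def remote_write(self, offset, data):
        return  # deliberately a no-op: nothing is ever stored

class NullBackend(object):
    """A backend that stores nothing and reports no space limit, so
    'unlimited space' code paths can be exercised without disk IO."""
    def get_available_space(self):
        return None  # None means "no known limit", as on a platform
                     # with no disk-stats API

    def get_bucket_shares(self, storage_index):
        return set()  # never holds any shares

    def get_share(self, storage_index, sharenum):
        return None

    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        return NullBucketWriter()
```

Because get_available_space() returns None, the server takes the same branch it would on a platform without statvfs/GetDiskFreeSpaceEx, which is exactly the "unlimited space" behavior the checkpoint note wants to test.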

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests, work done in pair-programming with Zancas

Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
  * work in progress intended to be unrecorded and never committed to trunk
  switch from os.path.join to filepath
  incomplete refactoring of common "stay in your subtree" tester code into a superclass


Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
  In this patch (very incomplete) we started two major changes: the first was to refactor the mocking of the filesystem into a common base class that provides a mock filesystem for all the DAS tests. The second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before hitting a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
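The second change — replacing string-based path manipulation with path objects — follows this pattern. The sketch below uses the stdlib pathlib purely to illustrate the same join-by-child style without requiring Twisted; the patch itself uses twisted.python.filepath.FilePath, whose child() method plays the role of the / operator here.

```python
import os.path
from pathlib import PurePosixPath

storedir = 'testdir'

# Before: paths built by string concatenation.
old_style = os.path.join(storedir, 'shares', 'incoming')

# After (illustrated with pathlib; twisted.python.filepath spells this
# FilePath(storedir).child('shares').child('incoming')): paths are
# objects, so components and parents can be inspected without
# re-parsing strings.
new_style = PurePosixPath(storedir) / 'shares' / 'incoming'

components = new_style.parts   # ('testdir', 'shares', 'incoming')
parent = new_style.parent      # the 'testdir/shares' path object
```

Path objects also make the "stay in your subtree" checks mentioned above easier to factor into a superclass, since containment can be tested structurally instead of by string prefix.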

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """Write a share and check the bytes that end up in the (mocked) share file."""
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0')
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
        seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checker()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checker(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
609         # in this implementation, the lease information (including secrets)
610hunk ./src/allmydata/storage/server.py 316
611 
612         max_space_per_bucket = allocated_size
613 
614-        remaining_space = self.get_available_space()
615+        remaining_space = self.backend.get_available_space()
616         limited = remaining_space is not None
617         if limited:
618             # this is a bit conservative, since some of this allocated_size()
619hunk ./src/allmydata/storage/server.py 329
620         # they asked about: this will save them a lot of work. Add or update
621         # leases for all of them: if they want us to hold shares for this
622         # file, they'll want us to hold leases for this file.
623-        for (shnum, fn) in self._get_bucket_shares(storage_index):
624+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
625             alreadygot.add(shnum)
626             sf = ShareFile(fn)
627             sf.add_or_renew_lease(lease_info)
628hunk ./src/allmydata/storage/server.py 335
629 
630         for shnum in sharenums:
631-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
632-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
633-            if os.path.exists(finalhome):
634+            share = self.backend.get_share(storage_index, shnum)
635+
636+            if not share:
637+                if (not limited) or (remaining_space >= max_space_per_bucket):
638+                    # ok! we need to create the new share file.
639+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
640+                                      max_space_per_bucket, lease_info, canary)
641+                    bucketwriters[shnum] = bw
642+                    self._active_writers[bw] = 1
643+                    if limited:
644+                        remaining_space -= max_space_per_bucket
645+                else:
646+                    # bummer! not enough space to accept this bucket
647+                    pass
648+
649+            elif share.is_complete():
650                 # great! we already have it. easy.
651                 pass
652hunk ./src/allmydata/storage/server.py 353
653-            elif os.path.exists(incominghome):
654+            elif not share.is_complete():
655                 # Note that we don't create BucketWriters for shnums that
656                 # have a partial share (in incoming/), so if a second upload
657                 # occurs while the first is still in progress, the second
658hunk ./src/allmydata/storage/server.py 359
659                 # uploader will use different storage servers.
660                 pass
661-            elif (not limited) or (remaining_space >= max_space_per_bucket):
662-                # ok! we need to create the new share file.
663-                bw = BucketWriter(self, incominghome, finalhome,
664-                                  max_space_per_bucket, lease_info, canary)
665-                if self.no_storage:
666-                    bw.throw_out_all_data = True
667-                bucketwriters[shnum] = bw
668-                self._active_writers[bw] = 1
669-                if limited:
670-                    remaining_space -= max_space_per_bucket
671-            else:
672-                # bummer! not enough space to accept this bucket
673-                pass
674-
675-        if bucketwriters:
676-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
677 
678         self.add_latency("allocate", time.time() - start)
679         return alreadygot, bucketwriters
680hunk ./src/allmydata/storage/server.py 437
681             self.stats_provider.count('storage_server.bytes_added', consumed_size)
682         del self._active_writers[bw]
683 
684-    def _get_bucket_shares(self, storage_index):
685-        """Return a list of (shnum, pathname) tuples for files that hold
686-        shares for this storage_index. In each tuple, 'shnum' will always be
687-        the integer form of the last component of 'pathname'."""
688-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
689-        try:
690-            for f in os.listdir(storagedir):
691-                if NUM_RE.match(f):
692-                    filename = os.path.join(storagedir, f)
693-                    yield (int(f), filename)
694-        except OSError:
695-            # Commonly caused by there being no buckets at all.
696-            pass
697 
698     def remote_get_buckets(self, storage_index):
699         start = time.time()
700hunk ./src/allmydata/storage/server.py 444
701         si_s = si_b2a(storage_index)
702         log.msg("storage: get_buckets %s" % si_s)
703         bucketreaders = {} # k: sharenum, v: BucketReader
704-        for shnum, filename in self._get_bucket_shares(storage_index):
705+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
706             bucketreaders[shnum] = BucketReader(self, filename,
707                                                 storage_index, shnum)
708         self.add_latency("get", time.time() - start)
709hunk ./src/allmydata/test/test_backends.py 10
710 import mock
711 
712 # This is the code that we're going to be testing.
713-from allmydata.storage.server import StorageServer
714+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
715 
716 # The following share file contents were generated with
717 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
718hunk ./src/allmydata/test/test_backends.py 21
719 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
720 
721 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
722+    @mock.patch('time.time')
723+    @mock.patch('os.mkdir')
724+    @mock.patch('__builtin__.open')
725+    @mock.patch('os.listdir')
726+    @mock.patch('os.path.isdir')
727+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
728+        """ This tests whether a server instance can be constructed
729+        with a null backend. The server instance fails the test if it
730+        tries to read or write to the file system. """
731+
732+        # Now begin the test.
733+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
734+
735+        self.failIf(mockisdir.called)
736+        self.failIf(mocklistdir.called)
737+        self.failIf(mockopen.called)
738+        self.failIf(mockmkdir.called)
739+
740+        # You passed!
741+
742+    @mock.patch('time.time')
743+    @mock.patch('os.mkdir')
744     @mock.patch('__builtin__.open')
745hunk ./src/allmydata/test/test_backends.py 44
746-    def test_create_server(self, mockopen):
747-        """ This tests whether a server instance can be constructed. """
748+    @mock.patch('os.listdir')
749+    @mock.patch('os.path.isdir')
750+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
751+        """ This tests whether a server instance can be constructed
752+        with a filesystem backend. To pass the test, it has to use the
753+        filesystem in only the prescribed ways. """
754 
755         def call_open(fname, mode):
756             if fname == 'testdir/bucket_counter.state':
757hunk ./src/allmydata/test/test_backends.py 58
758                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
759             elif fname == 'testdir/lease_checker.history':
760                 return StringIO()
761+            else:
762+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
763         mockopen.side_effect = call_open
764 
765         # Now begin the test.
766hunk ./src/allmydata/test/test_backends.py 63
767-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
768+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
769+
770+        self.failIf(mockisdir.called)
771+        self.failIf(mocklistdir.called)
772+        self.failIf(mockopen.called)
773+        self.failIf(mockmkdir.called)
774+        self.failIf(mocktime.called)
775 
776         # You passed!
777 
778hunk ./src/allmydata/test/test_backends.py 73
779-class TestServer(unittest.TestCase, ReallyEqualMixin):
780+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
781+    def setUp(self):
782+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
783+
784+    @mock.patch('os.mkdir')
785+    @mock.patch('__builtin__.open')
786+    @mock.patch('os.listdir')
787+    @mock.patch('os.path.isdir')
788+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
789+        """ Write a new share. """
790+
791+        # Now begin the test.
792+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
793+        bs[0].remote_write(0, 'a')
794+        self.failIf(mockisdir.called)
795+        self.failIf(mocklistdir.called)
796+        self.failIf(mockopen.called)
797+        self.failIf(mockmkdir.called)
798+
799+    @mock.patch('os.path.exists')
800+    @mock.patch('os.path.getsize')
801+    @mock.patch('__builtin__.open')
802+    @mock.patch('os.listdir')
803+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
804+        """ With a null backend there are no shares, so this tests
805+        that remote_get_buckets returns no buckets and does not
806+        touch the file system. There is a similar test in
807+        test_download, but that one is from the perspective of the
808+        client and exercises a deeper stack of code. This one
809+        exercises just the StorageServer object. """
810+
811+        # Now begin the test.
812+        bs = self.s.remote_get_buckets('teststorage_index')
813+
814+        self.failUnlessEqual(len(bs), 0)
815+        self.failIf(mocklistdir.called)
816+        self.failIf(mockopen.called)
817+        self.failIf(mockgetsize.called)
818+        self.failIf(mockexists.called)
819+
820+
821+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
822     @mock.patch('__builtin__.open')
823     def setUp(self, mockopen):
824         def call_open(fname, mode):
825hunk ./src/allmydata/test/test_backends.py 126
826                 return StringIO()
827         mockopen.side_effect = call_open
828 
829-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
830-
831+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
832 
833     @mock.patch('time.time')
834     @mock.patch('os.mkdir')
835hunk ./src/allmydata/test/test_backends.py 134
836     @mock.patch('os.listdir')
837     @mock.patch('os.path.isdir')
838     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
839-        """Handle a report of corruption."""
840+        """ Write a new share. """
841 
842         def call_listdir(dirname):
843             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
844hunk ./src/allmydata/test/test_backends.py 173
845         mockopen.side_effect = call_open
846         # Now begin the test.
847         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
848-        print bs
849         bs[0].remote_write(0, 'a')
850         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
851 
852hunk ./src/allmydata/test/test_backends.py 176
853-
854     @mock.patch('os.path.exists')
855     @mock.patch('os.path.getsize')
856     @mock.patch('__builtin__.open')
857hunk ./src/allmydata/test/test_backends.py 218
858 
859         self.failUnlessEqual(len(bs), 1)
860         b = bs[0]
861+        # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
862         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
863         # If you try to read past the end you get as much data as is there.
864         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
865hunk ./src/allmydata/test/test_backends.py 224
866         # If you start reading past the end of the file you get the empty string.
867         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
868+
869+
870}
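
The tests above isolate the server from the real filesystem by patching `os` and the builtin `open` with the `mock` library. A minimal Python 3 sketch of the same technique, assuming a hypothetical `in_memory_store` helper standing in for `StorageServer(..., backend=NullBackend())`:

```python
from unittest import mock

def in_memory_store():
    """Hypothetical stand-in for a StorageServer constructed with a
    NullBackend: it keeps shares in a dict and therefore must never
    touch the filesystem."""
    return {}

# Patch the filesystem entry points; any call through them is recorded
# on the mock objects instead of reaching the real OS.
with mock.patch('os.mkdir') as mockmkdir, \
     mock.patch('os.listdir') as mocklistdir, \
     mock.patch('builtins.open') as mockopen:
    store = in_memory_store()
    store[0] = b'a'   # analogous to bs[0].remote_write(0, 'a') above
    written = store[0]

# The code under test passes only if it made no filesystem calls.
assert not mockmkdir.called
assert not mocklistdir.called
assert not mockopen.called
assert written == b'a'
```

The decorator form used in the patch works the same way; note that stacked `@mock.patch` decorators are applied bottom-up, so the innermost (bottom) patch binds to the first mock argument of the test method.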
871[snapshot of progress on backend implementation (not suitable for trunk)
872wilcoxjg@gmail.com**20110626053244
873 Ignore-this: 50c764af791c2b99ada8289546806a0a
874] {
875adddir ./src/allmydata/storage/backends
876adddir ./src/allmydata/storage/backends/das
877move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
878adddir ./src/allmydata/storage/backends/null
879hunk ./src/allmydata/interfaces.py 270
880         store that on disk.
881         """
882 
883+class IStorageBackend(Interface):
884+    """
885+    Objects of this kind live on the server side and are used by the
886+    storage server object.
887+    """
888+    def get_available_space(self, reserved_space):
889+        """ Returns available space for share storage in bytes, or
890+        None if this information is not available or if the available
891+        space is unlimited.
892+
893+        If the backend is configured for read-only mode then this will
894+        return 0.
895+
896+        reserved_space is the number of bytes to subtract from the
897+        answer: pass the amount of space you would like to leave
898+        unused on this filesystem. """
899+
900+    def get_bucket_shares(self, storage_index):
901+        """ Generate (shnum, share) pairs for every share held for
902+        the given storage_index. """
903+
904+    def get_share(self, storage_index, sharenum):
905+        """ Return the share object for the given storage_index and
906+        share number, or None if there is no such share. """
907+
908+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
909+        """ Create and return a BucketWriter that will write share
910+        number shnum for the given storage_index, limited to
911+        max_space_per_bucket bytes. """
908+
909+class IStorageBackendShare(Interface):
910+    """
911+    This object provides access to all of the share data. It is
912+    intended for lazy evaluation, such that in many use cases
913+    substantially less than all of the share data will be accessed.
914+    """
915+    def is_complete(self):
916+        """
917+        Returns the share state, or None if the share does not exist.
918+        """
919+
920 class IStorageBucketWriter(Interface):
921     """
922     Objects of this kind live on the client side.
923hunk ./src/allmydata/interfaces.py 2492
924 
925 class EmptyPathnameComponentError(Exception):
926     """The webapi disallows empty pathname components."""
927+
928+class IShareStore(Interface):
929+    pass
930+
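
The backend split declared above can be sketched as a hypothetical in-memory backend with the same shape as IStorageBackend; the class and helper names here are illustrative only, not the patch's real FSBackend/NullBackend classes:

```python
# Illustrative in-memory backend matching the shape of IStorageBackend.
class InMemoryBackend(object):
    def __init__(self, capacity=None):
        self.capacity = capacity           # None means unlimited
        self.shares = {}                   # (storage_index, shnum) -> bytes

    def get_available_space(self, reserved_space=0):
        if self.capacity is None:
            return None                    # unlimited (or unknown)
        used = sum(len(data) for data in self.shares.values())
        return max(0, self.capacity - used - reserved_space)

    def get_bucket_shares(self, storage_index):
        # yield (shnum, share data) for every share of one storage index
        for (si, shnum), data in sorted(self.shares.items()):
            if si == storage_index:
                yield (shnum, data)

    def get_share(self, storage_index, shnum):
        return self.shares.get((storage_index, shnum))

    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket):
        # Returns a writer callable; the real interface returns a
        # BucketWriter object with a richer API.
        def write(data):
            assert len(data) <= max_space_per_bucket
            self.shares[(storage_index, shnum)] = data
        return write

backend = InMemoryBackend(capacity=100)
backend.make_bucket_writer(b'si1', 0, 50)(b'x' * 10)
```

A capacity of None reproduces the null/unlimited-space case that the NullBackend exists to test.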
931addfile ./src/allmydata/storage/backends/__init__.py
932addfile ./src/allmydata/storage/backends/das/__init__.py
933addfile ./src/allmydata/storage/backends/das/core.py
934hunk ./src/allmydata/storage/backends/das/core.py 1
935+from allmydata.interfaces import IStorageBackend
936+from allmydata.storage.backends.base import Backend
937+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
938+from allmydata.util.assertutil import precondition
939+
940+import os, re, weakref, struct, time
941+
942+from foolscap.api import Referenceable
943+from twisted.application import service
944+
945+from zope.interface import implements
946+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
947+from allmydata.util import fileutil, idlib, log, time_format
948+import allmydata # for __full_version__
949+
950+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
951+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
952+from allmydata.storage.lease import LeaseInfo
953+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
954+     create_mutable_sharefile
955+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
956+from allmydata.storage.crawler import FSBucketCountingCrawler
957+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
958+
959+from zope.interface import implements
960+
961+class DASCore(Backend):
962+    implements(IStorageBackend)
963+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
964+        Backend.__init__(self)
965+
966+        self._setup_storage(storedir, readonly, reserved_space)
967+        self._setup_corruption_advisory()
968+        self._setup_bucket_counter()
969+        self._setup_lease_checkerf(expiration_policy)
970+
971+    def _setup_storage(self, storedir, readonly, reserved_space):
972+        self.storedir = storedir
973+        self.readonly = readonly
974+        self.reserved_space = int(reserved_space)
975+        if self.reserved_space:
976+            if self.get_available_space() is None:
977+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
978+                        umid="0wZ27w", level=log.UNUSUAL)
979+
980+        self.sharedir = os.path.join(self.storedir, "shares")
981+        fileutil.make_dirs(self.sharedir)
982+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
983+        self._clean_incomplete()
984+
985+    def _clean_incomplete(self):
986+        fileutil.rm_dir(self.incomingdir)
987+        fileutil.make_dirs(self.incomingdir)
988+
989+    def _setup_corruption_advisory(self):
990+        # we don't actually create the corruption-advisory dir until necessary
991+        self.corruption_advisory_dir = os.path.join(self.storedir,
992+                                                    "corruption-advisories")
993+
994+    def _setup_bucket_counter(self):
995+        statefname = os.path.join(self.storedir, "bucket_counter.state")
996+        self.bucket_counter = FSBucketCountingCrawler(statefname)
997+        self.bucket_counter.setServiceParent(self)
998+
999+    def _setup_lease_checkerf(self, expiration_policy):
1000+        statefile = os.path.join(self.storedir, "lease_checker.state")
1001+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1002+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1003+        self.lease_checker.setServiceParent(self)
1004+
1005+    def get_available_space(self):
1006+        if self.readonly:
1007+            return 0
1008+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1009+
1010+    def get_shares(self, storage_index):
1011+        """Generate the FSBShare objects that correspond to the passed storage_index."""
1012+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1013+        try:
1014+            for f in os.listdir(finalstoragedir):
1015+                if NUM_RE.match(f):
1016+                    filename = os.path.join(finalstoragedir, f)
1017+                    yield FSBShare(filename, int(f))
1018+        except OSError:
1019+            # Commonly caused by there being no buckets at all.
1020+            pass
1021+       
1022+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1023+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1024+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1025+        return bw
1026+       
1027+
1028+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1029+# and share data. The share data is accessed by RIBucketWriter.write and
1030+# RIBucketReader.read . The lease information is not accessible through these
1031+# interfaces.
1032+
1033+# The share file has the following layout:
1034+#  0x00: share file version number, four bytes, current version is 1
1035+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1036+#  0x08: number of leases, four bytes big-endian
1037+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1038+#  A+0x0c = B: first lease. Lease format is:
1039+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1040+#   B+0x04: renew secret, 32 bytes (SHA256)
1041+#   B+0x24: cancel secret, 32 bytes (SHA256)
1042+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1043+#   B+0x48: next lease, or end of record
1044+
1045+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1046+# but it is still filled in by storage servers in case the storage server
1047+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1048+# share file is moved from one storage server to another. The value stored in
1049+# this field is truncated, so if the actual share data length is >= 2**32,
1050+# then the value stored in this field will be the actual share data length
1051+# modulo 2**32.
1052+
1053+class ImmutableShare:
1054+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1055+    sharetype = "immutable"
1056+
1057+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1058+        """ If max_size is not None then I won't allow more than
1059+        max_size to be written to me. If create=True then max_size
1060+        must not be None. """
1061+        precondition((max_size is not None) or (not create), max_size, create)
1062+        self.shnum = shnum
1063+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1064+        self._max_size = max_size
1065+        if create:
1066+            # touch the file, so later callers will see that we're working on
1067+            # it. Also construct the metadata.
1068+            assert not os.path.exists(self.fname)
1069+            fileutil.make_dirs(os.path.dirname(self.fname))
1070+            f = open(self.fname, 'wb')
1071+            # The second field -- the four-byte share data length -- is no
1072+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1073+            # there in case someone downgrades a storage server from >=
1074+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1075+            # server to another, etc. We do saturation -- a share data length
1076+            # larger than 2**32-1 (what can fit into the field) is marked as
1077+            # the largest length that can fit into the field. That way, even
1078+            # if this does happen, the old < v1.3.0 server will still allow
1079+            # clients to read the first part of the share.
1080+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1081+            f.close()
1082+            self._lease_offset = max_size + 0x0c
1083+            self._num_leases = 0
1084+        else:
1085+            f = open(self.fname, 'rb')
1086+            filesize = os.path.getsize(self.fname)
1087+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1088+            f.close()
1089+            if version != 1:
1090+                msg = "sharefile %s had version %d but we wanted 1" % \
1091+                      (self.fname, version)
1092+                raise UnknownImmutableContainerVersionError(msg)
1093+            self._num_leases = num_leases
1094+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1095+        self._data_offset = 0xc
1096+
1097+    def unlink(self):
1098+        os.unlink(self.fname)
1099+
1100+    def read_share_data(self, offset, length):
1101+        precondition(offset >= 0)
1102+        # Reads beyond the end of the data are truncated. Reads that start
1103+        # beyond the end of the data return an empty string.
1104+        seekpos = self._data_offset+offset
1105+        fsize = os.path.getsize(self.fname)
1106+        actuallength = max(0, min(length, fsize-seekpos))
1107+        if actuallength == 0:
1108+            return ""
1109+        f = open(self.fname, 'rb')
1110+        f.seek(seekpos)
1111+        data = f.read(actuallength)
1112+        f.close()
1113+        return data
1112+
1113+    def write_share_data(self, offset, data):
1114+        length = len(data)
1115+        precondition(offset >= 0, offset)
1116+        if self._max_size is not None and offset+length > self._max_size:
1117+            raise DataTooLargeError(self._max_size, offset, length)
1118+        f = open(self.fname, 'rb+')
1119+        real_offset = self._data_offset+offset
1120+        f.seek(real_offset)
1121+        assert f.tell() == real_offset
1122+        f.write(data)
1123+        f.close()
1124+
1125+    def _write_lease_record(self, f, lease_number, lease_info):
1126+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1127+        f.seek(offset)
1128+        assert f.tell() == offset
1129+        f.write(lease_info.to_immutable_data())
1130+
1131+    def _read_num_leases(self, f):
1132+        f.seek(0x08)
1133+        (num_leases,) = struct.unpack(">L", f.read(4))
1134+        return num_leases
1135+
1136+    def _write_num_leases(self, f, num_leases):
1137+        f.seek(0x08)
1138+        f.write(struct.pack(">L", num_leases))
1139+
1140+    def _truncate_leases(self, f, num_leases):
1141+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1142+
1143+    def get_leases(self):
1144+        """Yield a LeaseInfo instance for each lease."""
1145+        f = open(self.fname, 'rb')
1146+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1147+        f.seek(self._lease_offset)
1148+        for i in range(num_leases):
1149+            data = f.read(self.LEASE_SIZE)
1150+            if data:
1151+                yield LeaseInfo().from_immutable_data(data)
1152+
1153+    def add_lease(self, lease_info):
1154+        f = open(self.fname, 'rb+')
1155+        num_leases = self._read_num_leases(f)
1156+        self._write_lease_record(f, num_leases, lease_info)
1157+        self._write_num_leases(f, num_leases+1)
1158+        f.close()
1159+
1160+    def renew_lease(self, renew_secret, new_expire_time):
1161+        for i,lease in enumerate(self.get_leases()):
1162+            if constant_time_compare(lease.renew_secret, renew_secret):
1163+                # yup. See if we need to update the owner time.
1164+                if new_expire_time > lease.expiration_time:
1165+                    # yes
1166+                    lease.expiration_time = new_expire_time
1167+                    f = open(self.fname, 'rb+')
1168+                    self._write_lease_record(f, i, lease)
1169+                    f.close()
1170+                return
1171+        raise IndexError("unable to renew non-existent lease")
1172+
1173+    def add_or_renew_lease(self, lease_info):
1174+        try:
1175+            self.renew_lease(lease_info.renew_secret,
1176+                             lease_info.expiration_time)
1177+        except IndexError:
1178+            self.add_lease(lease_info)
1179+
1180+
1181+    def cancel_lease(self, cancel_secret):
1182+        """Remove a lease with the given cancel_secret. If the last lease is
1183+        cancelled, the file will be removed. Return the number of bytes that
1184+        were freed (by truncating the list of leases, and possibly by
1185+        deleting the file). Raise IndexError if there was no lease with the
1186+        given cancel_secret.
1187+        """
1188+
1189+        leases = list(self.get_leases())
1190+        num_leases_removed = 0
1191+        for i,lease in enumerate(leases):
1192+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1193+                leases[i] = None
1194+                num_leases_removed += 1
1195+        if not num_leases_removed:
1196+            raise IndexError("unable to find matching lease to cancel")
1197+        if num_leases_removed:
1198+            # pack and write out the remaining leases. We write these out in
1199+            # the same order as they were added, so that if we crash while
1200+            # doing this, we won't lose any non-cancelled leases.
1201+            leases = [l for l in leases if l] # remove the cancelled leases
1202+            f = open(self.fname, 'rb+')
1203+            for i,lease in enumerate(leases):
1204+                self._write_lease_record(f, i, lease)
1205+            self._write_num_leases(f, len(leases))
1206+            self._truncate_leases(f, len(leases))
1207+            f.close()
1208+        space_freed = self.LEASE_SIZE * num_leases_removed
1209+        if not len(leases):
1210+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1211+            self.unlink()
1212+        return space_freed
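
The share-file header documented above (four-byte version, four-byte saturated share-data length, four-byte lease count, all big-endian, with share data beginning at offset 0x0c) can be exercised with `struct` directly. A sketch with illustrative helper names:

```python
import struct

HEADER = ">LLL"
LEASE = ">L32s32sL"  # owner number, renew secret, cancel secret, expiry

def pack_header(data_length, num_leases=0):
    # Saturate the length field, as ImmutableShare.__init__ does, so a
    # pre-v1.3.0 server can still read the first part of the share.
    return struct.pack(HEADER, 1, min(2**32 - 1, data_length), num_leases)

def unpack_header(header):
    version, length, num_leases = struct.unpack(HEADER, header[:0xc])
    return version, length, num_leases

version, length, num_leases = unpack_header(pack_header(2**40, num_leases=2))
```

Note that `struct.calcsize(LEASE)` gives the 72-byte LEASE_SIZE used above (4 + 32 + 32 + 4).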
1213hunk ./src/allmydata/storage/backends/das/expirer.py 2
1214 import time, os, pickle, struct
1215-from allmydata.storage.crawler import ShareCrawler
1216-from allmydata.storage.shares import get_share_file
1217+from allmydata.storage.crawler import FSShareCrawler
1218 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1219      UnknownImmutableContainerVersionError
1220 from twisted.python import log as twlog
1221hunk ./src/allmydata/storage/backends/das/expirer.py 7
1222 
1223-class LeaseCheckingCrawler(ShareCrawler):
1224+class FSLeaseCheckingCrawler(FSShareCrawler):
1225     """I examine the leases on all shares, determining which are still valid
1226     and which have expired. I can remove the expired leases (if so
1227     configured), and the share will be deleted when the last lease is
1228hunk ./src/allmydata/storage/backends/das/expirer.py 50
1229     slow_start = 360 # wait 6 minutes after startup
1230     minimum_cycle_time = 12*60*60 # not more than twice per day
1231 
1232-    def __init__(self, statefile, historyfile,
1233-                 expiration_enabled, mode,
1234-                 override_lease_duration, # used if expiration_mode=="age"
1235-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1236-                 sharetypes):
1237+    def __init__(self, statefile, historyfile, expiration_policy):
1238         self.historyfile = historyfile
1239hunk ./src/allmydata/storage/backends/das/expirer.py 52
1240-        self.expiration_enabled = expiration_enabled
1241-        self.mode = mode
1242+        self.expiration_enabled = expiration_policy['enabled']
1243+        self.mode = expiration_policy['mode']
1244         self.override_lease_duration = None
1245         self.cutoff_date = None
1246         if self.mode == "age":
1247hunk ./src/allmydata/storage/backends/das/expirer.py 57
1248-            assert isinstance(override_lease_duration, (int, type(None)))
1249-            self.override_lease_duration = override_lease_duration # seconds
1250+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1251+        self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1252         elif self.mode == "cutoff-date":
1253hunk ./src/allmydata/storage/backends/das/expirer.py 60
1254-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1255+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1256             assert cutoff_date is not None
1257hunk ./src/allmydata/storage/backends/das/expirer.py 62
1258-            self.cutoff_date = cutoff_date
1259+            self.cutoff_date = expiration_policy['cutoff_date']
1260         else:
1261hunk ./src/allmydata/storage/backends/das/expirer.py 64
1262-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1263-        self.sharetypes_to_expire = sharetypes
1264-        ShareCrawler.__init__(self, statefile)
1265+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1266+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1267+        FSShareCrawler.__init__(self, statefile)
1268 
1269     def add_initial_state(self):
1270         # we fill ["cycle-to-date"] here (even though they will be reset in
1271hunk ./src/allmydata/storage/backends/das/expirer.py 156
1272 
1273     def process_share(self, sharefilename):
1274         # first, find out what kind of a share it is
1275-        sf = get_share_file(sharefilename)
1276+        f = open(sharefilename, "rb")
1277+        prefix = f.read(32)
1278+        f.close()
1279+        if prefix == MutableShareFile.MAGIC:
1280+            sf = MutableShareFile(sharefilename)
1281+        else:
1282+            # otherwise assume it's immutable
1283+            sf = FSBShare(sharefilename)
1284         sharetype = sf.sharetype
1285         now = time.time()
1286         s = self.stat(sharefilename)
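
The refactored FSLeaseCheckingCrawler above takes a single expiration_policy dict in place of separate arguments. A hypothetical example of such a dict, with key names taken from the constructor's accesses and only illustrative values:

```python
# Illustrative expiration_policy dict; values are examples only.
age_policy = {
    "enabled": True,
    "mode": "age",                                 # or "cutoff-date"
    "override_lease_duration": 31 * 24 * 60 * 60,  # seconds
    "cutoff_date": None,
    "sharetypes": ("mutable", "immutable"),
}

def check_policy(policy):
    # Mirrors the validation branches in the constructor above.
    if policy["mode"] == "age":
        assert isinstance(policy["override_lease_duration"], (int, type(None)))
    elif policy["mode"] == "cutoff-date":
        assert isinstance(policy["cutoff_date"], int)  # seconds-since-epoch
    else:
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'"
                         % policy["mode"])
    return True

ok = check_policy(age_policy)
```

Passing the policy as one dict keeps the crawler's signature stable as new expiration knobs are added.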
1287addfile ./src/allmydata/storage/backends/null/__init__.py
1288addfile ./src/allmydata/storage/backends/null/core.py
1289hunk ./src/allmydata/storage/backends/null/core.py 1
1290+from allmydata.storage.backends.base import Backend
1291+
1292+class NullCore(Backend):
1293+    def __init__(self):
1294+        Backend.__init__(self)
1295+
1296+    def get_available_space(self):
1297+        return None
1298+
1299+    def get_shares(self, storage_index):
1300+        return set()
1301+
1302+    def get_share(self, storage_index, sharenum):
1303+        return None
1304+
1305+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1306+        return NullBucketWriter()
1307hunk ./src/allmydata/storage/crawler.py 12
1308 class TimeSliceExceeded(Exception):
1309     pass
1310 
1311-class ShareCrawler(service.MultiService):
1312+class FSShareCrawler(service.MultiService):
1313     """A subclass of ShareCrawler is attached to a StorageServer, and
1314     periodically walks all of its shares, processing each one in some
1315     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1316hunk ./src/allmydata/storage/crawler.py 68
1317     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1318     minimum_cycle_time = 300 # don't run a cycle faster than this
1319 
1320-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1321+    def __init__(self, statefname, allowed_cpu_percentage=None):
1322         service.MultiService.__init__(self)
1323         if allowed_cpu_percentage is not None:
1324             self.allowed_cpu_percentage = allowed_cpu_percentage
1325hunk ./src/allmydata/storage/crawler.py 72
1326-        self.backend = backend
1327+        self.statefname = statefname
1328         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1329                          for i in range(2**10)]
1330         self.prefixes.sort()
1331hunk ./src/allmydata/storage/crawler.py 192
1332         #                            of the last bucket to be processed, or
1333         #                            None if we are sleeping between cycles
1334         try:
1335-            f = open(self.statefile, "rb")
1336+            f = open(self.statefname, "rb")
1337             state = pickle.load(f)
1338             f.close()
1339         except EnvironmentError:
1340hunk ./src/allmydata/storage/crawler.py 230
1341         else:
1342             last_complete_prefix = self.prefixes[lcpi]
1343         self.state["last-complete-prefix"] = last_complete_prefix
1344-        tmpfile = self.statefile + ".tmp"
1345+        tmpfile = self.statefname + ".tmp"
1346         f = open(tmpfile, "wb")
1347         pickle.dump(self.state, f)
1348         f.close()
1349hunk ./src/allmydata/storage/crawler.py 433
1350         pass
1351 
1352 
1353-class BucketCountingCrawler(ShareCrawler):
1354+class FSBucketCountingCrawler(FSShareCrawler):
1355     """I keep track of how many buckets are being managed by this server.
1356     This is equivalent to the number of distributed files and directories for
1357     which I am providing storage. The actual number of files+directories in
1358hunk ./src/allmydata/storage/crawler.py 446
1359 
1360     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1361 
1362-    def __init__(self, statefile, num_sample_prefixes=1):
1363-        ShareCrawler.__init__(self, statefile)
1364+    def __init__(self, statefname, num_sample_prefixes=1):
1365+        FSShareCrawler.__init__(self, statefname)
1366         self.num_sample_prefixes = num_sample_prefixes
1367 
1368     def add_initial_state(self):
1369hunk ./src/allmydata/storage/immutable.py 14
1370 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1371      DataTooLargeError
1372 
1373-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1374-# and share data. The share data is accessed by RIBucketWriter.write and
1375-# RIBucketReader.read . The lease information is not accessible through these
1376-# interfaces.
1377-
1378-# The share file has the following layout:
1379-#  0x00: share file version number, four bytes, current version is 1
1380-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1381-#  0x08: number of leases, four bytes big-endian
1382-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1383-#  A+0x0c = B: first lease. Lease format is:
1384-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1385-#   B+0x04: renew secret, 32 bytes (SHA256)
1386-#   B+0x24: cancel secret, 32 bytes (SHA256)
1387-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1388-#   B+0x48: next lease, or end of record
1389-
1390-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1391-# but it is still filled in by storage servers in case the storage server
1392-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1393-# share file is moved from one storage server to another. The value stored in
1394-# this field is truncated, so if the actual share data length is >= 2**32,
1395-# then the value stored in this field will be the actual share data length
1396-# modulo 2**32.
1397-
1398-class ShareFile:
1399-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1400-    sharetype = "immutable"
1401-
1402-    def __init__(self, filename, max_size=None, create=False):
1403-        """ If max_size is not None then I won't allow more than
1404-        max_size to be written to me. If create=True then max_size
1405-        must not be None. """
1406-        precondition((max_size is not None) or (not create), max_size, create)
1407-        self.home = filename
1408-        self._max_size = max_size
1409-        if create:
1410-            # touch the file, so later callers will see that we're working on
1411-            # it. Also construct the metadata.
1412-            assert not os.path.exists(self.home)
1413-            fileutil.make_dirs(os.path.dirname(self.home))
1414-            f = open(self.home, 'wb')
1415-            # The second field -- the four-byte share data length -- is no
1416-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1417-            # there in case someone downgrades a storage server from >=
1418-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1419-            # server to another, etc. We do saturation -- a share data length
1420-            # larger than 2**32-1 (what can fit into the field) is marked as
1421-            # the largest length that can fit into the field. That way, even
1422-            # if this does happen, the old < v1.3.0 server will still allow
1423-            # clients to read the first part of the share.
1424-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1425-            f.close()
1426-            self._lease_offset = max_size + 0x0c
1427-            self._num_leases = 0
1428-        else:
1429-            f = open(self.home, 'rb')
1430-            filesize = os.path.getsize(self.home)
1431-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1432-            f.close()
1433-            if version != 1:
1434-                msg = "sharefile %s had version %d but we wanted 1" % \
1435-                      (filename, version)
1436-                raise UnknownImmutableContainerVersionError(msg)
1437-            self._num_leases = num_leases
1438-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1439-        self._data_offset = 0xc
1440-
1441-    def unlink(self):
1442-        os.unlink(self.home)
1443-
1444-    def read_share_data(self, offset, length):
1445-        precondition(offset >= 0)
1446-        # Reads beyond the end of the data are truncated. Reads that start
1447-        # beyond the end of the data return an empty string.
1448-        seekpos = self._data_offset+offset
1449-        fsize = os.path.getsize(self.home)
1450-        actuallength = max(0, min(length, fsize-seekpos))
1451-        if actuallength == 0:
1452-            return ""
1453-        f = open(self.home, 'rb')
1454-        f.seek(seekpos)
1455-        return f.read(actuallength)
1456-
1457-    def write_share_data(self, offset, data):
1458-        length = len(data)
1459-        precondition(offset >= 0, offset)
1460-        if self._max_size is not None and offset+length > self._max_size:
1461-            raise DataTooLargeError(self._max_size, offset, length)
1462-        f = open(self.home, 'rb+')
1463-        real_offset = self._data_offset+offset
1464-        f.seek(real_offset)
1465-        assert f.tell() == real_offset
1466-        f.write(data)
1467-        f.close()
1468-
1469-    def _write_lease_record(self, f, lease_number, lease_info):
1470-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1471-        f.seek(offset)
1472-        assert f.tell() == offset
1473-        f.write(lease_info.to_immutable_data())
1474-
1475-    def _read_num_leases(self, f):
1476-        f.seek(0x08)
1477-        (num_leases,) = struct.unpack(">L", f.read(4))
1478-        return num_leases
1479-
1480-    def _write_num_leases(self, f, num_leases):
1481-        f.seek(0x08)
1482-        f.write(struct.pack(">L", num_leases))
1483-
1484-    def _truncate_leases(self, f, num_leases):
1485-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1486-
1487-    def get_leases(self):
1488-        """Yields a LeaseInfo instance for all leases."""
1489-        f = open(self.home, 'rb')
1490-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1491-        f.seek(self._lease_offset)
1492-        for i in range(num_leases):
1493-            data = f.read(self.LEASE_SIZE)
1494-            if data:
1495-                yield LeaseInfo().from_immutable_data(data)
1496-
1497-    def add_lease(self, lease_info):
1498-        f = open(self.home, 'rb+')
1499-        num_leases = self._read_num_leases(f)
1500-        self._write_lease_record(f, num_leases, lease_info)
1501-        self._write_num_leases(f, num_leases+1)
1502-        f.close()
1503-
1504-    def renew_lease(self, renew_secret, new_expire_time):
1505-        for i,lease in enumerate(self.get_leases()):
1506-            if constant_time_compare(lease.renew_secret, renew_secret):
1507-                # yup. See if we need to update the owner time.
1508-                if new_expire_time > lease.expiration_time:
1509-                    # yes
1510-                    lease.expiration_time = new_expire_time
1511-                    f = open(self.home, 'rb+')
1512-                    self._write_lease_record(f, i, lease)
1513-                    f.close()
1514-                return
1515-        raise IndexError("unable to renew non-existent lease")
1516-
1517-    def add_or_renew_lease(self, lease_info):
1518-        try:
1519-            self.renew_lease(lease_info.renew_secret,
1520-                             lease_info.expiration_time)
1521-        except IndexError:
1522-            self.add_lease(lease_info)
1523-
1524-
1525-    def cancel_lease(self, cancel_secret):
1526-        """Remove a lease with the given cancel_secret. If the last lease is
1527-        cancelled, the file will be removed. Return the number of bytes that
1528-        were freed (by truncating the list of leases, and possibly by
1529-        deleting the file. Raise IndexError if there was no lease with the
1530-        given cancel_secret.
1531-        """
1532-
1533-        leases = list(self.get_leases())
1534-        num_leases_removed = 0
1535-        for i,lease in enumerate(leases):
1536-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1537-                leases[i] = None
1538-                num_leases_removed += 1
1539-        if not num_leases_removed:
1540-            raise IndexError("unable to find matching lease to cancel")
1541-        if num_leases_removed:
1542-            # pack and write out the remaining leases. We write these out in
1543-            # the same order as they were added, so that if we crash while
1544-            # doing this, we won't lose any non-cancelled leases.
1545-            leases = [l for l in leases if l] # remove the cancelled leases
1546-            f = open(self.home, 'rb+')
1547-            for i,lease in enumerate(leases):
1548-                self._write_lease_record(f, i, lease)
1549-            self._write_num_leases(f, len(leases))
1550-            self._truncate_leases(f, len(leases))
1551-            f.close()
1552-        space_freed = self.LEASE_SIZE * num_leases_removed
1553-        if not len(leases):
1554-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1555-            self.unlink()
1556-        return space_freed
1557-class NullBucketWriter(Referenceable):
1558-    implements(RIBucketWriter)
1559-
1560-    def remote_write(self, offset, data):
1561-        return
1562-
1563 class BucketWriter(Referenceable):
1564     implements(RIBucketWriter)
1565 
1566hunk ./src/allmydata/storage/immutable.py 17
1567-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1568+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1569         self.ss = ss
1570hunk ./src/allmydata/storage/immutable.py 19
1571-        self.incominghome = incominghome
1572-        self.finalhome = finalhome
1573         self._max_size = max_size # don't allow the client to write more than this
1574         self._canary = canary
1575         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1576hunk ./src/allmydata/storage/immutable.py 24
1577         self.closed = False
1578         self.throw_out_all_data = False
1579-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1580+        self._sharefile = immutableshare
1581         # also, add our lease to the file now, so that other ones can be
1582         # added by simultaneous uploaders
1583         self._sharefile.add_lease(lease_info)
1584hunk ./src/allmydata/storage/server.py 16
1585 from allmydata.storage.lease import LeaseInfo
1586 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1587      create_mutable_sharefile
1588-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1589-from allmydata.storage.crawler import BucketCountingCrawler
1590-from allmydata.storage.expirer import LeaseCheckingCrawler
1591 
1592 from zope.interface import implements
1593 
1594hunk ./src/allmydata/storage/server.py 19
1595-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1596-# be started and stopped.
1597-class Backend(service.MultiService):
1598-    implements(IStatsProducer)
1599-    def __init__(self):
1600-        service.MultiService.__init__(self)
1601-
1602-    def get_bucket_shares(self):
1603-        """XXX"""
1604-        raise NotImplementedError
1605-
1606-    def get_share(self):
1607-        """XXX"""
1608-        raise NotImplementedError
1609-
1610-    def make_bucket_writer(self):
1611-        """XXX"""
1612-        raise NotImplementedError
1613-
1614-class NullBackend(Backend):
1615-    def __init__(self):
1616-        Backend.__init__(self)
1617-
1618-    def get_available_space(self):
1619-        return None
1620-
1621-    def get_bucket_shares(self, storage_index):
1622-        return set()
1623-
1624-    def get_share(self, storage_index, sharenum):
1625-        return None
1626-
1627-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1628-        return NullBucketWriter()
1629-
1630-class FSBackend(Backend):
1631-    def __init__(self, storedir, readonly=False, reserved_space=0):
1632-        Backend.__init__(self)
1633-
1634-        self._setup_storage(storedir, readonly, reserved_space)
1635-        self._setup_corruption_advisory()
1636-        self._setup_bucket_counter()
1637-        self._setup_lease_checkerf()
1638-
1639-    def _setup_storage(self, storedir, readonly, reserved_space):
1640-        self.storedir = storedir
1641-        self.readonly = readonly
1642-        self.reserved_space = int(reserved_space)
1643-        if self.reserved_space:
1644-            if self.get_available_space() is None:
1645-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1646-                        umid="0wZ27w", level=log.UNUSUAL)
1647-
1648-        self.sharedir = os.path.join(self.storedir, "shares")
1649-        fileutil.make_dirs(self.sharedir)
1650-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1651-        self._clean_incomplete()
1652-
1653-    def _clean_incomplete(self):
1654-        fileutil.rm_dir(self.incomingdir)
1655-        fileutil.make_dirs(self.incomingdir)
1656-
1657-    def _setup_corruption_advisory(self):
1658-        # we don't actually create the corruption-advisory dir until necessary
1659-        self.corruption_advisory_dir = os.path.join(self.storedir,
1660-                                                    "corruption-advisories")
1661-
1662-    def _setup_bucket_counter(self):
1663-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1664-        self.bucket_counter = BucketCountingCrawler(statefile)
1665-        self.bucket_counter.setServiceParent(self)
1666-
1667-    def _setup_lease_checkerf(self):
1668-        statefile = os.path.join(self.storedir, "lease_checker.state")
1669-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1670-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1671-                                   expiration_enabled, expiration_mode,
1672-                                   expiration_override_lease_duration,
1673-                                   expiration_cutoff_date,
1674-                                   expiration_sharetypes)
1675-        self.lease_checker.setServiceParent(self)
1676-
1677-    def get_available_space(self):
1678-        if self.readonly:
1679-            return 0
1680-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1681-
1682-    def get_bucket_shares(self, storage_index):
1683-        """Return a list of (shnum, pathname) tuples for files that hold
1684-        shares for this storage_index. In each tuple, 'shnum' will always be
1685-        the integer form of the last component of 'pathname'."""
1686-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1687-        try:
1688-            for f in os.listdir(storagedir):
1689-                if NUM_RE.match(f):
1690-                    filename = os.path.join(storagedir, f)
1691-                    yield (int(f), filename)
1692-        except OSError:
1693-            # Commonly caused by there being no buckets at all.
1694-            pass
1695-
1696 # storage/
1697 # storage/shares/incoming
1698 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1699hunk ./src/allmydata/storage/server.py 32
1700 # $SHARENUM matches this regex:
1701 NUM_RE=re.compile("^[0-9]+$")
1702 
1703-
1704-
1705 class StorageServer(service.MultiService, Referenceable):
1706     implements(RIStorageServer, IStatsProducer)
1707     name = 'storage'
1708hunk ./src/allmydata/storage/server.py 35
1709-    LeaseCheckerClass = LeaseCheckingCrawler
1710 
1711     def __init__(self, nodeid, backend, reserved_space=0,
1712                  readonly_storage=False,
1713hunk ./src/allmydata/storage/server.py 38
1714-                 stats_provider=None,
1715-                 expiration_enabled=False,
1716-                 expiration_mode="age",
1717-                 expiration_override_lease_duration=None,
1718-                 expiration_cutoff_date=None,
1719-                 expiration_sharetypes=("mutable", "immutable")):
1720+                 stats_provider=None ):
1721         service.MultiService.__init__(self)
1722         assert isinstance(nodeid, str)
1723         assert len(nodeid) == 20
1724hunk ./src/allmydata/storage/server.py 217
1725         # they asked about: this will save them a lot of work. Add or update
1726         # leases for all of them: if they want us to hold shares for this
1727         # file, they'll want us to hold leases for this file.
1728-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1729-            alreadygot.add(shnum)
1730-            sf = ShareFile(fn)
1731-            sf.add_or_renew_lease(lease_info)
1732-
1733-        for shnum in sharenums:
1734-            share = self.backend.get_share(storage_index, shnum)
1735+        for share in self.backend.get_shares(storage_index):
1736+            alreadygot.add(share.shnum)
1737+            share.add_or_renew_lease(lease_info)
1738 
1739hunk ./src/allmydata/storage/server.py 221
1740-            if not share:
1741-                if (not limited) or (remaining_space >= max_space_per_bucket):
1742-                    # ok! we need to create the new share file.
1743-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1744-                                      max_space_per_bucket, lease_info, canary)
1745-                    bucketwriters[shnum] = bw
1746-                    self._active_writers[bw] = 1
1747-                    if limited:
1748-                        remaining_space -= max_space_per_bucket
1749-                else:
1750-                    # bummer! not enough space to accept this bucket
1751-                    pass
1752+        for shnum in (sharenums - alreadygot):
1753+            if (not limited) or (remaining_space >= max_space_per_bucket):
1754+                # XXX Should the following line occur in the storage server constructor instead? We need to create the new share file here.
1755+                self.backend.set_storage_server(self)
1756+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1757+                                                     max_space_per_bucket, lease_info, canary)
1758+                bucketwriters[shnum] = bw
1759+                self._active_writers[bw] = 1
1760+                if limited:
1761+                    remaining_space -= max_space_per_bucket
1762 
1763hunk ./src/allmydata/storage/server.py 232
1764-            elif share.is_complete():
1765-                # great! we already have it. easy.
1766-                pass
1767-            elif not share.is_complete():
1768-                # Note that we don't create BucketWriters for shnums that
1769-                # have a partial share (in incoming/), so if a second upload
1770-                # occurs while the first is still in progress, the second
1771-                # uploader will use different storage servers.
1772-                pass
1773+        # XXX Document later: shnums with a partial share (in incoming/) get no
1773+        # BucketWriter, so a simultaneous second upload will use different servers.
1774 
1775         self.add_latency("allocate", time.time() - start)
1776         return alreadygot, bucketwriters
1777hunk ./src/allmydata/storage/server.py 238
1778 
1779     def _iter_share_files(self, storage_index):
1780-        for shnum, filename in self._get_bucket_shares(storage_index):
1781+        for shnum, filename in self._get_shares(storage_index):
1782             f = open(filename, 'rb')
1783             header = f.read(32)
1784             f.close()
1785hunk ./src/allmydata/storage/server.py 318
1786         si_s = si_b2a(storage_index)
1787         log.msg("storage: get_buckets %s" % si_s)
1788         bucketreaders = {} # k: sharenum, v: BucketReader
1789-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1790+        for shnum, filename in self.backend.get_shares(storage_index):
1791             bucketreaders[shnum] = BucketReader(self, filename,
1792                                                 storage_index, shnum)
1793         self.add_latency("get", time.time() - start)
1794hunk ./src/allmydata/storage/server.py 334
1795         # since all shares get the same lease data, we just grab the leases
1796         # from the first share
1797         try:
1798-            shnum, filename = self._get_bucket_shares(storage_index).next()
1799+            shnum, filename = self._get_shares(storage_index).next()
1800             sf = ShareFile(filename)
1801             return sf.get_leases()
1802         except StopIteration:
1803hunk ./src/allmydata/storage/shares.py 1
1804-#! /usr/bin/python
1805-
1806-from allmydata.storage.mutable import MutableShareFile
1807-from allmydata.storage.immutable import ShareFile
1808-
1809-def get_share_file(filename):
1810-    f = open(filename, "rb")
1811-    prefix = f.read(32)
1812-    f.close()
1813-    if prefix == MutableShareFile.MAGIC:
1814-        return MutableShareFile(filename)
1815-    # otherwise assume it's immutable
1816-    return ShareFile(filename)
1817-
1818rmfile ./src/allmydata/storage/shares.py
1819hunk ./src/allmydata/test/common_util.py 20
1820 
1821 def flip_one_bit(s, offset=0, size=None):
1822     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1823-    than offset+size. """
1824+    than offset+size. Return the new string. """
1825     if size is None:
1826         size=len(s)-offset
1827     i = randrange(offset, offset+size)
1828hunk ./src/allmydata/test/test_backends.py 7
1829 
1830 from allmydata.test.common_util import ReallyEqualMixin
1831 
1832-import mock
1833+import mock, os
1834 
1835 # This is the code that we're going to be testing.
1836hunk ./src/allmydata/test/test_backends.py 10
1837-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1838+from allmydata.storage.server import StorageServer
1839+
1840+from allmydata.storage.backends.das.core import DASCore
1841+from allmydata.storage.backends.null.core import NullCore
1842+
1843 
1844 # The following share file contents was generated with
1845 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1846hunk ./src/allmydata/test/test_backends.py 22
1847 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1848 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1849 
1850-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1851+tempdir = 'teststoredir'
1852+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1853+sharefname = os.path.join(sharedirname, '0')
1854 
1855 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1856     @mock.patch('time.time')
1857hunk ./src/allmydata/test/test_backends.py 58
1858         filesystem in only the prescribed ways. """
1859 
1860         def call_open(fname, mode):
1861-            if fname == 'testdir/bucket_counter.state':
1862-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1863-            elif fname == 'testdir/lease_checker.state':
1864-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1865-            elif fname == 'testdir/lease_checker.history':
1866+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1867+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1868+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1869+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1870+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1871                 return StringIO()
1872             else:
1873                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1874hunk ./src/allmydata/test/test_backends.py 124
1875     @mock.patch('__builtin__.open')
1876     def setUp(self, mockopen):
1877         def call_open(fname, mode):
1878-            if fname == 'testdir/bucket_counter.state':
1879-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1880-            elif fname == 'testdir/lease_checker.state':
1881-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1882-            elif fname == 'testdir/lease_checker.history':
1883+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1884+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1885+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1886+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1887+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1888                 return StringIO()
1889         mockopen.side_effect = call_open
1890hunk ./src/allmydata/test/test_backends.py 131
1891-
1892-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1893+        expiration_policy = {'enabled' : False,
1894+                             'mode' : 'age',
1895+                             'override_lease_duration' : None,
1896+                             'cutoff_date' : None,
1897+                             'sharetypes' : None}
1898+        testbackend = DASCore(tempdir, expiration_policy)
1899+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
1900 
1901     @mock.patch('time.time')
1902     @mock.patch('os.mkdir')
1903hunk ./src/allmydata/test/test_backends.py 148
1904         """ Write a new share. """
1905 
1906         def call_listdir(dirname):
1907-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1908-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1909+            self.failUnlessReallyEqual(dirname, sharedirname)
1910+            raise OSError(2, "No such file or directory: '%s'" % sharedirname)
1911 
1912         mocklistdir.side_effect = call_listdir
1913 
1914hunk ./src/allmydata/test/test_backends.py 178
1915 
1916         sharefile = MockFile()
1917         def call_open(fname, mode):
1918-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1919+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1920             return sharefile
1921 
1922         mockopen.side_effect = call_open
1923hunk ./src/allmydata/test/test_backends.py 200
1924         StorageServer object. """
1925 
1926         def call_listdir(dirname):
1927-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1928+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1929             return ['0']
1930 
1931         mocklistdir.side_effect = call_listdir
1932}
1933[checkpoint patch
1934wilcoxjg@gmail.com**20110626165715
1935 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1936] {
1937hunk ./src/allmydata/storage/backends/das/core.py 21
1938 from allmydata.storage.lease import LeaseInfo
1939 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1940      create_mutable_sharefile
1941-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1942+from allmydata.storage.immutable import BucketWriter, BucketReader
1943 from allmydata.storage.crawler import FSBucketCountingCrawler
1944 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1945 
1946hunk ./src/allmydata/storage/backends/das/core.py 27
1947 from zope.interface import implements
1948 
1949+# $SHARENUM matches this regex:
1950+NUM_RE=re.compile("^[0-9]+$")
1951+
1952 class DASCore(Backend):
1953     implements(IStorageBackend)
1954     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1955hunk ./src/allmydata/storage/backends/das/core.py 80
1956         return fileutil.get_available_space(self.storedir, self.reserved_space)
1957 
1958     def get_shares(self, storage_index):
1959-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1960+        """Yield the ImmutableShare objects that correspond to the passed storage_index."""
1961         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1962         try:
1963             for f in os.listdir(finalstoragedir):
1964hunk ./src/allmydata/storage/backends/das/core.py 86
1965                 if NUM_RE.match(f):
1966                     filename = os.path.join(finalstoragedir, f)
1967-                    yield FSBShare(filename, int(f))
1968+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1969         except OSError:
1970             # Commonly caused by there being no buckets at all.
1971             pass
1972hunk ./src/allmydata/storage/backends/das/core.py 95
1973         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1974         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1975         return bw
1976+
1977+    def set_storage_server(self, ss):
1978+        self.ss = ss
1979         
1980 
1981 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1982hunk ./src/allmydata/storage/server.py 29
1983 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1984 # base-32 chars).
1985 
1986-# $SHARENUM matches this regex:
1987-NUM_RE=re.compile("^[0-9]+$")
1988 
1989 class StorageServer(service.MultiService, Referenceable):
1990     implements(RIStorageServer, IStatsProducer)
1991}
1992[checkpoint4
1993wilcoxjg@gmail.com**20110628202202
1994 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
1995] {
1996hunk ./src/allmydata/storage/backends/das/core.py 96
1997         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1998         return bw
1999 
2000+    def make_bucket_reader(self, share):
2001+        return BucketReader(self.ss, share)
2002+
2003     def set_storage_server(self, ss):
2004         self.ss = ss
2005         
2006hunk ./src/allmydata/storage/backends/das/core.py 138
2007         must not be None. """
2008         precondition((max_size is not None) or (not create), max_size, create)
2009         self.shnum = shnum
2010+        self.storage_index = storageindex
2011         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2012         self._max_size = max_size
2013         if create:
2014hunk ./src/allmydata/storage/backends/das/core.py 173
2015             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2016         self._data_offset = 0xc
2017 
2018+    def get_shnum(self):
2019+        return self.shnum
2020+
2021     def unlink(self):
2022         os.unlink(self.fname)
2023 
2024hunk ./src/allmydata/storage/backends/null/core.py 2
2025 from allmydata.storage.backends.base import Backend
2026+from allmydata.storage.immutable import BucketWriter, BucketReader
2027 
2028 class NullCore(Backend):
2029     def __init__(self):
2030hunk ./src/allmydata/storage/backends/null/core.py 17
2031     def get_share(self, storage_index, sharenum):
2032         return None
2033 
2034-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2035-        return NullBucketWriter()
2036+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2037+       
2038+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2039+
2040+    def set_storage_server(self, ss):
2041+        self.ss = ss
2042+
2043+class ImmutableShare:
2044+    sharetype = "immutable"
2045+
2046+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2047+        """ If max_size is not None then I won't allow more than
2048+        max_size to be written to me. If create=True then max_size
2049+        must not be None. """
2050+        precondition((max_size is not None) or (not create), max_size, create)
2051+        self.shnum = shnum
2052+        self.storage_index = storageindex
2053+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2054+        self._max_size = max_size
2055+        if create:
2056+            # touch the file, so later callers will see that we're working on
2057+            # it. Also construct the metadata.
2058+            assert not os.path.exists(self.fname)
2059+            fileutil.make_dirs(os.path.dirname(self.fname))
2060+            f = open(self.fname, 'wb')
2061+            # The second field -- the four-byte share data length -- is no
2062+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2063+            # there in case someone downgrades a storage server from >=
2064+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2065+            # server to another, etc. We do saturation -- a share data length
2066+            # larger than 2**32-1 (what can fit into the field) is marked as
2067+            # the largest length that can fit into the field. That way, even
2068+            # if this does happen, the old < v1.3.0 server will still allow
2069+            # clients to read the first part of the share.
2070+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2071+            f.close()
2072+            self._lease_offset = max_size + 0x0c
2073+            self._num_leases = 0
2074+        else:
2075+            f = open(self.fname, 'rb')
2076+            filesize = os.path.getsize(self.fname)
2077+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2078+            f.close()
2079+            if version != 1:
2080+                msg = "sharefile %s had version %d but we wanted 1" % \
2081+                      (self.fname, version)
2082+                raise UnknownImmutableContainerVersionError(msg)
2083+            self._num_leases = num_leases
2084+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2085+        self._data_offset = 0xc
2086+
2087+    def get_shnum(self):
2088+        return self.shnum
2089+
2090+    def unlink(self):
2091+        os.unlink(self.fname)
2092+
2093+    def read_share_data(self, offset, length):
2094+        precondition(offset >= 0)
2095+        # Reads beyond the end of the data are truncated. Reads that start
2096+        # beyond the end of the data return an empty string.
2097+        seekpos = self._data_offset+offset
2098+        fsize = os.path.getsize(self.fname)
2099+        actuallength = max(0, min(length, fsize-seekpos))
2100+        if actuallength == 0:
2101+            return ""
2102+        f = open(self.fname, 'rb')
2103+        f.seek(seekpos)
2104+        return f.read(actuallength)
2105+
2106+    def write_share_data(self, offset, data):
2107+        length = len(data)
2108+        precondition(offset >= 0, offset)
2109+        if self._max_size is not None and offset+length > self._max_size:
2110+            raise DataTooLargeError(self._max_size, offset, length)
2111+        f = open(self.fname, 'rb+')
2112+        real_offset = self._data_offset+offset
2113+        f.seek(real_offset)
2114+        assert f.tell() == real_offset
2115+        f.write(data)
2116+        f.close()
2117+
2118+    def _write_lease_record(self, f, lease_number, lease_info):
2119+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2120+        f.seek(offset)
2121+        assert f.tell() == offset
2122+        f.write(lease_info.to_immutable_data())
2123+
2124+    def _read_num_leases(self, f):
2125+        f.seek(0x08)
2126+        (num_leases,) = struct.unpack(">L", f.read(4))
2127+        return num_leases
2128+
2129+    def _write_num_leases(self, f, num_leases):
2130+        f.seek(0x08)
2131+        f.write(struct.pack(">L", num_leases))
2132+
2133+    def _truncate_leases(self, f, num_leases):
2134+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2135+
2136+    def get_leases(self):
2137+        """Yields a LeaseInfo instance for all leases."""
2138+        f = open(self.fname, 'rb')
2139+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2140+        f.seek(self._lease_offset)
2141+        for i in range(num_leases):
2142+            data = f.read(self.LEASE_SIZE)
2143+            if data:
2144+                yield LeaseInfo().from_immutable_data(data)
2145+
2146+    def add_lease(self, lease_info):
2147+        f = open(self.fname, 'rb+')
2148+        num_leases = self._read_num_leases(f)
2149+        self._write_lease_record(f, num_leases, lease_info)
2150+        self._write_num_leases(f, num_leases+1)
2151+        f.close()
2152+
2153+    def renew_lease(self, renew_secret, new_expire_time):
2154+        for i,lease in enumerate(self.get_leases()):
2155+            if constant_time_compare(lease.renew_secret, renew_secret):
2156+                # yup. See if we need to update the owner time.
2157+                if new_expire_time > lease.expiration_time:
2158+                    # yes
2159+                    lease.expiration_time = new_expire_time
2160+                    f = open(self.fname, 'rb+')
2161+                    self._write_lease_record(f, i, lease)
2162+                    f.close()
2163+                return
2164+        raise IndexError("unable to renew non-existent lease")
2165+
2166+    def add_or_renew_lease(self, lease_info):
2167+        try:
2168+            self.renew_lease(lease_info.renew_secret,
2169+                             lease_info.expiration_time)
2170+        except IndexError:
2171+            self.add_lease(lease_info)
2172+
2173+
2174+    def cancel_lease(self, cancel_secret):
2175+        """Remove a lease with the given cancel_secret. If the last lease is
2176+        cancelled, the file will be removed. Return the number of bytes that
2177+        were freed (by truncating the list of leases, and possibly by
2178+        deleting the file. Raise IndexError if there was no lease with the
2179+        deleting the file). Raise IndexError if there was no lease with the
2180+        """
2181+
2182+        leases = list(self.get_leases())
2183+        num_leases_removed = 0
2184+        for i,lease in enumerate(leases):
2185+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2186+                leases[i] = None
2187+                num_leases_removed += 1
2188+        if not num_leases_removed:
2189+            raise IndexError("unable to find matching lease to cancel")
2190+        if num_leases_removed:
2191+            # pack and write out the remaining leases. We write these out in
2192+            # the same order as they were added, so that if we crash while
2193+            # doing this, we won't lose any non-cancelled leases.
2194+            leases = [l for l in leases if l] # remove the cancelled leases
2195+            f = open(self.fname, 'rb+')
2196+            for i,lease in enumerate(leases):
2197+                self._write_lease_record(f, i, lease)
2198+            self._write_num_leases(f, len(leases))
2199+            self._truncate_leases(f, len(leases))
2200+            f.close()
2201+        space_freed = self.LEASE_SIZE * num_leases_removed
2202+        if not len(leases):
2203+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2204+            self.unlink()
2205+        return space_freed
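The renew/cancel logic added above compares lease secrets with constant_time_compare and rewrites the surviving leases in the order they were added, so a crash mid-rewrite cannot lose a non-cancelled lease. A standalone sketch of just that compaction step (the pair-based lease representation and the function names here are hypothetical, not from the patch):

```python
import hmac

def constant_time_compare(a, b):
    # Stand-in for allmydata.util.hashutil.constant_time_compare.
    return hmac.compare_digest(a, b)

def cancel_leases(leases, cancel_secret):
    """Return (remaining, num_removed). `leases` is a hypothetical
    simplification of LeaseInfo: (cancel_secret, payload) pairs. Order
    is preserved, mirroring the comment in cancel_lease above: writing
    the survivors back in their original order means an interrupted
    rewrite cannot drop a non-cancelled lease."""
    remaining = [l for l in leases
                 if not constant_time_compare(l[0], cancel_secret)]
    removed = len(leases) - len(remaining)
    if removed == 0:
        raise IndexError("unable to find matching lease to cancel")
    return remaining, removed
```

Like cancel_lease above, it raises IndexError when no stored secret matches.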
2206hunk ./src/allmydata/storage/immutable.py 114
2207 class BucketReader(Referenceable):
2208     implements(RIBucketReader)
2209 
2210-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2211+    def __init__(self, ss, share):
2212         self.ss = ss
2213hunk ./src/allmydata/storage/immutable.py 116
2214-        self._share_file = ShareFile(sharefname)
2215-        self.storage_index = storage_index
2216-        self.shnum = shnum
2217+        self._share_file = share
2218+        self.storage_index = share.storage_index
2219+        self.shnum = share.shnum
2220 
2221     def __repr__(self):
2222         return "<%s %s %s>" % (self.__class__.__name__,
2223hunk ./src/allmydata/storage/server.py 316
2224         si_s = si_b2a(storage_index)
2225         log.msg("storage: get_buckets %s" % si_s)
2226         bucketreaders = {} # k: sharenum, v: BucketReader
2227-        for shnum, filename in self.backend.get_shares(storage_index):
2228-            bucketreaders[shnum] = BucketReader(self, filename,
2229-                                                storage_index, shnum)
2230+        self.backend.set_storage_server(self)
2231+        for share in self.backend.get_shares(storage_index):
2232+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2233         self.add_latency("get", time.time() - start)
2234         return bucketreaders
2235 
2236hunk ./src/allmydata/test/test_backends.py 25
2237 tempdir = 'teststoredir'
2238 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2239 sharefname = os.path.join(sharedirname, '0')
2240+expiration_policy = {'enabled' : False,
2241+                     'mode' : 'age',
2242+                     'override_lease_duration' : None,
2243+                     'cutoff_date' : None,
2244+                     'sharetypes' : None}
2245 
2246 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2247     @mock.patch('time.time')
2248hunk ./src/allmydata/test/test_backends.py 43
2249         tries to read or write to the file system. """
2250 
2251         # Now begin the test.
2252-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2253+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2254 
2255         self.failIf(mockisdir.called)
2256         self.failIf(mocklistdir.called)
2257hunk ./src/allmydata/test/test_backends.py 74
2258         mockopen.side_effect = call_open
2259 
2260         # Now begin the test.
2261-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2262+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2263 
2264         self.failIf(mockisdir.called)
2265         self.failIf(mocklistdir.called)
2266hunk ./src/allmydata/test/test_backends.py 86
2267 
2268 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2269     def setUp(self):
2270-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2271+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2272 
2273     @mock.patch('os.mkdir')
2274     @mock.patch('__builtin__.open')
2275hunk ./src/allmydata/test/test_backends.py 136
2276             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2277                 return StringIO()
2278         mockopen.side_effect = call_open
2279-        expiration_policy = {'enabled' : False,
2280-                             'mode' : 'age',
2281-                             'override_lease_duration' : None,
2282-                             'cutoff_date' : None,
2283-                             'sharetypes' : None}
2284         testbackend = DASCore(tempdir, expiration_policy)
2285         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2286 
2287}
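For reference, the ImmutableShare code in checkpoint4 reads and writes the v1 share-file layout: a 12-byte header packed as ">LLL" (version, saturated share-data length, lease count), share data starting at offset 0xc, then the lease records at the end of the file. A minimal sketch of just the header handling (helper names are illustrative, not from the patch):

```python
import struct

HEADER = ">LLL"
HEADER_SIZE = struct.calcsize(HEADER)  # 0xc == 12 bytes

def pack_header(max_size, num_leases=0, version=1):
    # Saturate the length field at 2**32-1 so a pre-1.3.0 server can
    # still serve the start of an oversized share, per the comment in
    # the patch above.
    return struct.pack(HEADER, version, min(2**32 - 1, max_size), num_leases)

def unpack_header(header_bytes):
    version, length, num_leases = struct.unpack(HEADER,
                                                header_bytes[:HEADER_SIZE])
    return version, length, num_leases
```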
2288[checkpoint5
2289wilcoxjg@gmail.com**20110705034626
2290 Ignore-this: 255780bd58299b0aa33c027e9d008262
2291] {
2292addfile ./src/allmydata/storage/backends/base.py
2293hunk ./src/allmydata/storage/backends/base.py 1
2294+from twisted.application import service
2295+
2296+class Backend(service.MultiService):
2297+    def __init__(self):
2298+        service.MultiService.__init__(self)
2299hunk ./src/allmydata/storage/backends/null/core.py 19
2300 
2301     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2302         
2303+        immutableshare = ImmutableShare()
2304         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2305 
2306     def set_storage_server(self, ss):
2307hunk ./src/allmydata/storage/backends/null/core.py 28
2308 class ImmutableShare:
2309     sharetype = "immutable"
2310 
2311-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2312+    def __init__(self):
2313         """ If max_size is not None then I won't allow more than
2314         max_size to be written to me. If create=True then max_size
2315         must not be None. """
2316hunk ./src/allmydata/storage/backends/null/core.py 32
2317-        precondition((max_size is not None) or (not create), max_size, create)
2318-        self.shnum = shnum
2319-        self.storage_index = storageindex
2320-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2321-        self._max_size = max_size
2322-        if create:
2323-            # touch the file, so later callers will see that we're working on
2324-            # it. Also construct the metadata.
2325-            assert not os.path.exists(self.fname)
2326-            fileutil.make_dirs(os.path.dirname(self.fname))
2327-            f = open(self.fname, 'wb')
2328-            # The second field -- the four-byte share data length -- is no
2329-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2330-            # there in case someone downgrades a storage server from >=
2331-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2332-            # server to another, etc. We do saturation -- a share data length
2333-            # larger than 2**32-1 (what can fit into the field) is marked as
2334-            # the largest length that can fit into the field. That way, even
2335-            # if this does happen, the old < v1.3.0 server will still allow
2336-            # clients to read the first part of the share.
2337-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2338-            f.close()
2339-            self._lease_offset = max_size + 0x0c
2340-            self._num_leases = 0
2341-        else:
2342-            f = open(self.fname, 'rb')
2343-            filesize = os.path.getsize(self.fname)
2344-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2345-            f.close()
2346-            if version != 1:
2347-                msg = "sharefile %s had version %d but we wanted 1" % \
2348-                      (self.fname, version)
2349-                raise UnknownImmutableContainerVersionError(msg)
2350-            self._num_leases = num_leases
2351-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2352-        self._data_offset = 0xc
2353+        pass
2354 
2355     def get_shnum(self):
2356         return self.shnum
2357hunk ./src/allmydata/storage/backends/null/core.py 54
2358         return f.read(actuallength)
2359 
2360     def write_share_data(self, offset, data):
2361-        length = len(data)
2362-        precondition(offset >= 0, offset)
2363-        if self._max_size is not None and offset+length > self._max_size:
2364-            raise DataTooLargeError(self._max_size, offset, length)
2365-        f = open(self.fname, 'rb+')
2366-        real_offset = self._data_offset+offset
2367-        f.seek(real_offset)
2368-        assert f.tell() == real_offset
2369-        f.write(data)
2370-        f.close()
2371+        pass
2372 
2373     def _write_lease_record(self, f, lease_number, lease_info):
2374         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2375hunk ./src/allmydata/storage/backends/null/core.py 84
2376             if data:
2377                 yield LeaseInfo().from_immutable_data(data)
2378 
2379-    def add_lease(self, lease_info):
2380-        f = open(self.fname, 'rb+')
2381-        num_leases = self._read_num_leases(f)
2382-        self._write_lease_record(f, num_leases, lease_info)
2383-        self._write_num_leases(f, num_leases+1)
2384-        f.close()
2385+    def add_lease(self, lease):
2386+        pass
2387 
2388     def renew_lease(self, renew_secret, new_expire_time):
2389         for i,lease in enumerate(self.get_leases()):
2390hunk ./src/allmydata/test/test_backends.py 32
2391                      'sharetypes' : None}
2392 
2393 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2394-    @mock.patch('time.time')
2395-    @mock.patch('os.mkdir')
2396-    @mock.patch('__builtin__.open')
2397-    @mock.patch('os.listdir')
2398-    @mock.patch('os.path.isdir')
2399-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2400-        """ This tests whether a server instance can be constructed
2401-        with a null backend. The server instance fails the test if it
2402-        tries to read or write to the file system. """
2403-
2404-        # Now begin the test.
2405-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2406-
2407-        self.failIf(mockisdir.called)
2408-        self.failIf(mocklistdir.called)
2409-        self.failIf(mockopen.called)
2410-        self.failIf(mockmkdir.called)
2411-
2412-        # You passed!
2413-
2414     @mock.patch('time.time')
2415     @mock.patch('os.mkdir')
2416     @mock.patch('__builtin__.open')
2417hunk ./src/allmydata/test/test_backends.py 53
2418                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2419         mockopen.side_effect = call_open
2420 
2421-        # Now begin the test.
2422-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2423-
2424-        self.failIf(mockisdir.called)
2425-        self.failIf(mocklistdir.called)
2426-        self.failIf(mockopen.called)
2427-        self.failIf(mockmkdir.called)
2428-        self.failIf(mocktime.called)
2429-
2430-        # You passed!
2431-
2432-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2433-    def setUp(self):
2434-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2435-
2436-    @mock.patch('os.mkdir')
2437-    @mock.patch('__builtin__.open')
2438-    @mock.patch('os.listdir')
2439-    @mock.patch('os.path.isdir')
2440-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2441-        """ Write a new share. """
2442-
2443-        # Now begin the test.
2444-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2445-        bs[0].remote_write(0, 'a')
2446-        self.failIf(mockisdir.called)
2447-        self.failIf(mocklistdir.called)
2448-        self.failIf(mockopen.called)
2449-        self.failIf(mockmkdir.called)
2450+        def call_isdir(fname):
2451+            if fname == os.path.join(tempdir,'shares'):
2452+                return True
2453+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2454+                return True
2455+            else:
2456+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2457+        mockisdir.side_effect = call_isdir
2458 
2459hunk ./src/allmydata/test/test_backends.py 62
2460-    @mock.patch('os.path.exists')
2461-    @mock.patch('os.path.getsize')
2462-    @mock.patch('__builtin__.open')
2463-    @mock.patch('os.listdir')
2464-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2465-        """ This tests whether the code correctly finds and reads
2466-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2467-        servers. There is a similar test in test_download, but that one
2468-        is from the perspective of the client and exercises a deeper
2469-        stack of code. This one is for exercising just the
2470-        StorageServer object. """
2471+        def call_mkdir(fname, mode):
2472+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2473+            self.failUnlessEqual(0777, mode)
2474+            if fname == tempdir:
2475+                return None
2476+            elif fname == os.path.join(tempdir,'shares'):
2477+                return None
2478+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2479+                return None
2480+            else:
2481+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2482+        mockmkdir.side_effect = call_mkdir
2483 
2484         # Now begin the test.
2485hunk ./src/allmydata/test/test_backends.py 76
2486-        bs = self.s.remote_get_buckets('teststorage_index')
2487+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2488 
2489hunk ./src/allmydata/test/test_backends.py 78
2490-        self.failUnlessEqual(len(bs), 0)
2491-        self.failIf(mocklistdir.called)
2492-        self.failIf(mockopen.called)
2493-        self.failIf(mockgetsize.called)
2494-        self.failIf(mockexists.called)
2495+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2496 
2497 
2498 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2499hunk ./src/allmydata/test/test_backends.py 193
2500         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2501 
2502 
2503+
2504+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2505+    @mock.patch('time.time')
2506+    @mock.patch('os.mkdir')
2507+    @mock.patch('__builtin__.open')
2508+    @mock.patch('os.listdir')
2509+    @mock.patch('os.path.isdir')
2510+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2511+        """ This tests whether a file system backend instance can be
2512+        constructed. To pass the test, it has to use the
2513+        filesystem in only the prescribed ways. """
2514+
2515+        def call_open(fname, mode):
2516+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2517+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2518+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2519+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2520+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2521+                return StringIO()
2522+            else:
2523+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2524+        mockopen.side_effect = call_open
2525+
2526+        def call_isdir(fname):
2527+            if fname == os.path.join(tempdir,'shares'):
2528+                return True
2529+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2530+                return True
2531+            else:
2532+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2533+        mockisdir.side_effect = call_isdir
2534+
2535+        def call_mkdir(fname, mode):
2536+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2537+            self.failUnlessEqual(0777, mode)
2538+            if fname == tempdir:
2539+                return None
2540+            elif fname == os.path.join(tempdir,'shares'):
2541+                return None
2542+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2543+                return None
2544+            else:
2545+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2546+        mockmkdir.side_effect = call_mkdir
2547+
2548+        # Now begin the test.
2549+        DASCore('teststoredir', expiration_policy)
2550+
2551+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2552}
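The tests in checkpoint5 follow one pattern throughout: each filesystem call is replaced via mock.patch with a side_effect dispatcher that whitelists the expected paths and fails the test on anything else. A condensed sketch of that pattern (the probe function is a made-up stand-in for the code under test):

```python
import os
from unittest import mock  # the patch uses the older standalone `mock` package

def probe_share_dirs(storedir):
    # Toy code-under-test: checks the two directories the backend cares about.
    return [os.path.isdir(os.path.join(storedir, "shares")),
            os.path.isdir(os.path.join(storedir, "shares", "incoming"))]

def run_with_isdir_whitelist():
    allowed = set([os.path.join("teststoredir", "shares"),
                   os.path.join("teststoredir", "shares", "incoming")])
    def call_isdir(fname):
        # Fail loudly on any path the test did not anticipate,
        # mirroring the call_isdir dispatchers above.
        assert fname in allowed, "unexpected isdir(%r)" % (fname,)
        return True
    with mock.patch("os.path.isdir", side_effect=call_isdir):
        return probe_share_dirs("teststoredir")
```

Any stray os.path.isdir call inside the `with` block trips the assertion, which is exactly how the patch's tests catch unprescribed filesystem use.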
2553[checkpoint 6
2554wilcoxjg@gmail.com**20110706190824
2555 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2556] {
2557hunk ./src/allmydata/interfaces.py 100
2558                          renew_secret=LeaseRenewSecret,
2559                          cancel_secret=LeaseCancelSecret,
2560                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2561-                         allocated_size=Offset, canary=Referenceable):
2562+                         allocated_size=Offset,
2563+                         canary=Referenceable):
2564         """
2565hunk ./src/allmydata/interfaces.py 103
2566-        @param storage_index: the index of the bucket to be created or
2567+        @param storage_index: the index of the shares to be created or
2568                               increfed.
2569hunk ./src/allmydata/interfaces.py 105
2570-        @param sharenums: these are the share numbers (probably between 0 and
2571-                          99) that the sender is proposing to store on this
2572-                          server.
2573-        @param renew_secret: This is the secret used to protect bucket refresh
2574+        @param renew_secret: This is the secret used to protect shares refresh
2575                              This secret is generated by the client and
2576                              stored for later comparison by the server. Each
2577                              server is given a different secret.
2578hunk ./src/allmydata/interfaces.py 109
2579-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2580-        @param canary: If the canary is lost before close(), the bucket is
2581+        @param cancel_secret: Like renew_secret, but protects shares decref.
2582+        @param sharenums: these are the share numbers (probably between 0 and
2583+                          99) that the sender is proposing to store on this
2584+                          server.
2585+        @param allocated_size: XXX The size of the shares the client wishes to store.
2586+        @param canary: If the canary is lost before close(), the shares are
2587                        deleted.
2588hunk ./src/allmydata/interfaces.py 116
2589+
2590         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2591                  already have and allocated is what we hereby agree to accept.
2592                  New leases are added for shares in both lists.
2593hunk ./src/allmydata/interfaces.py 128
2594                   renew_secret=LeaseRenewSecret,
2595                   cancel_secret=LeaseCancelSecret):
2596         """
2597-        Add a new lease on the given bucket. If the renew_secret matches an
2598+        Add a new lease on the given shares. If the renew_secret matches an
2599         existing lease, that lease will be renewed instead. If there is no
2600         bucket for the given storage_index, return silently. (note that in
2601         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2602hunk ./src/allmydata/storage/server.py 17
2603 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2604      create_mutable_sharefile
2605 
2606-from zope.interface import implements
2607-
2608 # storage/
2609 # storage/shares/incoming
2610 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2611hunk ./src/allmydata/test/test_backends.py 6
2612 from StringIO import StringIO
2613 
2614 from allmydata.test.common_util import ReallyEqualMixin
2615+from allmydata.util.assertutil import _assert
2616 
2617 import mock, os
2618 
2619hunk ./src/allmydata/test/test_backends.py 92
2620                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2621             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2622                 return StringIO()
2623+            else:
2624+                _assert(False, "The tester code doesn't recognize this case.") 
2625+
2626         mockopen.side_effect = call_open
2627         testbackend = DASCore(tempdir, expiration_policy)
2628         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2629hunk ./src/allmydata/test/test_backends.py 109
2630 
2631         def call_listdir(dirname):
2632             self.failUnlessReallyEqual(dirname, sharedirname)
2633-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2634+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2635 
2636         mocklistdir.side_effect = call_listdir
2637 
2638hunk ./src/allmydata/test/test_backends.py 113
2639+        def call_isdir(dirname):
2640+            self.failUnlessReallyEqual(dirname, sharedirname)
2641+            return True
2642+
2643+        mockisdir.side_effect = call_isdir
2644+
2645+        def call_mkdir(dirname, permissions):
2646+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2647+                self.fail("Server with FS backend tried to mkdir '%s' with mode %r" % (dirname, permissions))
2648+            else:
2649+                return True
2650+
2651+        mockmkdir.side_effect = call_mkdir
2652+
2653         class MockFile:
2654             def __init__(self):
2655                 self.buffer = ''
2656hunk ./src/allmydata/test/test_backends.py 156
2657             return sharefile
2658 
2659         mockopen.side_effect = call_open
2660+
2661         # Now begin the test.
2662         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2663         bs[0].remote_write(0, 'a')
2664hunk ./src/allmydata/test/test_backends.py 161
2665         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2666+       
2667+        # Now test the allocated_size method.
2668+        spaceint = self.s.allocated_size()
2669 
2670     @mock.patch('os.path.exists')
2671     @mock.patch('os.path.getsize')
2672}
2673[checkpoint 7
2674wilcoxjg@gmail.com**20110706200820
2675 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2676] hunk ./src/allmydata/test/test_backends.py 164
2677         
2678         # Now test the allocated_size method.
2679         spaceint = self.s.allocated_size()
2680+        self.failUnlessReallyEqual(spaceint, 1)
2681 
2682     @mock.patch('os.path.exists')
2683     @mock.patch('os.path.getsize')
2684[checkpoint8
2685wilcoxjg@gmail.com**20110706223126
2686 Ignore-this: 97336180883cb798b16f15411179f827
2687   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2688] hunk ./src/allmydata/test/test_backends.py 32
2689                      'cutoff_date' : None,
2690                      'sharetypes' : None}
2691 
2692+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2693+    def setUp(self):
2694+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2695+
2696+    @mock.patch('os.mkdir')
2697+    @mock.patch('__builtin__.open')
2698+    @mock.patch('os.listdir')
2699+    @mock.patch('os.path.isdir')
2700+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2701+        """ Write a new share. """
2702+
2703+        # Now begin the test.
2704+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2705+        bs[0].remote_write(0, 'a')
2706+        self.failIf(mockisdir.called)
2707+        self.failIf(mocklistdir.called)
2708+        self.failIf(mockopen.called)
2709+        self.failIf(mockmkdir.called)
2710+
2711 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2712     @mock.patch('time.time')
2713     @mock.patch('os.mkdir')
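[Editor's note: checkpoint8's null-backend test works by asserting the filesystem mocks were *never* called. A hedged standalone sketch of that idiom; `NullBackend` and `write_share` are hypothetical stand-ins, not the patch's `NullCore`/`StorageServer`. Note that `@mock.patch` decorators are applied bottom-up, so the bottommost decorator binds the first mock argument:]

```python
from unittest import mock

class NullBackend:
    """Sketch of a null backend: accepts writes, stores nothing (hypothetical)."""
    def write_share(self, shnum, data):
        pass  # deliberately no filesystem access

@mock.patch('os.mkdir')
@mock.patch('os.listdir')
@mock.patch('os.path.isdir')
def test_write_share_touches_no_files(mockisdir, mocklistdir, mockmkdir):
    backend = NullBackend()
    backend.write_share(0, 'a')
    # The point of the null backend: none of the mocked filesystem
    # entry points were ever exercised.
    assert not mockisdir.called
    assert not mocklistdir.called
    assert not mockmkdir.called
    return True

print(test_write_share_touches_no_files())
```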
2714[checkpoint 9
2715wilcoxjg@gmail.com**20110707042942
2716 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2717] {
2718hunk ./src/allmydata/storage/backends/das/core.py 88
2719                     filename = os.path.join(finalstoragedir, f)
2720                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2721         except OSError:
2722-            # Commonly caused by there being no buckets at all.
2723+            # Commonly caused by there being no shares at all.
2724             pass
2725         
2726     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2727hunk ./src/allmydata/storage/backends/das/core.py 141
2728         self.storage_index = storageindex
2729         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2730         self._max_size = max_size
2731+        self.incomingdir = os.path.join(sharedir, 'incoming')
2732+        si_dir = storage_index_to_dir(storageindex)
2733+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2734+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2735         if create:
2736             # touch the file, so later callers will see that we're working on
2737             # it. Also construct the metadata.
2738hunk ./src/allmydata/storage/backends/das/core.py 177
2739             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2740         self._data_offset = 0xc
2741 
2742+    def close(self):
2743+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2744+        fileutil.rename(self.incominghome, self.finalhome)
2745+        try:
2746+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2747+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2748+            # these directories lying around forever, but the delete might
2749+            # fail if we're working on another share for the same storage
2750+            # index (like ab/abcde/5). The alternative approach would be to
2751+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2752+            # ShareWriter), each of which is responsible for a single
2753+            # directory on disk, and have them use reference counting of
2754+            # their children to know when they should do the rmdir. This
2755+            # approach is simpler, but relies on os.rmdir refusing to delete
2756+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2757+            os.rmdir(os.path.dirname(self.incominghome))
2758+            # we also delete the grandparent (prefix) directory, .../ab ,
2759+            # again to avoid leaving directories lying around. This might
2760+            # fail if there is another bucket open that shares a prefix (like
2761+            # ab/abfff).
2762+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2763+            # we leave the great-grandparent (incoming/) directory in place.
2764+        except EnvironmentError:
2765+            # ignore the "can't rmdir because the directory is not empty"
2766+            # exceptions, those are normal consequences of the
2767+            # above-mentioned conditions.
2768+            pass
2769+        pass
2770+       
2771+    def stat(self):
2772+        return os.stat(self.finalhome)[stat.ST_SIZE]
2773+
2774     def get_shnum(self):
2775         return self.shnum
2776 
2777hunk ./src/allmydata/storage/immutable.py 7
2778 
2779 from zope.interface import implements
2780 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2781-from allmydata.util import base32, fileutil, log
2782+from allmydata.util import base32, log
2783 from allmydata.util.assertutil import precondition
2784 from allmydata.util.hashutil import constant_time_compare
2785 from allmydata.storage.lease import LeaseInfo
2786hunk ./src/allmydata/storage/immutable.py 44
2787     def remote_close(self):
2788         precondition(not self.closed)
2789         start = time.time()
2790-
2791-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2792-        fileutil.rename(self.incominghome, self.finalhome)
2793-        try:
2794-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2795-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2796-            # these directories lying around forever, but the delete might
2797-            # fail if we're working on another share for the same storage
2798-            # index (like ab/abcde/5). The alternative approach would be to
2799-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2800-            # ShareWriter), each of which is responsible for a single
2801-            # directory on disk, and have them use reference counting of
2802-            # their children to know when they should do the rmdir. This
2803-            # approach is simpler, but relies on os.rmdir refusing to delete
2804-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2805-            os.rmdir(os.path.dirname(self.incominghome))
2806-            # we also delete the grandparent (prefix) directory, .../ab ,
2807-            # again to avoid leaving directories lying around. This might
2808-            # fail if there is another bucket open that shares a prefix (like
2809-            # ab/abfff).
2810-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2811-            # we leave the great-grandparent (incoming/) directory in place.
2812-        except EnvironmentError:
2813-            # ignore the "can't rmdir because the directory is not empty"
2814-            # exceptions, those are normal consequences of the
2815-            # above-mentioned conditions.
2816-            pass
2817+        self._sharefile.close()
2818         self._sharefile = None
2819         self.closed = True
2820         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2821hunk ./src/allmydata/storage/immutable.py 49
2822 
2823-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2824+        filelen = self._sharefile.stat()
2825         self.ss.bucket_writer_closed(self, filelen)
2826         self.ss.add_latency("close", time.time() - start)
2827         self.ss.count("close")
2828hunk ./src/allmydata/storage/server.py 45
2829         self._active_writers = weakref.WeakKeyDictionary()
2830         self.backend = backend
2831         self.backend.setServiceParent(self)
2832+        self.backend.set_storage_server(self)
2833         log.msg("StorageServer created", facility="tahoe.storage")
2834 
2835         self.latencies = {"allocate": [], # immutable
2836hunk ./src/allmydata/storage/server.py 220
2837 
2838         for shnum in (sharenums - alreadygot):
2839             if (not limited) or (remaining_space >= max_space_per_bucket):
2840-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2841-                self.backend.set_storage_server(self)
2842                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2843                                                      max_space_per_bucket, lease_info, canary)
2844                 bucketwriters[shnum] = bw
2845hunk ./src/allmydata/test/test_backends.py 117
2846         mockopen.side_effect = call_open
2847         testbackend = DASCore(tempdir, expiration_policy)
2848         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2849-
2850+   
2851+    @mock.patch('allmydata.util.fileutil.get_available_space')
2852     @mock.patch('time.time')
2853     @mock.patch('os.mkdir')
2854     @mock.patch('__builtin__.open')
2855hunk ./src/allmydata/test/test_backends.py 124
2856     @mock.patch('os.listdir')
2857     @mock.patch('os.path.isdir')
2858-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2859+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2860+                             mockget_available_space):
2861         """ Write a new share. """
2862 
2863         def call_listdir(dirname):
2864hunk ./src/allmydata/test/test_backends.py 148
2865 
2866         mockmkdir.side_effect = call_mkdir
2867 
2868+        def call_get_available_space(storedir, reserved_space):
2869+            self.failUnlessReallyEqual(storedir, tempdir)
2870+            return 1
2871+
2872+        mockget_available_space.side_effect = call_get_available_space
2873+
2874         class MockFile:
2875             def __init__(self):
2876                 self.buffer = ''
2877hunk ./src/allmydata/test/test_backends.py 188
2878         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2879         bs[0].remote_write(0, 'a')
2880         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2881-       
2882+
2883+        # What happens when there's not enough space for the client's request?
2884+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2885+
2886         # Now test the allocated_size method.
2887         spaceint = self.s.allocated_size()
2888         self.failUnlessReallyEqual(spaceint, 1)
2889}
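[Editor's note: the `close()` moved into the backend above renames a share from the incoming/ area to its final directory, then prunes the now-empty parent and prefix directories, deliberately relying on `os.rmdir` refusing to delete a non-empty directory. A self-contained sketch of that pruning idiom against a throwaway temp tree (paths are illustrative; Python 3 spelling):]

```python
import os, shutil, tempfile

def finalize(incominghome, finalhome):
    """Move a completed share into place, then prune empty incoming dirs."""
    os.makedirs(os.path.dirname(finalhome), exist_ok=True)
    os.rename(incominghome, finalhome)
    try:
        # Remove .../incoming/ab/abcde, then .../incoming/ab.  os.rmdir
        # refuses to delete a non-empty directory, which is exactly the
        # guard this idiom relies on -- never use a recursive delete here.
        os.rmdir(os.path.dirname(incominghome))
        os.rmdir(os.path.dirname(os.path.dirname(incominghome)))
    except OSError:
        # Non-empty: another share with the same prefix is still incoming.
        pass

root = tempfile.mkdtemp()
inc = os.path.join(root, 'incoming', 'ab', 'abcde', '4')
fin = os.path.join(root, 'shares', 'ab', 'abcde', '4')
os.makedirs(os.path.dirname(inc))
open(inc, 'w').close()

finalize(inc, fin)
fin_exists = os.path.exists(fin)
inc_pruned = not os.path.exists(os.path.join(root, 'incoming', 'ab'))
print(fin_exists, inc_pruned)
shutil.rmtree(root)
```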
2890[checkpoint10
2891wilcoxjg@gmail.com**20110707172049
2892 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2893] {
2894hunk ./src/allmydata/test/test_backends.py 20
2895 # The following share file contents was generated with
2896 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2897 # with share data == 'a'.
2898-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2899+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2900+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2901+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2902 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2903 
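[Editor's note: the literal `share_file_data` built in this hunk can be decoded to check it really matches the v1 immutable share layout the comment claims (12-byte header of big-endian uint32s, share data, then one trailing lease of owner number, the two 32-byte secrets, and a uint32 expiration). The field interpretation follows `storage.immutable.ShareFile` as I understand it, so treat the names as an assumption; the bytes themselves are straight from the hunk, with the expiration decoding to 31 days:]

```python
import struct

renew_secret  = b'x' * 32
cancel_secret = b'y' * 32
share_data = b'a\x00\x00\x00\x00' + renew_secret + cancel_secret + b'\x00(\xde\x80'
share_file_data = b'\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data

# 12-byte header: version, data length, number of leases (big-endian uint32s).
version, data_len, num_leases = struct.unpack('>LLL', share_file_data[:12])

# One lease trails the 1-byte data region: owner number, renew secret,
# cancel secret, expiration timestamp.
owner, renew, cancel, expiration = struct.unpack(
    '>L32s32sL', share_file_data[12 + data_len:])

print(version, num_leases, expiration, expiration == 31 * 24 * 60 * 60)
```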
2904hunk ./src/allmydata/test/test_backends.py 25
2905+testnodeid = 'testnodeidxxxxxxxxxx'
2906 tempdir = 'teststoredir'
2907 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2908 sharefname = os.path.join(sharedirname, '0')
2909hunk ./src/allmydata/test/test_backends.py 37
2910 
2911 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2912     def setUp(self):
2913-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2914+        self.s = StorageServer(testnodeid, backend=NullCore())
2915 
2916     @mock.patch('os.mkdir')
2917     @mock.patch('__builtin__.open')
2918hunk ./src/allmydata/test/test_backends.py 99
2919         mockmkdir.side_effect = call_mkdir
2920 
2921         # Now begin the test.
2922-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2923+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2924 
2925         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2926 
2927hunk ./src/allmydata/test/test_backends.py 119
2928 
2929         mockopen.side_effect = call_open
2930         testbackend = DASCore(tempdir, expiration_policy)
2931-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2932-   
2933+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2934+       
2935+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2936     @mock.patch('allmydata.util.fileutil.get_available_space')
2937     @mock.patch('time.time')
2938     @mock.patch('os.mkdir')
2939hunk ./src/allmydata/test/test_backends.py 129
2940     @mock.patch('os.listdir')
2941     @mock.patch('os.path.isdir')
2942     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2943-                             mockget_available_space):
2944+                             mockget_available_space, mockget_shares):
2945         """ Write a new share. """
2946 
2947         def call_listdir(dirname):
2948hunk ./src/allmydata/test/test_backends.py 139
2949         mocklistdir.side_effect = call_listdir
2950 
2951         def call_isdir(dirname):
2952+            #XXX Should there be any other tests here?
2953             self.failUnlessReallyEqual(dirname, sharedirname)
2954             return True
2955 
2956hunk ./src/allmydata/test/test_backends.py 159
2957 
2958         mockget_available_space.side_effect = call_get_available_space
2959 
2960+        mocktime.return_value = 0
2961+        class MockShare:
2962+            def __init__(self):
2963+                self.shnum = 1
2964+               
2965+            def add_or_renew_lease(elf, lease_info):
2966+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
2967+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
2968+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
2969+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
2970+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
2971+               
2972+
2973+        share = MockShare()
2974+        def call_get_shares(storageindex):
2975+            return [share]
2976+
2977+        mockget_shares.side_effect = call_get_shares
2978+
2979         class MockFile:
2980             def __init__(self):
2981                 self.buffer = ''
2982hunk ./src/allmydata/test/test_backends.py 199
2983             def tell(self):
2984                 return self.pos
2985 
2986-        mocktime.return_value = 0
2987 
2988         sharefile = MockFile()
2989         def call_open(fname, mode):
2990}
2991[jacp 11
2992wilcoxjg@gmail.com**20110708213919
2993 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
2994] {
2995hunk ./src/allmydata/storage/backends/das/core.py 144
2996         self.incomingdir = os.path.join(sharedir, 'incoming')
2997         si_dir = storage_index_to_dir(storageindex)
2998         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2999+        #XXX  self.fname and self.finalhome need to be resolve/merged.
3000         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3001         if create:
3002             # touch the file, so later callers will see that we're working on
3003hunk ./src/allmydata/storage/backends/das/core.py 208
3004         pass
3005         
3006     def stat(self):
3007-        return os.stat(self.finalhome)[stat.ST_SIZE]
3008+        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3009 
3010     def get_shnum(self):
3011         return self.shnum
3012hunk ./src/allmydata/storage/immutable.py 44
3013     def remote_close(self):
3014         precondition(not self.closed)
3015         start = time.time()
3016+
3017         self._sharefile.close()
3018hunk ./src/allmydata/storage/immutable.py 46
3019+        filelen = self._sharefile.stat()
3020         self._sharefile = None
3021hunk ./src/allmydata/storage/immutable.py 48
3022+
3023         self.closed = True
3024         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3025 
3026hunk ./src/allmydata/storage/immutable.py 52
3027-        filelen = self._sharefile.stat()
3028         self.ss.bucket_writer_closed(self, filelen)
3029         self.ss.add_latency("close", time.time() - start)
3030         self.ss.count("close")
3031hunk ./src/allmydata/storage/server.py 220
3032 
3033         for shnum in (sharenums - alreadygot):
3034             if (not limited) or (remaining_space >= max_space_per_bucket):
3035-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3036-                                                     max_space_per_bucket, lease_info, canary)
3037+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3038                 bucketwriters[shnum] = bw
3039                 self._active_writers[bw] = 1
3040                 if limited:
3041hunk ./src/allmydata/test/test_backends.py 20
3042 # The following share file contents was generated with
3043 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3044 # with share data == 'a'.
3045-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3046-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3047+renew_secret  = 'x'*32
3048+cancel_secret = 'y'*32
3049 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3050 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3051 
3052hunk ./src/allmydata/test/test_backends.py 27
3053 testnodeid = 'testnodeidxxxxxxxxxx'
3054 tempdir = 'teststoredir'
3055-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3056-sharefname = os.path.join(sharedirname, '0')
3057+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3058+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3059+shareincomingname = os.path.join(sharedirincomingname, '0')
3060+sharefname = os.path.join(sharedirfinalname, '0')
3061+
3062 expiration_policy = {'enabled' : False,
3063                      'mode' : 'age',
3064                      'override_lease_duration' : None,
3065hunk ./src/allmydata/test/test_backends.py 123
3066         mockopen.side_effect = call_open
3067         testbackend = DASCore(tempdir, expiration_policy)
3068         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3069-       
3070+
3071+    @mock.patch('allmydata.util.fileutil.rename')
3072+    @mock.patch('allmydata.util.fileutil.make_dirs')
3073+    @mock.patch('os.path.exists')
3074+    @mock.patch('os.stat')
3075     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3076     @mock.patch('allmydata.util.fileutil.get_available_space')
3077     @mock.patch('time.time')
3078hunk ./src/allmydata/test/test_backends.py 136
3079     @mock.patch('os.listdir')
3080     @mock.patch('os.path.isdir')
3081     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3082-                             mockget_available_space, mockget_shares):
3083+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3084+                             mockmake_dirs, mockrename):
3085         """ Write a new share. """
3086 
3087         def call_listdir(dirname):
3088hunk ./src/allmydata/test/test_backends.py 141
3089-            self.failUnlessReallyEqual(dirname, sharedirname)
3090+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3091             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3092 
3093         mocklistdir.side_effect = call_listdir
3094hunk ./src/allmydata/test/test_backends.py 148
3095 
3096         def call_isdir(dirname):
3097             #XXX Should there be any other tests here?
3098-            self.failUnlessReallyEqual(dirname, sharedirname)
3099+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3100             return True
3101 
3102         mockisdir.side_effect = call_isdir
3103hunk ./src/allmydata/test/test_backends.py 154
3104 
3105         def call_mkdir(dirname, permissions):
3106-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3107+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3108                 self.Fail
3109             else:
3110                 return True
3111hunk ./src/allmydata/test/test_backends.py 208
3112                 return self.pos
3113 
3114 
3115-        sharefile = MockFile()
3116+        fobj = MockFile()
3117         def call_open(fname, mode):
3118             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3119hunk ./src/allmydata/test/test_backends.py 211
3120-            return sharefile
3121+            return fobj
3122 
3123         mockopen.side_effect = call_open
3124 
3125hunk ./src/allmydata/test/test_backends.py 215
3126+        def call_make_dirs(dname):
3127+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3128+           
3129+        mockmake_dirs.side_effect = call_make_dirs
3130+
3131+        def call_rename(src, dst):
3132+           self.failUnlessReallyEqual(src, shareincomingname)
3133+           self.failUnlessReallyEqual(dst, sharefname)
3134+           
3135+        mockrename.side_effect = call_rename
3136+
3137+        def call_exists(fname):
3138+            self.failUnlessReallyEqual(fname, sharefname)
3139+
3140+        mockexists.side_effect = call_exists
3141+
3142         # Now begin the test.
3143         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3144         bs[0].remote_write(0, 'a')
3145hunk ./src/allmydata/test/test_backends.py 234
3146-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3147+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3148+        spaceint = self.s.allocated_size()
3149+        self.failUnlessReallyEqual(spaceint, 1)
3150+
3151+        bs[0].remote_close()
3152 
3153         # What happens when there's not enough space for the client's request?
3154hunk ./src/allmydata/test/test_backends.py 241
3155-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3156+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3157 
3158         # Now test the allocated_size method.
3159hunk ./src/allmydata/test/test_backends.py 244
3160-        spaceint = self.s.allocated_size()
3161-        self.failUnlessReallyEqual(spaceint, 1)
3162+        #self.failIf(mockexists.called, mockexists.call_args_list)
3163+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3164+        #self.failIf(mockrename.called, mockrename.call_args_list)
3165+        #self.failIf(mockstat.called, mockstat.call_args_list)
3166 
3167     @mock.patch('os.path.exists')
3168     @mock.patch('os.path.getsize')
3169}
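[Editor's note: the `stat()` rewrite in the jacp 11 hunk above indexes with `os.stat.ST_SIZE`, but `ST_SIZE` lives in the `stat` module (`os.stat` is a function and has no such attribute), so this line raises `AttributeError`; the earlier checkpoint had it right. A quick demonstration, with `st_size` shown as the more idiomatic spelling:]

```python
import os, stat, tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b'a')
os.close(fd)

# ST_SIZE is defined in the stat module, not on os.stat (a function),
# so os.stat(path)[os.stat.ST_SIZE] would raise AttributeError.
size_by_index = os.stat(path)[stat.ST_SIZE]
size_by_attr = os.stat(path).st_size  # the idiomatic spelling

print(size_by_index, size_by_attr)
os.remove(path)
```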
3170[checkpoint12 testing correct behavior with regard to incoming and final
3171wilcoxjg@gmail.com**20110710191915
3172 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3173] {
3174hunk ./src/allmydata/storage/backends/das/core.py 74
3175         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3176         self.lease_checker.setServiceParent(self)
3177 
3178+    def get_incoming(self, storageindex):
3179+        return set((1,))
3180+
3181     def get_available_space(self):
3182         if self.readonly:
3183             return 0
3184hunk ./src/allmydata/storage/server.py 77
3185         """Return a dict, indexed by category, that contains a dict of
3186         latency numbers for each category. If there are sufficient samples
3187         for unambiguous interpretation, each dict will contain the
3188-        following keys: mean, 01_0_percentile, 10_0_percentile,
3189+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3190         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3191         99_0_percentile, 99_9_percentile.  If there are insufficient
3192         samples for a given percentile to be interpreted unambiguously
3193hunk ./src/allmydata/storage/server.py 120
3194 
3195     def get_stats(self):
3196         # remember: RIStatsProvider requires that our return dict
3197-        # contains numeric values.
3198+        # contains numeric, or None values.
3199         stats = { 'storage_server.allocated': self.allocated_size(), }
3200         stats['storage_server.reserved_space'] = self.reserved_space
3201         for category,ld in self.get_latencies().items():
3202hunk ./src/allmydata/storage/server.py 185
3203         start = time.time()
3204         self.count("allocate")
3205         alreadygot = set()
3206+        incoming = set()
3207         bucketwriters = {} # k: shnum, v: BucketWriter
3208 
3209         si_s = si_b2a(storage_index)
3210hunk ./src/allmydata/storage/server.py 219
3211             alreadygot.add(share.shnum)
3212             share.add_or_renew_lease(lease_info)
3213 
3214-        for shnum in (sharenums - alreadygot):
3215+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3216+        incoming = self.backend.get_incoming(storageindex)
3217+
3218+        for shnum in ((sharenums - alreadygot) - incoming):
3219             if (not limited) or (remaining_space >= max_space_per_bucket):
3220                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3221                 bucketwriters[shnum] = bw
3222hunk ./src/allmydata/storage/server.py 229
3223                 self._active_writers[bw] = 1
3224                 if limited:
3225                     remaining_space -= max_space_per_bucket
3226-
3227-        #XXX We SHOULD DOCUMENT LATER.
3228+            else:
3229+                # Bummer: not enough space to accept this share.
3230+                pass
3231 
3232         self.add_latency("allocate", time.time() - start)
3233         return alreadygot, bucketwriters
3234hunk ./src/allmydata/storage/server.py 323
3235         self.add_latency("get", time.time() - start)
3236         return bucketreaders
3237 
3238-    def get_leases(self, storage_index):
3239+    def remote_get_incoming(self, storageindex):
3240+        incoming_share_set = self.backend.get_incoming(storageindex)
3241+        return incoming_share_set
3242+
3243+    def get_leases(self, storageindex):
3244         """Provide an iterator that yields all of the leases attached to this
3245         bucket. Each lease is returned as a LeaseInfo instance.
3246 
3247hunk ./src/allmydata/storage/server.py 337
3248         # since all shares get the same lease data, we just grab the leases
3249         # from the first share
3250         try:
3251-            shnum, filename = self._get_shares(storage_index).next()
3252+            shnum, filename = self._get_shares(storageindex).next()
3253             sf = ShareFile(filename)
3254             return sf.get_leases()
3255         except StopIteration:
3256hunk ./src/allmydata/test/test_backends.py 182
3257 
3258         share = MockShare()
3259         def call_get_shares(storageindex):
3260-            return [share]
3261+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3262+            return []#share]
3263 
3264         mockget_shares.side_effect = call_get_shares
3265 
3266hunk ./src/allmydata/test/test_backends.py 222
3267         mockmake_dirs.side_effect = call_make_dirs
3268 
3269         def call_rename(src, dst):
3270-           self.failUnlessReallyEqual(src, shareincomingname)
3271-           self.failUnlessReallyEqual(dst, sharefname)
3272+            self.failUnlessReallyEqual(src, shareincomingname)
3273+            self.failUnlessReallyEqual(dst, sharefname)
3274             
3275         mockrename.side_effect = call_rename
3276 
3277hunk ./src/allmydata/test/test_backends.py 233
3278         mockexists.side_effect = call_exists
3279 
3280         # Now begin the test.
3281+
3282+        # XXX (0) ???  Fail unless something is not properly set-up?
3283         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3284hunk ./src/allmydata/test/test_backends.py 236
3285+
3286+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3287+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3288+
3289+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3290+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3291+        # with the same si, until BucketWriter.remote_close() has been called.
3292+        # self.failIf(bsa)
3293+
3294+        # XXX (3) Inspect final and fail unless there's nothing there.
3295         bs[0].remote_write(0, 'a')
3296hunk ./src/allmydata/test/test_backends.py 247
3297+        # XXX (4a) Inspect final and fail unless share 0 is there.
3298+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3299         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3300         spaceint = self.s.allocated_size()
3301         self.failUnlessReallyEqual(spaceint, 1)
3302hunk ./src/allmydata/test/test_backends.py 253
3303 
3304+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3305         bs[0].remote_close()
3306 
3307         # What happens when there's not enough space for the client's request?
3308hunk ./src/allmydata/test/test_backends.py 260
3309         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3310 
3311         # Now test the allocated_size method.
3312-        #self.failIf(mockexists.called, mockexists.call_args_list)
3313+        # self.failIf(mockexists.called, mockexists.call_args_list)
3314         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3315         #self.failIf(mockrename.called, mockrename.call_args_list)
3316         #self.failIf(mockstat.called, mockstat.call_args_list)
3317}
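[Editor's note: checkpoint12's allocation rule in `remote_allocate_buckets` is pure set arithmetic: only create writers for shares that are neither already finalized nor currently incoming. A tiny illustration with made-up share numbers (not from the patch):]

```python
sharenums  = set(range(5))      # shares the client wants to place
alreadygot = {0, 3}             # shares already present with a fresh lease
incoming   = {1}                # shares another upload is writing right now

# checkpoint12's rule: allocate only what is neither finalized nor incoming.
to_allocate = (sharenums - alreadygot) - incoming
print(sorted(to_allocate))
```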
3318[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3319wilcoxjg@gmail.com**20110710195139
3320 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3321] {
3322hunk ./src/allmydata/storage/server.py 220
3323             share.add_or_renew_lease(lease_info)
3324 
3325         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3326-        incoming = self.backend.get_incoming(storageindex)
3327+        incoming = self.backend.get_incoming(storage_index)
3328 
3329         for shnum in ((sharenums - alreadygot) - incoming):
3330             if (not limited) or (remaining_space >= max_space_per_bucket):
3331hunk ./src/allmydata/storage/server.py 323
3332         self.add_latency("get", time.time() - start)
3333         return bucketreaders
3334 
3335-    def remote_get_incoming(self, storageindex):
3336-        incoming_share_set = self.backend.get_incoming(storageindex)
3337+    def remote_get_incoming(self, storage_index):
3338+        incoming_share_set = self.backend.get_incoming(storage_index)
3339         return incoming_share_set
3340 
3341hunk ./src/allmydata/storage/server.py 327
3342-    def get_leases(self, storageindex):
3343+    def get_leases(self, storage_index):
3344         """Provide an iterator that yields all of the leases attached to this
3345         bucket. Each lease is returned as a LeaseInfo instance.
3346 
3347hunk ./src/allmydata/storage/server.py 337
3348         # since all shares get the same lease data, we just grab the leases
3349         # from the first share
3350         try:
3351-            shnum, filename = self._get_shares(storageindex).next()
3352+            shnum, filename = self._get_shares(storage_index).next()
3353             sf = ShareFile(filename)
3354             return sf.get_leases()
3355         except StopIteration:
3356replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3357}
3358[adding comments to clarify what I'm about to do.
3359wilcoxjg@gmail.com**20110710220623
3360 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3361] {
3362hunk ./src/allmydata/storage/backends/das/core.py 8
3363 
3364 import os, re, weakref, struct, time
3365 
3366-from foolscap.api import Referenceable
3367+#from foolscap.api import Referenceable
3368 from twisted.application import service
3369 
3370 from zope.interface import implements
3371hunk ./src/allmydata/storage/backends/das/core.py 12
3372-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3373+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3374 from allmydata.util import fileutil, idlib, log, time_format
3375 import allmydata # for __full_version__
3376 
3377hunk ./src/allmydata/storage/server.py 219
3378             alreadygot.add(share.shnum)
3379             share.add_or_renew_lease(lease_info)
3380 
3381-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3382+        # fill incoming with all shares that are incoming use a set operation
3383+        # since there's no need to operate on individual pieces
3384         incoming = self.backend.get_incoming(storageindex)
3385 
3386         for shnum in ((sharenums - alreadygot) - incoming):
3387hunk ./src/allmydata/test/test_backends.py 245
3388         # with the same si, until BucketWriter.remote_close() has been called.
3389         # self.failIf(bsa)
3390 
3391-        # XXX (3) Inspect final and fail unless there's nothing there.
3392         bs[0].remote_write(0, 'a')
3393hunk ./src/allmydata/test/test_backends.py 246
3394-        # XXX (4a) Inspect final and fail unless share 0 is there.
3395-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3396         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3397         spaceint = self.s.allocated_size()
3398         self.failUnlessReallyEqual(spaceint, 1)
3399hunk ./src/allmydata/test/test_backends.py 250
3400 
3401-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3402+        # XXX (3) Inspect final and fail unless there's nothing there.
3403         bs[0].remote_close()
3404hunk ./src/allmydata/test/test_backends.py 252
3405+        # XXX (4a) Inspect final and fail unless share 0 is there.
3406+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3407 
3408         # What happens when there's not enough space for the client's request?
3409         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3410}
3411[branching back, no longer attempting to mock inside TestServerFSBackend
3412wilcoxjg@gmail.com**20110711190849
3413 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3414] {
3415hunk ./src/allmydata/storage/backends/das/core.py 75
3416         self.lease_checker.setServiceParent(self)
3417 
3418     def get_incoming(self, storageindex):
3419-        return set((1,))
3420-
3421-    def get_available_space(self):
3422-        if self.readonly:
3423-            return 0
3424-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3425+        """Return the set of incoming shnums."""
3426+        return set(os.listdir(self.incomingdir))
3427 
3428     def get_shares(self, storage_index):
3429         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3430hunk ./src/allmydata/storage/backends/das/core.py 90
3431             # Commonly caused by there being no shares at all.
3432             pass
3433         
3434+    def get_available_space(self):
3435+        if self.readonly:
3436+            return 0
3437+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3438+
3439     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3440         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3441         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3442hunk ./src/allmydata/test/test_backends.py 27
3443 
3444 testnodeid = 'testnodeidxxxxxxxxxx'
3445 tempdir = 'teststoredir'
3446-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3447-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3448+basedir = os.path.join(tempdir, 'shares')
3449+baseincdir = os.path.join(basedir, 'incoming')
3450+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3451+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3452 shareincomingname = os.path.join(sharedirincomingname, '0')
3453 sharefname = os.path.join(sharedirfinalname, '0')
3454 
3455hunk ./src/allmydata/test/test_backends.py 142
3456                              mockmake_dirs, mockrename):
3457         """ Write a new share. """
3458 
3459-        def call_listdir(dirname):
3460-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3461-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3462-
3463-        mocklistdir.side_effect = call_listdir
3464-
3465-        def call_isdir(dirname):
3466-            #XXX Should there be any other tests here?
3467-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3468-            return True
3469-
3470-        mockisdir.side_effect = call_isdir
3471-
3472-        def call_mkdir(dirname, permissions):
3473-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3474-                self.Fail
3475-            else:
3476-                return True
3477-
3478-        mockmkdir.side_effect = call_mkdir
3479-
3480-        def call_get_available_space(storedir, reserved_space):
3481-            self.failUnlessReallyEqual(storedir, tempdir)
3482-            return 1
3483-
3484-        mockget_available_space.side_effect = call_get_available_space
3485-
3486-        mocktime.return_value = 0
3487         class MockShare:
3488             def __init__(self):
3489                 self.shnum = 1
3490hunk ./src/allmydata/test/test_backends.py 152
3491                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3492                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3493                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3494-               
3495 
3496         share = MockShare()
3497hunk ./src/allmydata/test/test_backends.py 154
3498-        def call_get_shares(storageindex):
3499-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3500-            return []#share]
3501-
3502-        mockget_shares.side_effect = call_get_shares
3503 
3504         class MockFile:
3505             def __init__(self):
3506hunk ./src/allmydata/test/test_backends.py 176
3507             def tell(self):
3508                 return self.pos
3509 
3510-
3511         fobj = MockFile()
3512hunk ./src/allmydata/test/test_backends.py 177
3513+
3514+        directories = {}
3515+        def call_listdir(dirname):
3516+            if dirname not in directories:
3517+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3518+            else:
3519+                return directories[dirname].get_contents()
3520+
3521+        mocklistdir.side_effect = call_listdir
3522+
3523+        class MockDir:
3524+            def __init__(self, dirname):
3525+                self.name = dirname
3526+                self.contents = []
3527+   
3528+            def get_contents(self):
3529+                return self.contents
3530+
3531+        def call_isdir(dirname):
3532+            #XXX Should there be any other tests here?
3533+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3534+            return True
3535+
3536+        mockisdir.side_effect = call_isdir
3537+
3538+        def call_mkdir(dirname, permissions):
3539+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3540+                self.Fail
3541+            if dirname in directories:
3542+                raise OSError(17, "File exists: '%s'" % dirname)
3543+                self.Fail
3544+            elif dirname not in directories:
3545+                directories[dirname] = MockDir(dirname)
3546+                return True
3547+
3548+        mockmkdir.side_effect = call_mkdir
3549+
3550+        def call_get_available_space(storedir, reserved_space):
3551+            self.failUnlessReallyEqual(storedir, tempdir)
3552+            return 1
3553+
3554+        mockget_available_space.side_effect = call_get_available_space
3555+
3556+        mocktime.return_value = 0
3557+        def call_get_shares(storageindex):
3558+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3559+            return []#share]
3560+
3561+        mockget_shares.side_effect = call_get_shares
3562+
3563         def call_open(fname, mode):
3564             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3565             return fobj
3566}
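The `MockFile` helper moved around by the hunks above (and deleted entirely in the next patch, once the test stops mocking the filesystem) emulates the small subset of the file API that share writing exercises, including zero-padding when a `seek` lands past the current end of the buffer. A self-contained sketch of it:

```python
class MockFile:
    """In-memory stand-in for an open share file, mirroring the helper
    in test_backends.py: write/seek/read/tell over a string buffer."""
    def __init__(self):
        self.buffer = ''
        self.pos = 0

    def write(self, instring):
        begin = self.pos
        # If a seek moved us past the end of the buffer, pad the gap
        # with NUL bytes, as a sparse write to a real file would read back.
        padlen = begin - len(self.buffer)
        if padlen > 0:
            self.buffer += '\x00' * padlen
        end = begin + len(instring)
        self.buffer = self.buffer[:begin] + instring + self.buffer[end:]
        self.pos = end

    def seek(self, pos):
        self.pos = pos

    def read(self, numberbytes):
        return self.buffer[self.pos:self.pos + numberbytes]

    def tell(self):
        return self.pos

    def close(self):
        pass
```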
3567[checkpoint12 TestServerFSBackend no longer mocks filesystem
3568wilcoxjg@gmail.com**20110711193357
3569 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3570] {
3571hunk ./src/allmydata/storage/backends/das/core.py 23
3572      create_mutable_sharefile
3573 from allmydata.storage.immutable import BucketWriter, BucketReader
3574 from allmydata.storage.crawler import FSBucketCountingCrawler
3575+from allmydata.util.hashutil import constant_time_compare
3576 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3577 
3578 from zope.interface import implements
3579hunk ./src/allmydata/storage/backends/das/core.py 28
3580 
3581+# storage/
3582+# storage/shares/incoming
3583+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3584+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3585+# storage/shares/$START/$STORAGEINDEX
3586+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3587+
3588+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3589+# base-32 chars).
3590 # $SHARENUM matches this regex:
3591 NUM_RE=re.compile("^[0-9]+$")
3592 
3593hunk ./src/allmydata/test/test_backends.py 126
3594         testbackend = DASCore(tempdir, expiration_policy)
3595         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3596 
3597-    @mock.patch('allmydata.util.fileutil.rename')
3598-    @mock.patch('allmydata.util.fileutil.make_dirs')
3599-    @mock.patch('os.path.exists')
3600-    @mock.patch('os.stat')
3601-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3602-    @mock.patch('allmydata.util.fileutil.get_available_space')
3603     @mock.patch('time.time')
3604hunk ./src/allmydata/test/test_backends.py 127
3605-    @mock.patch('os.mkdir')
3606-    @mock.patch('__builtin__.open')
3607-    @mock.patch('os.listdir')
3608-    @mock.patch('os.path.isdir')
3609-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3610-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3611-                             mockmake_dirs, mockrename):
3612+    def test_write_share(self, mocktime):
3613         """ Write a new share. """
3614 
3615         class MockShare:
3616hunk ./src/allmydata/test/test_backends.py 143
3617 
3618         share = MockShare()
3619 
3620-        class MockFile:
3621-            def __init__(self):
3622-                self.buffer = ''
3623-                self.pos = 0
3624-            def write(self, instring):
3625-                begin = self.pos
3626-                padlen = begin - len(self.buffer)
3627-                if padlen > 0:
3628-                    self.buffer += '\x00' * padlen
3629-                end = self.pos + len(instring)
3630-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3631-                self.pos = end
3632-            def close(self):
3633-                pass
3634-            def seek(self, pos):
3635-                self.pos = pos
3636-            def read(self, numberbytes):
3637-                return self.buffer[self.pos:self.pos+numberbytes]
3638-            def tell(self):
3639-                return self.pos
3640-
3641-        fobj = MockFile()
3642-
3643-        directories = {}
3644-        def call_listdir(dirname):
3645-            if dirname not in directories:
3646-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3647-            else:
3648-                return directories[dirname].get_contents()
3649-
3650-        mocklistdir.side_effect = call_listdir
3651-
3652-        class MockDir:
3653-            def __init__(self, dirname):
3654-                self.name = dirname
3655-                self.contents = []
3656-   
3657-            def get_contents(self):
3658-                return self.contents
3659-
3660-        def call_isdir(dirname):
3661-            #XXX Should there be any other tests here?
3662-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3663-            return True
3664-
3665-        mockisdir.side_effect = call_isdir
3666-
3667-        def call_mkdir(dirname, permissions):
3668-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3669-                self.Fail
3670-            if dirname in directories:
3671-                raise OSError(17, "File exists: '%s'" % dirname)
3672-                self.Fail
3673-            elif dirname not in directories:
3674-                directories[dirname] = MockDir(dirname)
3675-                return True
3676-
3677-        mockmkdir.side_effect = call_mkdir
3678-
3679-        def call_get_available_space(storedir, reserved_space):
3680-            self.failUnlessReallyEqual(storedir, tempdir)
3681-            return 1
3682-
3683-        mockget_available_space.side_effect = call_get_available_space
3684-
3685-        mocktime.return_value = 0
3686-        def call_get_shares(storageindex):
3687-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3688-            return []#share]
3689-
3690-        mockget_shares.side_effect = call_get_shares
3691-
3692-        def call_open(fname, mode):
3693-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3694-            return fobj
3695-
3696-        mockopen.side_effect = call_open
3697-
3698-        def call_make_dirs(dname):
3699-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3700-           
3701-        mockmake_dirs.side_effect = call_make_dirs
3702-
3703-        def call_rename(src, dst):
3704-            self.failUnlessReallyEqual(src, shareincomingname)
3705-            self.failUnlessReallyEqual(dst, sharefname)
3706-           
3707-        mockrename.side_effect = call_rename
3708-
3709-        def call_exists(fname):
3710-            self.failUnlessReallyEqual(fname, sharefname)
3711-
3712-        mockexists.side_effect = call_exists
3713-
3714         # Now begin the test.
3715 
3716         # XXX (0) ???  Fail unless something is not properly set-up?
3717}
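The layout comment introduced above says `$START` is the first 10 bits of `$STORAGEINDEX`, i.e. its first two base-32 characters. A sketch of that mapping (Tahoe's own `storage_index_to_dir` lives in `storage/common.py`; this version assumes its base-32 alphabet is the lowercase, unpadded RFC 3548 one):

```python
import base64
import os

def storage_index_to_dir(storageindex):
    # Map a storage index (bytes) to the "$START/$STORAGEINDEX"
    # relative path described in the layout comment: base-32 encode
    # the whole index, then use the first two characters (10 bits)
    # as the bucketing prefix directory.
    sia = base64.b32encode(storageindex).decode('ascii').lower().rstrip('=')
    return os.path.join(sia[:2], sia)
```

For the test's `'teststorage_index'` this yields `or/orsxg5dtorxxeylhmvpws3temv4a`, which is exactly how the `sharedirfinalname` constant in `test_backends.py` is built.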
3718[JACP
3719wilcoxjg@gmail.com**20110711194407
3720 Ignore-this: b54745de777c4bb58d68d708f010bbb
3721] {
3722hunk ./src/allmydata/storage/backends/das/core.py 86
3723 
3724     def get_incoming(self, storageindex):
3725         """Return the set of incoming shnums."""
3726-        return set(os.listdir(self.incomingdir))
3727+        try:
3728+            incominglist = os.listdir(self.incomingdir)
3729+            print "incominglist: ", incominglist
3730+            return set(incominglist)
3731+        except OSError:
3732+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3733+            pass
3734 
3735     def get_shares(self, storage_index):
3736         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3737hunk ./src/allmydata/storage/server.py 17
3738 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3739      create_mutable_sharefile
3740 
3741-# storage/
3742-# storage/shares/incoming
3743-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3744-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3745-# storage/shares/$START/$STORAGEINDEX
3746-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3747-
3748-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3749-# base-32 chars).
3750-
3751-
3752 class StorageServer(service.MultiService, Referenceable):
3753     implements(RIStorageServer, IStatsProducer)
3754     name = 'storage'
3755}
3756[testing get incoming
3757wilcoxjg@gmail.com**20110711210224
3758 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3759] {
3760hunk ./src/allmydata/storage/backends/das/core.py 87
3761     def get_incoming(self, storageindex):
3762         """Return the set of incoming shnums."""
3763         try:
3764-            incominglist = os.listdir(self.incomingdir)
3765+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3766+            incominglist = os.listdir(incomingsharesdir)
3767             print "incominglist: ", incominglist
3768             return set(incominglist)
3769         except OSError:
3770hunk ./src/allmydata/storage/backends/das/core.py 92
3771-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3772-            pass
3773-
3774+            # XXX I'd like to make this more specific. If there are no shares at all.
3775+            return set()
3776+           
3777     def get_shares(self, storage_index):
3778         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3779         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3780hunk ./src/allmydata/test/test_backends.py 149
3781         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3782 
3783         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3784+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3785         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3786 
3787hunk ./src/allmydata/test/test_backends.py 152
3788-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3789         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3790         # with the same si, until BucketWriter.remote_close() has been called.
3791         # self.failIf(bsa)
3792}
3793[ImmutableShareFile does not know its StorageIndex
3794wilcoxjg@gmail.com**20110711211424
3795 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3796] {
3797hunk ./src/allmydata/storage/backends/das/core.py 112
3798             return 0
3799         return fileutil.get_available_space(self.storedir, self.reserved_space)
3800 
3801-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3802-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3803+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3804+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3805+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3806+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3807         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3808         return bw
3809 
3810hunk ./src/allmydata/storage/backends/das/core.py 155
3811     LEASE_SIZE = struct.calcsize(">L32s32sL")
3812     sharetype = "immutable"
3813 
3814-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3815+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3816         """ If max_size is not None then I won't allow more than
3817         max_size to be written to me. If create=True then max_size
3818         must not be None. """
3819}
3820[get_incoming correctly reports the 0 share after it has arrived
3821wilcoxjg@gmail.com**20110712025157
3822 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3823] {
3824hunk ./src/allmydata/storage/backends/das/core.py 1
3825+import os, re, weakref, struct, time, stat
3826+
3827 from allmydata.interfaces import IStorageBackend
3828 from allmydata.storage.backends.base import Backend
3829 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3830hunk ./src/allmydata/storage/backends/das/core.py 8
3831 from allmydata.util.assertutil import precondition
3832 
3833-import os, re, weakref, struct, time
3834-
3835 #from foolscap.api import Referenceable
3836 from twisted.application import service
3837 
3838hunk ./src/allmydata/storage/backends/das/core.py 89
3839         try:
3840             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3841             incominglist = os.listdir(incomingsharesdir)
3842-            print "incominglist: ", incominglist
3843-            return set(incominglist)
3844+            incomingshnums = [int(x) for x in incominglist]
3845+            return set(incomingshnums)
3846         except OSError:
3847             # XXX I'd like to make this more specific. If there are no shares at all.
3848             return set()
3849hunk ./src/allmydata/storage/backends/das/core.py 113
3850         return fileutil.get_available_space(self.storedir, self.reserved_space)
3851 
3852     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3853-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3854-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3855-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3856+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3857+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3858+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3859         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3860         return bw
3861 
3862hunk ./src/allmydata/storage/backends/das/core.py 160
3863         max_size to be written to me. If create=True then max_size
3864         must not be None. """
3865         precondition((max_size is not None) or (not create), max_size, create)
3866-        self.shnum = shnum
3867-        self.storage_index = storageindex
3868-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3869         self._max_size = max_size
3870hunk ./src/allmydata/storage/backends/das/core.py 161
3871-        self.incomingdir = os.path.join(sharedir, 'incoming')
3872-        si_dir = storage_index_to_dir(storageindex)
3873-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3874-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3875-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3876+        self.incominghome = incominghome
3877+        self.finalhome = finalhome
3878         if create:
3879             # touch the file, so later callers will see that we're working on
3880             # it. Also construct the metadata.
3881hunk ./src/allmydata/storage/backends/das/core.py 166
3882-            assert not os.path.exists(self.fname)
3883-            fileutil.make_dirs(os.path.dirname(self.fname))
3884-            f = open(self.fname, 'wb')
3885+            assert not os.path.exists(self.finalhome)
3886+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3887+            f = open(self.incominghome, 'wb')
3888             # The second field -- the four-byte share data length -- is no
3889             # longer used as of Tahoe v1.3.0, but we continue to write it in
3890             # there in case someone downgrades a storage server from >=
3891hunk ./src/allmydata/storage/backends/das/core.py 183
3892             self._lease_offset = max_size + 0x0c
3893             self._num_leases = 0
3894         else:
3895-            f = open(self.fname, 'rb')
3896-            filesize = os.path.getsize(self.fname)
3897+            f = open(self.finalhome, 'rb')
3898+            filesize = os.path.getsize(self.finalhome)
3899             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3900             f.close()
3901             if version != 1:
3902hunk ./src/allmydata/storage/backends/das/core.py 189
3903                 msg = "sharefile %s had version %d but we wanted 1" % \
3904-                      (self.fname, version)
3905+                      (self.finalhome, version)
3906                 raise UnknownImmutableContainerVersionError(msg)
3907             self._num_leases = num_leases
3908             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3909hunk ./src/allmydata/storage/backends/das/core.py 225
3910         pass
3911         
3912     def stat(self):
3913-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3914+        return os.stat(self.finalhome)[stat.ST_SIZE]
3915+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3916 
3917     def get_shnum(self):
3918         return self.shnum
3919hunk ./src/allmydata/storage/backends/das/core.py 232
3920 
3921     def unlink(self):
3922-        os.unlink(self.fname)
3923+        os.unlink(self.finalhome)
3924 
3925     def read_share_data(self, offset, length):
3926         precondition(offset >= 0)
3927hunk ./src/allmydata/storage/backends/das/core.py 239
3928         # Reads beyond the end of the data are truncated. Reads that start
3929         # beyond the end of the data return an empty string.
3930         seekpos = self._data_offset+offset
3931-        fsize = os.path.getsize(self.fname)
3932+        fsize = os.path.getsize(self.finalhome)
3933         actuallength = max(0, min(length, fsize-seekpos))
3934         if actuallength == 0:
3935             return ""
3936hunk ./src/allmydata/storage/backends/das/core.py 243
3937-        f = open(self.fname, 'rb')
3938+        f = open(self.finalhome, 'rb')
3939         f.seek(seekpos)
3940         return f.read(actuallength)
3941 
3942hunk ./src/allmydata/storage/backends/das/core.py 252
3943         precondition(offset >= 0, offset)
3944         if self._max_size is not None and offset+length > self._max_size:
3945             raise DataTooLargeError(self._max_size, offset, length)
3946-        f = open(self.fname, 'rb+')
3947+        f = open(self.incominghome, 'rb+')
3948         real_offset = self._data_offset+offset
3949         f.seek(real_offset)
3950         assert f.tell() == real_offset
3951hunk ./src/allmydata/storage/backends/das/core.py 279
3952 
3953     def get_leases(self):
3954         """Yields a LeaseInfo instance for all leases."""
3955-        f = open(self.fname, 'rb')
3956+        f = open(self.finalhome, 'rb')
3957         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3958         f.seek(self._lease_offset)
3959         for i in range(num_leases):
3960hunk ./src/allmydata/storage/backends/das/core.py 288
3961                 yield LeaseInfo().from_immutable_data(data)
3962 
3963     def add_lease(self, lease_info):
3964-        f = open(self.fname, 'rb+')
3965+        f = open(self.incominghome, 'rb+')
3966         num_leases = self._read_num_leases(f)
3967         self._write_lease_record(f, num_leases, lease_info)
3968         self._write_num_leases(f, num_leases+1)
3969hunk ./src/allmydata/storage/backends/das/core.py 301
3970                 if new_expire_time > lease.expiration_time:
3971                     # yes
3972                     lease.expiration_time = new_expire_time
3973-                    f = open(self.fname, 'rb+')
3974+                    f = open(self.finalhome, 'rb+')
3975                     self._write_lease_record(f, i, lease)
3976                     f.close()
3977                 return
3978hunk ./src/allmydata/storage/backends/das/core.py 336
3979             # the same order as they were added, so that if we crash while
3980             # doing this, we won't lose any non-cancelled leases.
3981             leases = [l for l in leases if l] # remove the cancelled leases
3982-            f = open(self.fname, 'rb+')
3983+            f = open(self.finalhome, 'rb+')
3984             for i,lease in enumerate(leases):
3985                 self._write_lease_record(f, i, lease)
3986             self._write_num_leases(f, len(leases))
3987hunk ./src/allmydata/storage/backends/das/core.py 344
3988             f.close()
3989         space_freed = self.LEASE_SIZE * num_leases_removed
3990         if not len(leases):
3991-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
3992+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
3993             self.unlink()
3994         return space_freed
3995hunk ./src/allmydata/test/test_backends.py 129
3996     @mock.patch('time.time')
3997     def test_write_share(self, mocktime):
3998         """ Write a new share. """
3999-
4000-        class MockShare:
4001-            def __init__(self):
4002-                self.shnum = 1
4003-               
4004-            def add_or_renew_lease(elf, lease_info):
4005-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4006-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4007-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4008-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4009-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4010-
4011-        share = MockShare()
4012-
4013         # Now begin the test.
4014 
4015         # XXX (0) ???  Fail unless something is not properly set-up?
4016hunk ./src/allmydata/test/test_backends.py 143
4017         # self.failIf(bsa)
4018 
4019         bs[0].remote_write(0, 'a')
4020-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4021+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4022         spaceint = self.s.allocated_size()
4023         self.failUnlessReallyEqual(spaceint, 1)
4024 
4025hunk ./src/allmydata/test/test_backends.py 161
4026         #self.failIf(mockrename.called, mockrename.call_args_list)
4027         #self.failIf(mockstat.called, mockstat.call_args_list)
4028 
4029+    def test_handle_incoming(self):
4030+        incomingset = self.s.backend.get_incoming('teststorage_index')
4031+        self.failUnlessReallyEqual(incomingset, set())
4032+
4033+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4034+       
4035+        incomingset = self.s.backend.get_incoming('teststorage_index')
4036+        self.failUnlessReallyEqual(incomingset, set((0,)))
4037+
4038+        bs[0].remote_close()
4039+        self.failUnlessReallyEqual(incomingset, set())
4040+
4041     @mock.patch('os.path.exists')
4042     @mock.patch('os.path.getsize')
4043     @mock.patch('__builtin__.open')
4044hunk ./src/allmydata/test/test_backends.py 223
4045         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4046 
4047 
4048-
4049 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4050     @mock.patch('time.time')
4051     @mock.patch('os.mkdir')
4052hunk ./src/allmydata/test/test_backends.py 271
4053         DASCore('teststoredir', expiration_policy)
4054 
4055         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4056+
4057}
4058[jacp14
4059wilcoxjg@gmail.com**20110712061211
4060 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4061] {
4062hunk ./src/allmydata/storage/backends/das/core.py 95
4063             # XXX I'd like to make this more specific. If there are no shares at all.
4064             return set()
4065             
4066-    def get_shares(self, storage_index):
4067+    def get_shares(self, storageindex):
4068         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
4069hunk ./src/allmydata/storage/backends/das/core.py 97
4070-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4071+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4072         try:
4073             for f in os.listdir(finalstoragedir):
4074                 if NUM_RE.match(f):
4075hunk ./src/allmydata/storage/backends/das/core.py 102
4076                     filename = os.path.join(finalstoragedir, f)
4077-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4078+                    yield ImmutableShare(filename, storageindex, f)
4079         except OSError:
4080             # Commonly caused by there being no shares at all.
4081             pass
4082hunk ./src/allmydata/storage/backends/das/core.py 115
4083     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4084         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4085         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4086-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4087+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4088         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4089         return bw
4090 
4091hunk ./src/allmydata/storage/backends/das/core.py 155
4092     LEASE_SIZE = struct.calcsize(">L32s32sL")
4093     sharetype = "immutable"
4094 
4095-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4096+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4097         """ If max_size is not None then I won't allow more than
4098         max_size to be written to me. If create=True then max_size
4099         must not be None. """
4100hunk ./src/allmydata/storage/backends/das/core.py 160
4101         precondition((max_size is not None) or (not create), max_size, create)
4102+        self.storageindex = storageindex
4103         self._max_size = max_size
4104         self.incominghome = incominghome
4105         self.finalhome = finalhome
4106hunk ./src/allmydata/storage/backends/das/core.py 164
4107+        self.shnum = shnum
4108         if create:
4109             # touch the file, so later callers will see that we're working on
4110             # it. Also construct the metadata.
4111hunk ./src/allmydata/storage/backends/das/core.py 212
4112             # their children to know when they should do the rmdir. This
4113             # approach is simpler, but relies on os.rmdir refusing to delete
4114             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4115+            #print "os.path.dirname(self.incominghome): "
4116+            #print os.path.dirname(self.incominghome)
4117             os.rmdir(os.path.dirname(self.incominghome))
4118             # we also delete the grandparent (prefix) directory, .../ab ,
4119             # again to avoid leaving directories lying around. This might
4120hunk ./src/allmydata/storage/immutable.py 93
4121     def __init__(self, ss, share):
4122         self.ss = ss
4123         self._share_file = share
4124-        self.storage_index = share.storage_index
4125+        self.storageindex = share.storageindex
4126         self.shnum = share.shnum
4127 
4128     def __repr__(self):
4129hunk ./src/allmydata/storage/immutable.py 98
4130         return "<%s %s %s>" % (self.__class__.__name__,
4131-                               base32.b2a_l(self.storage_index[:8], 60),
4132+                               base32.b2a_l(self.storageindex[:8], 60),
4133                                self.shnum)
4134 
4135     def remote_read(self, offset, length):
4136hunk ./src/allmydata/storage/immutable.py 110
4137 
4138     def remote_advise_corrupt_share(self, reason):
4139         return self.ss.remote_advise_corrupt_share("immutable",
4140-                                                   self.storage_index,
4141+                                                   self.storageindex,
4142                                                    self.shnum,
4143                                                    reason)
4144hunk ./src/allmydata/test/test_backends.py 20
4145 # The following share file contents was generated with
4146 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4147 # with share data == 'a'.
4148-renew_secret  = 'x'*32
4149-cancel_secret = 'y'*32
4150-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4151-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4152+shareversionnumber = '\x00\x00\x00\x01'
4153+sharedatalength = '\x00\x00\x00\x01'
4154+numberofleases = '\x00\x00\x00\x01'
4155+shareinputdata = 'a'
4156+ownernumber = '\x00\x00\x00\x00'
4157+renewsecret  = 'x'*32
4158+cancelsecret = 'y'*32
4159+expirationtime = '\x00(\xde\x80'
4160+nextlease = ''
4161+containerdata = shareversionnumber + sharedatalength + numberofleases
4162+client_data = shareinputdata + ownernumber + renewsecret + \
4163+    cancelsecret + expirationtime + nextlease
4164+share_data = containerdata + client_data
4165+
4166 
4167 testnodeid = 'testnodeidxxxxxxxxxx'
4168 tempdir = 'teststoredir'
4169hunk ./src/allmydata/test/test_backends.py 52
4170 
4171 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4172     def setUp(self):
4173-        self.s = StorageServer(testnodeid, backend=NullCore())
4174+        self.ss = StorageServer(testnodeid, backend=NullCore())
4175 
4176     @mock.patch('os.mkdir')
4177     @mock.patch('__builtin__.open')
4178hunk ./src/allmydata/test/test_backends.py 62
4179         """ Write a new share. """
4180 
4181         # Now begin the test.
4182-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4183+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4184         bs[0].remote_write(0, 'a')
4185         self.failIf(mockisdir.called)
4186         self.failIf(mocklistdir.called)
4187hunk ./src/allmydata/test/test_backends.py 133
4188                 _assert(False, "The tester code doesn't recognize this case.") 
4189 
4190         mockopen.side_effect = call_open
4191-        testbackend = DASCore(tempdir, expiration_policy)
4192-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4193+        self.backend = DASCore(tempdir, expiration_policy)
4194+        self.ss = StorageServer(testnodeid, self.backend)
4195+        self.ssinf = StorageServer(testnodeid, self.backend)
4196 
4197     @mock.patch('time.time')
4198     def test_write_share(self, mocktime):
4199hunk ./src/allmydata/test/test_backends.py 142
4200         """ Write a new share. """
4201         # Now begin the test.
4202 
4203-        # XXX (0) ???  Fail unless something is not properly set-up?
4204-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4205+        mocktime.return_value = 0
4206+        # Inspect incoming and fail unless it's empty.
4207+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4208+        self.failUnlessReallyEqual(incomingset, set())
4209+       
4210+        # Among other things, populate incoming with the sharenum: 0.
4211+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4212 
4213hunk ./src/allmydata/test/test_backends.py 150
4214-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4215-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4216-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4217+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4218+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4219+       
4220+        # Attempt to create a second share writer with the same share.
4221+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4222 
4223hunk ./src/allmydata/test/test_backends.py 156
4224-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4225+        # Show that no sharewriter results from a remote_allocate_buckets
4226         # with the same si, until BucketWriter.remote_close() has been called.
4227hunk ./src/allmydata/test/test_backends.py 158
4228-        # self.failIf(bsa)
4229+        self.failIf(bsa)
4230 
4231hunk ./src/allmydata/test/test_backends.py 160
4232+        # Write 'a' to shnum 0. Only tested together with close and read.
4233         bs[0].remote_write(0, 'a')
4234hunk ./src/allmydata/test/test_backends.py 162
4235-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4236-        spaceint = self.s.allocated_size()
4237+
4238+        # Test allocated size.
4239+        spaceint = self.ss.allocated_size()
4240         self.failUnlessReallyEqual(spaceint, 1)
4241 
4242         # XXX (3) Inspect final and fail unless there's nothing there.
4243hunk ./src/allmydata/test/test_backends.py 168
4244+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4245         bs[0].remote_close()
4246         # XXX (4a) Inspect final and fail unless share 0 is there.
4247hunk ./src/allmydata/test/test_backends.py 171
4248+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4249+        #contents = sharesinfinal[0].read_share_data(0,999)
4250+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4251         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4252 
4253         # What happens when there's not enough space for the client's request?
4254hunk ./src/allmydata/test/test_backends.py 177
4255-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4256+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4257 
4258         # Now test the allocated_size method.
4259         # self.failIf(mockexists.called, mockexists.call_args_list)
4260hunk ./src/allmydata/test/test_backends.py 185
4261         #self.failIf(mockrename.called, mockrename.call_args_list)
4262         #self.failIf(mockstat.called, mockstat.call_args_list)
4263 
4264-    def test_handle_incoming(self):
4265-        incomingset = self.s.backend.get_incoming('teststorage_index')
4266-        self.failUnlessReallyEqual(incomingset, set())
4267-
4268-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4269-       
4270-        incomingset = self.s.backend.get_incoming('teststorage_index')
4271-        self.failUnlessReallyEqual(incomingset, set((0,)))
4272-
4273-        bs[0].remote_close()
4274-        self.failUnlessReallyEqual(incomingset, set())
4275-
4276     @mock.patch('os.path.exists')
4277     @mock.patch('os.path.getsize')
4278     @mock.patch('__builtin__.open')
4279hunk ./src/allmydata/test/test_backends.py 208
4280             self.failUnless('r' in mode, mode)
4281             self.failUnless('b' in mode, mode)
4282 
4283-            return StringIO(share_file_data)
4284+            return StringIO(share_data)
4285         mockopen.side_effect = call_open
4286 
4287hunk ./src/allmydata/test/test_backends.py 211
4288-        datalen = len(share_file_data)
4289+        datalen = len(share_data)
4290         def call_getsize(fname):
4291             self.failUnlessReallyEqual(fname, sharefname)
4292             return datalen
4293hunk ./src/allmydata/test/test_backends.py 223
4294         mockexists.side_effect = call_exists
4295 
4296         # Now begin the test.
4297-        bs = self.s.remote_get_buckets('teststorage_index')
4298+        bs = self.ss.remote_get_buckets('teststorage_index')
4299 
4300         self.failUnlessEqual(len(bs), 1)
4301hunk ./src/allmydata/test/test_backends.py 226
4302-        b = bs[0]
4303+        b = bs['0']
4304         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4305hunk ./src/allmydata/test/test_backends.py 228
4306-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4307+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4308         # If you try to read past the end you get the as much data as is there.
4309hunk ./src/allmydata/test/test_backends.py 230
4310-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4311+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4312         # If you start reading past the end of the file you get the empty string.
4313         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4314 
4315}
4316[jacp14 or so
4317wilcoxjg@gmail.com**20110713060346
4318 Ignore-this: 7026810f60879d65b525d450e43ff87a
4319] {
4320hunk ./src/allmydata/storage/backends/das/core.py 102
4321             for f in os.listdir(finalstoragedir):
4322                 if NUM_RE.match(f):
4323                     filename = os.path.join(finalstoragedir, f)
4324-                    yield ImmutableShare(filename, storageindex, f)
4325+                    yield ImmutableShare(filename, storageindex, int(f))
4326         except OSError:
4327             # Commonly caused by there being no shares at all.
4328             pass
4329hunk ./src/allmydata/storage/backends/null/core.py 25
4330     def set_storage_server(self, ss):
4331         self.ss = ss
4332 
4333+    def get_incoming(self, storageindex):
4334+        return set()
4335+
4336 class ImmutableShare:
4337     sharetype = "immutable"
4338 
4339hunk ./src/allmydata/storage/immutable.py 19
4340 
4341     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4342         self.ss = ss
4343-        self._max_size = max_size # don't allow the client to write more than this
4344+        self._max_size = max_size # don't allow the client to write more than this
4345+
4346         self._canary = canary
4347         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4348         self.closed = False
4349hunk ./src/allmydata/test/test_backends.py 135
4350         mockopen.side_effect = call_open
4351         self.backend = DASCore(tempdir, expiration_policy)
4352         self.ss = StorageServer(testnodeid, self.backend)
4353-        self.ssinf = StorageServer(testnodeid, self.backend)
4354+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4355+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4356 
4357     @mock.patch('time.time')
4358     def test_write_share(self, mocktime):
4359hunk ./src/allmydata/test/test_backends.py 161
4360         # with the same si, until BucketWriter.remote_close() has been called.
4361         self.failIf(bsa)
4362 
4363-        # Write 'a' to shnum 0. Only tested together with close and read.
4364-        bs[0].remote_write(0, 'a')
4365-
4366         # Test allocated size.
4367         spaceint = self.ss.allocated_size()
4368         self.failUnlessReallyEqual(spaceint, 1)
4369hunk ./src/allmydata/test/test_backends.py 165
4370 
4371-        # XXX (3) Inspect final and fail unless there's nothing there.
4372+        # Write 'a' to shnum 0. Only tested together with close and read.
4373+        bs[0].remote_write(0, 'a')
4374+       
4375+        # Preclose: Inspect final, failUnless nothing there.
4376         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4377         bs[0].remote_close()
4378hunk ./src/allmydata/test/test_backends.py 171
4379-        # XXX (4a) Inspect final and fail unless share 0 is there.
4380-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4381-        #contents = sharesinfinal[0].read_share_data(0,999)
4382-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4383-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4384 
4385hunk ./src/allmydata/test/test_backends.py 172
4386-        # What happens when there's not enough space for the client's request?
4387-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4388+        # Postclose: (Omnibus) failUnless written data is in final.
4389+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4390+        contents = sharesinfinal[0].read_share_data(0,73)
4391+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4392 
4393hunk ./src/allmydata/test/test_backends.py 177
4394-        # Now test the allocated_size method.
4395-        # self.failIf(mockexists.called, mockexists.call_args_list)
4396-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4397-        #self.failIf(mockrename.called, mockrename.call_args_list)
4398-        #self.failIf(mockstat.called, mockstat.call_args_list)
4399+        # Cover interior of for share in get_shares loop.
4400+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4401+       
4402+    @mock.patch('time.time')
4403+    @mock.patch('allmydata.util.fileutil.get_available_space')
4404+    def test_out_of_space(self, mockget_available_space, mocktime):
4405+        mocktime.return_value = 0
4406+       
4407+        def call_get_available_space(dir, reserve):
4408+            return 0
4409+
4410+        mockget_available_space.side_effect = call_get_available_space
4411+       
4412+       
4413+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4414 
4415     @mock.patch('os.path.exists')
4416     @mock.patch('os.path.getsize')
4417hunk ./src/allmydata/test/test_backends.py 234
4418         bs = self.ss.remote_get_buckets('teststorage_index')
4419 
4420         self.failUnlessEqual(len(bs), 1)
4421-        b = bs['0']
4422+        b = bs[0]
4423         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4424         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4425         # If you try to read past the end you get the as much data as is there.
4426}
4427[temporary work-in-progress patch to be unrecorded
4428zooko@zooko.com**20110714003008
4429 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4430 tidy up a few tests, work done in pair-programming with Zancas
4431] {
4432hunk ./src/allmydata/storage/backends/das/core.py 65
4433         self._clean_incomplete()
4434 
4435     def _clean_incomplete(self):
4436-        fileutil.rm_dir(self.incomingdir)
4437+        fileutil.rmtree(self.incomingdir)
4438         fileutil.make_dirs(self.incomingdir)
4439 
4440     def _setup_corruption_advisory(self):
4441hunk ./src/allmydata/storage/immutable.py 1
4442-import os, stat, struct, time
4443+import os, time
4444 
4445 from foolscap.api import Referenceable
4446 
4447hunk ./src/allmydata/storage/server.py 1
4448-import os, re, weakref, struct, time
4449+import os, weakref, struct, time
4450 
4451 from foolscap.api import Referenceable
4452 from twisted.application import service
4453hunk ./src/allmydata/storage/server.py 7
4454 
4455 from zope.interface import implements
4456-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4457+from allmydata.interfaces import RIStorageServer, IStatsProducer
4458 from allmydata.util import fileutil, idlib, log, time_format
4459 import allmydata # for __full_version__
4460 
4461hunk ./src/allmydata/storage/server.py 313
4462         self.add_latency("get", time.time() - start)
4463         return bucketreaders
4464 
4465-    def remote_get_incoming(self, storageindex):
4466-        incoming_share_set = self.backend.get_incoming(storageindex)
4467-        return incoming_share_set
4468-
4469     def get_leases(self, storageindex):
4470         """Provide an iterator that yields all of the leases attached to this
4471         bucket. Each lease is returned as a LeaseInfo instance.
4472hunk ./src/allmydata/test/test_backends.py 3
4473 from twisted.trial import unittest
4474 
4475+from twisted.python.filepath import FilePath
4476+
4477 from StringIO import StringIO
4478 
4479 from allmydata.test.common_util import ReallyEqualMixin
4480hunk ./src/allmydata/test/test_backends.py 38
4481 
4482 
4483 testnodeid = 'testnodeidxxxxxxxxxx'
4484-tempdir = 'teststoredir'
4485-basedir = os.path.join(tempdir, 'shares')
4486+storedir = 'teststoredir'
4487+storedirfp = FilePath(storedir)
4488+basedir = os.path.join(storedir, 'shares')
4489 baseincdir = os.path.join(basedir, 'incoming')
4490 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4491 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4492hunk ./src/allmydata/test/test_backends.py 53
4493                      'cutoff_date' : None,
4494                      'sharetypes' : None}
4495 
4496-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4497+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4498+    """ NullBackend is just for testing and executable documentation, so
4499+    this test is actually a test of StorageServer in which we're using
4500+    NullBackend as helper code for the test, rather than a test of
4501+    NullBackend. """
4502     def setUp(self):
4503         self.ss = StorageServer(testnodeid, backend=NullCore())
4504 
4505hunk ./src/allmydata/test/test_backends.py 62
4506     @mock.patch('os.mkdir')
4508     @mock.patch('__builtin__.open')
4509     @mock.patch('os.listdir')
4510     @mock.patch('os.path.isdir')
4511hunk ./src/allmydata/test/test_backends.py 69
4512     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4513         """ Write a new share. """
4514 
4515-        # Now begin the test.
4516         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4517         bs[0].remote_write(0, 'a')
4518         self.failIf(mockisdir.called)
4519hunk ./src/allmydata/test/test_backends.py 83
4520     @mock.patch('os.listdir')
4521     @mock.patch('os.path.isdir')
4522     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4523-        """ This tests whether a server instance can be constructed
4524-        with a filesystem backend. To pass the test, it has to use the
4525-        filesystem in only the prescribed ways. """
4526+        """ This tests whether a server instance can be constructed with a
4527+        filesystem backend. To pass the test, it mustn't use the filesystem
4528+        outside of its configured storedir. """
4529 
4530         def call_open(fname, mode):
4531hunk ./src/allmydata/test/test_backends.py 88
4532-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4533-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4534-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4535-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4536-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4537+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4538+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4539+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4540+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4541+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4542                 return StringIO()
4543             else:
4544hunk ./src/allmydata/test/test_backends.py 95
4545-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4546+                fnamefp = FilePath(fname)
4547+                self.failUnless(storedirfp in fnamefp.parents(),
4548+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4549         mockopen.side_effect = call_open
4550 
4551         def call_isdir(fname):
4552hunk ./src/allmydata/test/test_backends.py 101
4553-            if fname == os.path.join(tempdir,'shares'):
4554+            if fname == os.path.join(storedir, 'shares'):
4555                 return True
4556hunk ./src/allmydata/test/test_backends.py 103
4557-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4558+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4559                 return True
4560             else:
4561                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4562hunk ./src/allmydata/test/test_backends.py 109
4563         mockisdir.side_effect = call_isdir
4564 
4565+        mocklistdir.return_value = []
4566+
4567         def call_mkdir(fname, mode):
4568hunk ./src/allmydata/test/test_backends.py 112
4569-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4570             self.failUnlessEqual(0777, mode)
4571hunk ./src/allmydata/test/test_backends.py 113
4572-            if fname == tempdir:
4573-                return None
4574-            elif fname == os.path.join(tempdir,'shares'):
4575-                return None
4576-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4577-                return None
4578-            else:
4579-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4580+            self.failUnlessIn(fname,
4581+                              [storedir,
4582+                               os.path.join(storedir, 'shares'),
4583+                               os.path.join(storedir, 'shares', 'incoming')],
4584+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4585         mockmkdir.side_effect = call_mkdir
4586 
4587         # Now begin the test.
4588hunk ./src/allmydata/test/test_backends.py 121
4589-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4590+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4591 
4592         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4593 
4594hunk ./src/allmydata/test/test_backends.py 126
4595 
4596-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4597+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4598+    """ This tests both the StorageServer and the FS backend together. """
4599     @mock.patch('__builtin__.open')
4600     def setUp(self, mockopen):
4601         def call_open(fname, mode):
4602hunk ./src/allmydata/test/test_backends.py 131
4603-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4604-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4605-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4606-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4607-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4608+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4609+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4610+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4611+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4612+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4613                 return StringIO()
4614             else:
4615                 _assert(False, "The tester code doesn't recognize this case.") 
4616hunk ./src/allmydata/test/test_backends.py 141
4617 
4618         mockopen.side_effect = call_open
4619-        self.backend = DASCore(tempdir, expiration_policy)
4620+        self.backend = DASCore(storedir, expiration_policy)
4621         self.ss = StorageServer(testnodeid, self.backend)
4622hunk ./src/allmydata/test/test_backends.py 143
4623-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4624+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4625         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4626 
4627     @mock.patch('time.time')
4628hunk ./src/allmydata/test/test_backends.py 147
4629-    def test_write_share(self, mocktime):
4630-        """ Write a new share. """
4631-        # Now begin the test.
4632+    def test_write_and_read_share(self, mocktime):
4633+        """
4634+        Write a new share, read it, and test the server's (and FS backend's)
4635+        handling of simultaneous and successive attempts to write the same
4636+        share.
4637+        """
4638 
4639         mocktime.return_value = 0
4640         # Inspect incoming and fail unless it's empty.
4641hunk ./src/allmydata/test/test_backends.py 159
4642         incomingset = self.ss.backend.get_incoming('teststorage_index')
4643         self.failUnlessReallyEqual(incomingset, set())
4644         
4645-        # Among other things, populate incoming with the sharenum: 0.
4646+        # Populate incoming with the sharenum: 0.
4647         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4648 
4649         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4650hunk ./src/allmydata/test/test_backends.py 163
4651-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4652+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4653         
4654hunk ./src/allmydata/test/test_backends.py 165
4655-        # Attempt to create a second share writer with the same share.
4656+        # Attempt to create a second share writer with the same sharenum.
4657         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4658 
4659         # Show that no sharewriter results from a remote_allocate_buckets
4660hunk ./src/allmydata/test/test_backends.py 169
4661-        # with the same si, until BucketWriter.remote_close() has been called.
4662+        # with the same si and sharenum, until BucketWriter.remote_close()
4663+        # has been called.
4664         self.failIf(bsa)
4665 
4666         # Test allocated size.
4667hunk ./src/allmydata/test/test_backends.py 187
4668         # Postclose: (Omnibus) failUnless written data is in final.
4669         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4670         contents = sharesinfinal[0].read_share_data(0,73)
4671-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4672+        self.failUnlessReallyEqual(contents, client_data)
4673 
4674hunk ./src/allmydata/test/test_backends.py 189
4675-        # Cover interior of for share in get_shares loop.
4676-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4677+        # Exercise the case where the share we're asking to allocate is
4678+        # already (completely) uploaded.
4679+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4680         
4681     @mock.patch('time.time')
4682     @mock.patch('allmydata.util.fileutil.get_available_space')
4683hunk ./src/allmydata/test/test_backends.py 210
4684     @mock.patch('os.path.getsize')
4685     @mock.patch('__builtin__.open')
4686     @mock.patch('os.listdir')
4687-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4688+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4689         """ This tests whether the code correctly finds and reads
4690         shares written out by old (Tahoe-LAFS <= v1.8.2)
4691         servers. There is a similar test in test_download, but that one
4692hunk ./src/allmydata/test/test_backends.py 219
4693         StorageServer object. """
4694 
4695         def call_listdir(dirname):
4696-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4697+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4698             return ['0']
4699 
4700         mocklistdir.side_effect = call_listdir
4701hunk ./src/allmydata/test/test_backends.py 226
4702 
4703         def call_open(fname, mode):
4704             self.failUnlessReallyEqual(fname, sharefname)
4705-            self.failUnless('r' in mode, mode)
4706+            self.failUnlessEqual(mode[0], 'r', mode)
4707             self.failUnless('b' in mode, mode)
4708 
4709             return StringIO(share_data)
4710hunk ./src/allmydata/test/test_backends.py 268
4711         filesystem in only the prescribed ways. """
4712 
4713         def call_open(fname, mode):
4714-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4715-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4716-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4717-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4718-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4719+            if fname == os.path.join(storedir,'bucket_counter.state'):
4720+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4721+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4722+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4723+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4724                 return StringIO()
4725             else:
4726                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4727hunk ./src/allmydata/test/test_backends.py 279
4728         mockopen.side_effect = call_open
4729 
4730         def call_isdir(fname):
4731-            if fname == os.path.join(tempdir,'shares'):
4732+            if fname == os.path.join(storedir,'shares'):
4733                 return True
4734hunk ./src/allmydata/test/test_backends.py 281
4735-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4736+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4737                 return True
4738             else:
4739                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4740hunk ./src/allmydata/test/test_backends.py 290
4741         def call_mkdir(fname, mode):
4742             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4743             self.failUnlessEqual(0777, mode)
4744-            if fname == tempdir:
4745+            if fname == storedir:
4746                 return None
4747hunk ./src/allmydata/test/test_backends.py 292
4748-            elif fname == os.path.join(tempdir,'shares'):
4749+            elif fname == os.path.join(storedir,'shares'):
4750                 return None
4751hunk ./src/allmydata/test/test_backends.py 294
4752-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4753+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4754                 return None
4755             else:
4756                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4757hunk ./src/allmydata/util/fileutil.py 5
4758 Futz with files like a pro.
4759 """
4760 
4761-import sys, exceptions, os, stat, tempfile, time, binascii
4762+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4763 
4764 from twisted.python import log
4765 
4766hunk ./src/allmydata/util/fileutil.py 186
4767             raise tx
4768         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4769 
4770-def rm_dir(dirname):
4771+def rmtree(dirname):
4772     """
4773     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4774     already gone, do nothing and return without raising an exception.  If this
4775hunk ./src/allmydata/util/fileutil.py 205
4776             else:
4777                 remove(fullname)
4778         os.rmdir(dirname)
4779-    except Exception, le:
4780-        # Ignore "No such file or directory"
4781-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4782+    except EnvironmentError, le:
4783+        # Ignore "No such file or directory", collect any other exception.
4784+        if le.args[0] != errno.ENOENT:
4785             excs.append(le)
4786hunk ./src/allmydata/util/fileutil.py 209
4787+    except Exception, le:
4788+        excs.append(le)
4789 
4790     # Okay, now we've recursively removed everything, ignoring any "No
4791     # such file or directory" errors, and collecting any other errors.
4792hunk ./src/allmydata/util/fileutil.py 222
4793             raise OSError, "Failed to remove dir for unknown reason."
4794         raise OSError, excs
4795 
4796+def rm_dir(dirname):
4797+    # Renamed to be like shutil.rmtree and unlike rmdir.
4798+    return rmtree(dirname)
4799 
4800 def remove_if_possible(f):
4801     try:
4802}
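[editorial note, not part of the recorded patches] The fileutil hunks above rename rm_dir() to rmtree() and change the exception handling to swallow only "No such file or directory" while collecting everything else. A minimal standalone sketch of that idempotent-removal pattern (independent of the Tahoe-LAFS fileutil module; the helper name is illustrative):

```python
import errno
import os
import shutil
import tempfile

def rmtree_idempotent(dirname):
    # Recursively remove dirname. A directory that is already gone is
    # not an error: ENOENT is swallowed, anything else propagates.
    try:
        shutil.rmtree(dirname)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise

d = tempfile.mkdtemp()
rmtree_idempotent(d)   # removes the directory
rmtree_idempotent(d)   # second call is a no-op, not an error
```

Unlike the patched rmtree() above, this sketch does not collect non-ENOENT errors into a list before re-raising; it simply propagates the first one.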
4803[work in progress intended to be unrecorded and never committed to trunk
4804zooko@zooko.com**20110714212139
4805 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4806 switch from os.path.join to filepath
4807 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4808 
4809] {
4810hunk ./src/allmydata/test/test_backends.py 3
4811 from twisted.trial import unittest
4812 
4813-from twisted.path.filepath import FilePath
4814+from twisted.python.filepath import FilePath
4815 
4816 from StringIO import StringIO
4817 
4818hunk ./src/allmydata/test/test_backends.py 10
4819 from allmydata.test.common_util import ReallyEqualMixin
4820 from allmydata.util.assertutil import _assert
4821 
4822-import mock, os
4823+import mock
4824 
4825 # This is the code that we're going to be testing.
4826 from allmydata.storage.server import StorageServer
4827hunk ./src/allmydata/test/test_backends.py 25
4828 shareversionnumber = '\x00\x00\x00\x01'
4829 sharedatalength = '\x00\x00\x00\x01'
4830 numberofleases = '\x00\x00\x00\x01'
4831+
4832 shareinputdata = 'a'
4833 ownernumber = '\x00\x00\x00\x00'
4834 renewsecret  = 'x'*32
4835hunk ./src/allmydata/test/test_backends.py 39
4836 
4837 
4838 testnodeid = 'testnodeidxxxxxxxxxx'
4839-storedir = 'teststoredir'
4840-storedirfp = FilePath(storedir)
4841-basedir = os.path.join(storedir, 'shares')
4842-baseincdir = os.path.join(basedir, 'incoming')
4843-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4844-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4845-shareincomingname = os.path.join(sharedirincomingname, '0')
4846-sharefname = os.path.join(sharedirfinalname, '0')
4847+
4848+class TestFilesMixin(unittest.TestCase):
4849+    def setUp(self):
4850+        self.storedir = FilePath('teststoredir')
4851+        self.basedir = self.storedir.child('shares')
4852+        self.baseincdir = self.basedir.child('incoming')
4853+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4854+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4855+        self.shareincomingname = self.sharedirincomingname.child('0')
4856+        self.sharefname = self.sharedirfinalname.child('0')
4857+
4858+    def call_open(self, fname, mode):
4859+        fnamefp = FilePath(fname)
4860+        if fnamefp == self.storedir.child('bucket_counter.state'):
4861+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4862+        elif fnamefp == self.storedir.child('lease_checker.state'):
4863+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4864+        elif fnamefp == self.storedir.child('lease_checker.history'):
4865+            return StringIO()
4866+        else:
4867+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4868+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4869+
4870+    def call_isdir(self, fname):
4871+        fnamefp = FilePath(fname)
4872+        if fnamefp == self.storedir.child('shares'):
4873+            return True
4874+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4875+            return True
4876+        else:
4877+            self.failUnless(self.storedir in fnamefp.parents(),
4878+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4879+
4880+    def call_mkdir(self, fname, mode):
4881+        self.failUnlessEqual(0777, mode)
4882+        fnamefp = FilePath(fname)
4883+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4884+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4885+
4886+
4887+    @mock.patch('os.mkdir')
4888+    @mock.patch('__builtin__.open')
4889+    @mock.patch('os.listdir')
4890+    @mock.patch('os.path.isdir')
4891+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4892+        mocklistdir.return_value = []
4893+        mockmkdir.side_effect = self.call_mkdir
4894+        mockisdir.side_effect = self.call_isdir
4895+        mockopen.side_effect = self.call_open
4896+        mocklistdir.return_value = []
4897+       
4898+        test_func()
4899+       
4900+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4901 
4902 expiration_policy = {'enabled' : False,
4903                      'mode' : 'age',
4904hunk ./src/allmydata/test/test_backends.py 123
4905         self.failIf(mockopen.called)
4906         self.failIf(mockmkdir.called)
4907 
4908-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4909-    @mock.patch('time.time')
4910-    @mock.patch('os.mkdir')
4911-    @mock.patch('__builtin__.open')
4912-    @mock.patch('os.listdir')
4913-    @mock.patch('os.path.isdir')
4914-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4915+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
4916+    def test_create_server_fs_backend(self):
4917         """ This tests whether a server instance can be constructed with a
4918         filesystem backend. To pass the test, it mustn't use the filesystem
4919         outside of its configured storedir. """
4920hunk ./src/allmydata/test/test_backends.py 129
4921 
4922-        def call_open(fname, mode):
4923-            if fname == os.path.join(storedir, 'bucket_counter.state'):
4924-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4925-            elif fname == os.path.join(storedir, 'lease_checker.state'):
4926-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4927-            elif fname == os.path.join(storedir, 'lease_checker.history'):
4928-                return StringIO()
4929-            else:
4930-                fnamefp = FilePath(fname)
4931-                self.failUnless(storedirfp in fnamefp.parents(),
4932-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4933-        mockopen.side_effect = call_open
4934+        def _f():
4935+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4936 
4937hunk ./src/allmydata/test/test_backends.py 132
4938-        def call_isdir(fname):
4939-            if fname == os.path.join(storedir, 'shares'):
4940-                return True
4941-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4942-                return True
4943-            else:
4944-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4945-        mockisdir.side_effect = call_isdir
4946-
4947-        mocklistdir.return_value = []
4948-
4949-        def call_mkdir(fname, mode):
4950-            self.failUnlessEqual(0777, mode)
4951-            self.failUnlessIn(fname,
4952-                              [storedir,
4953-                               os.path.join(storedir, 'shares'),
4954-                               os.path.join(storedir, 'shares', 'incoming')],
4955-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4956-        mockmkdir.side_effect = call_mkdir
4957-
4958-        # Now begin the test.
4959-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4960-
4961-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4962+        self._help_test_stay_in_your_subtree(_f)
4963 
4964 
4965 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4966}
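[editorial note, not part of the recorded patches] The tester code above repeatedly asserts that each path the server touches lies inside the storage tree, via `self.storedir in fnamefp.parents()`. A rough stdlib-only analogue of that containment check (an illustrative helper, not part of the patch, which sidesteps twisted.python.filepath):

```python
import os.path

def is_inside(storedir, fname):
    # True if fname names storedir itself or something beneath it,
    # comparing normalized absolute paths -- the os.path analogue of
    # testing FilePath.parents() membership.
    storedir = os.path.abspath(storedir)
    fname = os.path.abspath(fname)
    return fname == storedir or fname.startswith(storedir + os.sep)

assert is_inside('teststoredir', os.path.join('teststoredir', 'shares', 'incoming'))
assert not is_inside('teststoredir', os.path.join('elsewhere', 'shares'))
# The os.sep suffix matters: a sibling whose name merely extends the
# store directory's name is outside the tree.
assert not is_inside('teststoredir', 'teststoredirx')
```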
4967[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
4968zooko@zooko.com**20110715191500
4969 Ignore-this: af33336789041800761e80510ea2f583
4970 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
4971] {
4972hunk ./src/allmydata/storage/backends/das/core.py 59
4973                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
4974                         umid="0wZ27w", level=log.UNUSUAL)
4975 
4976-        self.sharedir = os.path.join(self.storedir, "shares")
4977-        fileutil.make_dirs(self.sharedir)
4978-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
4979+        self.sharedir = self.storedir.child("shares")
4980+        fileutil.fp_make_dirs(self.sharedir)
4981+        self.incomingdir = self.sharedir.child('incoming')
4982         self._clean_incomplete()
4983 
4984     def _clean_incomplete(self):
4985hunk ./src/allmydata/storage/backends/das/core.py 65
4986-        fileutil.rmtree(self.incomingdir)
4987-        fileutil.make_dirs(self.incomingdir)
4988+        fileutil.fp_remove(self.incomingdir)
4989+        fileutil.fp_make_dirs(self.incomingdir)
4990 
4991     def _setup_corruption_advisory(self):
4992         # we don't actually create the corruption-advisory dir until necessary
4993hunk ./src/allmydata/storage/backends/das/core.py 70
4994-        self.corruption_advisory_dir = os.path.join(self.storedir,
4995-                                                    "corruption-advisories")
4996+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
4997 
4998     def _setup_bucket_counter(self):
4999hunk ./src/allmydata/storage/backends/das/core.py 73
5000-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5001+        statefname = self.storedir.child("bucket_counter.state")
5002         self.bucket_counter = FSBucketCountingCrawler(statefname)
5003         self.bucket_counter.setServiceParent(self)
5004 
5005hunk ./src/allmydata/storage/backends/das/core.py 78
5006     def _setup_lease_checkerf(self, expiration_policy):
5007-        statefile = os.path.join(self.storedir, "lease_checker.state")
5008-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5009+        statefile = self.storedir.child("lease_checker.state")
5010+        historyfile = self.storedir.child("lease_checker.history")
5011         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5012         self.lease_checker.setServiceParent(self)
5013 
5014hunk ./src/allmydata/storage/backends/das/core.py 83
5015-    def get_incoming(self, storageindex):
5016+    def get_incoming_shnums(self, storageindex):
5017         """Return the set of incoming shnums."""
5018         try:
5019hunk ./src/allmydata/storage/backends/das/core.py 86
5020-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5021-            incominglist = os.listdir(incomingsharesdir)
5022-            incomingshnums = [int(x) for x in incominglist]
5023-            return set(incomingshnums)
5024-        except OSError:
5025-            # XXX I'd like to make this more specific. If there are no shares at all.
5026-            return set()
5028+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5029+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5030+            return frozenset(incomingshnums)
5031+        except UnlistableError:
5032+            # There is no shares directory at all.
5033+            return frozenset()
5034             
5035     def get_shares(self, storageindex):
5036         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5037hunk ./src/allmydata/storage/backends/das/core.py 96
5038-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5039+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5040         try:
5041hunk ./src/allmydata/storage/backends/das/core.py 98
5042-            for f in os.listdir(finalstoragedir):
5043-                if NUM_RE.match(f):
5044-                    filename = os.path.join(finalstoragedir, f)
5045-                    yield ImmutableShare(filename, storageindex, int(f))
5046-        except OSError:
5047-            # Commonly caused by there being no shares at all.
5048+            for f in finalstoragedir.children():
5049+                if NUM_RE.match(f.basename()):
5050+                    yield ImmutableShare(f, storageindex, int(f.basename()))
5051+        except UnlistableError:
5052+            # There is no shares directory at all.
5053             pass
5054         
5055     def get_available_space(self):
5056hunk ./src/allmydata/storage/backends/das/core.py 149
5057 # then the value stored in this field will be the actual share data length
5058 # modulo 2**32.
5059 
5060-class ImmutableShare:
5061+class ImmutableShare(object):
5062     LEASE_SIZE = struct.calcsize(">L32s32sL")
5063     sharetype = "immutable"
5064 
5065hunk ./src/allmydata/storage/backends/das/core.py 166
5066         if create:
5067             # touch the file, so later callers will see that we're working on
5068             # it. Also construct the metadata.
5069-            assert not os.path.exists(self.finalhome)
5070-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5071+            assert not self.finalhome.exists()
5072+            fileutil.fp_make_dirs(self.incominghome.parent())
5073             f = open(self.incominghome, 'wb')
5074             # The second field -- the four-byte share data length -- is no
5075             # longer used as of Tahoe v1.3.0, but we continue to write it in
5076hunk ./src/allmydata/storage/backends/das/core.py 316
5077         except IndexError:
5078             self.add_lease(lease_info)
5079 
5080-
5081     def cancel_lease(self, cancel_secret):
5082         """Remove a lease with the given cancel_secret. If the last lease is
5083         cancelled, the file will be removed. Return the number of bytes that
5084hunk ./src/allmydata/storage/common.py 19
5085 def si_a2b(ascii_storageindex):
5086     return base32.a2b(ascii_storageindex)
5087 
5088-def storage_index_to_dir(storageindex):
5089+def storage_index_to_dir(startfp, storageindex):
5090     sia = si_b2a(storageindex)
5091-    return os.path.join(sia[:2], sia)
5092+    return startfp.child(sia[:2]).child(sia)
5092hunk ./src/allmydata/storage/server.py 210
5093 
5094         # fill incoming with all shares that are incoming use a set operation
5095         # since there's no need to operate on individual pieces
5096-        incoming = self.backend.get_incoming(storageindex)
5097+        incoming = self.backend.get_incoming_shnums(storageindex)
5098 
5099         for shnum in ((sharenums - alreadygot) - incoming):
5100             if (not limited) or (remaining_space >= max_space_per_bucket):
5101hunk ./src/allmydata/test/test_backends.py 5
5102 
5103 from twisted.python.filepath import FilePath
5104 
5105+from allmydata.util.log import msg
5106+
5107 from StringIO import StringIO
5108 
5109 from allmydata.test.common_util import ReallyEqualMixin
5110hunk ./src/allmydata/test/test_backends.py 42
5111 
5112 testnodeid = 'testnodeidxxxxxxxxxx'
5113 
5114-class TestFilesMixin(unittest.TestCase):
5115-    def setUp(self):
5116-        self.storedir = FilePath('teststoredir')
5117-        self.basedir = self.storedir.child('shares')
5118-        self.baseincdir = self.basedir.child('incoming')
5119-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5120-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5121-        self.shareincomingname = self.sharedirincomingname.child('0')
5122-        self.sharefname = self.sharedirfinalname.child('0')
5123+class MockStat:
5124+    def __init__(self):
5125+        self.st_mode = None
5126 
5127hunk ./src/allmydata/test/test_backends.py 46
5128+class MockFiles(unittest.TestCase):
5129+    """ I simulate a filesystem that the code under test can use. I flag the
5130+    code under test if it reads or writes outside of its prescribed
5131+    subtree. I simulate just the parts of the filesystem that the current
5132+    implementation of the DAS backend needs. """
5133     def call_open(self, fname, mode):
5134         fnamefp = FilePath(fname)
5135hunk ./src/allmydata/test/test_backends.py 53
5136+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5137+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5138+
5139         if fnamefp == self.storedir.child('bucket_counter.state'):
5140             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5141         elif fnamefp == self.storedir.child('lease_checker.state'):
5142hunk ./src/allmydata/test/test_backends.py 61
5143             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5144         elif fnamefp == self.storedir.child('lease_checker.history'):
5145+            # This is separated out from the else clause below just because
5146+            # we know this particular file is going to be used by the
5147+            # current implementation of the DAS backend, and we might want to
5148+            # use this information in this test in the future...
5149             return StringIO()
5150         else:
5151hunk ./src/allmydata/test/test_backends.py 67
5152-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5153-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5154+            # Anything else you open inside your subtree appears to be an
5155+            # empty file.
5156+            return StringIO()
5157 
5158     def call_isdir(self, fname):
5159         fnamefp = FilePath(fname)
5160hunk ./src/allmydata/test/test_backends.py 73
5161-        if fnamefp == self.storedir.child('shares'):
5162+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5163+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5166+
5167+        # The first two cases are separate from the else clause below just
5168+        # because we know that the current implementation of the DAS backend
5169+        # inspects these two directories and we might want to make use of
5170+        # that information in the tests in the future...
5171+        if fnamefp == self.storedir.child('shares'):
5172             return True
5173hunk ./src/allmydata/test/test_backends.py 84
5174-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5175+        elif fnamefp == self.storedir.child('shares').child('incoming'):
5176             return True
5177         else:
5178hunk ./src/allmydata/test/test_backends.py 87
5179-            self.failUnless(self.storedir in fnamefp.parents(),
5180-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5181+            # Anything else you ask about inside your subtree appears to be
5182+            # a directory.
5183+            return True
5184 
5185     def call_mkdir(self, fname, mode):
5186hunk ./src/allmydata/test/test_backends.py 92
5187-        self.failUnlessEqual(0777, mode)
5188         fnamefp = FilePath(fname)
5189         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5190                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5191hunk ./src/allmydata/test/test_backends.py 95
5192+        self.failUnlessEqual(0777, mode)
5193 
5194hunk ./src/allmydata/test/test_backends.py 97
5195+    def call_listdir(self, fname):
5196+        fnamefp = FilePath(fname)
5197+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5198+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5199 
5200hunk ./src/allmydata/test/test_backends.py 102
5201-    @mock.patch('os.mkdir')
5202-    @mock.patch('__builtin__.open')
5203-    @mock.patch('os.listdir')
5204-    @mock.patch('os.path.isdir')
5205-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5206-        mocklistdir.return_value = []
5207+    def call_stat(self, fname):
5208+        fnamefp = FilePath(fname)
5209+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5210+                        "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5211+
5212+        msg("%s.call_stat(%s)" % (self, fname,))
+        mstat = MockStat()
+        mstat.st_mode = 16893 # a directory
+        return mstat
+
+    def setUp(self):
+        msg( "%s.setUp()" % (self,))
+        self.storedir = FilePath('teststoredir')
+        self.basedir = self.storedir.child('shares')
+        self.baseincdir = self.basedir.child('incoming')
+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.shareincomingname = self.sharedirincomingname.child('0')
+        self.sharefname = self.sharedirfinalname.child('0')
+
+        self.mocklistdirp = mock.patch('os.listdir')
+        mocklistdir = self.mocklistdirp.__enter__()
+        mocklistdir.side_effect = self.call_listdir
+
+        self.mockmkdirp = mock.patch('os.mkdir')
+        mockmkdir = self.mockmkdirp.__enter__()
         mockmkdir.side_effect = self.call_mkdir
hunk ./src/allmydata/test/test_backends.py 129
+
+        self.mockisdirp = mock.patch('os.path.isdir')
+        mockisdir = self.mockisdirp.__enter__()
         mockisdir.side_effect = self.call_isdir
hunk ./src/allmydata/test/test_backends.py 133
+
+        self.mockopenp = mock.patch('__builtin__.open')
+        mockopen = self.mockopenp.__enter__()
         mockopen.side_effect = self.call_open
hunk ./src/allmydata/test/test_backends.py 137
-        mocklistdir.return_value = []
-
-        test_func()
-
-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
+
+        self.mockstatp = mock.patch('os.stat')
+        mockstat = self.mockstatp.__enter__()
+        mockstat.side_effect = self.call_stat
+
+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
+        mockfpstat = self.mockfpstatp.__enter__()
+        mockfpstat.side_effect = self.call_stat
+
+    def tearDown(self):
+        msg( "%s.tearDown()" % (self,))
+        self.mockfpstatp.__exit__()
+        self.mockstatp.__exit__()
+        self.mockopenp.__exit__()
+        self.mockisdirp.__exit__()
+        self.mockmkdirp.__exit__()
+        self.mocklistdirp.__exit__()
 
 expiration_policy = {'enabled' : False,
                      'mode' : 'age',
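The setUp()/tearDown() pair above drives each mock.patch object by hand through __enter__()/__exit__(). The same bookkeeping can be sketched with the patcher's start()/stop() API, which returns the mock directly and makes the teardown loop uniform. This is a hypothetical condensed analogue, not code from the patch; only os.listdir is patched for brevity:

```python
import os
try:
    from unittest import mock   # Python 3 stdlib location
except ImportError:
    import mock                 # the standalone 'mock' package used here in 2011

class MockFilesSketch(object):
    # Hypothetical analogue of MockFiles.setUp()/tearDown():
    # patcher.start() returns the mock (like __enter__()), and stop()
    # undoes the patch (like __exit__()), here applied in reverse order.
    def setUp(self):
        self._patchers = []
        self.mocklistdir = self._start_patch('os.listdir')
        self.mocklistdir.return_value = []

    def _start_patch(self, target):
        p = mock.patch(target)
        self._patchers.append(p)
        return p.start()

    def tearDown(self):
        for p in reversed(self._patchers):
            p.stop()

fixture = MockFilesSketch()
fixture.setUp()
listing = os.listdir('/no/such/dir')   # patched: returns [] instead of raising
fixture.tearDown()                     # the real os.listdir is restored
```

Recording the patcher objects in a list, as the real setUp() does with the self.mock*p attributes, is what allows tearDown() to restore the globals no matter how many patches are active.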
hunk ./src/allmydata/test/test_backends.py 184
         self.failIf(mockopen.called)
         self.failIf(mockmkdir.called)
 
-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
+class TestServerConstruction(MockFiles, ReallyEqualMixin):
     def test_create_server_fs_backend(self):
         """ This tests whether a server instance can be constructed with a
         filesystem backend. To pass the test, it mustn't use the filesystem
hunk ./src/allmydata/test/test_backends.py 190
         outside of its configured storedir. """
 
-        def _f():
-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
 
hunk ./src/allmydata/test/test_backends.py 192
-        self._help_test_stay_in_your_subtree(_f)
-
-
-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
-    """ This tests both the StorageServer xyz """
-    @mock.patch('__builtin__.open')
-    def setUp(self, mockopen):
-        def call_open(fname, mode):
-            if fname == os.path.join(storedir, 'bucket_counter.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.history'):
-                return StringIO()
-            else:
-                _assert(False, "The tester code doesn't recognize this case.")
-
-        mockopen.side_effect = call_open
-        self.backend = DASCore(storedir, expiration_policy)
-        self.ss = StorageServer(testnodeid, self.backend)
-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
+    """ This tests both the StorageServer and the DAS backend together. """
+    def setUp(self):
+        MockFiles.setUp(self)
+        try:
+            self.backend = DASCore(self.storedir, expiration_policy)
+            self.ss = StorageServer(testnodeid, self.backend)
+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
+        except:
+            MockFiles.tearDown(self)
+            raise
 
     @mock.patch('time.time')
     def test_write_and_read_share(self, mocktime):
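The try/except wrapped around the subclass setUp() body guards against a half-initialized fixture: if constructing the backend or server raises, the base class's global patches are torn down before the exception propagates, so they cannot leak into later tests. A minimal sketch of that pattern, with invented stand-in names (a RuntimeError simulates the failing constructor):

```python
class BaseFixture(object):
    # Stand-in for MockFiles: setUp() installs global patches,
    # tearDown() removes them.
    def setUp(self):
        self.active_patches = ['os.listdir', 'os.mkdir']   # illustrative only

    def tearDown(self):
        self.active_patches = []

class DerivedFixture(BaseFixture):
    def setUp(self):
        BaseFixture.setUp(self)
        try:
            # Stand-in for the DASCore(...)/StorageServer(...) calls failing:
            raise RuntimeError("backend construction failed")
        except:
            # Undo the base fixture before re-raising, so a failed setUp
            # does not leave the patches active for later tests.
            BaseFixture.tearDown(self)
            raise

fixture = DerivedFixture()
try:
    fixture.setUp()
except RuntimeError:
    pass   # the error still propagates, but the patches were removed first
```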
hunk ./src/allmydata/util/fileutil.py 8
 import errno, sys, exceptions, os, stat, tempfile, time, binascii
 
 from twisted.python import log
+from twisted.python.filepath import UnlistableError
 
 from pycryptopp.cipher.aes import AES
 
hunk ./src/allmydata/util/fileutil.py 187
             raise tx
         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
 
+def fp_make_dirs(dirfp):
+    """
+    An idempotent version of FilePath.makedirs().  If the dir already
+    exists, do nothing and return without raising an exception.  If this
+    call creates the dir, return without raising an exception.  If there is
+    an error that prevents creation or if the directory gets deleted after
+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
+    exists, raise an exception.
+    """
+    log.msg( "xxx 0 %s" % (dirfp,))
+    tx = None
+    try:
+        dirfp.makedirs()
+    except OSError, x:
+        tx = x
+
+    if not dirfp.isdir():
+        if tx:
+            raise tx
+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
+
 def rmtree(dirname):
     """
     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
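fp_make_dirs() follows the same check-after-attempt recipe as the existing make_dirs(): attempt creation, save any OSError, then verify the directory exists and re-raise the saved error only if it does not. That ordering is what makes it both idempotent and thread-safe against a concurrent creator. The logic can be sketched with plain os calls (the helper name is invented, and modern except-as syntax is used):

```python
import errno, os, tempfile

def make_dirs_idempotent(path):
    # Attempt creation, save any OSError, then re-raise it only if the
    # directory still does not exist afterwards.
    saved = None
    try:
        os.makedirs(path)
    except OSError as e:
        saved = e
    if not os.path.isdir(path):
        if saved:
            raise saved
        raise IOError("unknown error prevented creation of directory: %s" % path)

base = tempfile.mkdtemp()
target = os.path.join(base, 'shares', 'incoming')
make_dirs_idempotent(target)
make_dirs_idempotent(target)   # second call: OSError(EEXIST) is swallowed
```

Because success is judged by the final isdir() check rather than by whether makedirs() raised, a race where another process creates (or briefly deletes) the directory is still resolved correctly.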
hunk ./src/allmydata/util/fileutil.py 244
             raise OSError, "Failed to remove dir for unknown reason."
         raise OSError, excs
 
+def fp_remove(dirfp):
+    try:
+        dirfp.remove()
+    except UnlistableError, e:
+        if e.originalException.errno != errno.ENOENT:
+            raise
+
 def rm_dir(dirname):
     # Renamed to be like shutil.rmtree and unlike rmdir.
     return rmtree(dirname)
}
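fp_remove() above swallows only the "already gone" case: if the errno wrapped inside the UnlistableError is ENOENT, the removal is treated as already done, and any other error propagates. The equivalent contract expressed with stdlib shutil instead of Twisted's FilePath (the helper name is invented):

```python
import errno, os, shutil, tempfile

def remove_if_present(path):
    # Removing a path that is already gone counts as success;
    # any error other than ENOENT propagates to the caller.
    try:
        shutil.rmtree(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise

doomed = tempfile.mkdtemp()
remove_if_present(doomed)   # removes the directory
remove_if_present(doomed)   # already gone: ENOENT is swallowed, no error
```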

Context:

[docs: add missing link in NEWS.rst
zooko@zooko.com**20110712153307
 Ignore-this: be7b7eb81c03700b739daa1027d72b35
]
[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
zooko@zooko.com**20110712153229
 Ignore-this: 723c4f9e2211027c79d711715d972c5
 Also remove a couple of vestigial references to figleaf, which is long gone.
 fixes #1409 (remove contrib/fuse)
]
[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
 
 provide status overlap info on the webapi t=json output, add decode/decrypt
 rate tooltips, add zoomin/zoomout buttons
]
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
]
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
 
 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
]
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
]
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
]
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
 Ignore-this: abb864427a1b91bd10d5132b4589fd90
]
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
 Ignore-this: c63e23146c39195de52fb17c7c49b2da
]
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
]
[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
]
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
]
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
]
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
]
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
]
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
]
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
]
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
]
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
]
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
]
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
]
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
]
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
]
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
]
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
]
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
]
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
]
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
]
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
]
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
]
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
]
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
]
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
]
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
]
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
]
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
]
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
]
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might be spreading out the total number of shares requested at
 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
 segment, so the effect doesn't last very long).
]
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
]
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
]
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
]
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
]
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
]
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
]
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
]
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
]
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
]
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
]
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
]
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
]
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
]
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
]
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
]
[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
]
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
]
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
]
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
]
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
]
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
]
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
]
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101]
Patch bundle hash:
21b243a396fe6e02d0ae94d540c9784b441e81bb