Ticket #999: jacp17Zancas20110723.darcs.patch

File jacp17Zancas20110723.darcs.patch, 302.6 KB (added by arch_o_median, at 2011-07-22T20:32:40Z)
Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests, work done in pair-programming with Zancas

Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
  * work in progress intended to be unrecorded and never committed to trunk
  switch from os.path.join to filepath
  incomplete refactoring of common "stay in your subtree" tester code into a superclass
 

Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
  In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter has still a lot of work to go.

Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
  * another temporary patch for sharing work-in-progress
  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
 

Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
  * jacp16 or so

Fri Jul 22 14:32:44 MDT 2011  wilcoxjg@gmail.com
  * jacp17

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """Handle a report of corruption."""
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get the as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subcless of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
503hunk ./src/allmydata/storage/server.py 143
504     name = 'storage'
505     LeaseCheckerClass = LeaseCheckingCrawler
506 
507-    def __init__(self, storedir, nodeid, reserved_space=0,
508-                 discard_storage=False, readonly_storage=False,
509+    def __init__(self, nodeid, backend, reserved_space=0,
510+                 readonly_storage=False,
511                  stats_provider=None,
512                  expiration_enabled=False,
513                  expiration_mode="age",
514hunk ./src/allmydata/storage/server.py 155
515         assert isinstance(nodeid, str)
516         assert len(nodeid) == 20
517         self.my_nodeid = nodeid
518-        self.storedir = storedir
519-        sharedir = os.path.join(storedir, "shares")
520-        fileutil.make_dirs(sharedir)
521-        self.sharedir = sharedir
522-        # we don't actually create the corruption-advisory dir until necessary
523-        self.corruption_advisory_dir = os.path.join(storedir,
524-                                                    "corruption-advisories")
525-        self.reserved_space = int(reserved_space)
526-        self.no_storage = discard_storage
527-        self.readonly_storage = readonly_storage
528         self.stats_provider = stats_provider
529         if self.stats_provider:
530             self.stats_provider.register_producer(self)
531hunk ./src/allmydata/storage/server.py 158
532-        self.incomingdir = os.path.join(sharedir, 'incoming')
533-        self._clean_incomplete()
534-        fileutil.make_dirs(self.incomingdir)
535         self._active_writers = weakref.WeakKeyDictionary()
536hunk ./src/allmydata/storage/server.py 159
537+        self.backend = backend
538+        self.backend.setServiceParent(self)
539         log.msg("StorageServer created", facility="tahoe.storage")
540 
541hunk ./src/allmydata/storage/server.py 163
542-        if reserved_space:
543-            if self.get_available_space() is None:
544-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
545-                        umin="0wZ27w", level=log.UNUSUAL)
546-
547         self.latencies = {"allocate": [], # immutable
548                           "write": [],
549                           "close": [],
550hunk ./src/allmydata/storage/server.py 174
551                           "renew": [],
552                           "cancel": [],
553                           }
554-        self.add_bucket_counter()
555-
556-        statefile = os.path.join(self.storedir, "lease_checker.state")
557-        historyfile = os.path.join(self.storedir, "lease_checker.history")
558-        klass = self.LeaseCheckerClass
559-        self.lease_checker = klass(self, statefile, historyfile,
560-                                   expiration_enabled, expiration_mode,
561-                                   expiration_override_lease_duration,
562-                                   expiration_cutoff_date,
563-                                   expiration_sharetypes)
564-        self.lease_checker.setServiceParent(self)
565 
566     def __repr__(self):
567         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
568hunk ./src/allmydata/storage/server.py 178
569 
570-    def add_bucket_counter(self):
571-        statefile = os.path.join(self.storedir, "bucket_counter.state")
572-        self.bucket_counter = BucketCountingCrawler(self, statefile)
573-        self.bucket_counter.setServiceParent(self)
574-
575     def count(self, name, delta=1):
576         if self.stats_provider:
577             self.stats_provider.count("storage_server." + name, delta)
578hunk ./src/allmydata/storage/server.py 233
579             kwargs["facility"] = "tahoe.storage"
580         return log.msg(*args, **kwargs)
581 
582-    def _clean_incomplete(self):
583-        fileutil.rm_dir(self.incomingdir)
584-
585     def get_stats(self):
586         # remember: RIStatsProvider requires that our return dict
587         # contains numeric values.
588hunk ./src/allmydata/storage/server.py 269
589             stats['storage_server.total_bucket_count'] = bucket_count
590         return stats
591 
592-    def get_available_space(self):
593-        """Returns available space for share storage in bytes, or None if no
594-        API to get this information is available."""
595-
596-        if self.readonly_storage:
597-            return 0
598-        return fileutil.get_available_space(self.storedir, self.reserved_space)
599-
600     def allocated_size(self):
601         space = 0
602         for bw in self._active_writers:
603hunk ./src/allmydata/storage/server.py 276
604         return space
605 
606     def remote_get_version(self):
607-        remaining_space = self.get_available_space()
608+        remaining_space = self.backend.get_available_space()
609         if remaining_space is None:
610             # We're on a platform that has no API to get disk stats.
611             remaining_space = 2**64
612hunk ./src/allmydata/storage/server.py 301
613         self.count("allocate")
614         alreadygot = set()
615         bucketwriters = {} # k: shnum, v: BucketWriter
616-        si_dir = storage_index_to_dir(storage_index)
617-        si_s = si_b2a(storage_index)
618 
619hunk ./src/allmydata/storage/server.py 302
620+        si_s = si_b2a(storage_index)
621         log.msg("storage: allocate_buckets %s" % si_s)
622 
623         # in this implementation, the lease information (including secrets)
624hunk ./src/allmydata/storage/server.py 316
625 
626         max_space_per_bucket = allocated_size
627 
628-        remaining_space = self.get_available_space()
629+        remaining_space = self.backend.get_available_space()
630         limited = remaining_space is not None
631         if limited:
632             # this is a bit conservative, since some of this allocated_size()
633hunk ./src/allmydata/storage/server.py 329
634         # they asked about: this will save them a lot of work. Add or update
635         # leases for all of them: if they want us to hold shares for this
636         # file, they'll want us to hold leases for this file.
637-        for (shnum, fn) in self._get_bucket_shares(storage_index):
638+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
639             alreadygot.add(shnum)
640             sf = ShareFile(fn)
641             sf.add_or_renew_lease(lease_info)
642hunk ./src/allmydata/storage/server.py 335
643 
644         for shnum in sharenums:
645-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
646-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
647-            if os.path.exists(finalhome):
648+            share = self.backend.get_share(storage_index, shnum)
649+
650+            if not share:
651+                if (not limited) or (remaining_space >= max_space_per_bucket):
652+                    # ok! we need to create the new share file.
653+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
654+                                      max_space_per_bucket, lease_info, canary)
655+                    bucketwriters[shnum] = bw
656+                    self._active_writers[bw] = 1
657+                    if limited:
658+                        remaining_space -= max_space_per_bucket
659+                else:
660+                    # bummer! not enough space to accept this bucket
661+                    pass
662+
663+            elif share.is_complete():
664                 # great! we already have it. easy.
665                 pass
666hunk ./src/allmydata/storage/server.py 353
667-            elif os.path.exists(incominghome):
668+            elif not share.is_complete():
669                 # Note that we don't create BucketWriters for shnums that
670                 # have a partial share (in incoming/), so if a second upload
671                 # occurs while the first is still in progress, the second
672hunk ./src/allmydata/storage/server.py 359
673                 # uploader will use different storage servers.
674                 pass
675-            elif (not limited) or (remaining_space >= max_space_per_bucket):
676-                # ok! we need to create the new share file.
677-                bw = BucketWriter(self, incominghome, finalhome,
678-                                  max_space_per_bucket, lease_info, canary)
679-                if self.no_storage:
680-                    bw.throw_out_all_data = True
681-                bucketwriters[shnum] = bw
682-                self._active_writers[bw] = 1
683-                if limited:
684-                    remaining_space -= max_space_per_bucket
685-            else:
686-                # bummer! not enough space to accept this bucket
687-                pass
688-
689-        if bucketwriters:
690-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
691 
692         self.add_latency("allocate", time.time() - start)
693         return alreadygot, bucketwriters
694hunk ./src/allmydata/storage/server.py 437
695             self.stats_provider.count('storage_server.bytes_added', consumed_size)
696         del self._active_writers[bw]
697 
698-    def _get_bucket_shares(self, storage_index):
699-        """Return a list of (shnum, pathname) tuples for files that hold
700-        shares for this storage_index. In each tuple, 'shnum' will always be
701-        the integer form of the last component of 'pathname'."""
702-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
703-        try:
704-            for f in os.listdir(storagedir):
705-                if NUM_RE.match(f):
706-                    filename = os.path.join(storagedir, f)
707-                    yield (int(f), filename)
708-        except OSError:
709-            # Commonly caused by there being no buckets at all.
710-            pass
711 
712     def remote_get_buckets(self, storage_index):
713         start = time.time()
714hunk ./src/allmydata/storage/server.py 444
715         si_s = si_b2a(storage_index)
716         log.msg("storage: get_buckets %s" % si_s)
717         bucketreaders = {} # k: sharenum, v: BucketReader
718-        for shnum, filename in self._get_bucket_shares(storage_index):
719+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
720             bucketreaders[shnum] = BucketReader(self, filename,
721                                                 storage_index, shnum)
722         self.add_latency("get", time.time() - start)
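The hunk above swaps the server's private `_get_bucket_shares` for a `backend.get_bucket_shares` call. A stand-alone sketch of that generator, reconstructed from the removed method (the module-level form and the directory argument are illustrative, not the patch's actual API):

```python
import os
import re

NUM_RE = re.compile(r"^[0-9]+$")  # share files are named by bare share number

def get_bucket_shares(storagedir):
    """Yield (shnum, pathname) tuples for the share files in storagedir,
    mirroring the removed StorageServer._get_bucket_shares generator."""
    try:
        for f in os.listdir(storagedir):
            if NUM_RE.match(f):
                yield (int(f), os.path.join(storagedir, f))
    except OSError:
        # commonly caused by there being no buckets at all
        pass
```

A missing directory simply yields no shares, which is how `remote_get_buckets` ends up returning an empty dict for an unknown storage index.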
723hunk ./src/allmydata/test/test_backends.py 10
724 import mock
725 
726 # This is the code that we're going to be testing.
727-from allmydata.storage.server import StorageServer
728+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
729 
730 # The following share file contents was generated with
731 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
732hunk ./src/allmydata/test/test_backends.py 21
733 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
734 
735 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
736+    @mock.patch('time.time')
737+    @mock.patch('os.mkdir')
738+    @mock.patch('__builtin__.open')
739+    @mock.patch('os.listdir')
740+    @mock.patch('os.path.isdir')
741+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
742+        """ This tests whether a server instance can be constructed
743+        with a null backend. The server instance fails the test if it
744+        tries to read or write to the file system. """
745+
746+        # Now begin the test.
747+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
748+
749+        self.failIf(mockisdir.called)
750+        self.failIf(mocklistdir.called)
751+        self.failIf(mockopen.called)
752+        self.failIf(mockmkdir.called)
753+
754+        # You passed!
755+
756+    @mock.patch('time.time')
757+    @mock.patch('os.mkdir')
758     @mock.patch('__builtin__.open')
759hunk ./src/allmydata/test/test_backends.py 44
760-    def test_create_server(self, mockopen):
761-        """ This tests whether a server instance can be constructed. """
762+    @mock.patch('os.listdir')
763+    @mock.patch('os.path.isdir')
764+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
765+        """ This tests whether a server instance can be constructed
766+        with a filesystem backend. To pass the test, the server must
767+        use the filesystem only in the prescribed ways. """
768 
769         def call_open(fname, mode):
770             if fname == 'testdir/bucket_counter.state':
771hunk ./src/allmydata/test/test_backends.py 58
772                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
773             elif fname == 'testdir/lease_checker.history':
774                 return StringIO()
775+            else:
776+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
777         mockopen.side_effect = call_open
778 
779         # Now begin the test.
780hunk ./src/allmydata/test/test_backends.py 63
781-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
782+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
783+
784+        self.failIf(mockisdir.called)
785+        self.failIf(mocklistdir.called)
786+        self.failIf(mockopen.called)
787+        self.failIf(mockmkdir.called)
788+        self.failIf(mocktime.called)
789 
790         # You passed!
791 
792hunk ./src/allmydata/test/test_backends.py 73
793-class TestServer(unittest.TestCase, ReallyEqualMixin):
794+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
795+    def setUp(self):
796+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
797+
798+    @mock.patch('os.mkdir')
799+    @mock.patch('__builtin__.open')
800+    @mock.patch('os.listdir')
801+    @mock.patch('os.path.isdir')
802+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
803+        """ Write a new share. """
804+
805+        # Now begin the test.
806+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
807+        bs[0].remote_write(0, 'a')
808+        self.failIf(mockisdir.called)
809+        self.failIf(mocklistdir.called)
810+        self.failIf(mockopen.called)
811+        self.failIf(mockmkdir.called)
812+
813+    @mock.patch('os.path.exists')
814+    @mock.patch('os.path.getsize')
815+    @mock.patch('__builtin__.open')
816+    @mock.patch('os.listdir')
817+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
818+        """ This tests whether the code correctly finds and reads
819+        shares written out by old (Tahoe-LAFS <= v1.8.2)
820+        servers. There is a similar test in test_download, but that one
821+        is from the perspective of the client and exercises a deeper
822+        stack of code. This one is for exercising just the
823+        StorageServer object. """
824+
825+        # Now begin the test.
826+        bs = self.s.remote_get_buckets('teststorage_index')
827+
828+        self.failUnlessEqual(len(bs), 0)
829+        self.failIf(mocklistdir.called)
830+        self.failIf(mockopen.called)
831+        self.failIf(mockgetsize.called)
832+        self.failIf(mockexists.called)
833+
834+
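The null-backend tests above all follow one pattern: patch out the filesystem entry points, poke the server, then assert that none of the mocks were called. A minimal sketch of that pattern using the stdlib `unittest.mock` (the patch itself uses the `mock` backport and Python 2's `__builtin__.open`; `do_nothing` and `touch_disk` are hypothetical stand-ins for the server operations under test):

```python
from unittest import mock

def do_nothing():
    # stand-in for a StorageServer call that must not touch the disk
    return None

def touch_disk():
    # stand-in for a call that does touch the disk
    with open('/dev/null', 'w'):
        pass

with mock.patch('builtins.open') as mockopen:
    do_nothing()
    assert not mockopen.called  # the no-filesystem-IO property under test

with mock.patch('builtins.open') as mockopen:
    touch_disk()
    assert mockopen.called
```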
835+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
836     @mock.patch('__builtin__.open')
837     def setUp(self, mockopen):
838         def call_open(fname, mode):
839hunk ./src/allmydata/test/test_backends.py 126
840                 return StringIO()
841         mockopen.side_effect = call_open
842 
843-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
844-
845+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
846 
847     @mock.patch('time.time')
848     @mock.patch('os.mkdir')
849hunk ./src/allmydata/test/test_backends.py 134
850     @mock.patch('os.listdir')
851     @mock.patch('os.path.isdir')
852     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
853-        """Handle a report of corruption."""
854+        """ Write a new share. """
855 
856         def call_listdir(dirname):
857             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
858hunk ./src/allmydata/test/test_backends.py 173
859         mockopen.side_effect = call_open
860         # Now begin the test.
861         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
862-        print bs
863         bs[0].remote_write(0, 'a')
864         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
865 
866hunk ./src/allmydata/test/test_backends.py 176
867-
868     @mock.patch('os.path.exists')
869     @mock.patch('os.path.getsize')
870     @mock.patch('__builtin__.open')
871hunk ./src/allmydata/test/test_backends.py 218
872 
873         self.failUnlessEqual(len(bs), 1)
874         b = bs[0]
875+        # These should match by definition; the next two cases cover reads whose behavior is not completely unambiguous.
876         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
877         # If you try to read past the end you get the as much data as is there.
878         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
879hunk ./src/allmydata/test/test_backends.py 224
880         # If you start reading past the end of the file you get the empty string.
881         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
882+
883+
884}
885[a temp patch used as a snapshot
886wilcoxjg@gmail.com**20110626052732
887 Ignore-this: 95f05e314eaec870afa04c76d979aa44
888] {
889hunk ./docs/configuration.rst 637
890   [storage]
891   enabled = True
892   readonly = True
893-  sizelimit = 10000000000
894 
895 
896   [helper]
897hunk ./docs/garbage-collection.rst 16
898 
899 When a file or directory in the virtual filesystem is no longer referenced,
900 the space that its shares occupied on each storage server can be freed,
901-making room for other shares. Tahoe currently uses a garbage collection
902+making room for other shares. Tahoe uses a garbage collection
903 ("GC") mechanism to implement this space-reclamation process. Each share has
904 one or more "leases", which are managed by clients who want the
905 file/directory to be retained. The storage server accepts each share for a
906hunk ./docs/garbage-collection.rst 34
907 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
908 If lease renewal occurs quickly and with 100% reliability, then any renewal
909 time that is shorter than the lease duration will suffice, but a larger ratio
910-of duration-over-renewal-time will be more robust in the face of occasional
911+of lease duration to renewal time will be more robust in the face of occasional
912 delays or failures.
913 
914 The current recommended values for a small Tahoe grid are to renew the leases
915replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
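The duration-versus-renewal-time tradeoff described in the garbage-collection hunk above is simple arithmetic: the larger the lease duration relative to the renewal interval, the more consecutive missed renewals a share survives. A back-of-the-envelope helper (the function and the sample values are illustrative, not from the docs):

```python
def missed_renewals_tolerated(lease_duration, renewal_interval):
    """How many consecutive renewal attempts can fail before the lease
    expires, assuming the last successful renewal reset the clock."""
    return lease_duration // renewal_interval - 1

DAY = 24 * 60 * 60
# renewing weekly against a 31-day lease tolerates three missed renewals
assert missed_renewals_tolerated(31 * DAY, 7 * DAY) == 3
# renewing daily against the same lease tolerates thirty
assert missed_renewals_tolerated(31 * DAY, 1 * DAY) == 30
```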
916hunk ./src/allmydata/client.py 260
917             sharetypes.append("mutable")
918         expiration_sharetypes = tuple(sharetypes)
919 
920+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
921+            xyz
922+        xyz
923         ss = StorageServer(storedir, self.nodeid,
924                            reserved_space=reserved,
925                            discard_storage=discard,
926hunk ./src/allmydata/storage/crawler.py 234
927         f = open(tmpfile, "wb")
928         pickle.dump(self.state, f)
929         f.close()
930-        fileutil.move_into_place(tmpfile, self.statefile)
931+        fileutil.move_into_place(tmpfile, self.statefname)
932 
933     def startService(self):
934         # arrange things to look like we were just sleeping, so
935}
936[snapshot of progress on backend implementation (not suitable for trunk)
937wilcoxjg@gmail.com**20110626053244
938 Ignore-this: 50c764af791c2b99ada8289546806a0a
939] {
940adddir ./src/allmydata/storage/backends
941adddir ./src/allmydata/storage/backends/das
942move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
943adddir ./src/allmydata/storage/backends/null
944hunk ./src/allmydata/interfaces.py 270
945         store that on disk.
946         """
947 
948+class IStorageBackend(Interface):
949+    """
950+    Objects of this kind live on the server side and are used by the
951+    storage server object.
952+    """
953+    def get_available_space(self, reserved_space):
954+        """ Returns available space for share storage in bytes, or
955+        None if this information is not available or if the available
956+        space is unlimited.
957+
958+        If the backend is configured for read-only mode then this will
959+        return 0.
960+
961+        reserved_space is the number of bytes to subtract from the
962+        answer: pass the amount of space you would like to leave unused
963+        on this filesystem as reserved_space. """
964+
965+    def get_bucket_shares(self):
966+        """ Return (shnum, pathname) tuples for the shares of a given storage index. """
967+
968+    def get_share(self):
969+        """ Return the share object for a given storage index and share number, or None. """
970+
971+    def make_bucket_writer(self):
972+        """ Create and return a bucket writer for a new share. """
973+
974+class IStorageBackendShare(Interface):
975+    """
976+    This object may contain as much as all of the share data.  It is
977+    intended to be evaluated lazily, so that in many use cases
978+    substantially less than all of the share data will be accessed.
979+    """
980+    def is_complete(self):
981+        """
982+        Returns the share state, or None if the share does not exist.
983+        """
984+
985 class IStorageBucketWriter(Interface):
986     """
987     Objects of this kind live on the client side.
988hunk ./src/allmydata/interfaces.py 2492
989 
990 class EmptyPathnameComponentError(Exception):
991     """The webapi disallows empty pathname components."""
992+
993+class IShareStore(Interface):
994+    pass
995+
996addfile ./src/allmydata/storage/backends/__init__.py
997addfile ./src/allmydata/storage/backends/das/__init__.py
998addfile ./src/allmydata/storage/backends/das/core.py
999hunk ./src/allmydata/storage/backends/das/core.py 1
1000+from allmydata.interfaces import IStorageBackend
1001+from allmydata.storage.backends.base import Backend
1002+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1003+from allmydata.util.assertutil import precondition
1004+
1005+import os, re, weakref, struct, time
1006+
1007+from foolscap.api import Referenceable
1008+from twisted.application import service
1009+
1010+from zope.interface import implements
1011+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1012+from allmydata.util import fileutil, idlib, log, time_format
1013+import allmydata # for __full_version__
1014+
1015+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1016+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1017+from allmydata.storage.lease import LeaseInfo
1018+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1019+     create_mutable_sharefile
1020+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1021+from allmydata.storage.crawler import FSBucketCountingCrawler
1022+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1023+
1024+from zope.interface import implements
1025+
1026+class DASCore(Backend):
1027+    implements(IStorageBackend)
1028+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1029+        Backend.__init__(self)
1030+
1031+        self._setup_storage(storedir, readonly, reserved_space)
1032+        self._setup_corruption_advisory()
1033+        self._setup_bucket_counter()
1034+        self._setup_lease_checkerf(expiration_policy)
1035+
1036+    def _setup_storage(self, storedir, readonly, reserved_space):
1037+        self.storedir = storedir
1038+        self.readonly = readonly
1039+        self.reserved_space = int(reserved_space)
1040+        if self.reserved_space:
1041+            if self.get_available_space() is None:
1042+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1043+                        umid="0wZ27w", level=log.UNUSUAL)
1044+
1045+        self.sharedir = os.path.join(self.storedir, "shares")
1046+        fileutil.make_dirs(self.sharedir)
1047+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1048+        self._clean_incomplete()
1049+
1050+    def _clean_incomplete(self):
1051+        fileutil.rm_dir(self.incomingdir)
1052+        fileutil.make_dirs(self.incomingdir)
1053+
1054+    def _setup_corruption_advisory(self):
1055+        # we don't actually create the corruption-advisory dir until necessary
1056+        self.corruption_advisory_dir = os.path.join(self.storedir,
1057+                                                    "corruption-advisories")
1058+
1059+    def _setup_bucket_counter(self):
1060+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1061+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1062+        self.bucket_counter.setServiceParent(self)
1063+
1064+    def _setup_lease_checkerf(self, expiration_policy):
1065+        statefile = os.path.join(self.storedir, "lease_checker.state")
1066+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1067+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1068+        self.lease_checker.setServiceParent(self)
1069+
1070+    def get_available_space(self):
1071+        if self.readonly:
1072+            return 0
1073+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1074+
1075+    def get_shares(self, storage_index):
1076+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1077+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1078+        try:
1079+            for f in os.listdir(finalstoragedir):
1080+                if NUM_RE.match(f):
1081+                    filename = os.path.join(finalstoragedir, f)
1082+                    yield FSBShare(filename, int(f))
1083+        except OSError:
1084+            # Commonly caused by there being no buckets at all.
1085+            pass
1086+       
1087+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1088+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1089+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1090+        return bw
1091+       
1092+
1093+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1094+# and share data. The share data is accessed by RIBucketWriter.write and
1095+# RIBucketReader.read . The lease information is not accessible through these
1096+# interfaces.
1097+
1098+# The share file has the following layout:
1099+#  0x00: share file version number, four bytes, current version is 1
1100+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1101+#  0x08: number of leases, four bytes big-endian
1102+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1103+#  A+0x0c = B: first lease. Lease format is:
1104+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1105+#   B+0x04: renew secret, 32 bytes (SHA256)
1106+#   B+0x24: cancel secret, 32 bytes (SHA256)
1107+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1108+#   B+0x48: next lease, or end of record
1109+
1110+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1111+# but it is still filled in by storage servers in case the storage server
1112+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1113+# share file is moved from one storage server to another. The value stored in
1114+# this field is truncated, so if the actual share data length is >= 2**32,
1115+# then the value stored in this field will be the actual share data length
1116+# modulo 2**32.
1117+
1118+class ImmutableShare:
1119+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1120+    sharetype = "immutable"
1121+
1122+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1123+        """ If max_size is not None then I won't allow more than
1124+        max_size to be written to me. If create=True then max_size
1125+        must not be None. """
1126+        precondition((max_size is not None) or (not create), max_size, create)
1127+        self.shnum = shnum
1128+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1129+        self._max_size = max_size
1130+        if create:
1131+            # touch the file, so later callers will see that we're working on
1132+            # it. Also construct the metadata.
1133+            assert not os.path.exists(self.fname)
1134+            fileutil.make_dirs(os.path.dirname(self.fname))
1135+            f = open(self.fname, 'wb')
1136+            # The second field -- the four-byte share data length -- is no
1137+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1138+            # there in case someone downgrades a storage server from >=
1139+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1140+            # server to another, etc. We do saturation -- a share data length
1141+            # larger than 2**32-1 (what can fit into the field) is marked as
1142+            # the largest length that can fit into the field. That way, even
1143+            # if this does happen, the old < v1.3.0 server will still allow
1144+            # clients to read the first part of the share.
1145+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1146+            f.close()
1147+            self._lease_offset = max_size + 0x0c
1148+            self._num_leases = 0
1149+        else:
1150+            f = open(self.fname, 'rb')
1151+            filesize = os.path.getsize(self.fname)
1152+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1153+            f.close()
1154+            if version != 1:
1155+                msg = "sharefile %s had version %d but we wanted 1" % \
1156+                      (self.fname, version)
1157+                raise UnknownImmutableContainerVersionError(msg)
1158+            self._num_leases = num_leases
1159+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1160+        self._data_offset = 0xc
1161+
1162+    def unlink(self):
1163+        os.unlink(self.fname)
1164+
1165+    def read_share_data(self, offset, length):
1166+        precondition(offset >= 0)
1167+        # Reads beyond the end of the data are truncated. Reads that start
1168+        # beyond the end of the data return an empty string.
1169+        seekpos = self._data_offset+offset
1170+        fsize = os.path.getsize(self.fname)
1171+        actuallength = max(0, min(length, fsize-seekpos))
1172+        if actuallength == 0:
1173+            return ""
1174+        f = open(self.fname, 'rb')
1175+        f.seek(seekpos)
1176+        return f.read(actuallength)
1177+
1178+    def write_share_data(self, offset, data):
1179+        length = len(data)
1180+        precondition(offset >= 0, offset)
1181+        if self._max_size is not None and offset+length > self._max_size:
1182+            raise DataTooLargeError(self._max_size, offset, length)
1183+        f = open(self.fname, 'rb+')
1184+        real_offset = self._data_offset+offset
1185+        f.seek(real_offset)
1186+        assert f.tell() == real_offset
1187+        f.write(data)
1188+        f.close()
1189+
1190+    def _write_lease_record(self, f, lease_number, lease_info):
1191+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1192+        f.seek(offset)
1193+        assert f.tell() == offset
1194+        f.write(lease_info.to_immutable_data())
1195+
1196+    def _read_num_leases(self, f):
1197+        f.seek(0x08)
1198+        (num_leases,) = struct.unpack(">L", f.read(4))
1199+        return num_leases
1200+
1201+    def _write_num_leases(self, f, num_leases):
1202+        f.seek(0x08)
1203+        f.write(struct.pack(">L", num_leases))
1204+
1205+    def _truncate_leases(self, f, num_leases):
1206+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1207+
1208+    def get_leases(self):
1209+        """Yields a LeaseInfo instance for all leases."""
1210+        f = open(self.fname, 'rb')
1211+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1212+        f.seek(self._lease_offset)
1213+        for i in range(num_leases):
1214+            data = f.read(self.LEASE_SIZE)
1215+            if data:
1216+                yield LeaseInfo().from_immutable_data(data)
1217+
1218+    def add_lease(self, lease_info):
1219+        f = open(self.fname, 'rb+')
1220+        num_leases = self._read_num_leases(f)
1221+        self._write_lease_record(f, num_leases, lease_info)
1222+        self._write_num_leases(f, num_leases+1)
1223+        f.close()
1224+
1225+    def renew_lease(self, renew_secret, new_expire_time):
1226+        for i,lease in enumerate(self.get_leases()):
1227+            if constant_time_compare(lease.renew_secret, renew_secret):
1228+                # yup. See if we need to update the owner time.
1229+                if new_expire_time > lease.expiration_time:
1230+                    # yes
1231+                    lease.expiration_time = new_expire_time
1232+                    f = open(self.fname, 'rb+')
1233+                    self._write_lease_record(f, i, lease)
1234+                    f.close()
1235+                return
1236+        raise IndexError("unable to renew non-existent lease")
1237+
1238+    def add_or_renew_lease(self, lease_info):
1239+        try:
1240+            self.renew_lease(lease_info.renew_secret,
1241+                             lease_info.expiration_time)
1242+        except IndexError:
1243+            self.add_lease(lease_info)
1244+
1245+
1246+    def cancel_lease(self, cancel_secret):
1247+        """Remove a lease with the given cancel_secret. If the last lease is
1248+        cancelled, the file will be removed. Return the number of bytes that
1249+        were freed (by truncating the list of leases, and possibly by
1250+        deleting the file). Raise IndexError if there was no lease with the
1251+        given cancel_secret.
1252+        """
1253+
1254+        leases = list(self.get_leases())
1255+        num_leases_removed = 0
1256+        for i,lease in enumerate(leases):
1257+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1258+                leases[i] = None
1259+                num_leases_removed += 1
1260+        if not num_leases_removed:
1261+            raise IndexError("unable to find matching lease to cancel")
1262+        if num_leases_removed:
1263+            # pack and write out the remaining leases. We write these out in
1264+            # the same order as they were added, so that if we crash while
1265+            # doing this, we won't lose any non-cancelled leases.
1266+            leases = [l for l in leases if l] # remove the cancelled leases
1267+            f = open(self.fname, 'rb+')
1268+            for i,lease in enumerate(leases):
1269+                self._write_lease_record(f, i, lease)
1270+            self._write_num_leases(f, len(leases))
1271+            self._truncate_leases(f, len(leases))
1272+            f.close()
1273+        space_freed = self.LEASE_SIZE * num_leases_removed
1274+        if not len(leases):
1275+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1276+            self.unlink()
1277+        return space_freed
1278hunk ./src/allmydata/storage/backends/das/expirer.py 2
1279 import time, os, pickle, struct
1280-from allmydata.storage.crawler import ShareCrawler
1281-from allmydata.storage.shares import get_share_file
1282+from allmydata.storage.crawler import FSShareCrawler
1283 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1284      UnknownImmutableContainerVersionError
1285 from twisted.python import log as twlog
1286hunk ./src/allmydata/storage/backends/das/expirer.py 7
1287 
1288-class LeaseCheckingCrawler(ShareCrawler):
1289+class FSLeaseCheckingCrawler(FSShareCrawler):
1290     """I examine the leases on all shares, determining which are still valid
1291     and which have expired. I can remove the expired leases (if so
1292     configured), and the share will be deleted when the last lease is
1293hunk ./src/allmydata/storage/backends/das/expirer.py 50
1294     slow_start = 360 # wait 6 minutes after startup
1295     minimum_cycle_time = 12*60*60 # not more than twice per day
1296 
1297-    def __init__(self, statefile, historyfile,
1298-                 expiration_enabled, mode,
1299-                 override_lease_duration, # used if expiration_mode=="age"
1300-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1301-                 sharetypes):
1302+    def __init__(self, statefile, historyfile, expiration_policy):
1303         self.historyfile = historyfile
1304hunk ./src/allmydata/storage/backends/das/expirer.py 52
1305-        self.expiration_enabled = expiration_enabled
1306-        self.mode = mode
1307+        self.expiration_enabled = expiration_policy['enabled']
1308+        self.mode = expiration_policy['mode']
1309         self.override_lease_duration = None
1310         self.cutoff_date = None
1311         if self.mode == "age":
1312hunk ./src/allmydata/storage/backends/das/expirer.py 57
1313-            assert isinstance(override_lease_duration, (int, type(None)))
1314-            self.override_lease_duration = override_lease_duration # seconds
1315+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1316+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1317         elif self.mode == "cutoff-date":
1318hunk ./src/allmydata/storage/backends/das/expirer.py 60
1319-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1320+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1321             assert cutoff_date is not None
1322hunk ./src/allmydata/storage/backends/das/expirer.py 62
1323-            self.cutoff_date = cutoff_date
1324+            self.cutoff_date = expiration_policy['cutoff_date']
1325         else:
1326hunk ./src/allmydata/storage/backends/das/expirer.py 64
1327-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1328-        self.sharetypes_to_expire = sharetypes
1329-        ShareCrawler.__init__(self, statefile)
1330+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1331+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1332+        FSShareCrawler.__init__(self, statefile)
1333 
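The constructor rewrite above replaces five positional arguments with a single expiration_policy dict. Collecting the keys the hunks read gives its expected shape; the literal below is assembled from the code above as an illustration, not a documented schema:

```python
expiration_policy = {
    'enabled': True,
    'mode': 'age',                                 # or 'cutoff-date'
    'override_lease_duration': 31 * 24 * 60 * 60,  # seconds; used when mode == 'age'
    'cutoff_date': None,                           # seconds-since-epoch; used when mode == 'cutoff-date'
    'sharetypes': ('immutable', 'mutable'),
}

# the same sanity checks the crawler's __init__ performs
assert expiration_policy['mode'] in ('age', 'cutoff-date')
if expiration_policy['mode'] == 'age':
    assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
```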
1334     def add_initial_state(self):
1335         # we fill ["cycle-to-date"] here (even though they will be reset in
1336hunk ./src/allmydata/storage/backends/das/expirer.py 156
1337 
1338     def process_share(self, sharefilename):
1339         # first, find out what kind of a share it is
1340-        sf = get_share_file(sharefilename)
1341+        f = open(sharefilename, "rb")
1342+        prefix = f.read(32)
1343+        f.close()
1344+        if prefix == MutableShareFile.MAGIC:
1345+            sf = MutableShareFile(sharefilename)
1346+        else:
1347+            # otherwise assume it's immutable
1348+            sf = FSBShare(sharefilename)
1349         sharetype = sf.sharetype
1350         now = time.time()
1351         s = self.stat(sharefilename)
1352addfile ./src/allmydata/storage/backends/null/__init__.py
1353addfile ./src/allmydata/storage/backends/null/core.py
1354hunk ./src/allmydata/storage/backends/null/core.py 1
1355+from allmydata.storage.backends.base import Backend
1356+
1357+class NullCore(Backend):
1358+    def __init__(self):
1359+        Backend.__init__(self)
1360+
1361+    def get_available_space(self):
1362+        return None
1363+
1364+    def get_shares(self, storage_index):
1365+        return set()
1366+
1367+    def get_share(self, storage_index, sharenum):
1368+        return None
1369+
1370+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1371+        return NullBucketWriter()
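`NullCore` above reports `None` for available space and holds no shares, which is what the checkpoint8 note in the preamble means by a mock-like backend for testing unlimited space. A hedged sketch of the same pattern (class and method names are stand-ins, not the patch's API):

```python
# Sketch of a "null" storage backend: get_available_space() returning None
# is the convention for "no limit", and all share lookups are empty no-ops.
# Names loosely mirror the patch; treat them as illustrative stand-ins.
class NullBackendSketch:
    def get_available_space(self):
        return None          # None means unlimited; 0 would mean full/read-only

    def get_shares(self, storage_index):
        return set()         # never holds any shares

    def get_share(self, storage_index, sharenum):
        return None
```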
1372hunk ./src/allmydata/storage/crawler.py 12
1373 class TimeSliceExceeded(Exception):
1374     pass
1375 
1376-class ShareCrawler(service.MultiService):
1377+class FSShareCrawler(service.MultiService):
1378     """A subclass of ShareCrawler is attached to a StorageServer, and
1379     periodically walks all of its shares, processing each one in some
1380     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1381hunk ./src/allmydata/storage/crawler.py 68
1382     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1383     minimum_cycle_time = 300 # don't run a cycle faster than this
1384 
1385-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1386+    def __init__(self, statefname, allowed_cpu_percentage=None):
1387         service.MultiService.__init__(self)
1388         if allowed_cpu_percentage is not None:
1389             self.allowed_cpu_percentage = allowed_cpu_percentage
1390hunk ./src/allmydata/storage/crawler.py 72
1391-        self.backend = backend
1392+        self.statefname = statefname
1393         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1394                          for i in range(2**10)]
1395         self.prefixes.sort()
1396hunk ./src/allmydata/storage/crawler.py 192
1397         #                            of the last bucket to be processed, or
1398         #                            None if we are sleeping between cycles
1399         try:
1400-            f = open(self.statefile, "rb")
1401+            f = open(self.statefname, "rb")
1402             state = pickle.load(f)
1403             f.close()
1404         except EnvironmentError:
1405hunk ./src/allmydata/storage/crawler.py 230
1406         else:
1407             last_complete_prefix = self.prefixes[lcpi]
1408         self.state["last-complete-prefix"] = last_complete_prefix
1409-        tmpfile = self.statefile + ".tmp"
1410+        tmpfile = self.statefname + ".tmp"
1411         f = open(tmpfile, "wb")
1412         pickle.dump(self.state, f)
1413         f.close()
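The crawler hunks above rename `statefile` to `statefname` but keep the persistence scheme: pickle the state dict to a sibling `.tmp` file, which code outside this hunk then moves into place. A sketch of that write-to-temp-then-rename pattern (function names are illustrative; the rename is atomic on POSIX, so readers never see a half-written state file):

```python
import os
import pickle

def save_state(state, statefname):
    # write the pickled state to a sibling .tmp file first, then rename it
    # over the real file, so a crash cannot leave a truncated state file
    tmpfile = statefname + ".tmp"
    f = open(tmpfile, "wb")
    pickle.dump(state, f)
    f.close()
    os.rename(tmpfile, statefname)

def load_state(statefname, default=None):
    # mirror the load path above: a missing or unreadable file just means
    # we start from the default state
    try:
        f = open(statefname, "rb")
        state = pickle.load(f)
        f.close()
        return state
    except EnvironmentError:
        return default
```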
1414hunk ./src/allmydata/storage/crawler.py 433
1415         pass
1416 
1417 
1418-class BucketCountingCrawler(ShareCrawler):
1419+class FSBucketCountingCrawler(FSShareCrawler):
1420     """I keep track of how many buckets are being managed by this server.
1421     This is equivalent to the number of distributed files and directories for
1422     which I am providing storage. The actual number of files+directories in
1423hunk ./src/allmydata/storage/crawler.py 446
1424 
1425     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1426 
1427-    def __init__(self, statefile, num_sample_prefixes=1):
1428-        ShareCrawler.__init__(self, statefile)
1429+    def __init__(self, statefname, num_sample_prefixes=1):
1430+        FSShareCrawler.__init__(self, statefname)
1431         self.num_sample_prefixes = num_sample_prefixes
1432 
1433     def add_initial_state(self):
1434hunk ./src/allmydata/storage/immutable.py 14
1435 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1436      DataTooLargeError
1437 
1438-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1439-# and share data. The share data is accessed by RIBucketWriter.write and
1440-# RIBucketReader.read . The lease information is not accessible through these
1441-# interfaces.
1442-
1443-# The share file has the following layout:
1444-#  0x00: share file version number, four bytes, current version is 1
1445-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1446-#  0x08: number of leases, four bytes big-endian
1447-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1448-#  A+0x0c = B: first lease. Lease format is:
1449-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1450-#   B+0x04: renew secret, 32 bytes (SHA256)
1451-#   B+0x24: cancel secret, 32 bytes (SHA256)
1452-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1453-#   B+0x48: next lease, or end of record
1454-
1455-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1456-# but it is still filled in by storage servers in case the storage server
1457-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1458-# share file is moved from one storage server to another. The value stored in
1459-# this field is truncated, so if the actual share data length is >= 2**32,
1460-# then the value stored in this field will be the actual share data length
1461-# modulo 2**32.
1462-
1463-class ShareFile:
1464-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1465-    sharetype = "immutable"
1466-
1467-    def __init__(self, filename, max_size=None, create=False):
1468-        """ If max_size is not None then I won't allow more than
1469-        max_size to be written to me. If create=True then max_size
1470-        must not be None. """
1471-        precondition((max_size is not None) or (not create), max_size, create)
1472-        self.home = filename
1473-        self._max_size = max_size
1474-        if create:
1475-            # touch the file, so later callers will see that we're working on
1476-            # it. Also construct the metadata.
1477-            assert not os.path.exists(self.home)
1478-            fileutil.make_dirs(os.path.dirname(self.home))
1479-            f = open(self.home, 'wb')
1480-            # The second field -- the four-byte share data length -- is no
1481-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1482-            # there in case someone downgrades a storage server from >=
1483-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1484-            # server to another, etc. We do saturation -- a share data length
1485-            # larger than 2**32-1 (what can fit into the field) is marked as
1486-            # the largest length that can fit into the field. That way, even
1487-            # if this does happen, the old < v1.3.0 server will still allow
1488-            # clients to read the first part of the share.
1489-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1490-            f.close()
1491-            self._lease_offset = max_size + 0x0c
1492-            self._num_leases = 0
1493-        else:
1494-            f = open(self.home, 'rb')
1495-            filesize = os.path.getsize(self.home)
1496-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1497-            f.close()
1498-            if version != 1:
1499-                msg = "sharefile %s had version %d but we wanted 1" % \
1500-                      (filename, version)
1501-                raise UnknownImmutableContainerVersionError(msg)
1502-            self._num_leases = num_leases
1503-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1504-        self._data_offset = 0xc
1505-
1506-    def unlink(self):
1507-        os.unlink(self.home)
1508-
1509-    def read_share_data(self, offset, length):
1510-        precondition(offset >= 0)
1511-        # Reads beyond the end of the data are truncated. Reads that start
1512-        # beyond the end of the data return an empty string.
1513-        seekpos = self._data_offset+offset
1514-        fsize = os.path.getsize(self.home)
1515-        actuallength = max(0, min(length, fsize-seekpos))
1516-        if actuallength == 0:
1517-            return ""
1518-        f = open(self.home, 'rb')
1519-        f.seek(seekpos)
1520-        return f.read(actuallength)
1521-
1522-    def write_share_data(self, offset, data):
1523-        length = len(data)
1524-        precondition(offset >= 0, offset)
1525-        if self._max_size is not None and offset+length > self._max_size:
1526-            raise DataTooLargeError(self._max_size, offset, length)
1527-        f = open(self.home, 'rb+')
1528-        real_offset = self._data_offset+offset
1529-        f.seek(real_offset)
1530-        assert f.tell() == real_offset
1531-        f.write(data)
1532-        f.close()
1533-
1534-    def _write_lease_record(self, f, lease_number, lease_info):
1535-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1536-        f.seek(offset)
1537-        assert f.tell() == offset
1538-        f.write(lease_info.to_immutable_data())
1539-
1540-    def _read_num_leases(self, f):
1541-        f.seek(0x08)
1542-        (num_leases,) = struct.unpack(">L", f.read(4))
1543-        return num_leases
1544-
1545-    def _write_num_leases(self, f, num_leases):
1546-        f.seek(0x08)
1547-        f.write(struct.pack(">L", num_leases))
1548-
1549-    def _truncate_leases(self, f, num_leases):
1550-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1551-
1552-    def get_leases(self):
1553-        """Yields a LeaseInfo instance for all leases."""
1554-        f = open(self.home, 'rb')
1555-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1556-        f.seek(self._lease_offset)
1557-        for i in range(num_leases):
1558-            data = f.read(self.LEASE_SIZE)
1559-            if data:
1560-                yield LeaseInfo().from_immutable_data(data)
1561-
1562-    def add_lease(self, lease_info):
1563-        f = open(self.home, 'rb+')
1564-        num_leases = self._read_num_leases(f)
1565-        self._write_lease_record(f, num_leases, lease_info)
1566-        self._write_num_leases(f, num_leases+1)
1567-        f.close()
1568-
1569-    def renew_lease(self, renew_secret, new_expire_time):
1570-        for i,lease in enumerate(self.get_leases()):
1571-            if constant_time_compare(lease.renew_secret, renew_secret):
1572-                # yup. See if we need to update the owner time.
1573-                if new_expire_time > lease.expiration_time:
1574-                    # yes
1575-                    lease.expiration_time = new_expire_time
1576-                    f = open(self.home, 'rb+')
1577-                    self._write_lease_record(f, i, lease)
1578-                    f.close()
1579-                return
1580-        raise IndexError("unable to renew non-existent lease")
1581-
1582-    def add_or_renew_lease(self, lease_info):
1583-        try:
1584-            self.renew_lease(lease_info.renew_secret,
1585-                             lease_info.expiration_time)
1586-        except IndexError:
1587-            self.add_lease(lease_info)
1588-
1589-
1590-    def cancel_lease(self, cancel_secret):
1591-        """Remove a lease with the given cancel_secret. If the last lease is
1592-        cancelled, the file will be removed. Return the number of bytes that
1593-        were freed (by truncating the list of leases, and possibly by
1594-        deleting the file. Raise IndexError if there was no lease with the
1595-        given cancel_secret.
1596-        """
1597-
1598-        leases = list(self.get_leases())
1599-        num_leases_removed = 0
1600-        for i,lease in enumerate(leases):
1601-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1602-                leases[i] = None
1603-                num_leases_removed += 1
1604-        if not num_leases_removed:
1605-            raise IndexError("unable to find matching lease to cancel")
1606-        if num_leases_removed:
1607-            # pack and write out the remaining leases. We write these out in
1608-            # the same order as they were added, so that if we crash while
1609-            # doing this, we won't lose any non-cancelled leases.
1610-            leases = [l for l in leases if l] # remove the cancelled leases
1611-            f = open(self.home, 'rb+')
1612-            for i,lease in enumerate(leases):
1613-                self._write_lease_record(f, i, lease)
1614-            self._write_num_leases(f, len(leases))
1615-            self._truncate_leases(f, len(leases))
1616-            f.close()
1617-        space_freed = self.LEASE_SIZE * num_leases_removed
1618-        if not len(leases):
1619-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1620-            self.unlink()
1621-        return space_freed
1622-class NullBucketWriter(Referenceable):
1623-    implements(RIBucketWriter)
1624-
1625-    def remote_write(self, offset, data):
1626-        return
1627-
1628 class BucketWriter(Referenceable):
1629     implements(RIBucketWriter)
1630 
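The comment block removed above documents the immutable share header: a four-byte version (currently 1), a four-byte share data length saturated at 2**32-1 (Footnote 1), and a four-byte lease count, all big-endian, with share data starting at offset 0x0c. A worked sketch of packing and unpacking that header with `struct`:

```python
import struct

HEADER = ">LLL"     # version, saturated share data length, number of leases
DATA_OFFSET = 0x0c  # share data begins right after the 12-byte header

def pack_header(data_length, num_leases=0):
    # the data length field is saturated at 2**32-1, matching Footnote 1
    return struct.pack(HEADER, 1, min(2**32 - 1, data_length), num_leases)

def unpack_header(blob):
    # returns (version, data_length, num_leases) from the first 0x0c bytes
    return struct.unpack(HEADER, blob[:DATA_OFFSET])
```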
1631hunk ./src/allmydata/storage/immutable.py 17
1632-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1633+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1634         self.ss = ss
1635hunk ./src/allmydata/storage/immutable.py 19
1636-        self.incominghome = incominghome
1637-        self.finalhome = finalhome
1638         self._max_size = max_size # don't allow the client to write more than this
1639         self._canary = canary
1640         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1641hunk ./src/allmydata/storage/immutable.py 24
1642         self.closed = False
1643         self.throw_out_all_data = False
1644-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1645+        self._sharefile = immutableshare
1646         # also, add our lease to the file now, so that other ones can be
1647         # added by simultaneous uploaders
1648         self._sharefile.add_lease(lease_info)
1649hunk ./src/allmydata/storage/server.py 16
1650 from allmydata.storage.lease import LeaseInfo
1651 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1652      create_mutable_sharefile
1653-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1654-from allmydata.storage.crawler import BucketCountingCrawler
1655-from allmydata.storage.expirer import LeaseCheckingCrawler
1656 
1657 from zope.interface import implements
1658 
1659hunk ./src/allmydata/storage/server.py 19
1660-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1661-# be started and stopped.
1662-class Backend(service.MultiService):
1663-    implements(IStatsProducer)
1664-    def __init__(self):
1665-        service.MultiService.__init__(self)
1666-
1667-    def get_bucket_shares(self):
1668-        """XXX"""
1669-        raise NotImplementedError
1670-
1671-    def get_share(self):
1672-        """XXX"""
1673-        raise NotImplementedError
1674-
1675-    def make_bucket_writer(self):
1676-        """XXX"""
1677-        raise NotImplementedError
1678-
1679-class NullBackend(Backend):
1680-    def __init__(self):
1681-        Backend.__init__(self)
1682-
1683-    def get_available_space(self):
1684-        return None
1685-
1686-    def get_bucket_shares(self, storage_index):
1687-        return set()
1688-
1689-    def get_share(self, storage_index, sharenum):
1690-        return None
1691-
1692-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1693-        return NullBucketWriter()
1694-
1695-class FSBackend(Backend):
1696-    def __init__(self, storedir, readonly=False, reserved_space=0):
1697-        Backend.__init__(self)
1698-
1699-        self._setup_storage(storedir, readonly, reserved_space)
1700-        self._setup_corruption_advisory()
1701-        self._setup_bucket_counter()
1702-        self._setup_lease_checkerf()
1703-
1704-    def _setup_storage(self, storedir, readonly, reserved_space):
1705-        self.storedir = storedir
1706-        self.readonly = readonly
1707-        self.reserved_space = int(reserved_space)
1708-        if self.reserved_space:
1709-            if self.get_available_space() is None:
1710-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1711-                        umid="0wZ27w", level=log.UNUSUAL)
1712-
1713-        self.sharedir = os.path.join(self.storedir, "shares")
1714-        fileutil.make_dirs(self.sharedir)
1715-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1716-        self._clean_incomplete()
1717-
1718-    def _clean_incomplete(self):
1719-        fileutil.rm_dir(self.incomingdir)
1720-        fileutil.make_dirs(self.incomingdir)
1721-
1722-    def _setup_corruption_advisory(self):
1723-        # we don't actually create the corruption-advisory dir until necessary
1724-        self.corruption_advisory_dir = os.path.join(self.storedir,
1725-                                                    "corruption-advisories")
1726-
1727-    def _setup_bucket_counter(self):
1728-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1729-        self.bucket_counter = BucketCountingCrawler(statefile)
1730-        self.bucket_counter.setServiceParent(self)
1731-
1732-    def _setup_lease_checkerf(self):
1733-        statefile = os.path.join(self.storedir, "lease_checker.state")
1734-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1735-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1736-                                   expiration_enabled, expiration_mode,
1737-                                   expiration_override_lease_duration,
1738-                                   expiration_cutoff_date,
1739-                                   expiration_sharetypes)
1740-        self.lease_checker.setServiceParent(self)
1741-
1742-    def get_available_space(self):
1743-        if self.readonly:
1744-            return 0
1745-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1746-
1747-    def get_bucket_shares(self, storage_index):
1748-        """Return a list of (shnum, pathname) tuples for files that hold
1749-        shares for this storage_index. In each tuple, 'shnum' will always be
1750-        the integer form of the last component of 'pathname'."""
1751-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1752-        try:
1753-            for f in os.listdir(storagedir):
1754-                if NUM_RE.match(f):
1755-                    filename = os.path.join(storagedir, f)
1756-                    yield (int(f), filename)
1757-        except OSError:
1758-            # Commonly caused by there being no buckets at all.
1759-            pass
1760-
1761 # storage/
1762 # storage/shares/incoming
1763 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1764hunk ./src/allmydata/storage/server.py 32
1765 # $SHARENUM matches this regex:
1766 NUM_RE=re.compile("^[0-9]+$")
1767 
1768-
1769-
1770 class StorageServer(service.MultiService, Referenceable):
1771     implements(RIStorageServer, IStatsProducer)
1772     name = 'storage'
1773hunk ./src/allmydata/storage/server.py 35
1774-    LeaseCheckerClass = LeaseCheckingCrawler
1775 
1776     def __init__(self, nodeid, backend, reserved_space=0,
1777                  readonly_storage=False,
1778hunk ./src/allmydata/storage/server.py 38
1779-                 stats_provider=None,
1780-                 expiration_enabled=False,
1781-                 expiration_mode="age",
1782-                 expiration_override_lease_duration=None,
1783-                 expiration_cutoff_date=None,
1784-                 expiration_sharetypes=("mutable", "immutable")):
1785+                 stats_provider=None ):
1786         service.MultiService.__init__(self)
1787         assert isinstance(nodeid, str)
1788         assert len(nodeid) == 20
1789hunk ./src/allmydata/storage/server.py 217
1790         # they asked about: this will save them a lot of work. Add or update
1791         # leases for all of them: if they want us to hold shares for this
1792         # file, they'll want us to hold leases for this file.
1793-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1794-            alreadygot.add(shnum)
1795-            sf = ShareFile(fn)
1796-            sf.add_or_renew_lease(lease_info)
1797-
1798-        for shnum in sharenums:
1799-            share = self.backend.get_share(storage_index, shnum)
1800+        for share in self.backend.get_shares(storage_index):
1801+            alreadygot.add(share.shnum)
1802+            share.add_or_renew_lease(lease_info)
1803 
1804hunk ./src/allmydata/storage/server.py 221
1805-            if not share:
1806-                if (not limited) or (remaining_space >= max_space_per_bucket):
1807-                    # ok! we need to create the new share file.
1808-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1809-                                      max_space_per_bucket, lease_info, canary)
1810-                    bucketwriters[shnum] = bw
1811-                    self._active_writers[bw] = 1
1812-                    if limited:
1813-                        remaining_space -= max_space_per_bucket
1814-                else:
1815-                    # bummer! not enough space to accept this bucket
1816-                    pass
1817+        for shnum in (sharenums - alreadygot):
1818+            if (not limited) or (remaining_space >= max_space_per_bucket):
1819+                #XXX or should the following line occur in the storage server constructor? ok! we need to create the new share file.
1820+                self.backend.set_storage_server(self)
1821+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1822+                                                     max_space_per_bucket, lease_info, canary)
1823+                bucketwriters[shnum] = bw
1824+                self._active_writers[bw] = 1
1825+                if limited:
1826+                    remaining_space -= max_space_per_bucket
1827 
1828hunk ./src/allmydata/storage/server.py 232
1829-            elif share.is_complete():
1830-                # great! we already have it. easy.
1831-                pass
1832-            elif not share.is_complete():
1833-                # Note that we don't create BucketWriters for shnums that
1834-                # have a partial share (in incoming/), so if a second upload
1835-                # occurs while the first is still in progress, the second
1836-                # uploader will use different storage servers.
1837-                pass
1838+        #XXX We should document this later.
1839 
1840         self.add_latency("allocate", time.time() - start)
1841         return alreadygot, bucketwriters
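The rewritten allocation loop above first collects `alreadygot` from the backend's existing shares, then iterates over the set difference `sharenums - alreadygot`, creating a writer only while `remaining_space` allows. A toy sketch of that accounting (names are illustrative, not the server's API):

```python
def plan_allocations(requested, already_got, remaining_space, per_bucket):
    """Return (accepted share numbers, space left), mirroring the
    set-difference plus space-accounting logic in allocate_buckets above.
    remaining_space of None means the backend reports unlimited space."""
    accepted = []
    limited = remaining_space is not None
    for shnum in sorted(requested - already_got):
        if (not limited) or (remaining_space >= per_bucket):
            accepted.append(shnum)
            if limited:
                remaining_space -= per_bucket
        # else: not enough space to accept this bucket; skip it
    return accepted, remaining_space
```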
1842hunk ./src/allmydata/storage/server.py 238
1843 
1844     def _iter_share_files(self, storage_index):
1845-        for shnum, filename in self._get_bucket_shares(storage_index):
1846+        for shnum, filename in self._get_shares(storage_index):
1847             f = open(filename, 'rb')
1848             header = f.read(32)
1849             f.close()
1850hunk ./src/allmydata/storage/server.py 318
1851         si_s = si_b2a(storage_index)
1852         log.msg("storage: get_buckets %s" % si_s)
1853         bucketreaders = {} # k: sharenum, v: BucketReader
1854-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1855+        for shnum, filename in self.backend.get_shares(storage_index):
1856             bucketreaders[shnum] = BucketReader(self, filename,
1857                                                 storage_index, shnum)
1858         self.add_latency("get", time.time() - start)
1859hunk ./src/allmydata/storage/server.py 334
1860         # since all shares get the same lease data, we just grab the leases
1861         # from the first share
1862         try:
1863-            shnum, filename = self._get_bucket_shares(storage_index).next()
1864+            shnum, filename = self._get_shares(storage_index).next()
1865             sf = ShareFile(filename)
1866             return sf.get_leases()
1867         except StopIteration:
1868hunk ./src/allmydata/storage/shares.py 1
1869-#! /usr/bin/python
1870-
1871-from allmydata.storage.mutable import MutableShareFile
1872-from allmydata.storage.immutable import ShareFile
1873-
1874-def get_share_file(filename):
1875-    f = open(filename, "rb")
1876-    prefix = f.read(32)
1877-    f.close()
1878-    if prefix == MutableShareFile.MAGIC:
1879-        return MutableShareFile(filename)
1880-    # otherwise assume it's immutable
1881-    return ShareFile(filename)
1882-
1883rmfile ./src/allmydata/storage/shares.py
1884hunk ./src/allmydata/test/common_util.py 20
1885 
1886 def flip_one_bit(s, offset=0, size=None):
1887     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1888-    than offset+size. """
1889+    than offset+size. Return the new string. """
1890     if size is None:
1891         size=len(s)-offset
1892     i = randrange(offset, offset+size)
1893hunk ./src/allmydata/test/test_backends.py 7
1894 
1895 from allmydata.test.common_util import ReallyEqualMixin
1896 
1897-import mock
1898+import mock, os
1899 
1900 # This is the code that we're going to be testing.
1901hunk ./src/allmydata/test/test_backends.py 10
1902-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1903+from allmydata.storage.server import StorageServer
1904+
1905+from allmydata.storage.backends.das.core import DASCore
1906+from allmydata.storage.backends.null.core import NullCore
1907+
1908 
1909 # The following share file contents was generated with
1910 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1911hunk ./src/allmydata/test/test_backends.py 22
1912 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1913 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1914 
1915-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1916+tempdir = 'teststoredir'
1917+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1918+sharefname = os.path.join(sharedirname, '0')
1919 
1920 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1921     @mock.patch('time.time')
1922hunk ./src/allmydata/test/test_backends.py 58
1923         filesystem in only the prescribed ways. """
1924 
1925         def call_open(fname, mode):
1926-            if fname == 'testdir/bucket_counter.state':
1927-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1928-            elif fname == 'testdir/lease_checker.state':
1929-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1930-            elif fname == 'testdir/lease_checker.history':
1931+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1932+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1933+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1934+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1935+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1936                 return StringIO()
1937             else:
1938                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
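The test hunks above route every `open()` call through a `side_effect` function, so the code under test never touches the real filesystem, in line with the patch preamble's goal of mocking all file system calls. A minimal, self-contained sketch of the same technique (paths and the `read_history` function are placeholders; the try/except picks the right `mock` and patch target per Python version):

```python
# Sketch: intercept open() with mock.patch and dispatch on filename, the
# same shape as the call_open helpers in the patch above.
try:
    from unittest import mock          # Python 3
    from io import StringIO
    OPEN_NAME = 'builtins.open'
except ImportError:
    import mock                        # Python 2, as in the patch
    from StringIO import StringIO
    OPEN_NAME = '__builtin__.open'

def call_open(fname, mode):
    if fname == 'missing.state':
        raise IOError(2, "No such file or directory: '%s'" % fname)
    elif fname == 'history.file':
        return StringIO()
    raise AssertionError("unexpected open of %r in mode %r" % (fname, mode))

def read_history():
    # stand-in for code under test that opens its history file
    f = open('history.file', 'r')
    f.close()
    return True

with mock.patch(OPEN_NAME) as mockopen:
    mockopen.side_effect = call_open
    ok = read_history()
```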
1939hunk ./src/allmydata/test/test_backends.py 124
1940     @mock.patch('__builtin__.open')
1941     def setUp(self, mockopen):
1942         def call_open(fname, mode):
1943-            if fname == 'testdir/bucket_counter.state':
1944-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1945-            elif fname == 'testdir/lease_checker.state':
1946-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1947-            elif fname == 'testdir/lease_checker.history':
1948+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1949+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1950+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1951+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1952+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1953                 return StringIO()
1954         mockopen.side_effect = call_open
1955hunk ./src/allmydata/test/test_backends.py 131
1956-
1957-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1958+        expiration_policy = {'enabled' : False,
1959+                             'mode' : 'age',
1960+                             'override_lease_duration' : None,
1961+                             'cutoff_date' : None,
1962+                             'sharetypes' : None}
1963+        testbackend = DASCore(tempdir, expiration_policy)
1964+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1965 
1966     @mock.patch('time.time')
1967     @mock.patch('os.mkdir')
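The setUp hunk above replaces the old spray of `expiration_*` keyword arguments with a single `expiration_policy` dict, which `DASCore` (and the expirer hunk at the top of this section) reads by key, raising `ValueError` on an unknown `'mode'`. A sketch of that key-based validation (the function name is illustrative):

```python
def check_expiration_policy(policy):
    """Validate the policy dict the way the expirer hunk above does:
    'age' reads override_lease_duration, 'cutoff-date' reads cutoff_date,
    and any other mode is rejected with ValueError."""
    mode = policy['mode']
    if mode == 'age':
        return ('age', policy['override_lease_duration'])
    elif mode == 'cutoff-date':
        return ('cutoff-date', policy['cutoff_date'])
    raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
```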
1968hunk ./src/allmydata/test/test_backends.py 148
1969         """ Write a new share. """
1970 
1971         def call_listdir(dirname):
1972-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1973-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1974+            self.failUnlessReallyEqual(dirname, sharedirname)
1975+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1976 
1977         mocklistdir.side_effect = call_listdir
1978 
1979hunk ./src/allmydata/test/test_backends.py 178
1980 
1981         sharefile = MockFile()
1982         def call_open(fname, mode):
1983-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1984+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1985             return sharefile
1986 
1987         mockopen.side_effect = call_open
1988hunk ./src/allmydata/test/test_backends.py 200
1989         StorageServer object. """
1990 
1991         def call_listdir(dirname):
1992-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1993+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1994             return ['0']
1995 
1996         mocklistdir.side_effect = call_listdir
1997}
1998[checkpoint patch
1999wilcoxjg@gmail.com**20110626165715
2000 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
2001] {
2002hunk ./src/allmydata/storage/backends/das/core.py 21
2003 from allmydata.storage.lease import LeaseInfo
2004 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2005      create_mutable_sharefile
2006-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2007+from allmydata.storage.immutable import BucketWriter, BucketReader
2008 from allmydata.storage.crawler import FSBucketCountingCrawler
2009 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2010 
2011hunk ./src/allmydata/storage/backends/das/core.py 27
2012 from zope.interface import implements
2013 
2014+# $SHARENUM matches this regex:
2015+NUM_RE=re.compile("^[0-9]+$")
2016+
2017 class DASCore(Backend):
2018     implements(IStorageBackend)
2019     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2020hunk ./src/allmydata/storage/backends/das/core.py 80
2021         return fileutil.get_available_space(self.storedir, self.reserved_space)
2022 
2023     def get_shares(self, storage_index):
2024-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2025+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
2026         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2027         try:
2028             for f in os.listdir(finalstoragedir):
2029hunk ./src/allmydata/storage/backends/das/core.py 86
2030                 if NUM_RE.match(f):
2031                     filename = os.path.join(finalstoragedir, f)
2032-                    yield FSBShare(filename, int(f))
2033+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2034         except OSError:
2035             # Commonly caused by there being no buckets at all.
2036             pass
2037hunk ./src/allmydata/storage/backends/das/core.py 95
2038         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2039         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2040         return bw
2041+
2042+    def set_storage_server(self, ss):
2043+        self.ss = ss
2044         
2045 
2046 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2047hunk ./src/allmydata/storage/server.py 29
2048 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2049 # base-32 chars).
2050 
2051-# $SHARENUM matches this regex:
2052-NUM_RE=re.compile("^[0-9]+$")
2053 
2054 class StorageServer(service.MultiService, Referenceable):
2055     implements(RIStorageServer, IStatsProducer)
2056}
2057[checkpoint4
2058wilcoxjg@gmail.com**20110628202202
2059 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2060] {
2061hunk ./src/allmydata/storage/backends/das/core.py 96
2062         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2063         return bw
2064 
2065+    def make_bucket_reader(self, share):
2066+        return BucketReader(self.ss, share)
2067+
2068     def set_storage_server(self, ss):
2069         self.ss = ss
2070         
2071hunk ./src/allmydata/storage/backends/das/core.py 138
2072         must not be None. """
2073         precondition((max_size is not None) or (not create), max_size, create)
2074         self.shnum = shnum
2075+        self.storage_index = storageindex
2076         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2077         self._max_size = max_size
2078         if create:
2079hunk ./src/allmydata/storage/backends/das/core.py 173
2080             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2081         self._data_offset = 0xc
2082 
2083+    def get_shnum(self):
2084+        return self.shnum
2085+
2086     def unlink(self):
2087         os.unlink(self.fname)
2088 
2089hunk ./src/allmydata/storage/backends/null/core.py 2
2090 from allmydata.storage.backends.base import Backend
2091+from allmydata.storage.immutable import BucketWriter, BucketReader
2092 
2093 class NullCore(Backend):
2094     def __init__(self):
2095hunk ./src/allmydata/storage/backends/null/core.py 17
2096     def get_share(self, storage_index, sharenum):
2097         return None
2098 
2099-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2100-        return NullBucketWriter()
2101+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2102+       
2103+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2104+
2105+    def set_storage_server(self, ss):
2106+        self.ss = ss
2107+
2108+class ImmutableShare:
2109+    sharetype = "immutable"
2110+
2111+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2112+        """ If max_size is not None then I won't allow more than
2113+        max_size to be written to me. If create=True then max_size
2114+        must not be None. """
2115+        precondition((max_size is not None) or (not create), max_size, create)
2116+        self.shnum = shnum
2117+        self.storage_index = storageindex
2118+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2119+        self._max_size = max_size
2120+        if create:
2121+            # touch the file, so later callers will see that we're working on
2122+            # it. Also construct the metadata.
2123+            assert not os.path.exists(self.fname)
2124+            fileutil.make_dirs(os.path.dirname(self.fname))
2125+            f = open(self.fname, 'wb')
2126+            # The second field -- the four-byte share data length -- is no
2127+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2128+            # there in case someone downgrades a storage server from >=
2129+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2130+            # server to another, etc. We do saturation -- a share data length
2131+            # larger than 2**32-1 (what can fit into the field) is marked as
2132+            # the largest length that can fit into the field. That way, even
2133+            # if this does happen, the old < v1.3.0 server will still allow
2134+            # clients to read the first part of the share.
2135+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2136+            f.close()
2137+            self._lease_offset = max_size + 0x0c
2138+            self._num_leases = 0
2139+        else:
2140+            f = open(self.fname, 'rb')
2141+            filesize = os.path.getsize(self.fname)
2142+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2143+            f.close()
2144+            if version != 1:
2145+                msg = "sharefile %s had version %d but we wanted 1" % \
2146+                      (self.fname, version)
2147+                raise UnknownImmutableContainerVersionError(msg)
2148+            self._num_leases = num_leases
2149+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2150+        self._data_offset = 0xc
2151+
2152+    def get_shnum(self):
2153+        return self.shnum
2154+
2155+    def unlink(self):
2156+        os.unlink(self.fname)
2157+
2158+    def read_share_data(self, offset, length):
2159+        precondition(offset >= 0)
2160+        # Reads beyond the end of the data are truncated. Reads that start
2161+        # beyond the end of the data return an empty string.
2162+        seekpos = self._data_offset+offset
2163+        fsize = os.path.getsize(self.fname)
2164+        actuallength = max(0, min(length, fsize-seekpos))
2165+        if actuallength == 0:
2166+            return ""
2167+        f = open(self.fname, 'rb')
2168+        f.seek(seekpos)
2169+        return f.read(actuallength)
2170+
2171+    def write_share_data(self, offset, data):
2172+        length = len(data)
2173+        precondition(offset >= 0, offset)
2174+        if self._max_size is not None and offset+length > self._max_size:
2175+            raise DataTooLargeError(self._max_size, offset, length)
2176+        f = open(self.fname, 'rb+')
2177+        real_offset = self._data_offset+offset
2178+        f.seek(real_offset)
2179+        assert f.tell() == real_offset
2180+        f.write(data)
2181+        f.close()
2182+
2183+    def _write_lease_record(self, f, lease_number, lease_info):
2184+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2185+        f.seek(offset)
2186+        assert f.tell() == offset
2187+        f.write(lease_info.to_immutable_data())
2188+
2189+    def _read_num_leases(self, f):
2190+        f.seek(0x08)
2191+        (num_leases,) = struct.unpack(">L", f.read(4))
2192+        return num_leases
2193+
2194+    def _write_num_leases(self, f, num_leases):
2195+        f.seek(0x08)
2196+        f.write(struct.pack(">L", num_leases))
2197+
2198+    def _truncate_leases(self, f, num_leases):
2199+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2200+
2201+    def get_leases(self):
2202+        """Yields a LeaseInfo instance for all leases."""
2203+        f = open(self.fname, 'rb')
2204+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2205+        f.seek(self._lease_offset)
2206+        for i in range(num_leases):
2207+            data = f.read(self.LEASE_SIZE)
2208+            if data:
2209+                yield LeaseInfo().from_immutable_data(data)
2210+
2211+    def add_lease(self, lease_info):
2212+        f = open(self.fname, 'rb+')
2213+        num_leases = self._read_num_leases(f)
2214+        self._write_lease_record(f, num_leases, lease_info)
2215+        self._write_num_leases(f, num_leases+1)
2216+        f.close()
2217+
2218+    def renew_lease(self, renew_secret, new_expire_time):
2219+        for i,lease in enumerate(self.get_leases()):
2220+            if constant_time_compare(lease.renew_secret, renew_secret):
2221+                # yup. See if we need to update the owner time.
2222+                if new_expire_time > lease.expiration_time:
2223+                    # yes
2224+                    lease.expiration_time = new_expire_time
2225+                    f = open(self.fname, 'rb+')
2226+                    self._write_lease_record(f, i, lease)
2227+                    f.close()
2228+                return
2229+        raise IndexError("unable to renew non-existent lease")
2230+
2231+    def add_or_renew_lease(self, lease_info):
2232+        try:
2233+            self.renew_lease(lease_info.renew_secret,
2234+                             lease_info.expiration_time)
2235+        except IndexError:
2236+            self.add_lease(lease_info)
2237+
2238+
2239+    def cancel_lease(self, cancel_secret):
2240+        """Remove a lease with the given cancel_secret. If the last lease is
2241+        cancelled, the file will be removed. Return the number of bytes that
2242+        were freed (by truncating the list of leases, and possibly by
2243+        deleting the file). Raise IndexError if there was no lease with the
2244+        given cancel_secret.
2245+        """
2246+
2247+        leases = list(self.get_leases())
2248+        num_leases_removed = 0
2249+        for i,lease in enumerate(leases):
2250+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2251+                leases[i] = None
2252+                num_leases_removed += 1
2253+        if not num_leases_removed:
2254+            raise IndexError("unable to find matching lease to cancel")
2255+        if num_leases_removed:
2256+            # pack and write out the remaining leases. We write these out in
2257+            # the same order as they were added, so that if we crash while
2258+            # doing this, we won't lose any non-cancelled leases.
2259+            leases = [l for l in leases if l] # remove the cancelled leases
2260+            f = open(self.fname, 'rb+')
2261+            for i,lease in enumerate(leases):
2262+                self._write_lease_record(f, i, lease)
2263+            self._write_num_leases(f, len(leases))
2264+            self._truncate_leases(f, len(leases))
2265+            f.close()
2266+        space_freed = self.LEASE_SIZE * num_leases_removed
2267+        if not len(leases):
2268+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2269+            self.unlink()
2270+        return space_freed
2271hunk ./src/allmydata/storage/immutable.py 114
2272 class BucketReader(Referenceable):
2273     implements(RIBucketReader)
2274 
2275-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2276+    def __init__(self, ss, share):
2277         self.ss = ss
2278hunk ./src/allmydata/storage/immutable.py 116
2279-        self._share_file = ShareFile(sharefname)
2280-        self.storage_index = storage_index
2281-        self.shnum = shnum
2282+        self._share_file = share
2283+        self.storage_index = share.storage_index
2284+        self.shnum = share.shnum
2285 
2286     def __repr__(self):
2287         return "<%s %s %s>" % (self.__class__.__name__,
2288hunk ./src/allmydata/storage/server.py 316
2289         si_s = si_b2a(storage_index)
2290         log.msg("storage: get_buckets %s" % si_s)
2291         bucketreaders = {} # k: sharenum, v: BucketReader
2292-        for shnum, filename in self.backend.get_shares(storage_index):
2293-            bucketreaders[shnum] = BucketReader(self, filename,
2294-                                                storage_index, shnum)
2295+        self.backend.set_storage_server(self)
2296+        for share in self.backend.get_shares(storage_index):
2297+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2298         self.add_latency("get", time.time() - start)
2299         return bucketreaders
2300 
2301hunk ./src/allmydata/test/test_backends.py 25
2302 tempdir = 'teststoredir'
2303 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2304 sharefname = os.path.join(sharedirname, '0')
2305+expiration_policy = {'enabled' : False,
2306+                     'mode' : 'age',
2307+                     'override_lease_duration' : None,
2308+                     'cutoff_date' : None,
2309+                     'sharetypes' : None}
2310 
2311 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2312     @mock.patch('time.time')
2313hunk ./src/allmydata/test/test_backends.py 43
2314         tries to read or write to the file system. """
2315 
2316         # Now begin the test.
2317-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2318+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2319 
2320         self.failIf(mockisdir.called)
2321         self.failIf(mocklistdir.called)
2322hunk ./src/allmydata/test/test_backends.py 74
2323         mockopen.side_effect = call_open
2324 
2325         # Now begin the test.
2326-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2327+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2328 
2329         self.failIf(mockisdir.called)
2330         self.failIf(mocklistdir.called)
2331hunk ./src/allmydata/test/test_backends.py 86
2332 
2333 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2334     def setUp(self):
2335-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2336+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2337 
2338     @mock.patch('os.mkdir')
2339     @mock.patch('__builtin__.open')
2340hunk ./src/allmydata/test/test_backends.py 136
2341             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2342                 return StringIO()
2343         mockopen.side_effect = call_open
2344-        expiration_policy = {'enabled' : False,
2345-                             'mode' : 'age',
2346-                             'override_lease_duration' : None,
2347-                             'cutoff_date' : None,
2348-                             'sharetypes' : None}
2349         testbackend = DASCore(tempdir, expiration_policy)
2350         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2351 
2352}
2353[checkpoint5
2354wilcoxjg@gmail.com**20110705034626
2355 Ignore-this: 255780bd58299b0aa33c027e9d008262
2356] {
2357addfile ./src/allmydata/storage/backends/base.py
2358hunk ./src/allmydata/storage/backends/base.py 1
2359+from twisted.application import service
2360+
2361+class Backend(service.MultiService):
2362+    def __init__(self):
2363+        service.MultiService.__init__(self)
2364hunk ./src/allmydata/storage/backends/null/core.py 19
2365 
2366     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2367         
2368+        immutableshare = ImmutableShare()
2369         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2370 
2371     def set_storage_server(self, ss):
2372hunk ./src/allmydata/storage/backends/null/core.py 28
2373 class ImmutableShare:
2374     sharetype = "immutable"
2375 
2376-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2377+    def __init__(self):
2378         """ If max_size is not None then I won't allow more than
2379         max_size to be written to me. If create=True then max_size
2380         must not be None. """
2381hunk ./src/allmydata/storage/backends/null/core.py 32
2382-        precondition((max_size is not None) or (not create), max_size, create)
2383-        self.shnum = shnum
2384-        self.storage_index = storageindex
2385-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2386-        self._max_size = max_size
2387-        if create:
2388-            # touch the file, so later callers will see that we're working on
2389-            # it. Also construct the metadata.
2390-            assert not os.path.exists(self.fname)
2391-            fileutil.make_dirs(os.path.dirname(self.fname))
2392-            f = open(self.fname, 'wb')
2393-            # The second field -- the four-byte share data length -- is no
2394-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2395-            # there in case someone downgrades a storage server from >=
2396-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2397-            # server to another, etc. We do saturation -- a share data length
2398-            # larger than 2**32-1 (what can fit into the field) is marked as
2399-            # the largest length that can fit into the field. That way, even
2400-            # if this does happen, the old < v1.3.0 server will still allow
2401-            # clients to read the first part of the share.
2402-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2403-            f.close()
2404-            self._lease_offset = max_size + 0x0c
2405-            self._num_leases = 0
2406-        else:
2407-            f = open(self.fname, 'rb')
2408-            filesize = os.path.getsize(self.fname)
2409-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2410-            f.close()
2411-            if version != 1:
2412-                msg = "sharefile %s had version %d but we wanted 1" % \
2413-                      (self.fname, version)
2414-                raise UnknownImmutableContainerVersionError(msg)
2415-            self._num_leases = num_leases
2416-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2417-        self._data_offset = 0xc
2418+        pass
2419 
2420     def get_shnum(self):
2421         return self.shnum
2422hunk ./src/allmydata/storage/backends/null/core.py 54
2423         return f.read(actuallength)
2424 
2425     def write_share_data(self, offset, data):
2426-        length = len(data)
2427-        precondition(offset >= 0, offset)
2428-        if self._max_size is not None and offset+length > self._max_size:
2429-            raise DataTooLargeError(self._max_size, offset, length)
2430-        f = open(self.fname, 'rb+')
2431-        real_offset = self._data_offset+offset
2432-        f.seek(real_offset)
2433-        assert f.tell() == real_offset
2434-        f.write(data)
2435-        f.close()
2436+        pass
2437 
2438     def _write_lease_record(self, f, lease_number, lease_info):
2439         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2440hunk ./src/allmydata/storage/backends/null/core.py 84
2441             if data:
2442                 yield LeaseInfo().from_immutable_data(data)
2443 
2444-    def add_lease(self, lease_info):
2445-        f = open(self.fname, 'rb+')
2446-        num_leases = self._read_num_leases(f)
2447-        self._write_lease_record(f, num_leases, lease_info)
2448-        self._write_num_leases(f, num_leases+1)
2449-        f.close()
2450+    def add_lease(self, lease):
2451+        pass
2452 
2453     def renew_lease(self, renew_secret, new_expire_time):
2454         for i,lease in enumerate(self.get_leases()):
2455hunk ./src/allmydata/test/test_backends.py 32
2456                      'sharetypes' : None}
2457 
2458 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2459-    @mock.patch('time.time')
2460-    @mock.patch('os.mkdir')
2461-    @mock.patch('__builtin__.open')
2462-    @mock.patch('os.listdir')
2463-    @mock.patch('os.path.isdir')
2464-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2465-        """ This tests whether a server instance can be constructed
2466-        with a null backend. The server instance fails the test if it
2467-        tries to read or write to the file system. """
2468-
2469-        # Now begin the test.
2470-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2471-
2472-        self.failIf(mockisdir.called)
2473-        self.failIf(mocklistdir.called)
2474-        self.failIf(mockopen.called)
2475-        self.failIf(mockmkdir.called)
2476-
2477-        # You passed!
2478-
2479     @mock.patch('time.time')
2480     @mock.patch('os.mkdir')
2481     @mock.patch('__builtin__.open')
2482hunk ./src/allmydata/test/test_backends.py 53
2483                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2484         mockopen.side_effect = call_open
2485 
2486-        # Now begin the test.
2487-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2488-
2489-        self.failIf(mockisdir.called)
2490-        self.failIf(mocklistdir.called)
2491-        self.failIf(mockopen.called)
2492-        self.failIf(mockmkdir.called)
2493-        self.failIf(mocktime.called)
2494-
2495-        # You passed!
2496-
2497-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2498-    def setUp(self):
2499-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2500-
2501-    @mock.patch('os.mkdir')
2502-    @mock.patch('__builtin__.open')
2503-    @mock.patch('os.listdir')
2504-    @mock.patch('os.path.isdir')
2505-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2506-        """ Write a new share. """
2507-
2508-        # Now begin the test.
2509-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2510-        bs[0].remote_write(0, 'a')
2511-        self.failIf(mockisdir.called)
2512-        self.failIf(mocklistdir.called)
2513-        self.failIf(mockopen.called)
2514-        self.failIf(mockmkdir.called)
2515+        def call_isdir(fname):
2516+            if fname == os.path.join(tempdir,'shares'):
2517+                return True
2518+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2519+                return True
2520+            else:
2521+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2522+        mockisdir.side_effect = call_isdir
2523 
2524hunk ./src/allmydata/test/test_backends.py 62
2525-    @mock.patch('os.path.exists')
2526-    @mock.patch('os.path.getsize')
2527-    @mock.patch('__builtin__.open')
2528-    @mock.patch('os.listdir')
2529-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2530-        """ This tests whether the code correctly finds and reads
2531-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2532-        servers. There is a similar test in test_download, but that one
2533-        is from the perspective of the client and exercises a deeper
2534-        stack of code. This one is for exercising just the
2535-        StorageServer object. """
2536+        def call_mkdir(fname, mode):
2537+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2538+            self.failUnlessEqual(0777, mode)
2539+            if fname == tempdir:
2540+                return None
2541+            elif fname == os.path.join(tempdir,'shares'):
2542+                return None
2543+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2544+                return None
2545+            else:
2546+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2547+        mockmkdir.side_effect = call_mkdir
2548 
2549         # Now begin the test.
2550hunk ./src/allmydata/test/test_backends.py 76
2551-        bs = self.s.remote_get_buckets('teststorage_index')
2552+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2553 
2554hunk ./src/allmydata/test/test_backends.py 78
2555-        self.failUnlessEqual(len(bs), 0)
2556-        self.failIf(mocklistdir.called)
2557-        self.failIf(mockopen.called)
2558-        self.failIf(mockgetsize.called)
2559-        self.failIf(mockexists.called)
2560+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2561 
2562 
2563 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2564hunk ./src/allmydata/test/test_backends.py 193
2565         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2566 
2567 
2568+
2569+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2570+    @mock.patch('time.time')
2571+    @mock.patch('os.mkdir')
2572+    @mock.patch('__builtin__.open')
2573+    @mock.patch('os.listdir')
2574+    @mock.patch('os.path.isdir')
2575+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2576+        """ This tests whether a file system backend instance can be
2577+        constructed. To pass the test, it has to use the
2578+        filesystem in only the prescribed ways. """
2579+
2580+        def call_open(fname, mode):
2581+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2582+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2583+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2584+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2585+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2586+                return StringIO()
2587+            else:
2588+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2589+        mockopen.side_effect = call_open
2590+
2591+        def call_isdir(fname):
2592+            if fname == os.path.join(tempdir,'shares'):
2593+                return True
2594+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2595+                return True
2596+            else:
2597+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2598+        mockisdir.side_effect = call_isdir
2599+
2600+        def call_mkdir(fname, mode):
2601+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2602+            self.failUnlessEqual(0777, mode)
2603+            if fname == tempdir:
2604+                return None
2605+            elif fname == os.path.join(tempdir,'shares'):
2606+                return None
2607+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2608+                return None
2609+            else:
2610+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2611+        mockmkdir.side_effect = call_mkdir
2612+
2613+        # Now begin the test.
2614+        DASCore('teststoredir', expiration_policy)
2615+
2616+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2617}
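Editor's note: the ImmutableShare code in the hunks above reads and writes a fixed 12-byte big-endian header (version, legacy data-length field, lease count) followed by share data at offset 0xc, with the data-length field saturated at 2**32-1 for pre-v1.3.0 compatibility. A minimal standalone sketch of that header handling, for reference only (the helper names are illustrative and not part of the patch):

```python
import struct

HEADER = ">LLL"      # version, legacy data-length field, num_leases
DATA_OFFSET = 0x0c   # share data starts right after the 12-byte header

def pack_header(max_size, num_leases=0):
    # The second field is unused as of Tahoe v1.3.0 but is still written,
    # saturated at 2**32-1, so older servers can read the first part of
    # an oversized share.
    return struct.pack(HEADER, 1, min(2**32 - 1, max_size), num_leases)

def unpack_header(header_bytes):
    version, _unused, num_leases = struct.unpack(HEADER, header_bytes[:DATA_OFFSET])
    return version, num_leases
```

This mirrors the `struct.pack(">LLL", 1, min(2**32-1, max_size), 0)` / `struct.unpack(">LLL", f.read(0xc))` pair that appears in both the das and null backend hunks.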
2618[checkpoint 6
2619wilcoxjg@gmail.com**20110706190824
2620 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2621] {
2622hunk ./src/allmydata/interfaces.py 100
2623                          renew_secret=LeaseRenewSecret,
2624                          cancel_secret=LeaseCancelSecret,
2625                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2626-                         allocated_size=Offset, canary=Referenceable):
2627+                         allocated_size=Offset,
2628+                         canary=Referenceable):
2629         """
2630hunk ./src/allmydata/interfaces.py 103
2631-        @param storage_index: the index of the bucket to be created or
2632+        @param storage_index: the index of the shares to be created or
2633                               increfed.
2634hunk ./src/allmydata/interfaces.py 105
2635-        @param sharenums: these are the share numbers (probably between 0 and
2636-                          99) that the sender is proposing to store on this
2637-                          server.
2638-        @param renew_secret: This is the secret used to protect bucket refresh
2639+        @param renew_secret: This is the secret used to protect shares refresh
2640                              This secret is generated by the client and
2641                              stored for later comparison by the server. Each
2642                              server is given a different secret.
2643hunk ./src/allmydata/interfaces.py 109
2644-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2645-        @param canary: If the canary is lost before close(), the bucket is
2646+        @param cancel_secret: Like renew_secret, but protects shares decref.
2647+        @param sharenums: these are the share numbers (probably between 0 and
2648+                          99) that the sender is proposing to store on this
2649+                          server.
2650+        @param allocated_size: XXX The size of the shares the client wishes to store.
2651+        @param canary: If the canary is lost before close(), the shares are
2652                        deleted.
2653hunk ./src/allmydata/interfaces.py 116
2654+
2655         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2656                  already have and allocated is what we hereby agree to accept.
2657                  New leases are added for shares in both lists.
2658hunk ./src/allmydata/interfaces.py 128
2659                   renew_secret=LeaseRenewSecret,
2660                   cancel_secret=LeaseCancelSecret):
2661         """
2662-        Add a new lease on the given bucket. If the renew_secret matches an
2663+        Add a new lease on the given shares. If the renew_secret matches an
2664         existing lease, that lease will be renewed instead. If there is no
2665         bucket for the given storage_index, return silently. (note that in
2666         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2667hunk ./src/allmydata/storage/server.py 17
2668 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2669      create_mutable_sharefile
2670 
2671-from zope.interface import implements
2672-
2673 # storage/
2674 # storage/shares/incoming
2675 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2676hunk ./src/allmydata/test/test_backends.py 6
2677 from StringIO import StringIO
2678 
2679 from allmydata.test.common_util import ReallyEqualMixin
2680+from allmydata.util.assertutil import _assert
2681 
2682 import mock, os
2683 
2684hunk ./src/allmydata/test/test_backends.py 92
2685                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2686             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2687                 return StringIO()
2688+            else:
2689+                _assert(False, "The tester code doesn't recognize this case.") 
2690+
2691         mockopen.side_effect = call_open
2692         testbackend = DASCore(tempdir, expiration_policy)
2693         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2694hunk ./src/allmydata/test/test_backends.py 109
2695 
2696         def call_listdir(dirname):
2697             self.failUnlessReallyEqual(dirname, sharedirname)
2698-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2699+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2700 
2701         mocklistdir.side_effect = call_listdir
2702 
2703hunk ./src/allmydata/test/test_backends.py 113
2704+        def call_isdir(dirname):
2705+            self.failUnlessReallyEqual(dirname, sharedirname)
2706+            return True
2707+
2708+        mockisdir.side_effect = call_isdir
2709+
2710+        def call_mkdir(dirname, permissions):
2711+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2712+                self.fail()
2713+            else:
2714+                return True
2715+
2716+        mockmkdir.side_effect = call_mkdir
2717+
2718         class MockFile:
2719             def __init__(self):
2720                 self.buffer = ''
2721hunk ./src/allmydata/test/test_backends.py 156
2722             return sharefile
2723 
2724         mockopen.side_effect = call_open
2725+
2726         # Now begin the test.
2727         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2728         bs[0].remote_write(0, 'a')
2729hunk ./src/allmydata/test/test_backends.py 161
2730         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2731+       
2732+        # Now test the allocated_size method.
2733+        spaceint = self.s.allocated_size()
2734 
2735     @mock.patch('os.path.exists')
2736     @mock.patch('os.path.getsize')
2737}
2738[checkpoint 7
2739wilcoxjg@gmail.com**20110706200820
2740 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2741] hunk ./src/allmydata/test/test_backends.py 164
2742         
2743         # Now test the allocated_size method.
2744         spaceint = self.s.allocated_size()
2745+        self.failUnlessReallyEqual(spaceint, 1)
2746 
2747     @mock.patch('os.path.exists')
2748     @mock.patch('os.path.getsize')
2749[checkpoint8
2750wilcoxjg@gmail.com**20110706223126
2751 Ignore-this: 97336180883cb798b16f15411179f827
2752   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2753] hunk ./src/allmydata/test/test_backends.py 32
2754                      'cutoff_date' : None,
2755                      'sharetypes' : None}
2756 
2757+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2758+    def setUp(self):
2759+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2760+
2761+    @mock.patch('os.mkdir')
2762+    @mock.patch('__builtin__.open')
2763+    @mock.patch('os.listdir')
2764+    @mock.patch('os.path.isdir')
2765+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2766+        """ Write a new share. """
2767+
2768+        # Now begin the test.
2769+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2770+        bs[0].remote_write(0, 'a')
2771+        self.failIf(mockisdir.called)
2772+        self.failIf(mocklistdir.called)
2773+        self.failIf(mockopen.called)
2774+        self.failIf(mockmkdir.called)
2775+
2776 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2777     @mock.patch('time.time')
2778     @mock.patch('os.mkdir')
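The null-backend test above checks that no filesystem call slips through by patching the relevant functions and asserting `mock.called` is false. A minimal standalone sketch of the `side_effect` interception pattern these tests rely on (the tests use the standalone `mock` package on Python 2; `unittest.mock` has the same patch/side_effect API, and the filenames here are illustrative):

```python
from unittest import mock  # the patch uses the standalone 'mock' package

def call_open(fname, mode='r'):
    # side_effect lets the test dictate the result of every intercepted
    # call, or raise to simulate a missing file, as the DASCore setUp does.
    if fname == 'lease_checker.state':
        raise IOError(2, "No such file or directory: '%s'" % fname)
    return 'fake file object'

with mock.patch('builtins.open') as mockopen:
    mockopen.side_effect = call_open
    fobj = open('lease_checker.history')   # routed to call_open
    assert fobj == 'fake file object'
    try:
        open('lease_checker.state')        # routed to call_open, raises
    except IOError:
        pass
```

After the `with` block the mock records how it was exercised, which is what assertions like `self.failIf(mockopen.called)` inspect.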
2779[checkpoint 9
2780wilcoxjg@gmail.com**20110707042942
2781 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2782] {
2783hunk ./src/allmydata/storage/backends/das/core.py 88
2784                     filename = os.path.join(finalstoragedir, f)
2785                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2786         except OSError:
2787-            # Commonly caused by there being no buckets at all.
2788+            # Commonly caused by there being no shares at all.
2789             pass
2790         
2791     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2792hunk ./src/allmydata/storage/backends/das/core.py 141
2793         self.storage_index = storageindex
2794         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2795         self._max_size = max_size
2796+        self.incomingdir = os.path.join(sharedir, 'incoming')
2797+        si_dir = storage_index_to_dir(storageindex)
2798+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2799+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2800         if create:
2801             # touch the file, so later callers will see that we're working on
2802             # it. Also construct the metadata.
2803hunk ./src/allmydata/storage/backends/das/core.py 177
2804             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2805         self._data_offset = 0xc
2806 
2807+    def close(self):
2808+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2809+        fileutil.rename(self.incominghome, self.finalhome)
2810+        try:
2811+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2812+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2813+            # these directories lying around forever, but the delete might
2814+            # fail if we're working on another share for the same storage
2815+            # index (like ab/abcde/5). The alternative approach would be to
2816+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2817+            # ShareWriter), each of which is responsible for a single
2818+            # directory on disk, and have them use reference counting of
2819+            # their children to know when they should do the rmdir. This
2820+            # approach is simpler, but relies on os.rmdir refusing to delete
2821+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2822+            os.rmdir(os.path.dirname(self.incominghome))
2823+            # we also delete the grandparent (prefix) directory, .../ab ,
2824+            # again to avoid leaving directories lying around. This might
2825+            # fail if there is another bucket open that shares a prefix (like
2826+            # ab/abfff).
2827+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2828+            # we leave the great-grandparent (incoming/) directory in place.
2829+        except EnvironmentError:
2830+            # ignore the "can't rmdir because the directory is not empty"
2831+            # exceptions, those are normal consequences of the
2832+            # above-mentioned conditions.
2833+            pass
2834+        pass
2835+       
2836+    def stat(self):
2837+        return os.stat(self.finalhome)[stat.ST_SIZE]
2838+
2839     def get_shnum(self):
2840         return self.shnum
2841 
2842hunk ./src/allmydata/storage/immutable.py 7
2843 
2844 from zope.interface import implements
2845 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2846-from allmydata.util import base32, fileutil, log
2847+from allmydata.util import base32, log
2848 from allmydata.util.assertutil import precondition
2849 from allmydata.util.hashutil import constant_time_compare
2850 from allmydata.storage.lease import LeaseInfo
2851hunk ./src/allmydata/storage/immutable.py 44
2852     def remote_close(self):
2853         precondition(not self.closed)
2854         start = time.time()
2855-
2856-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2857-        fileutil.rename(self.incominghome, self.finalhome)
2858-        try:
2859-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2860-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2861-            # these directories lying around forever, but the delete might
2862-            # fail if we're working on another share for the same storage
2863-            # index (like ab/abcde/5). The alternative approach would be to
2864-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2865-            # ShareWriter), each of which is responsible for a single
2866-            # directory on disk, and have them use reference counting of
2867-            # their children to know when they should do the rmdir. This
2868-            # approach is simpler, but relies on os.rmdir refusing to delete
2869-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2870-            os.rmdir(os.path.dirname(self.incominghome))
2871-            # we also delete the grandparent (prefix) directory, .../ab ,
2872-            # again to avoid leaving directories lying around. This might
2873-            # fail if there is another bucket open that shares a prefix (like
2874-            # ab/abfff).
2875-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2876-            # we leave the great-grandparent (incoming/) directory in place.
2877-        except EnvironmentError:
2878-            # ignore the "can't rmdir because the directory is not empty"
2879-            # exceptions, those are normal consequences of the
2880-            # above-mentioned conditions.
2881-            pass
2882+        self._sharefile.close()
2883         self._sharefile = None
2884         self.closed = True
2885         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2886hunk ./src/allmydata/storage/immutable.py 49
2887 
2888-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2889+        filelen = self._sharefile.stat()
2890         self.ss.bucket_writer_closed(self, filelen)
2891         self.ss.add_latency("close", time.time() - start)
2892         self.ss.count("close")
2893hunk ./src/allmydata/storage/server.py 45
2894         self._active_writers = weakref.WeakKeyDictionary()
2895         self.backend = backend
2896         self.backend.setServiceParent(self)
2897+        self.backend.set_storage_server(self)
2898         log.msg("StorageServer created", facility="tahoe.storage")
2899 
2900         self.latencies = {"allocate": [], # immutable
2901hunk ./src/allmydata/storage/server.py 220
2902 
2903         for shnum in (sharenums - alreadygot):
2904             if (not limited) or (remaining_space >= max_space_per_bucket):
2905-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2906-                self.backend.set_storage_server(self)
2907                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2908                                                      max_space_per_bucket, lease_info, canary)
2909                 bucketwriters[shnum] = bw
2910hunk ./src/allmydata/test/test_backends.py 117
2911         mockopen.side_effect = call_open
2912         testbackend = DASCore(tempdir, expiration_policy)
2913         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2914-
2915+   
2916+    @mock.patch('allmydata.util.fileutil.get_available_space')
2917     @mock.patch('time.time')
2918     @mock.patch('os.mkdir')
2919     @mock.patch('__builtin__.open')
2920hunk ./src/allmydata/test/test_backends.py 124
2921     @mock.patch('os.listdir')
2922     @mock.patch('os.path.isdir')
2923-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2924+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2925+                             mockget_available_space):
2926         """ Write a new share. """
2927 
2928         def call_listdir(dirname):
2929hunk ./src/allmydata/test/test_backends.py 148
2930 
2931         mockmkdir.side_effect = call_mkdir
2932 
2933+        def call_get_available_space(storedir, reserved_space):
2934+            self.failUnlessReallyEqual(storedir, tempdir)
2935+            return 1
2936+
2937+        mockget_available_space.side_effect = call_get_available_space
2938+
2939         class MockFile:
2940             def __init__(self):
2941                 self.buffer = ''
2942hunk ./src/allmydata/test/test_backends.py 188
2943         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2944         bs[0].remote_write(0, 'a')
2945         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2946-       
2947+
2948+        # What happens when there's not enough space for the client's request?
2949+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2950+
2951         # Now test the allocated_size method.
2952         spaceint = self.s.allocated_size()
2953         self.failUnlessReallyEqual(spaceint, 1)
2954}
2955[checkpoint10
2956wilcoxjg@gmail.com**20110707172049
2957 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2958] {
2959hunk ./src/allmydata/test/test_backends.py 20
2960 # The following share file contents were generated with
2961 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2962 # with share data == 'a'.
2963-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2964+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2965+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2966+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2967 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2968 
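The literal above encodes a v1 immutable share file: a big-endian (version, data size, lease count) header, the share data itself, then one lease record (owner number, renew secret, cancel secret, expiration time). A sketch that rebuilds the same bytes (the helper name is hypothetical, and the field interpretation is inferred from the literal rather than taken from the ShareFile source):

```python
import struct

def pack_share_file(data, renew_secret, cancel_secret, expiration_time):
    # ">LLL" header: version 1, length of the share data, one lease.
    header = struct.pack(">LLL", 1, len(data), 1)
    # Lease record: owner number 0, the two 32-byte secrets, and the
    # expiration time (31*24*60*60 == 0x0028de80 == '\x00(\xde\x80').
    lease = (struct.pack(">L", 0) + renew_secret + cancel_secret
             + struct.pack(">L", expiration_time))
    return header + data + lease
```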
2969hunk ./src/allmydata/test/test_backends.py 25
2970+testnodeid = 'testnodeidxxxxxxxxxx'
2971 tempdir = 'teststoredir'
2972 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2973 sharefname = os.path.join(sharedirname, '0')
2974hunk ./src/allmydata/test/test_backends.py 37
2975 
2976 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2977     def setUp(self):
2978-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2979+        self.s = StorageServer(testnodeid, backend=NullCore())
2980 
2981     @mock.patch('os.mkdir')
2982     @mock.patch('__builtin__.open')
2983hunk ./src/allmydata/test/test_backends.py 99
2984         mockmkdir.side_effect = call_mkdir
2985 
2986         # Now begin the test.
2987-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2988+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2989 
2990         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2991 
2992hunk ./src/allmydata/test/test_backends.py 119
2993 
2994         mockopen.side_effect = call_open
2995         testbackend = DASCore(tempdir, expiration_policy)
2996-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2997-   
2998+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2999+       
3000+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3001     @mock.patch('allmydata.util.fileutil.get_available_space')
3002     @mock.patch('time.time')
3003     @mock.patch('os.mkdir')
3004hunk ./src/allmydata/test/test_backends.py 129
3005     @mock.patch('os.listdir')
3006     @mock.patch('os.path.isdir')
3007     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3008-                             mockget_available_space):
3009+                             mockget_available_space, mockget_shares):
3010         """ Write a new share. """
3011 
3012         def call_listdir(dirname):
3013hunk ./src/allmydata/test/test_backends.py 139
3014         mocklistdir.side_effect = call_listdir
3015 
3016         def call_isdir(dirname):
3017+            #XXX Should there be any other tests here?
3018             self.failUnlessReallyEqual(dirname, sharedirname)
3019             return True
3020 
3021hunk ./src/allmydata/test/test_backends.py 159
3022 
3023         mockget_available_space.side_effect = call_get_available_space
3024 
3025+        mocktime.return_value = 0
3026+        class MockShare:
3027+            def __init__(self):
3028+                self.shnum = 1
3029+               
3030+            def add_or_renew_lease(elf, lease_info):
3031+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3032+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3033+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3034+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3035+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3036+               
3037+
3038+        share = MockShare()
3039+        def call_get_shares(storageindex):
3040+            return [share]
3041+
3042+        mockget_shares.side_effect = call_get_shares
3043+
3044         class MockFile:
3045             def __init__(self):
3046                 self.buffer = ''
3047hunk ./src/allmydata/test/test_backends.py 199
3048             def tell(self):
3049                 return self.pos
3050 
3051-        mocktime.return_value = 0
3052 
3053         sharefile = MockFile()
3054         def call_open(fname, mode):
3055}
3056[jacp 11
3057wilcoxjg@gmail.com**20110708213919
3058 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3059] {
3060hunk ./src/allmydata/storage/backends/das/core.py 144
3061         self.incomingdir = os.path.join(sharedir, 'incoming')
3062         si_dir = storage_index_to_dir(storageindex)
3063         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3064+        #XXX  self.fname and self.finalhome need to be resolve/merged.
3065         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3066         if create:
3067             # touch the file, so later callers will see that we're working on
3068hunk ./src/allmydata/storage/backends/das/core.py 208
3069         pass
3070         
3071     def stat(self):
3072-        return os.stat(self.finalhome)[stat.ST_SIZE]
3073+        return os.stat(self.finalhome).st_size
3074 
3075     def get_shnum(self):
3076         return self.shnum
3077hunk ./src/allmydata/storage/immutable.py 44
3078     def remote_close(self):
3079         precondition(not self.closed)
3080         start = time.time()
3081+
3082         self._sharefile.close()
3083hunk ./src/allmydata/storage/immutable.py 46
3084+        filelen = self._sharefile.stat()
3085         self._sharefile = None
3086hunk ./src/allmydata/storage/immutable.py 48
3087+
3088         self.closed = True
3089         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3090 
3091hunk ./src/allmydata/storage/immutable.py 52
3092-        filelen = self._sharefile.stat()
3093         self.ss.bucket_writer_closed(self, filelen)
3094         self.ss.add_latency("close", time.time() - start)
3095         self.ss.count("close")
3096hunk ./src/allmydata/storage/server.py 220
3097 
3098         for shnum in (sharenums - alreadygot):
3099             if (not limited) or (remaining_space >= max_space_per_bucket):
3100-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3101-                                                     max_space_per_bucket, lease_info, canary)
3102+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3103                 bucketwriters[shnum] = bw
3104                 self._active_writers[bw] = 1
3105                 if limited:
3106hunk ./src/allmydata/test/test_backends.py 20
3107 # The following share file contents were generated with
3108 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3109 # with share data == 'a'.
3110-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3111-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3112+renew_secret  = 'x'*32
3113+cancel_secret = 'y'*32
3114 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3115 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3116 
3117hunk ./src/allmydata/test/test_backends.py 27
3118 testnodeid = 'testnodeidxxxxxxxxxx'
3119 tempdir = 'teststoredir'
3120-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3121-sharefname = os.path.join(sharedirname, '0')
3122+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3123+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3124+shareincomingname = os.path.join(sharedirincomingname, '0')
3125+sharefname = os.path.join(sharedirfinalname, '0')
3126+
3127 expiration_policy = {'enabled' : False,
3128                      'mode' : 'age',
3129                      'override_lease_duration' : None,
3130hunk ./src/allmydata/test/test_backends.py 123
3131         mockopen.side_effect = call_open
3132         testbackend = DASCore(tempdir, expiration_policy)
3133         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3134-       
3135+
3136+    @mock.patch('allmydata.util.fileutil.rename')
3137+    @mock.patch('allmydata.util.fileutil.make_dirs')
3138+    @mock.patch('os.path.exists')
3139+    @mock.patch('os.stat')
3140     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3141     @mock.patch('allmydata.util.fileutil.get_available_space')
3142     @mock.patch('time.time')
3143hunk ./src/allmydata/test/test_backends.py 136
3144     @mock.patch('os.listdir')
3145     @mock.patch('os.path.isdir')
3146     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3147-                             mockget_available_space, mockget_shares):
3148+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3149+                             mockmake_dirs, mockrename):
3150         """ Write a new share. """
3151 
3152         def call_listdir(dirname):
3153hunk ./src/allmydata/test/test_backends.py 141
3154-            self.failUnlessReallyEqual(dirname, sharedirname)
3155+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3156             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3157 
3158         mocklistdir.side_effect = call_listdir
3159hunk ./src/allmydata/test/test_backends.py 148
3160 
3161         def call_isdir(dirname):
3162             #XXX Should there be any other tests here?
3163-            self.failUnlessReallyEqual(dirname, sharedirname)
3164+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3165             return True
3166 
3167         mockisdir.side_effect = call_isdir
3168hunk ./src/allmydata/test/test_backends.py 154
3169 
3170         def call_mkdir(dirname, permissions):
3171-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3172+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3173                 self.Fail
3174             else:
3175                 return True
3176hunk ./src/allmydata/test/test_backends.py 208
3177                 return self.pos
3178 
3179 
3180-        sharefile = MockFile()
3181+        fobj = MockFile()
3182         def call_open(fname, mode):
3183             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3184hunk ./src/allmydata/test/test_backends.py 211
3185-            return sharefile
3186+            return fobj
3187 
3188         mockopen.side_effect = call_open
3189 
3190hunk ./src/allmydata/test/test_backends.py 215
3191+        def call_make_dirs(dname):
3192+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3193+           
3194+        mockmake_dirs.side_effect = call_make_dirs
3195+
3196+        def call_rename(src, dst):
3197+           self.failUnlessReallyEqual(src, shareincomingname)
3198+           self.failUnlessReallyEqual(dst, sharefname)
3199+           
3200+        mockrename.side_effect = call_rename
3201+
3202+        def call_exists(fname):
3203+            self.failUnlessReallyEqual(fname, sharefname)
3204+
3205+        mockexists.side_effect = call_exists
3206+
3207         # Now begin the test.
3208         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3209         bs[0].remote_write(0, 'a')
3210hunk ./src/allmydata/test/test_backends.py 234
3211-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3212+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3213+        spaceint = self.s.allocated_size()
3214+        self.failUnlessReallyEqual(spaceint, 1)
3215+
3216+        bs[0].remote_close()
3217 
3218         # What happens when there's not enough space for the client's request?
3219hunk ./src/allmydata/test/test_backends.py 241
3220-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3221+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3222 
3223         # Now test the allocated_size method.
3224hunk ./src/allmydata/test/test_backends.py 244
3225-        spaceint = self.s.allocated_size()
3226-        self.failUnlessReallyEqual(spaceint, 1)
3227+        #self.failIf(mockexists.called, mockexists.call_args_list)
3228+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3229+        #self.failIf(mockrename.called, mockrename.call_args_list)
3230+        #self.failIf(mockstat.called, mockstat.call_args_list)
3231 
3232     @mock.patch('os.path.exists')
3233     @mock.patch('os.path.getsize')
3234}
3235[checkpoint12 testing correct behavior with regard to incoming and final
3236wilcoxjg@gmail.com**20110710191915
3237 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3238] {
3239hunk ./src/allmydata/storage/backends/das/core.py 74
3240         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3241         self.lease_checker.setServiceParent(self)
3242 
3243+    def get_incoming(self, storageindex):
3244+        return set((1,))
3245+
3246     def get_available_space(self):
3247         if self.readonly:
3248             return 0
3249hunk ./src/allmydata/storage/server.py 77
3250         """Return a dict, indexed by category, that contains a dict of
3251         latency numbers for each category. If there are sufficient samples
3252         for unambiguous interpretation, each dict will contain the
3253-        following keys: mean, 01_0_percentile, 10_0_percentile,
3254+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3255         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3256         99_0_percentile, 99_9_percentile.  If there are insufficient
3257         samples for a given percentile to be interpreted unambiguously
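The docstring above names the summary keys; a minimal sketch of how they could be computed from a sample list (a simple nearest-rank scheme is assumed here, not necessarily the server's exact interpolation):

```python
def latency_summary(samples):
    # Produces the keys named in the docstring: samplesize, mean, and
    # the listed percentiles, using a nearest-rank pick on sorted data.
    sorted_s = sorted(samples)
    n = len(sorted_s)
    out = {"samplesize": n, "mean": float(sum(sorted_s)) / n}
    for frac, key in [(0.01, "01_0_percentile"), (0.10, "10_0_percentile"),
                      (0.50, "50_0_percentile"), (0.90, "90_0_percentile"),
                      (0.95, "95_0_percentile"), (0.99, "99_0_percentile"),
                      (0.999, "99_9_percentile")]:
        out[key] = sorted_s[min(n - 1, int(frac * n))]
    return out
```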
3258hunk ./src/allmydata/storage/server.py 120
3259 
3260     def get_stats(self):
3261         # remember: RIStatsProvider requires that our return dict
3262-        # contains numeric values.
3263+        # contains numeric or None values.
3264         stats = { 'storage_server.allocated': self.allocated_size(), }
3265         stats['storage_server.reserved_space'] = self.reserved_space
3266         for category,ld in self.get_latencies().items():
3267hunk ./src/allmydata/storage/server.py 185
3268         start = time.time()
3269         self.count("allocate")
3270         alreadygot = set()
3271+        incoming = set()
3272         bucketwriters = {} # k: shnum, v: BucketWriter
3273 
3274         si_s = si_b2a(storage_index)
3275hunk ./src/allmydata/storage/server.py 219
3276             alreadygot.add(share.shnum)
3277             share.add_or_renew_lease(lease_info)
3278 
3279-        for shnum in (sharenums - alreadygot):
3280+        # Fill 'incoming' with all shares that are incoming; use a set operation since there's no need to operate on individual pieces.
3281+        incoming = self.backend.get_incoming(storageindex)
3282+
3283+        for shnum in ((sharenums - alreadygot) - incoming):
3284             if (not limited) or (remaining_space >= max_space_per_bucket):
3285                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3286                 bucketwriters[shnum] = bw
3287hunk ./src/allmydata/storage/server.py 229
3288                 self._active_writers[bw] = 1
3289                 if limited:
3290                     remaining_space -= max_space_per_bucket
3291-
3292-        #XXX We SHOULD DOCUMENT LATER.
3293+            else:
3294+                # Bummer: not enough space to accept this share.
3295+                pass
3296 
3297         self.add_latency("allocate", time.time() - start)
3298         return alreadygot, bucketwriters
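The allocation loop above works on whole sets at once; a toy restatement of the set arithmetic (the function name is hypothetical):

```python
def shares_to_allocate(sharenums, alreadygot, incoming):
    # A share needs a new BucketWriter only if it is neither already
    # stored in final nor currently being written into incoming.
    return (set(sharenums) - set(alreadygot)) - set(incoming)
```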
3299hunk ./src/allmydata/storage/server.py 323
3300         self.add_latency("get", time.time() - start)
3301         return bucketreaders
3302 
3303-    def get_leases(self, storage_index):
3304+    def remote_get_incoming(self, storageindex):
3305+        incoming_share_set = self.backend.get_incoming(storageindex)
3306+        return incoming_share_set
3307+
3308+    def get_leases(self, storageindex):
3309         """Provide an iterator that yields all of the leases attached to this
3310         bucket. Each lease is returned as a LeaseInfo instance.
3311 
3312hunk ./src/allmydata/storage/server.py 337
3313         # since all shares get the same lease data, we just grab the leases
3314         # from the first share
3315         try:
3316-            shnum, filename = self._get_shares(storage_index).next()
3317+            shnum, filename = self._get_shares(storageindex).next()
3318             sf = ShareFile(filename)
3319             return sf.get_leases()
3320         except StopIteration:
3321hunk ./src/allmydata/test/test_backends.py 182
3322 
3323         share = MockShare()
3324         def call_get_shares(storageindex):
3325-            return [share]
3326+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3327+            return []#share]
3328 
3329         mockget_shares.side_effect = call_get_shares
3330 
3331hunk ./src/allmydata/test/test_backends.py 222
3332         mockmake_dirs.side_effect = call_make_dirs
3333 
3334         def call_rename(src, dst):
3335-           self.failUnlessReallyEqual(src, shareincomingname)
3336-           self.failUnlessReallyEqual(dst, sharefname)
3337+            self.failUnlessReallyEqual(src, shareincomingname)
3338+            self.failUnlessReallyEqual(dst, sharefname)
3339             
3340         mockrename.side_effect = call_rename
3341 
3342hunk ./src/allmydata/test/test_backends.py 233
3343         mockexists.side_effect = call_exists
3344 
3345         # Now begin the test.
3346+
3347+        # XXX (0) ???  Fail unless something is not properly set-up?
3348         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3349hunk ./src/allmydata/test/test_backends.py 236
3350+
3351+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3352+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3353+
3354+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3355+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3356+        # with the same si, until BucketWriter.remote_close() has been called.
3357+        # self.failIf(bsa)
3358+
3359+        # XXX (3) Inspect final and fail unless there's nothing there.
3360         bs[0].remote_write(0, 'a')
3361hunk ./src/allmydata/test/test_backends.py 247
3362+        # XXX (4a) Inspect final and fail unless share 0 is there.
3363+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3364         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3365         spaceint = self.s.allocated_size()
3366         self.failUnlessReallyEqual(spaceint, 1)
3367hunk ./src/allmydata/test/test_backends.py 253
3368 
3369+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3370         bs[0].remote_close()
3371 
3372         # What happens when there's not enough space for the client's request?
3373hunk ./src/allmydata/test/test_backends.py 260
3374         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3375 
3376         # Now test the allocated_size method.
3377-        #self.failIf(mockexists.called, mockexists.call_args_list)
3378+        # self.failIf(mockexists.called, mockexists.call_args_list)
3379         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3380         #self.failIf(mockrename.called, mockrename.call_args_list)
3381         #self.failIf(mockstat.called, mockstat.call_args_list)
3382}
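[Annotation: the numbered XXX steps in the test above probe an incoming -> final lifecycle: allocating a bucket is supposed to register the shnum as "incoming", and only `BucketWriter.remote_close()` moves it to "final". A minimal, hypothetical sketch of that state machine (names are illustrative, not the real Tahoe-LAFS API):]

```python
# Hypothetical sketch of the incoming -> final lifecycle the test's XXX
# steps (1), (3), (4a), (4b) check. Not the real backend API.
class TinyBackend:
    def __init__(self):
        self.incoming = {}   # storage_index -> set of shnums being written
        self.final = {}      # storage_index -> set of completed shnums

    def allocate(self, si, shnum):
        # step (1): an allocated share shows up in incoming
        self.incoming.setdefault(si, set()).add(shnum)

    def close(self, si, shnum):
        # steps (4a)/(4b): close moves the share from incoming to final
        self.incoming.get(si, set()).discard(shnum)
        self.final.setdefault(si, set()).add(shnum)

    def get_incoming(self, si):
        return set(self.incoming.get(si, set()))

b = TinyBackend()
b.allocate('si1', 0)
assert b.get_incoming('si1') == {0}        # (1) shnum listed in incoming
assert b.final.get('si1', set()) == set()  # (3) nothing in final yet
b.close('si1', 0)
assert b.get_incoming('si1') == set()      # (4b) incoming no longer lists it
assert b.final['si1'] == {0}               # (4a) final now holds share 0
```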
3383[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3384wilcoxjg@gmail.com**20110710195139
3385 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3386] {
3387hunk ./src/allmydata/storage/server.py 220
3388             share.add_or_renew_lease(lease_info)
3389 
3390         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3391-        incoming = self.backend.get_incoming(storageindex)
3392+        incoming = self.backend.get_incoming(storage_index)
3393 
3394         for shnum in ((sharenums - alreadygot) - incoming):
3395             if (not limited) or (remaining_space >= max_space_per_bucket):
3396hunk ./src/allmydata/storage/server.py 323
3397         self.add_latency("get", time.time() - start)
3398         return bucketreaders
3399 
3400-    def remote_get_incoming(self, storageindex):
3401-        incoming_share_set = self.backend.get_incoming(storageindex)
3402+    def remote_get_incoming(self, storage_index):
3403+        incoming_share_set = self.backend.get_incoming(storage_index)
3404         return incoming_share_set
3405 
3406hunk ./src/allmydata/storage/server.py 327
3407-    def get_leases(self, storageindex):
3408+    def get_leases(self, storage_index):
3409         """Provide an iterator that yields all of the leases attached to this
3410         bucket. Each lease is returned as a LeaseInfo instance.
3411 
3412hunk ./src/allmydata/storage/server.py 337
3413         # since all shares get the same lease data, we just grab the leases
3414         # from the first share
3415         try:
3416-            shnum, filename = self._get_shares(storageindex).next()
3417+            shnum, filename = self._get_shares(storage_index).next()
3418             sf = ShareFile(filename)
3419             return sf.get_leases()
3420         except StopIteration:
3421replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3422}
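[Annotation: the `replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex` line above is darcs's token-replace primitive: it swaps one identifier for another only where it appears as a whole token drawn from the given character class, so compounds like `storage_index_to_dir` are untouched. A rough sketch of that semantics in Python (illustrative only; darcs implements this internally):]

```python
import re

# Sketch of darcs-style token replace: substitute `old` for `new` only when
# it is not adjacent to another character from the token class, mirroring
# the [A-Za-z_0-9] class recorded in the patch above.
def token_replace(text, old, new, token_chars="A-Za-z_0-9"):
    pattern = r"(?<![%s])%s(?![%s])" % (token_chars, re.escape(old), token_chars)
    return re.sub(pattern, new, text)

src = "incoming = self.backend.get_incoming(storageindex)"
print(token_replace(src, "storageindex", "storage_index"))
# -> incoming = self.backend.get_incoming(storage_index)
```

Because the boundary check uses the token class rather than `\b`, a longer identifier that merely contains the token (e.g. `storageindexes`) is left alone.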
3423[adding comments to clarify what I'm about to do.
3424wilcoxjg@gmail.com**20110710220623
3425 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3426] {
3427hunk ./src/allmydata/storage/backends/das/core.py 8
3428 
3429 import os, re, weakref, struct, time
3430 
3431-from foolscap.api import Referenceable
3432+#from foolscap.api import Referenceable
3433 from twisted.application import service
3434 
3435 from zope.interface import implements
3436hunk ./src/allmydata/storage/backends/das/core.py 12
3437-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3438+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3439 from allmydata.util import fileutil, idlib, log, time_format
3440 import allmydata # for __full_version__
3441 
3442hunk ./src/allmydata/storage/server.py 219
3443             alreadygot.add(share.shnum)
3444             share.add_or_renew_lease(lease_info)
3445 
3446-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3447+        # fill incoming with all shares that are incoming use a set operation
3448+        # since there's no need to operate on individual pieces
3449         incoming = self.backend.get_incoming(storageindex)
3450 
3451         for shnum in ((sharenums - alreadygot) - incoming):
3452hunk ./src/allmydata/test/test_backends.py 245
3453         # with the same si, until BucketWriter.remote_close() has been called.
3454         # self.failIf(bsa)
3455 
3456-        # XXX (3) Inspect final and fail unless there's nothing there.
3457         bs[0].remote_write(0, 'a')
3458hunk ./src/allmydata/test/test_backends.py 246
3459-        # XXX (4a) Inspect final and fail unless share 0 is there.
3460-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3461         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3462         spaceint = self.s.allocated_size()
3463         self.failUnlessReallyEqual(spaceint, 1)
3464hunk ./src/allmydata/test/test_backends.py 250
3465 
3466-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3467+        # XXX (3) Inspect final and fail unless there's nothing there.
3468         bs[0].remote_close()
3469hunk ./src/allmydata/test/test_backends.py 252
3470+        # XXX (4a) Inspect final and fail unless share 0 is there.
3471+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3472 
3473         # What happens when there's not enough space for the client's request?
3474         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3475}
3476[branching back, no longer attempting to mock inside TestServerFSBackend
3477wilcoxjg@gmail.com**20110711190849
3478 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3479] {
3480hunk ./src/allmydata/storage/backends/das/core.py 75
3481         self.lease_checker.setServiceParent(self)
3482 
3483     def get_incoming(self, storageindex):
3484-        return set((1,))
3485-
3486-    def get_available_space(self):
3487-        if self.readonly:
3488-            return 0
3489-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3490+        """Return the set of incoming shnums."""
3491+        return set(os.listdir(self.incomingdir))
3492 
3493     def get_shares(self, storage_index):
3494         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3495hunk ./src/allmydata/storage/backends/das/core.py 90
3496             # Commonly caused by there being no shares at all.
3497             pass
3498         
3499+    def get_available_space(self):
3500+        if self.readonly:
3501+            return 0
3502+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3503+
3504     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3505         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3506         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3507hunk ./src/allmydata/test/test_backends.py 27
3508 
3509 testnodeid = 'testnodeidxxxxxxxxxx'
3510 tempdir = 'teststoredir'
3511-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3512-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3513+basedir = os.path.join(tempdir, 'shares')
3514+baseincdir = os.path.join(basedir, 'incoming')
3515+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3516+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3517 shareincomingname = os.path.join(sharedirincomingname, '0')
3518 sharefname = os.path.join(sharedirfinalname, '0')
3519 
3520hunk ./src/allmydata/test/test_backends.py 142
3521                              mockmake_dirs, mockrename):
3522         """ Write a new share. """
3523 
3524-        def call_listdir(dirname):
3525-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3526-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3527-
3528-        mocklistdir.side_effect = call_listdir
3529-
3530-        def call_isdir(dirname):
3531-            #XXX Should there be any other tests here?
3532-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3533-            return True
3534-
3535-        mockisdir.side_effect = call_isdir
3536-
3537-        def call_mkdir(dirname, permissions):
3538-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3539-                self.Fail
3540-            else:
3541-                return True
3542-
3543-        mockmkdir.side_effect = call_mkdir
3544-
3545-        def call_get_available_space(storedir, reserved_space):
3546-            self.failUnlessReallyEqual(storedir, tempdir)
3547-            return 1
3548-
3549-        mockget_available_space.side_effect = call_get_available_space
3550-
3551-        mocktime.return_value = 0
3552         class MockShare:
3553             def __init__(self):
3554                 self.shnum = 1
3555hunk ./src/allmydata/test/test_backends.py 152
3556                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3557                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3558                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3559-               
3560 
3561         share = MockShare()
3562hunk ./src/allmydata/test/test_backends.py 154
3563-        def call_get_shares(storageindex):
3564-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3565-            return []#share]
3566-
3567-        mockget_shares.side_effect = call_get_shares
3568 
3569         class MockFile:
3570             def __init__(self):
3571hunk ./src/allmydata/test/test_backends.py 176
3572             def tell(self):
3573                 return self.pos
3574 
3575-
3576         fobj = MockFile()
3577hunk ./src/allmydata/test/test_backends.py 177
3578+
3579+        directories = {}
3580+        def call_listdir(dirname):
3581+            if dirname not in directories:
3582+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3583+            else:
3584+                return directories[dirname].get_contents()
3585+
3586+        mocklistdir.side_effect = call_listdir
3587+
3588+        class MockDir:
3589+            def __init__(self, dirname):
3590+                self.name = dirname
3591+                self.contents = []
3592+   
3593+            def get_contents(self):
3594+                return self.contents
3595+
3596+        def call_isdir(dirname):
3597+            #XXX Should there be any other tests here?
3598+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3599+            return True
3600+
3601+        mockisdir.side_effect = call_isdir
3602+
3603+        def call_mkdir(dirname, permissions):
3604+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3605+                self.Fail
3606+            if dirname in directories:
3607+                raise OSError(17, "File exists: '%s'" % dirname)
3608+                self.Fail
3609+            elif dirname not in directories:
3610+                directories[dirname] = MockDir(dirname)
3611+                return True
3612+
3613+        mockmkdir.side_effect = call_mkdir
3614+
3615+        def call_get_available_space(storedir, reserved_space):
3616+            self.failUnlessReallyEqual(storedir, tempdir)
3617+            return 1
3618+
3619+        mockget_available_space.side_effect = call_get_available_space
3620+
3621+        mocktime.return_value = 0
3622+        def call_get_shares(storageindex):
3623+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3624+            return []#share]
3625+
3626+        mockget_shares.side_effect = call_get_shares
3627+
3628         def call_open(fname, mode):
3629             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3630             return fobj
3631}
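[Annotation: the `directories` dict plus `MockDir` added above form a tiny in-memory filesystem backing the mocked `os.listdir`/`os.mkdir` side_effects. A pared-down, standalone sketch of that fake (purely illustrative; the real test wires these into `mock.patch` side_effect attributes):]

```python
import errno

# Minimal in-memory stand-in for the directory calls the test mocks:
# mkdir on an existing path raises EEXIST, listdir on a missing path
# raises ENOENT, matching the OSError codes used in the patch above.
class FakeFS:
    def __init__(self):
        self.dirs = {}  # path -> list of entry names

    def mkdir(self, path):
        if path in self.dirs:
            raise OSError(errno.EEXIST, "File exists: '%s'" % path)
        self.dirs[path] = []

    def listdir(self, path):
        if path not in self.dirs:
            raise OSError(errno.ENOENT, "No such file or directory: '%s'" % path)
        return list(self.dirs[path])

fs = FakeFS()
fs.mkdir('teststoredir/shares/or')
assert fs.listdir('teststoredir/shares/or') == []
try:
    fs.mkdir('teststoredir/shares/or')
except OSError as e:
    assert e.errno == errno.EEXIST
```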
3632[checkpoint12 TestServerFSBackend no longer mocks filesystem
3633wilcoxjg@gmail.com**20110711193357
3634 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3635] {
3636hunk ./src/allmydata/storage/backends/das/core.py 23
3637      create_mutable_sharefile
3638 from allmydata.storage.immutable import BucketWriter, BucketReader
3639 from allmydata.storage.crawler import FSBucketCountingCrawler
3640+from allmydata.util.hashutil import constant_time_compare
3641 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3642 
3643 from zope.interface import implements
3644hunk ./src/allmydata/storage/backends/das/core.py 28
3645 
3646+# storage/
3647+# storage/shares/incoming
3648+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3649+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3650+# storage/shares/$START/$STORAGEINDEX
3651+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3652+
3653+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3654+# base-32 chars).
3655 # $SHARENUM matches this regex:
3656 NUM_RE=re.compile("^[0-9]+$")
3657 
3658hunk ./src/allmydata/test/test_backends.py 126
3659         testbackend = DASCore(tempdir, expiration_policy)
3660         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3661 
3662-    @mock.patch('allmydata.util.fileutil.rename')
3663-    @mock.patch('allmydata.util.fileutil.make_dirs')
3664-    @mock.patch('os.path.exists')
3665-    @mock.patch('os.stat')
3666-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3667-    @mock.patch('allmydata.util.fileutil.get_available_space')
3668     @mock.patch('time.time')
3669hunk ./src/allmydata/test/test_backends.py 127
3670-    @mock.patch('os.mkdir')
3671-    @mock.patch('__builtin__.open')
3672-    @mock.patch('os.listdir')
3673-    @mock.patch('os.path.isdir')
3674-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3675-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3676-                             mockmake_dirs, mockrename):
3677+    def test_write_share(self, mocktime):
3678         """ Write a new share. """
3679 
3680         class MockShare:
3681hunk ./src/allmydata/test/test_backends.py 143
3682 
3683         share = MockShare()
3684 
3685-        class MockFile:
3686-            def __init__(self):
3687-                self.buffer = ''
3688-                self.pos = 0
3689-            def write(self, instring):
3690-                begin = self.pos
3691-                padlen = begin - len(self.buffer)
3692-                if padlen > 0:
3693-                    self.buffer += '\x00' * padlen
3694-                end = self.pos + len(instring)
3695-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3696-                self.pos = end
3697-            def close(self):
3698-                pass
3699-            def seek(self, pos):
3700-                self.pos = pos
3701-            def read(self, numberbytes):
3702-                return self.buffer[self.pos:self.pos+numberbytes]
3703-            def tell(self):
3704-                return self.pos
3705-
3706-        fobj = MockFile()
3707-
3708-        directories = {}
3709-        def call_listdir(dirname):
3710-            if dirname not in directories:
3711-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3712-            else:
3713-                return directories[dirname].get_contents()
3714-
3715-        mocklistdir.side_effect = call_listdir
3716-
3717-        class MockDir:
3718-            def __init__(self, dirname):
3719-                self.name = dirname
3720-                self.contents = []
3721-   
3722-            def get_contents(self):
3723-                return self.contents
3724-
3725-        def call_isdir(dirname):
3726-            #XXX Should there be any other tests here?
3727-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3728-            return True
3729-
3730-        mockisdir.side_effect = call_isdir
3731-
3732-        def call_mkdir(dirname, permissions):
3733-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3734-                self.Fail
3735-            if dirname in directories:
3736-                raise OSError(17, "File exists: '%s'" % dirname)
3737-                self.Fail
3738-            elif dirname not in directories:
3739-                directories[dirname] = MockDir(dirname)
3740-                return True
3741-
3742-        mockmkdir.side_effect = call_mkdir
3743-
3744-        def call_get_available_space(storedir, reserved_space):
3745-            self.failUnlessReallyEqual(storedir, tempdir)
3746-            return 1
3747-
3748-        mockget_available_space.side_effect = call_get_available_space
3749-
3750-        mocktime.return_value = 0
3751-        def call_get_shares(storageindex):
3752-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3753-            return []#share]
3754-
3755-        mockget_shares.side_effect = call_get_shares
3756-
3757-        def call_open(fname, mode):
3758-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3759-            return fobj
3760-
3761-        mockopen.side_effect = call_open
3762-
3763-        def call_make_dirs(dname):
3764-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3765-           
3766-        mockmake_dirs.side_effect = call_make_dirs
3767-
3768-        def call_rename(src, dst):
3769-            self.failUnlessReallyEqual(src, shareincomingname)
3770-            self.failUnlessReallyEqual(dst, sharefname)
3771-           
3772-        mockrename.side_effect = call_rename
3773-
3774-        def call_exists(fname):
3775-            self.failUnlessReallyEqual(fname, sharefname)
3776-
3777-        mockexists.side_effect = call_exists
3778-
3779         # Now begin the test.
3780 
3781         # XXX (0) ???  Fail unless something is not properly set-up?
3782}
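[Annotation: this patch deletes all the per-syscall mocks from `test_write_share`, matching its title "TestServerFSBackend no longer mocks filesystem". The usual alternative is a real scratch directory created and torn down per test; a generic sketch of that pattern (illustrative, not the actual TestServerFSBackend code):]

```python
import os, shutil, tempfile

# Sketch: exercise directory-layout code against a real throwaway tree
# instead of mocking os.listdir/os.mkdir/open individually.
workdir = tempfile.mkdtemp(prefix='teststoredir')
try:
    incoming = os.path.join(workdir, 'shares', 'incoming')
    os.makedirs(incoming)
    open(os.path.join(incoming, '0'), 'wb').close()  # simulate an incoming share 0
    assert os.listdir(incoming) == ['0']
finally:
    shutil.rmtree(workdir)  # per-test cleanup
```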
3783[JACP
3784wilcoxjg@gmail.com**20110711194407
3785 Ignore-this: b54745de777c4bb58d68d708f010bbb
3786] {
3787hunk ./src/allmydata/storage/backends/das/core.py 86
3788 
3789     def get_incoming(self, storageindex):
3790         """Return the set of incoming shnums."""
3791-        return set(os.listdir(self.incomingdir))
3792+        try:
3793+            incominglist = os.listdir(self.incomingdir)
3794+            print "incominglist: ", incominglist
3795+            return set(incominglist)
3796+        except OSError:
3797+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3798+            pass
3799 
3800     def get_shares(self, storage_index):
3801         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3802hunk ./src/allmydata/storage/server.py 17
3803 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3804      create_mutable_sharefile
3805 
3806-# storage/
3807-# storage/shares/incoming
3808-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3809-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3810-# storage/shares/$START/$STORAGEINDEX
3811-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3812-
3813-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3814-# base-32 chars).
3815-
3816-
3817 class StorageServer(service.MultiService, Referenceable):
3818     implements(RIStorageServer, IStatsProducer)
3819     name = 'storage'
3820}
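[Annotation: the layout comment relocated by this patch says shares live under `shares/$START/$STORAGEINDEX/$SHARENUM`, where `$START` is the first 10 bits (two base-32 characters) of the storage index. A rough sketch of that mapping, consistent with the `orsxg5dtorxxeylhmvpws3temv4a` path constants in test_backends.py (Tahoe's real `si_b2a`/`storage_index_to_dir` live in storage/common.py; this is an approximation using a lowercased RFC 3548 alphabet):]

```python
import base64, os

# Sketch of storage-index -> share-directory mapping: base-32 encode the
# raw index, then prefix with its first two characters ($START, 10 bits).
def si_to_dir(storage_index):
    si_s = base64.b32encode(storage_index).decode('ascii').rstrip('=').lower()
    return os.path.join(si_s[:2], si_s)

print(si_to_dir(b'teststorage_index'))
# -> or/orsxg5dtorxxeylhmvpws3temv4a  (on POSIX)
```

This is why the test constants join `'or'` and `'orsxg5dtorxxeylhmvpws3temv4a'`: both components derive from the same `'teststorage_index'` string.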
3821[testing get incoming
3822wilcoxjg@gmail.com**20110711210224
3823 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3824] {
3825hunk ./src/allmydata/storage/backends/das/core.py 87
3826     def get_incoming(self, storageindex):
3827         """Return the set of incoming shnums."""
3828         try:
3829-            incominglist = os.listdir(self.incomingdir)
3830+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3831+            incominglist = os.listdir(incomingsharesdir)
3832             print "incominglist: ", incominglist
3833             return set(incominglist)
3834         except OSError:
3835hunk ./src/allmydata/storage/backends/das/core.py 92
3836-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3837-            pass
3838-
3839+            # XXX I'd like to make this more specific. If there are no shares at all.
3840+            return set()
3841+           
3842     def get_shares(self, storage_index):
3843         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3844         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3845hunk ./src/allmydata/test/test_backends.py 149
3846         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3847 
3848         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3849+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3850         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3851 
3852hunk ./src/allmydata/test/test_backends.py 152
3853-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3854         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3855         # with the same si, until BucketWriter.remote_close() has been called.
3856         # self.failIf(bsa)
3857}
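[Annotation: the `get_incoming` revised above scans the per-storage-index incoming directory and returns an empty set when it does not exist, but its `except OSError` carries an "XXX I'd like to make this more specific" note. One way to narrow it, sketched standalone (the `int()` conversion of entries arrives in the following patch):]

```python
import errno, os

# Sketch: list incoming shnums for one storage index, treating only a
# missing directory (ENOENT) as "no incoming shares" and re-raising
# any other OSError instead of swallowing it.
def incoming_shnums(incomingsharesdir):
    try:
        return set(int(name) for name in os.listdir(incomingsharesdir))
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
        return set()

assert incoming_shnums('/no/such/incoming/dir') == set()
```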
3858[ImmutableShareFile does not know its StorageIndex
3859wilcoxjg@gmail.com**20110711211424
3860 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3861] {
3862hunk ./src/allmydata/storage/backends/das/core.py 112
3863             return 0
3864         return fileutil.get_available_space(self.storedir, self.reserved_space)
3865 
3866-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3867-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3868+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3869+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3870+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3871+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3872         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3873         return bw
3874 
3875hunk ./src/allmydata/storage/backends/das/core.py 155
3876     LEASE_SIZE = struct.calcsize(">L32s32sL")
3877     sharetype = "immutable"
3878 
3879-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3880+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3881         """ If max_size is not None then I won't allow more than
3882         max_size to be written to me. If create=True then max_size
3883         must not be None. """
3884}
3885[get_incoming correctly reports the 0 share after it has arrived
3886wilcoxjg@gmail.com**20110712025157
3887 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3888] {
3889hunk ./src/allmydata/storage/backends/das/core.py 1
3890+import os, re, weakref, struct, time, stat
3891+
3892 from allmydata.interfaces import IStorageBackend
3893 from allmydata.storage.backends.base import Backend
3894 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3895hunk ./src/allmydata/storage/backends/das/core.py 8
3896 from allmydata.util.assertutil import precondition
3897 
3898-import os, re, weakref, struct, time
3899-
3900 #from foolscap.api import Referenceable
3901 from twisted.application import service
3902 
3903hunk ./src/allmydata/storage/backends/das/core.py 89
3904         try:
3905             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3906             incominglist = os.listdir(incomingsharesdir)
3907-            print "incominglist: ", incominglist
3908-            return set(incominglist)
3909+            incomingshnums = [int(x) for x in incominglist]
3910+            return set(incomingshnums)
3911         except OSError:
3912             # XXX I'd like to make this more specific. If there are no shares at all.
3913             return set()
3914hunk ./src/allmydata/storage/backends/das/core.py 113
3915         return fileutil.get_available_space(self.storedir, self.reserved_space)
3916 
3917     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3918-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3919-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3920-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3921+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3922+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3923+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3924         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3925         return bw
3926 
3927hunk ./src/allmydata/storage/backends/das/core.py 160
3928         max_size to be written to me. If create=True then max_size
3929         must not be None. """
3930         precondition((max_size is not None) or (not create), max_size, create)
3931-        self.shnum = shnum
3932-        self.storage_index = storageindex
3933-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3934         self._max_size = max_size
3935hunk ./src/allmydata/storage/backends/das/core.py 161
3936-        self.incomingdir = os.path.join(sharedir, 'incoming')
3937-        si_dir = storage_index_to_dir(storageindex)
3938-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3939-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3940-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3941+        self.incominghome = incominghome
3942+        self.finalhome = finalhome
3943         if create:
3944             # touch the file, so later callers will see that we're working on
3945             # it. Also construct the metadata.
3946hunk ./src/allmydata/storage/backends/das/core.py 166
3947-            assert not os.path.exists(self.fname)
3948-            fileutil.make_dirs(os.path.dirname(self.fname))
3949-            f = open(self.fname, 'wb')
3950+            assert not os.path.exists(self.finalhome)
3951+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3952+            f = open(self.incominghome, 'wb')
3953             # The second field -- the four-byte share data length -- is no
3954             # longer used as of Tahoe v1.3.0, but we continue to write it in
3955             # there in case someone downgrades a storage server from >=
3956hunk ./src/allmydata/storage/backends/das/core.py 183
3957             self._lease_offset = max_size + 0x0c
3958             self._num_leases = 0
3959         else:
3960-            f = open(self.fname, 'rb')
3961-            filesize = os.path.getsize(self.fname)
3962+            f = open(self.finalhome, 'rb')
3963+            filesize = os.path.getsize(self.finalhome)
3964             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3965             f.close()
3966             if version != 1:
3967hunk ./src/allmydata/storage/backends/das/core.py 189
3968                 msg = "sharefile %s had version %d but we wanted 1" % \
3969-                      (self.fname, version)
3970+                      (self.finalhome, version)
3971                 raise UnknownImmutableContainerVersionError(msg)
3972             self._num_leases = num_leases
3973             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3974hunk ./src/allmydata/storage/backends/das/core.py 225
3975         pass
3976         
3977     def stat(self):
3978-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3979+        return os.stat(self.finalhome)[stat.ST_SIZE]
3980+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3981 
3982     def get_shnum(self):
3983         return self.shnum
3984hunk ./src/allmydata/storage/backends/das/core.py 232
3985 
3986     def unlink(self):
3987-        os.unlink(self.fname)
3988+        os.unlink(self.finalhome)
3989 
3990     def read_share_data(self, offset, length):
3991         precondition(offset >= 0)
3992hunk ./src/allmydata/storage/backends/das/core.py 239
3993         # Reads beyond the end of the data are truncated. Reads that start
3994         # beyond the end of the data return an empty string.
3995         seekpos = self._data_offset+offset
3996-        fsize = os.path.getsize(self.fname)
3997+        fsize = os.path.getsize(self.finalhome)
3998         actuallength = max(0, min(length, fsize-seekpos))
3999         if actuallength == 0:
4000             return ""
4001hunk ./src/allmydata/storage/backends/das/core.py 243
4002-        f = open(self.fname, 'rb')
4003+        f = open(self.finalhome, 'rb')
4004         f.seek(seekpos)
4005         return f.read(actuallength)
4006 
4007hunk ./src/allmydata/storage/backends/das/core.py 252
4008         precondition(offset >= 0, offset)
4009         if self._max_size is not None and offset+length > self._max_size:
4010             raise DataTooLargeError(self._max_size, offset, length)
4011-        f = open(self.fname, 'rb+')
4012+        f = open(self.incominghome, 'rb+')
4013         real_offset = self._data_offset+offset
4014         f.seek(real_offset)
4015         assert f.tell() == real_offset
4016hunk ./src/allmydata/storage/backends/das/core.py 279
4017 
4018     def get_leases(self):
4019         """Yields a LeaseInfo instance for all leases."""
4020-        f = open(self.fname, 'rb')
4021+        f = open(self.finalhome, 'rb')
4022         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4023         f.seek(self._lease_offset)
4024         for i in range(num_leases):
4025hunk ./src/allmydata/storage/backends/das/core.py 288
4026                 yield LeaseInfo().from_immutable_data(data)
4027 
4028     def add_lease(self, lease_info):
4029-        f = open(self.fname, 'rb+')
4030+        f = open(self.incominghome, 'rb+')
4031         num_leases = self._read_num_leases(f)
4032         self._write_lease_record(f, num_leases, lease_info)
4033         self._write_num_leases(f, num_leases+1)
4034hunk ./src/allmydata/storage/backends/das/core.py 301
4035                 if new_expire_time > lease.expiration_time:
4036                     # yes
4037                     lease.expiration_time = new_expire_time
4038-                    f = open(self.fname, 'rb+')
4039+                    f = open(self.finalhome, 'rb+')
4040                     self._write_lease_record(f, i, lease)
4041                     f.close()
4042                 return
4043hunk ./src/allmydata/storage/backends/das/core.py 336
4044             # the same order as they were added, so that if we crash while
4045             # doing this, we won't lose any non-cancelled leases.
4046             leases = [l for l in leases if l] # remove the cancelled leases
4047-            f = open(self.fname, 'rb+')
4048+            f = open(self.finalhome, 'rb+')
4049             for i,lease in enumerate(leases):
4050                 self._write_lease_record(f, i, lease)
4051             self._write_num_leases(f, len(leases))
4052hunk ./src/allmydata/storage/backends/das/core.py 344
4053             f.close()
4054         space_freed = self.LEASE_SIZE * num_leases_removed
4055         if not len(leases):
4056-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4057+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4058             self.unlink()
4059         return space_freed
4060hunk ./src/allmydata/test/test_backends.py 129
4061     @mock.patch('time.time')
4062     def test_write_share(self, mocktime):
4063         """ Write a new share. """
4064-
4065-        class MockShare:
4066-            def __init__(self):
4067-                self.shnum = 1
4068-               
4069-            def add_or_renew_lease(elf, lease_info):
4070-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4071-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4072-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4073-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4074-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4075-
4076-        share = MockShare()
4077-
4078         # Now begin the test.
4079 
4080         # XXX (0) ???  Fail unless something is not properly set-up?
4081hunk ./src/allmydata/test/test_backends.py 143
4082         # self.failIf(bsa)
4083 
4084         bs[0].remote_write(0, 'a')
4085-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4086+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4087         spaceint = self.s.allocated_size()
4088         self.failUnlessReallyEqual(spaceint, 1)
4089 
4090hunk ./src/allmydata/test/test_backends.py 161
4091         #self.failIf(mockrename.called, mockrename.call_args_list)
4092         #self.failIf(mockstat.called, mockstat.call_args_list)
4093 
4094+    def test_handle_incoming(self):
4095+        incomingset = self.s.backend.get_incoming('teststorage_index')
4096+        self.failUnlessReallyEqual(incomingset, set())
4097+
4098+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4099+       
4100+        incomingset = self.s.backend.get_incoming('teststorage_index')
4101+        self.failUnlessReallyEqual(incomingset, set((0,)))
4102+
4103+        bs[0].remote_close()
4104+        self.failUnlessReallyEqual(incomingset, set())
4105+
4106     @mock.patch('os.path.exists')
4107     @mock.patch('os.path.getsize')
4108     @mock.patch('__builtin__.open')
4109hunk ./src/allmydata/test/test_backends.py 223
4110         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4111 
4112 
4113-
4114 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4115     @mock.patch('time.time')
4116     @mock.patch('os.mkdir')
4117hunk ./src/allmydata/test/test_backends.py 271
4118         DASCore('teststoredir', expiration_policy)
4119 
4120         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4121+
4122}
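The patches above repeatedly mock filesystem calls so the storage code under test never touches a real disk. A minimal sketch of that mocking style, using Python 3's `unittest.mock` (the patch itself targets Python 2 and patches `__builtin__.open`); `read_share_header` here is a hypothetical stand-in for the header-reading logic in `storage/backends/das/core.py`, not the real function:

```python
import io
import struct
from unittest import mock

def read_share_header(path):
    # Read the 12-byte immutable-share header: version, data length,
    # and number of leases, as three big-endian unsigned 32-bit ints
    # (the same ">LLL" / f.read(0xc) pattern used in get_leases above).
    with open(path, 'rb') as f:
        return struct.unpack(">LLL", f.read(0xc))

# Patch open() so the "file" is an in-memory buffer; no disk access occurs.
header = struct.pack(">LLL", 1, 1, 1)
with mock.patch("builtins.open", return_value=io.BytesIO(header + b"a")):
    version, datalen, numleases = read_share_header("fake/share/0")
```

The same substitution is what `mockopen.side_effect = call_open` achieves in the tests, just with a dispatch function instead of a single canned buffer.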
4123[jacp14
4124wilcoxjg@gmail.com**20110712061211
4125 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4126] {
4127hunk ./src/allmydata/storage/backends/das/core.py 95
4128             # XXX I'd like to make this more specific. If there are no shares at all.
4129             return set()
4130             
4131-    def get_shares(self, storage_index):
4132+    def get_shares(self, storageindex):
4133         """Yield the ImmutableShare objects that correspond to the passed storageindex."""
4134hunk ./src/allmydata/storage/backends/das/core.py 97
4135-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4136+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4137         try:
4138             for f in os.listdir(finalstoragedir):
4139                 if NUM_RE.match(f):
4140hunk ./src/allmydata/storage/backends/das/core.py 102
4141                     filename = os.path.join(finalstoragedir, f)
4142-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4143+                    yield ImmutableShare(filename, storageindex, f)
4144         except OSError:
4145             # Commonly caused by there being no shares at all.
4146             pass
4147hunk ./src/allmydata/storage/backends/das/core.py 115
4148     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4149         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4150         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4151-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4152+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4153         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4154         return bw
4155 
4156hunk ./src/allmydata/storage/backends/das/core.py 155
4157     LEASE_SIZE = struct.calcsize(">L32s32sL")
4158     sharetype = "immutable"
4159 
4160-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4161+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4162         """ If max_size is not None then I won't allow more than
4163         max_size to be written to me. If create=True then max_size
4164         must not be None. """
4165hunk ./src/allmydata/storage/backends/das/core.py 160
4166         precondition((max_size is not None) or (not create), max_size, create)
4167+        self.storageindex = storageindex
4168         self._max_size = max_size
4169         self.incominghome = incominghome
4170         self.finalhome = finalhome
4171hunk ./src/allmydata/storage/backends/das/core.py 164
4172+        self.shnum = shnum
4173         if create:
4174             # touch the file, so later callers will see that we're working on
4175             # it. Also construct the metadata.
4176hunk ./src/allmydata/storage/backends/das/core.py 212
4177             # their children to know when they should do the rmdir. This
4178             # approach is simpler, but relies on os.rmdir refusing to delete
4179             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4180+            #print "os.path.dirname(self.incominghome): "
4181+            #print os.path.dirname(self.incominghome)
4182             os.rmdir(os.path.dirname(self.incominghome))
4183             # we also delete the grandparent (prefix) directory, .../ab ,
4184             # again to avoid leaving directories lying around. This might
4185hunk ./src/allmydata/storage/immutable.py 93
4186     def __init__(self, ss, share):
4187         self.ss = ss
4188         self._share_file = share
4189-        self.storage_index = share.storage_index
4190+        self.storageindex = share.storageindex
4191         self.shnum = share.shnum
4192 
4193     def __repr__(self):
4194hunk ./src/allmydata/storage/immutable.py 98
4195         return "<%s %s %s>" % (self.__class__.__name__,
4196-                               base32.b2a_l(self.storage_index[:8], 60),
4197+                               base32.b2a_l(self.storageindex[:8], 60),
4198                                self.shnum)
4199 
4200     def remote_read(self, offset, length):
4201hunk ./src/allmydata/storage/immutable.py 110
4202 
4203     def remote_advise_corrupt_share(self, reason):
4204         return self.ss.remote_advise_corrupt_share("immutable",
4205-                                                   self.storage_index,
4206+                                                   self.storageindex,
4207                                                    self.shnum,
4208                                                    reason)
4209hunk ./src/allmydata/test/test_backends.py 20
4210 # The following share file contents were generated with
4211 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4212 # with share data == 'a'.
4213-renew_secret  = 'x'*32
4214-cancel_secret = 'y'*32
4215-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4216-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4217+shareversionnumber = '\x00\x00\x00\x01'
4218+sharedatalength = '\x00\x00\x00\x01'
4219+numberofleases = '\x00\x00\x00\x01'
4220+shareinputdata = 'a'
4221+ownernumber = '\x00\x00\x00\x00'
4222+renewsecret  = 'x'*32
4223+cancelsecret = 'y'*32
4224+expirationtime = '\x00(\xde\x80'
4225+nextlease = ''
4226+containerdata = shareversionnumber + sharedatalength + numberofleases
4227+client_data = shareinputdata + ownernumber + renewsecret + \
4228+    cancelsecret + expirationtime + nextlease
4229+share_data = containerdata + client_data
4230+
4231 
4232 testnodeid = 'testnodeidxxxxxxxxxx'
4233 tempdir = 'teststoredir'
4234hunk ./src/allmydata/test/test_backends.py 52
4235 
4236 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4237     def setUp(self):
4238-        self.s = StorageServer(testnodeid, backend=NullCore())
4239+        self.ss = StorageServer(testnodeid, backend=NullCore())
4240 
4241     @mock.patch('os.mkdir')
4242     @mock.patch('__builtin__.open')
4243hunk ./src/allmydata/test/test_backends.py 62
4244         """ Write a new share. """
4245 
4246         # Now begin the test.
4247-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4248+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4249         bs[0].remote_write(0, 'a')
4250         self.failIf(mockisdir.called)
4251         self.failIf(mocklistdir.called)
4252hunk ./src/allmydata/test/test_backends.py 133
4253                 _assert(False, "The tester code doesn't recognize this case.") 
4254 
4255         mockopen.side_effect = call_open
4256-        testbackend = DASCore(tempdir, expiration_policy)
4257-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4258+        self.backend = DASCore(tempdir, expiration_policy)
4259+        self.ss = StorageServer(testnodeid, self.backend)
4260+        self.ssinf = StorageServer(testnodeid, self.backend)
4261 
4262     @mock.patch('time.time')
4263     def test_write_share(self, mocktime):
4264hunk ./src/allmydata/test/test_backends.py 142
4265         """ Write a new share. """
4266         # Now begin the test.
4267 
4268-        # XXX (0) ???  Fail unless something is not properly set-up?
4269-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4270+        mocktime.return_value = 0
4271+        # Inspect incoming and fail unless it's empty.
4272+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4273+        self.failUnlessReallyEqual(incomingset, set())
4274+       
4275+        # Among other things, populate incoming with the sharenum: 0.
4276+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4277 
4278hunk ./src/allmydata/test/test_backends.py 150
4279-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4280-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4281-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4282+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4283+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4284+       
4285+        # Attempt to create a second share writer with the same share.
4286+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4287 
4288hunk ./src/allmydata/test/test_backends.py 156
4289-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4290+        # Show that no sharewriter results from a remote_allocate_buckets
4291         # with the same si, until BucketWriter.remote_close() has been called.
4292hunk ./src/allmydata/test/test_backends.py 158
4293-        # self.failIf(bsa)
4294+        self.failIf(bsa)
4295 
4296hunk ./src/allmydata/test/test_backends.py 160
4297+        # Write 'a' to shnum 0. Only tested together with close and read.
4298         bs[0].remote_write(0, 'a')
4299hunk ./src/allmydata/test/test_backends.py 162
4300-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4301-        spaceint = self.s.allocated_size()
4302+
4303+        # Test allocated size.
4304+        spaceint = self.ss.allocated_size()
4305         self.failUnlessReallyEqual(spaceint, 1)
4306 
4307         # XXX (3) Inspect final and fail unless there's nothing there.
4308hunk ./src/allmydata/test/test_backends.py 168
4309+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4310         bs[0].remote_close()
4311         # XXX (4a) Inspect final and fail unless share 0 is there.
4312hunk ./src/allmydata/test/test_backends.py 171
4313+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4314+        #contents = sharesinfinal[0].read_share_data(0,999)
4315+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4316         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4317 
4318         # What happens when there's not enough space for the client's request?
4319hunk ./src/allmydata/test/test_backends.py 177
4320-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4321+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4322 
4323         # Now test the allocated_size method.
4324         # self.failIf(mockexists.called, mockexists.call_args_list)
4325hunk ./src/allmydata/test/test_backends.py 185
4326         #self.failIf(mockrename.called, mockrename.call_args_list)
4327         #self.failIf(mockstat.called, mockstat.call_args_list)
4328 
4329-    def test_handle_incoming(self):
4330-        incomingset = self.s.backend.get_incoming('teststorage_index')
4331-        self.failUnlessReallyEqual(incomingset, set())
4332-
4333-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4334-       
4335-        incomingset = self.s.backend.get_incoming('teststorage_index')
4336-        self.failUnlessReallyEqual(incomingset, set((0,)))
4337-
4338-        bs[0].remote_close()
4339-        self.failUnlessReallyEqual(incomingset, set())
4340-
4341     @mock.patch('os.path.exists')
4342     @mock.patch('os.path.getsize')
4343     @mock.patch('__builtin__.open')
4344hunk ./src/allmydata/test/test_backends.py 208
4345             self.failUnless('r' in mode, mode)
4346             self.failUnless('b' in mode, mode)
4347 
4348-            return StringIO(share_file_data)
4349+            return StringIO(share_data)
4350         mockopen.side_effect = call_open
4351 
4352hunk ./src/allmydata/test/test_backends.py 211
4353-        datalen = len(share_file_data)
4354+        datalen = len(share_data)
4355         def call_getsize(fname):
4356             self.failUnlessReallyEqual(fname, sharefname)
4357             return datalen
4358hunk ./src/allmydata/test/test_backends.py 223
4359         mockexists.side_effect = call_exists
4360 
4361         # Now begin the test.
4362-        bs = self.s.remote_get_buckets('teststorage_index')
4363+        bs = self.ss.remote_get_buckets('teststorage_index')
4364 
4365         self.failUnlessEqual(len(bs), 1)
4366hunk ./src/allmydata/test/test_backends.py 226
4367-        b = bs[0]
4368+        b = bs['0']
4369         # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
4370hunk ./src/allmydata/test/test_backends.py 228
4371-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4372+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4373         # If you try to read past the end you get the as much data as is there.
4374hunk ./src/allmydata/test/test_backends.py 230
4375-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4376+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4377         # If you start reading past the end of the file you get the empty string.
4378         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4379 
4380}
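The jacp14 patch replaces the opaque `share_file_data` constant with named fields. As a cross-check of that layout (an assumption based on the test's own comments about Tahoe-LAFS v1.8.2's `ShareFile`, using Python 3 bytes rather than the patch's Python 2 strings), the same constants can be built with `struct`, which makes the field widths explicit:

```python
import struct

# 12-byte container header: version, data length, number of leases.
shareversionnumber = struct.pack(">L", 1)
sharedatalength    = struct.pack(">L", 1)
numberofleases     = struct.pack(">L", 1)

# Client-visible payload plus one lease record.
shareinputdata = b'a'
ownernumber    = struct.pack(">L", 0)
renewsecret    = b'x' * 32
cancelsecret   = b'y' * 32
expirationtime = struct.pack(">L", 31 * 24 * 60 * 60)  # matches '\x00(\xde\x80'

containerdata = shareversionnumber + sharedatalength + numberofleases
client_data = (shareinputdata + ownernumber + renewsecret
               + cancelsecret + expirationtime)
share_data = containerdata + client_data
```

This also explains the magic number in `read_share_data(0, 73)` later in the patch: 1 byte of data plus 4 + 32 + 32 + 4 bytes of lease record is exactly 73 bytes of client data.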
4381[jacp14 or so
4382wilcoxjg@gmail.com**20110713060346
4383 Ignore-this: 7026810f60879d65b525d450e43ff87a
4384] {
4385hunk ./src/allmydata/storage/backends/das/core.py 102
4386             for f in os.listdir(finalstoragedir):
4387                 if NUM_RE.match(f):
4388                     filename = os.path.join(finalstoragedir, f)
4389-                    yield ImmutableShare(filename, storageindex, f)
4390+                    yield ImmutableShare(filename, storageindex, int(f))
4391         except OSError:
4392             # Commonly caused by there being no shares at all.
4393             pass
4394hunk ./src/allmydata/storage/backends/null/core.py 25
4395     def set_storage_server(self, ss):
4396         self.ss = ss
4397 
4398+    def get_incoming(self, storageindex):
4399+        return set()
4400+
4401 class ImmutableShare:
4402     sharetype = "immutable"
4403 
4404hunk ./src/allmydata/storage/immutable.py 19
4405 
4406     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4407         self.ss = ss
4408-        self._max_size = max_size # don't allow the client to write more than this
4409+        self._max_size = max_size # don't allow the client to write more than this
4410+
4411         self._canary = canary
4412         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4413         self.closed = False
4414hunk ./src/allmydata/test/test_backends.py 135
4415         mockopen.side_effect = call_open
4416         self.backend = DASCore(tempdir, expiration_policy)
4417         self.ss = StorageServer(testnodeid, self.backend)
4418-        self.ssinf = StorageServer(testnodeid, self.backend)
4419+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4420+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4421 
4422     @mock.patch('time.time')
4423     def test_write_share(self, mocktime):
4424hunk ./src/allmydata/test/test_backends.py 161
4425         # with the same si, until BucketWriter.remote_close() has been called.
4426         self.failIf(bsa)
4427 
4428-        # Write 'a' to shnum 0. Only tested together with close and read.
4429-        bs[0].remote_write(0, 'a')
4430-
4431         # Test allocated size.
4432         spaceint = self.ss.allocated_size()
4433         self.failUnlessReallyEqual(spaceint, 1)
4434hunk ./src/allmydata/test/test_backends.py 165
4435 
4436-        # XXX (3) Inspect final and fail unless there's nothing there.
4437+        # Write 'a' to shnum 0. Only tested together with close and read.
4438+        bs[0].remote_write(0, 'a')
4439+       
4440+        # Preclose: Inspect final, failUnless nothing there.
4441         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4442         bs[0].remote_close()
4443hunk ./src/allmydata/test/test_backends.py 171
4444-        # XXX (4a) Inspect final and fail unless share 0 is there.
4445-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4446-        #contents = sharesinfinal[0].read_share_data(0,999)
4447-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4448-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4449 
4450hunk ./src/allmydata/test/test_backends.py 172
4451-        # What happens when there's not enough space for the client's request?
4452-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4453+        # Postclose: (Omnibus) failUnless written data is in final.
4454+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4455+        contents = sharesinfinal[0].read_share_data(0,73)
4456+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4457 
4458hunk ./src/allmydata/test/test_backends.py 177
4459-        # Now test the allocated_size method.
4460-        # self.failIf(mockexists.called, mockexists.call_args_list)
4461-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4462-        #self.failIf(mockrename.called, mockrename.call_args_list)
4463-        #self.failIf(mockstat.called, mockstat.call_args_list)
4464+        # Cover interior of for share in get_shares loop.
4465+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4466+       
4467+    @mock.patch('time.time')
4468+    @mock.patch('allmydata.util.fileutil.get_available_space')
4469+    def test_out_of_space(self, mockget_available_space, mocktime):
4470+        mocktime.return_value = 0
4471+       
4472+        def call_get_available_space(dir, reserve):
4473+            return 0
4474+
4475+        mockget_available_space.side_effect = call_get_available_space
4476+       
4477+       
4478+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4479 
4480     @mock.patch('os.path.exists')
4481     @mock.patch('os.path.getsize')
4482hunk ./src/allmydata/test/test_backends.py 234
4483         bs = self.ss.remote_get_buckets('teststorage_index')
4484 
4485         self.failUnlessEqual(len(bs), 1)
4486-        b = bs['0']
4487+        b = bs[0]
4488         # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
4489         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4490         # If you try to read past the end you get as much data as is there.
4491}
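The hunks above exercise the incoming-to-final share lifecycle: `remote_allocate_buckets` registers a share number in "incoming", and `BucketWriter.remote_close()` moves it to "final". A tiny, purely illustrative model of that state machine (the class and method names here are hypothetical, not the real backend API):

```python
class TinyBackend:
    """Toy model of the incoming -> final lifecycle tested by test_handle_incoming."""
    def __init__(self):
        self._incoming = {}  # storageindex -> set of share numbers being written
        self._final = {}     # storageindex -> set of completed share numbers

    def get_incoming(self, storageindex):
        # Return a copy so callers hold a snapshot, not the live set.
        return set(self._incoming.get(storageindex, set()))

    def allocate(self, storageindex, shnum):
        # Mirrors remote_allocate_buckets: the shnum appears in incoming.
        self._incoming.setdefault(storageindex, set()).add(shnum)

    def close(self, storageindex, shnum):
        # Mirrors BucketWriter.remote_close: the shnum leaves incoming
        # and lands in final.
        self._incoming[storageindex].discard(shnum)
        self._final.setdefault(storageindex, set()).add(shnum)

b = TinyBackend()
before = b.get_incoming('teststorage_index')   # set()
b.allocate('teststorage_index', 0)
during = b.get_incoming('teststorage_index')   # {0}
b.close('teststorage_index', 0)
after = b.get_incoming('teststorage_index')    # set()
```

Note that `test_handle_incoming` in the patch compares the *same* `incomingset` object before and after `remote_close()`, so it only passes if `get_incoming` returns a live reference; the copy made here avoids relying on that aliasing.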
4492[temporary work-in-progress patch to be unrecorded
4493zooko@zooko.com**20110714003008
4494 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4495 tidy up a few tests, work done in pair-programming with Zancas
4496] {
4497hunk ./src/allmydata/storage/backends/das/core.py 65
4498         self._clean_incomplete()
4499 
4500     def _clean_incomplete(self):
4501-        fileutil.rm_dir(self.incomingdir)
4502+        fileutil.rmtree(self.incomingdir)
4503         fileutil.make_dirs(self.incomingdir)
4504 
4505     def _setup_corruption_advisory(self):
4506hunk ./src/allmydata/storage/immutable.py 1
4507-import os, stat, struct, time
4508+import os, time
4509 
4510 from foolscap.api import Referenceable
4511 
4512hunk ./src/allmydata/storage/server.py 1
4513-import os, re, weakref, struct, time
4514+import os, weakref, struct, time
4515 
4516 from foolscap.api import Referenceable
4517 from twisted.application import service
4518hunk ./src/allmydata/storage/server.py 7
4519 
4520 from zope.interface import implements
4521-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4522+from allmydata.interfaces import RIStorageServer, IStatsProducer
4523 from allmydata.util import fileutil, idlib, log, time_format
4524 import allmydata # for __full_version__
4525 
4526hunk ./src/allmydata/storage/server.py 313
4527         self.add_latency("get", time.time() - start)
4528         return bucketreaders
4529 
4530-    def remote_get_incoming(self, storageindex):
4531-        incoming_share_set = self.backend.get_incoming(storageindex)
4532-        return incoming_share_set
4533-
4534     def get_leases(self, storageindex):
4535         """Provide an iterator that yields all of the leases attached to this
4536         bucket. Each lease is returned as a LeaseInfo instance.
4537hunk ./src/allmydata/test/test_backends.py 3
4538 from twisted.trial import unittest
4539 
4540+from twisted.python.filepath import FilePath
4541+
4542 from StringIO import StringIO
4543 
4544 from allmydata.test.common_util import ReallyEqualMixin
4545hunk ./src/allmydata/test/test_backends.py 38
4546 
4547 
4548 testnodeid = 'testnodeidxxxxxxxxxx'
4549-tempdir = 'teststoredir'
4550-basedir = os.path.join(tempdir, 'shares')
4551+storedir = 'teststoredir'
4552+storedirfp = FilePath(storedir)
4553+basedir = os.path.join(storedir, 'shares')
4554 baseincdir = os.path.join(basedir, 'incoming')
4555 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4556 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4557hunk ./src/allmydata/test/test_backends.py 53
4558                      'cutoff_date' : None,
4559                      'sharetypes' : None}
4560 
4561-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4562+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4563+    """ NullBackend is just for testing and executable documentation, so
4564+    this test is actually a test of StorageServer in which we're using
4565+    NullBackend as helper code for the test, rather than a test of
4566+    NullBackend. """
4567     def setUp(self):
4568         self.ss = StorageServer(testnodeid, backend=NullCore())
4569 
4570hunk ./src/allmydata/test/test_backends.py 62
4571     @mock.patch('os.mkdir')
4572+
4573     @mock.patch('__builtin__.open')
4574     @mock.patch('os.listdir')
4575     @mock.patch('os.path.isdir')
4576hunk ./src/allmydata/test/test_backends.py 69
4577     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4578         """ Write a new share. """
4579 
4580-        # Now begin the test.
4581         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4582         bs[0].remote_write(0, 'a')
4583         self.failIf(mockisdir.called)
4584hunk ./src/allmydata/test/test_backends.py 83
4585     @mock.patch('os.listdir')
4586     @mock.patch('os.path.isdir')
4587     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4588-        """ This tests whether a server instance can be constructed
4589-        with a filesystem backend. To pass the test, it has to use the
4590-        filesystem in only the prescribed ways. """
4591+        """ This tests whether a server instance can be constructed with a
4592+        filesystem backend. To pass the test, it mustn't use the filesystem
4593+        outside of its configured storedir. """
4594 
4595         def call_open(fname, mode):
4596hunk ./src/allmydata/test/test_backends.py 88
4597-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4598-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4599-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4600-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4601-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4602+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4603+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4604+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4605+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4606+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4607                 return StringIO()
4608             else:
4609hunk ./src/allmydata/test/test_backends.py 95
4610-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4611+                fnamefp = FilePath(fname)
4612+                self.failUnless(storedirfp in fnamefp.parents(),
4613+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4614         mockopen.side_effect = call_open
4615 
4616         def call_isdir(fname):
4617hunk ./src/allmydata/test/test_backends.py 101
4618-            if fname == os.path.join(tempdir,'shares'):
4619+            if fname == os.path.join(storedir, 'shares'):
4620                 return True
4621hunk ./src/allmydata/test/test_backends.py 103
4622-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4623+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4624                 return True
4625             else:
4626                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4627hunk ./src/allmydata/test/test_backends.py 109
4628         mockisdir.side_effect = call_isdir
4629 
4630+        mocklistdir.return_value = []
4631+
4632         def call_mkdir(fname, mode):
4633hunk ./src/allmydata/test/test_backends.py 112
4634-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4635             self.failUnlessEqual(0777, mode)
4636hunk ./src/allmydata/test/test_backends.py 113
4637-            if fname == tempdir:
4638-                return None
4639-            elif fname == os.path.join(tempdir,'shares'):
4640-                return None
4641-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4642-                return None
4643-            else:
4644-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4645+            self.failUnlessIn(fname,
4646+                              [storedir,
4647+                               os.path.join(storedir, 'shares'),
4648+                               os.path.join(storedir, 'shares', 'incoming')],
4649+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4650         mockmkdir.side_effect = call_mkdir
4651 
4652         # Now begin the test.
4653hunk ./src/allmydata/test/test_backends.py 121
4654-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4655+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4656 
4657         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4658 
4659hunk ./src/allmydata/test/test_backends.py 126
4660 
4661-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4662+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4663+    """ This tests both the StorageServer and the FS (DAS) backend. """
4664     @mock.patch('__builtin__.open')
4665     def setUp(self, mockopen):
4666         def call_open(fname, mode):
4667hunk ./src/allmydata/test/test_backends.py 131
4668-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4669-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4670-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4671-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4672-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4673+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4674+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4675+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4676+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4677+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4678                 return StringIO()
4679             else:
4680                 _assert(False, "The tester code doesn't recognize this case.") 
4681hunk ./src/allmydata/test/test_backends.py 141
4682 
4683         mockopen.side_effect = call_open
4684-        self.backend = DASCore(tempdir, expiration_policy)
4685+        self.backend = DASCore(storedir, expiration_policy)
4686         self.ss = StorageServer(testnodeid, self.backend)
4687hunk ./src/allmydata/test/test_backends.py 143
4688-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4689+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4690         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4691 
4692     @mock.patch('time.time')
4693hunk ./src/allmydata/test/test_backends.py 147
4694-    def test_write_share(self, mocktime):
4695-        """ Write a new share. """
4696-        # Now begin the test.
4697+    def test_write_and_read_share(self, mocktime):
4698+        """
4699+        Write a new share, read it, and test the server's (and FS backend's)
4700+        handling of simultaneous and successive attempts to write the same
4701+        share.
4702+        """
4703 
4704         mocktime.return_value = 0
4705         # Inspect incoming and fail unless it's empty.
4706hunk ./src/allmydata/test/test_backends.py 159
4707         incomingset = self.ss.backend.get_incoming('teststorage_index')
4708         self.failUnlessReallyEqual(incomingset, set())
4709         
4710-        # Among other things, populate incoming with the sharenum: 0.
4711+        # Populate incoming with the sharenum: 0.
4712         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4713 
4714         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4715hunk ./src/allmydata/test/test_backends.py 163
4716-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4717+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4718         
4719hunk ./src/allmydata/test/test_backends.py 165
4720-        # Attempt to create a second share writer with the same share.
4721+        # Attempt to create a second share writer with the same sharenum.
4722         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4723 
4724         # Show that no sharewriter results from a remote_allocate_buckets
4725hunk ./src/allmydata/test/test_backends.py 169
4726-        # with the same si, until BucketWriter.remote_close() has been called.
4727+        # with the same si and sharenum, until BucketWriter.remote_close()
4728+        # has been called.
4729         self.failIf(bsa)
4730 
4731         # Test allocated size.
4732hunk ./src/allmydata/test/test_backends.py 187
4733         # Postclose: (Omnibus) failUnless written data is in final.
4734         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4735         contents = sharesinfinal[0].read_share_data(0,73)
4736-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4737+        self.failUnlessReallyEqual(contents, client_data)
4738 
4739hunk ./src/allmydata/test/test_backends.py 189
4740-        # Cover interior of for share in get_shares loop.
4741-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4742+        # Exercise the case that the share we're asking to allocate is
4743+        # already (completely) uploaded.
4744+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4745         
4746     @mock.patch('time.time')
4747     @mock.patch('allmydata.util.fileutil.get_available_space')
4748hunk ./src/allmydata/test/test_backends.py 210
4749     @mock.patch('os.path.getsize')
4750     @mock.patch('__builtin__.open')
4751     @mock.patch('os.listdir')
4752-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4753+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4754         """ This tests whether the code correctly finds and reads
4755         shares written out by old (Tahoe-LAFS <= v1.8.2)
4756         servers. There is a similar test in test_download, but that one
4757hunk ./src/allmydata/test/test_backends.py 219
4758         StorageServer object. """
4759 
4760         def call_listdir(dirname):
4761-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4762+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4763             return ['0']
4764 
4765         mocklistdir.side_effect = call_listdir
4766hunk ./src/allmydata/test/test_backends.py 226
4767 
4768         def call_open(fname, mode):
4769             self.failUnlessReallyEqual(fname, sharefname)
4770-            self.failUnless('r' in mode, mode)
4771+            self.failUnlessEqual(mode[0], 'r', mode)
4772             self.failUnless('b' in mode, mode)
4773 
4774             return StringIO(share_data)
4775hunk ./src/allmydata/test/test_backends.py 268
4776         filesystem in only the prescribed ways. """
4777 
4778         def call_open(fname, mode):
4779-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4780-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4781-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4782-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4783-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4784+            if fname == os.path.join(storedir,'bucket_counter.state'):
4785+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4786+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4787+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4788+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4789                 return StringIO()
4790             else:
4791                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4792hunk ./src/allmydata/test/test_backends.py 279
4793         mockopen.side_effect = call_open
4794 
4795         def call_isdir(fname):
4796-            if fname == os.path.join(tempdir,'shares'):
4797+            if fname == os.path.join(storedir,'shares'):
4798                 return True
4799hunk ./src/allmydata/test/test_backends.py 281
4800-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4801+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4802                 return True
4803             else:
4804                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4805hunk ./src/allmydata/test/test_backends.py 290
4806         def call_mkdir(fname, mode):
4807             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4808             self.failUnlessEqual(0777, mode)
4809-            if fname == tempdir:
4810+            if fname == storedir:
4811                 return None
4812hunk ./src/allmydata/test/test_backends.py 292
4813-            elif fname == os.path.join(tempdir,'shares'):
4814+            elif fname == os.path.join(storedir,'shares'):
4815                 return None
4816hunk ./src/allmydata/test/test_backends.py 294
4817-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4818+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4819                 return None
4820             else:
4821                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4822hunk ./src/allmydata/util/fileutil.py 5
4823 Futz with files like a pro.
4824 """
4825 
4826-import sys, exceptions, os, stat, tempfile, time, binascii
4827+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4828 
4829 from twisted.python import log
4830 
4831hunk ./src/allmydata/util/fileutil.py 186
4832             raise tx
4833         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4834 
4835-def rm_dir(dirname):
4836+def rmtree(dirname):
4837     """
4838     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4839     already gone, do nothing and return without raising an exception.  If this
4840hunk ./src/allmydata/util/fileutil.py 205
4841             else:
4842                 remove(fullname)
4843         os.rmdir(dirname)
4844-    except Exception, le:
4845-        # Ignore "No such file or directory"
4846-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4847+    except EnvironmentError, le:
4848+        # Ignore "No such file or directory", collect any other exception.
4849+        if le.args[0] != errno.ENOENT:
4850             excs.append(le)
4851hunk ./src/allmydata/util/fileutil.py 209
4852+    except Exception, le:
4853+        excs.append(le)
4854 
4855     # Okay, now we've recursively removed everything, ignoring any "No
4856     # such file or directory" errors, and collecting any other errors.
4857hunk ./src/allmydata/util/fileutil.py 222
4858             raise OSError, "Failed to remove dir for unknown reason."
4859         raise OSError, excs
4860 
4861+def rm_dir(dirname):
4862+    # Backwards-compatible alias: the name rmtree matches shutil.rmtree, unlike os.rmdir.
4863+    return rmtree(dirname)
4864 
4865 def remove_if_possible(f):
4866     try:
4867}
4868[work in progress intended to be unrecorded and never committed to trunk
4869zooko@zooko.com**20110714212139
4870 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4871 switch from os.path.join to filepath
4872 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4873 
4874] {
4875hunk ./src/allmydata/test/test_backends.py 3
4876 from twisted.trial import unittest
4877 
4878-from twisted.path.filepath import FilePath
4879+from twisted.python.filepath import FilePath
4880 
4881 from StringIO import StringIO
4882 
4883hunk ./src/allmydata/test/test_backends.py 10
4884 from allmydata.test.common_util import ReallyEqualMixin
4885 from allmydata.util.assertutil import _assert
4886 
4887-import mock, os
4888+import mock
4889 
4890 # This is the code that we're going to be testing.
4891 from allmydata.storage.server import StorageServer
4892hunk ./src/allmydata/test/test_backends.py 25
4893 shareversionnumber = '\x00\x00\x00\x01'
4894 sharedatalength = '\x00\x00\x00\x01'
4895 numberofleases = '\x00\x00\x00\x01'
4896+
4897 shareinputdata = 'a'
4898 ownernumber = '\x00\x00\x00\x00'
4899 renewsecret  = 'x'*32
4900hunk ./src/allmydata/test/test_backends.py 39
4901 
4902 
4903 testnodeid = 'testnodeidxxxxxxxxxx'
4904-storedir = 'teststoredir'
4905-storedirfp = FilePath(storedir)
4906-basedir = os.path.join(storedir, 'shares')
4907-baseincdir = os.path.join(basedir, 'incoming')
4908-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4909-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4910-shareincomingname = os.path.join(sharedirincomingname, '0')
4911-sharefname = os.path.join(sharedirfinalname, '0')
4912+
4913+class TestFilesMixin(unittest.TestCase):
4914+    def setUp(self):
4915+        self.storedir = FilePath('teststoredir')
4916+        self.basedir = self.storedir.child('shares')
4917+        self.baseincdir = self.basedir.child('incoming')
4918+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4919+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4920+        self.shareincomingname = self.sharedirincomingname.child('0')
4921+        self.sharefname = self.sharedirfinalname.child('0')
4922+
4923+    def call_open(self, fname, mode):
4924+        fnamefp = FilePath(fname)
4925+        if fnamefp == self.storedir.child('bucket_counter.state'):
4926+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4927+        elif fnamefp == self.storedir.child('lease_checker.state'):
4928+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4929+        elif fnamefp == self.storedir.child('lease_checker.history'):
4930+            return StringIO()
4931+        else:
4932+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4933+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4934+
4935+    def call_isdir(self, fname):
4936+        fnamefp = FilePath(fname)
4937+        if fnamefp == self.storedir.child('shares'):
4938+            return True
4939+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4940+            return True
4941+        else:
4942+            self.failUnless(self.storedir in fnamefp.parents(),
4943+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4944+
4945+    def call_mkdir(self, fname, mode):
4946+        self.failUnlessEqual(0777, mode)
4947+        fnamefp = FilePath(fname)
4948+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4949+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4950+
4951+
4952+    @mock.patch('os.mkdir')
4953+    @mock.patch('__builtin__.open')
4954+    @mock.patch('os.listdir')
4955+    @mock.patch('os.path.isdir')
4956+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir):
4957+        mocklistdir.return_value = []
4958+        mockmkdir.side_effect = self.call_mkdir
4959+        mockisdir.side_effect = self.call_isdir
4960+        mockopen.side_effect = self.call_open
4961+
4962+        test_func()
4963+
4965+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4966 
4967 expiration_policy = {'enabled' : False,
4968                      'mode' : 'age',
4969hunk ./src/allmydata/test/test_backends.py 123
4970         self.failIf(mockopen.called)
4971         self.failIf(mockmkdir.called)
4972 
4973-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4974-    @mock.patch('time.time')
4975-    @mock.patch('os.mkdir')
4976-    @mock.patch('__builtin__.open')
4977-    @mock.patch('os.listdir')
4978-    @mock.patch('os.path.isdir')
4979-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4980+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
4981+    def test_create_server_fs_backend(self):
4982         """ This tests whether a server instance can be constructed with a
4983         filesystem backend. To pass the test, it mustn't use the filesystem
4984         outside of its configured storedir. """
4985hunk ./src/allmydata/test/test_backends.py 129
4986 
4987-        def call_open(fname, mode):
4988-            if fname == os.path.join(storedir, 'bucket_counter.state'):
4989-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4990-            elif fname == os.path.join(storedir, 'lease_checker.state'):
4991-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4992-            elif fname == os.path.join(storedir, 'lease_checker.history'):
4993-                return StringIO()
4994-            else:
4995-                fnamefp = FilePath(fname)
4996-                self.failUnless(storedirfp in fnamefp.parents(),
4997-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4998-        mockopen.side_effect = call_open
4999+        def _f():
5000+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5001 
5002hunk ./src/allmydata/test/test_backends.py 132
5003-        def call_isdir(fname):
5004-            if fname == os.path.join(storedir, 'shares'):
5005-                return True
5006-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5007-                return True
5008-            else:
5009-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5010-        mockisdir.side_effect = call_isdir
5011-
5012-        mocklistdir.return_value = []
5013-
5014-        def call_mkdir(fname, mode):
5015-            self.failUnlessEqual(0777, mode)
5016-            self.failUnlessIn(fname,
5017-                              [storedir,
5018-                               os.path.join(storedir, 'shares'),
5019-                               os.path.join(storedir, 'shares', 'incoming')],
5020-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5021-        mockmkdir.side_effect = call_mkdir
5022-
5023-        # Now begin the test.
5024-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5025-
5026-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5027+        self._help_test_stay_in_your_subtree(_f)
5028 
5029 
5030 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5031}
5032[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5033zooko@zooko.com**20110715191500
5034 Ignore-this: af33336789041800761e80510ea2f583
 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
5036] {
5037hunk ./src/allmydata/storage/backends/das/core.py 59
5038                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5039                         umid="0wZ27w", level=log.UNUSUAL)
5040 
5041-        self.sharedir = os.path.join(self.storedir, "shares")
5042-        fileutil.make_dirs(self.sharedir)
5043-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5044+        self.sharedir = self.storedir.child("shares")
5045+        fileutil.fp_make_dirs(self.sharedir)
5046+        self.incomingdir = self.sharedir.child('incoming')
5047         self._clean_incomplete()
5048 
5049     def _clean_incomplete(self):
5050hunk ./src/allmydata/storage/backends/das/core.py 65
5051-        fileutil.rmtree(self.incomingdir)
5052-        fileutil.make_dirs(self.incomingdir)
5053+        fileutil.fp_remove(self.incomingdir)
5054+        fileutil.fp_make_dirs(self.incomingdir)
5055 
5056     def _setup_corruption_advisory(self):
5057         # we don't actually create the corruption-advisory dir until necessary
5058hunk ./src/allmydata/storage/backends/das/core.py 70
5059-        self.corruption_advisory_dir = os.path.join(self.storedir,
5060-                                                    "corruption-advisories")
5061+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5062 
5063     def _setup_bucket_counter(self):
5064hunk ./src/allmydata/storage/backends/das/core.py 73
5065-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5066+        statefname = self.storedir.child("bucket_counter.state")
5067         self.bucket_counter = FSBucketCountingCrawler(statefname)
5068         self.bucket_counter.setServiceParent(self)
5069 
5070hunk ./src/allmydata/storage/backends/das/core.py 78
5071     def _setup_lease_checkerf(self, expiration_policy):
5072-        statefile = os.path.join(self.storedir, "lease_checker.state")
5073-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5074+        statefile = self.storedir.child("lease_checker.state")
5075+        historyfile = self.storedir.child("lease_checker.history")
5076         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5077         self.lease_checker.setServiceParent(self)
5078 
5079hunk ./src/allmydata/storage/backends/das/core.py 83
5080-    def get_incoming(self, storageindex):
5081+    def get_incoming_shnums(self, storageindex):
5082         """Return the set of incoming shnums."""
5083         try:
5084hunk ./src/allmydata/storage/backends/das/core.py 86
5085-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5086-            incominglist = os.listdir(incomingsharesdir)
5087-            incomingshnums = [int(x) for x in incominglist]
5088-            return set(incomingshnums)
5089-        except OSError:
5090-            # XXX I'd like to make this more specific. If there are no shares at all.
5091-            return set()
5093+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5094+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5095+            return frozenset(incomingshnums)
5096+        except UnlistableError:
5097+            # There is no shares directory at all.
5098+            return frozenset()
5099             
5100     def get_shares(self, storageindex):
5101         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5102hunk ./src/allmydata/storage/backends/das/core.py 96
5103-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5104+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5105         try:
5106hunk ./src/allmydata/storage/backends/das/core.py 98
5107-            for f in os.listdir(finalstoragedir):
5108-                if NUM_RE.match(f):
5109-                    filename = os.path.join(finalstoragedir, f)
5110-                    yield ImmutableShare(filename, storageindex, int(f))
5111-        except OSError:
5112-            # Commonly caused by there being no shares at all.
5113+            for f in finalstoragedir.listdir():
5114+                # FilePath.listdir() returns base names as strings.
5115+                if NUM_RE.match(f):
5116+                    yield ImmutableShare(finalstoragedir.child(f), storageindex, int(f))
5116+        except UnlistableError:
5117+            # There is no shares directory at all.
5118             pass
5119         
5120     def get_available_space(self):
5121hunk ./src/allmydata/storage/backends/das/core.py 149
5122 # then the value stored in this field will be the actual share data length
5123 # modulo 2**32.
5124 
5125-class ImmutableShare:
5126+class ImmutableShare(object):
5127     LEASE_SIZE = struct.calcsize(">L32s32sL")
5128     sharetype = "immutable"
5129 
5130hunk ./src/allmydata/storage/backends/das/core.py 166
5131         if create:
5132             # touch the file, so later callers will see that we're working on
5133             # it. Also construct the metadata.
5134-            assert not os.path.exists(self.finalhome)
5135-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5136+            assert not self.finalhome.exists()
5137+            fileutil.fp_make_dirs(self.incominghome.parent())
5138             f = open(self.incominghome, 'wb')
5139             # The second field -- the four-byte share data length -- is no
5140             # longer used as of Tahoe v1.3.0, but we continue to write it in
5141hunk ./src/allmydata/storage/backends/das/core.py 316
5142         except IndexError:
5143             self.add_lease(lease_info)
5144 
5145-
5146     def cancel_lease(self, cancel_secret):
5147         """Remove a lease with the given cancel_secret. If the last lease is
5148         cancelled, the file will be removed. Return the number of bytes that
5149hunk ./src/allmydata/storage/common.py 19
5150 def si_a2b(ascii_storageindex):
5151     return base32.a2b(ascii_storageindex)
5152 
5153-def storage_index_to_dir(storageindex):
5154+def storage_index_to_dir(startfp, storageindex):
5155     sia = si_b2a(storageindex)
5156     return os.path.join(sia[:2], sia)
5157hunk ./src/allmydata/storage/server.py 210
5158 
5159         # fill incoming with all shares that are incoming use a set operation
5160         # since there's no need to operate on individual pieces
5161-        incoming = self.backend.get_incoming(storageindex)
5162+        incoming = self.backend.get_incoming_shnums(storageindex)
5163 
5164         for shnum in ((sharenums - alreadygot) - incoming):
5165             if (not limited) or (remaining_space >= max_space_per_bucket):
5166hunk ./src/allmydata/test/test_backends.py 5
5167 
5168 from twisted.python.filepath import FilePath
5169 
5170+from allmydata.util.log import msg
5171+
5172 from StringIO import StringIO
5173 
5174 from allmydata.test.common_util import ReallyEqualMixin
5175hunk ./src/allmydata/test/test_backends.py 42
5176 
5177 testnodeid = 'testnodeidxxxxxxxxxx'
5178 
5179-class TestFilesMixin(unittest.TestCase):
5180-    def setUp(self):
5181-        self.storedir = FilePath('teststoredir')
5182-        self.basedir = self.storedir.child('shares')
5183-        self.baseincdir = self.basedir.child('incoming')
5184-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5185-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5186-        self.shareincomingname = self.sharedirincomingname.child('0')
5187-        self.sharefname = self.sharedirfinalname.child('0')
5188+class MockStat:
5189+    def __init__(self):
5190+        self.st_mode = None
5191 
5192hunk ./src/allmydata/test/test_backends.py 46
5193+class MockFiles(unittest.TestCase):
5194+    """ I simulate a filesystem that the code under test can use. I flag the
5195+    code under test if it reads or writes outside of its prescribed
5196+    subtree. I simulate just the parts of the filesystem that the current
5197+    implementation of DAS backend needs. """
5198     def call_open(self, fname, mode):
5199         fnamefp = FilePath(fname)
5200hunk ./src/allmydata/test/test_backends.py 53
5201+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5202+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5203+
5204         if fnamefp == self.storedir.child('bucket_counter.state'):
5205             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5206         elif fnamefp == self.storedir.child('lease_checker.state'):
5207hunk ./src/allmydata/test/test_backends.py 61
5208             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5209         elif fnamefp == self.storedir.child('lease_checker.history'):
5210+            # This is separated out from the else clause below just because
5211+            # we know this particular file is going to be used by the
5212+            # current implementation of DAS backend, and we might want to
5213+            # use this information in this test in the future...
5214             return StringIO()
5215         else:
5216hunk ./src/allmydata/test/test_backends.py 67
5217-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5218-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5219+            # Anything else you open inside your subtree appears to be an
5220+            # empty file.
5221+            return StringIO()
5222 
5223     def call_isdir(self, fname):
5224         fnamefp = FilePath(fname)
5225hunk ./src/allmydata/test/test_backends.py 73
5226-        if fnamefp == self.storedir.child('shares'):
5227+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5228+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5231+
5232+        # The first two cases are separate from the else clause below just
5233+        # because we know that the current implementation of the DAS backend
5234+        # inspects these two directories and we might want to make use of
5235+        # that information in the tests in the future...
5236+        if fnamefp == self.storedir.child('shares'):
5237             return True
5238hunk ./src/allmydata/test/test_backends.py 84
5239-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5240+        elif fnamefp == self.storedir.child('shares').child('incoming'):
5241             return True
5242         else:
5243hunk ./src/allmydata/test/test_backends.py 87
5244-            self.failUnless(self.storedir in fnamefp.parents(),
5245-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5246+            # Anything else you open inside your subtree appears to be a
5247+            # directory.
5248+            return True
5249 
5250     def call_mkdir(self, fname, mode):
5251hunk ./src/allmydata/test/test_backends.py 92
5252-        self.failUnlessEqual(0777, mode)
5253         fnamefp = FilePath(fname)
5254         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5255                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5256hunk ./src/allmydata/test/test_backends.py 95
5257+        self.failUnlessEqual(0777, mode)
5258 
5259hunk ./src/allmydata/test/test_backends.py 97
5260+    def call_listdir(self, fname):
5261+        fnamefp = FilePath(fname)
5262+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5263+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5264 
5265hunk ./src/allmydata/test/test_backends.py 102
5266-    @mock.patch('os.mkdir')
5267-    @mock.patch('__builtin__.open')
5268-    @mock.patch('os.listdir')
5269-    @mock.patch('os.path.isdir')
5270-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5271-        mocklistdir.return_value = []
5272+    def call_stat(self, fname):
5273+        fnamefp = FilePath(fname)
5274+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5275+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5276+
5277+        msg("%s.call_stat(%s)" % (self, fname,))
5278+        mstat = MockStat()
5279+        mstat.st_mode = 16893 # a directory
5280+        return mstat
5281+
5282+    def setUp(self):
5283+        msg( "%s.setUp()" % (self,))
5284+        self.storedir = FilePath('teststoredir')
5285+        self.basedir = self.storedir.child('shares')
5286+        self.baseincdir = self.basedir.child('incoming')
5287+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5288+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5289+        self.shareincomingname = self.sharedirincomingname.child('0')
5290+        self.sharefname = self.sharedirfinalname.child('0')
5291+
5292+        self.mocklistdirp = mock.patch('os.listdir')
5293+        mocklistdir = self.mocklistdirp.__enter__()
5294+        mocklistdir.side_effect = self.call_listdir
5295+
5296+        self.mockmkdirp = mock.patch('os.mkdir')
5297+        mockmkdir = self.mockmkdirp.__enter__()
5298         mockmkdir.side_effect = self.call_mkdir
5299hunk ./src/allmydata/test/test_backends.py 129
5300+
5301+        self.mockisdirp = mock.patch('os.path.isdir')
5302+        mockisdir = self.mockisdirp.__enter__()
5303         mockisdir.side_effect = self.call_isdir
5304hunk ./src/allmydata/test/test_backends.py 133
5305+
5306+        self.mockopenp = mock.patch('__builtin__.open')
5307+        mockopen = self.mockopenp.__enter__()
5308         mockopen.side_effect = self.call_open
5309hunk ./src/allmydata/test/test_backends.py 137
5310-        mocklistdir.return_value = []
5311-       
5312-        test_func()
5313-       
5314-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5315+
5316+        self.mockstatp = mock.patch('os.stat')
5317+        mockstat = self.mockstatp.__enter__()
5318+        mockstat.side_effect = self.call_stat
5319+
5320+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5321+        mockfpstat = self.mockfpstatp.__enter__()
5322+        mockfpstat.side_effect = self.call_stat
5323+
5324+    def tearDown(self):
5325+        msg( "%s.tearDown()" % (self,))
5326+        self.mockfpstatp.__exit__()
5327+        self.mockstatp.__exit__()
5328+        self.mockopenp.__exit__()
5329+        self.mockisdirp.__exit__()
5330+        self.mockmkdirp.__exit__()
5331+        self.mocklistdirp.__exit__()
5332 
5333 expiration_policy = {'enabled' : False,
5334                      'mode' : 'age',
5335hunk ./src/allmydata/test/test_backends.py 184
5336         self.failIf(mockopen.called)
5337         self.failIf(mockmkdir.called)
5338 
-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
+class TestServerConstruction(MockFiles, ReallyEqualMixin):
     def test_create_server_fs_backend(self):
         """ This tests whether a server instance can be constructed with a
         filesystem backend. To pass the test, it mustn't use the filesystem
hunk ./src/allmydata/test/test_backends.py 190
         outside of its configured storedir. """
 
-        def _f():
-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
 
hunk ./src/allmydata/test/test_backends.py 192
-        self._help_test_stay_in_your_subtree(_f)
-
-
-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
-    """ This tests both the StorageServer xyz """
-    @mock.patch('__builtin__.open')
-    def setUp(self, mockopen):
-        def call_open(fname, mode):
-            if fname == os.path.join(storedir, 'bucket_counter.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.history'):
-                return StringIO()
-            else:
-                _assert(False, "The tester code doesn't recognize this case.") 
-
-        mockopen.side_effect = call_open
-        self.backend = DASCore(storedir, expiration_policy)
-        self.ss = StorageServer(testnodeid, self.backend)
-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
+    """ This tests both the StorageServer and the DAS backend together. """
+    def setUp(self):
+        MockFiles.setUp(self)
+        try:
+            self.backend = DASCore(self.storedir, expiration_policy)
+            self.ss = StorageServer(testnodeid, self.backend)
+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
+        except:
+            MockFiles.tearDown(self)
+            raise
 
     @mock.patch('time.time')
     def test_write_and_read_share(self, mocktime):
hunk ./src/allmydata/util/fileutil.py 8
 import errno, sys, exceptions, os, stat, tempfile, time, binascii
 
 from twisted.python import log
+from twisted.python.filepath import UnlistableError
 
 from pycryptopp.cipher.aes import AES
 
hunk ./src/allmydata/util/fileutil.py 187
             raise tx
         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
 
+def fp_make_dirs(dirfp):
+    """
+    An idempotent version of FilePath.makedirs().  If the dir already
+    exists, do nothing and return without raising an exception.  If this
+    call creates the dir, return without raising an exception.  If there is
+    an error that prevents creation or if the directory gets deleted after
+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
+    exists, raise an exception.
+    """
+    log.msg( "xxx 0 %s" % (dirfp,))
+    tx = None
+    try:
+        dirfp.makedirs()
+    except OSError, x:
+        tx = x
+
+    if not dirfp.isdir():
+        if tx:
+            raise tx
+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
+
 def rmtree(dirname):
     """
     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
hunk ./src/allmydata/util/fileutil.py 244
             raise OSError, "Failed to remove dir for unknown reason."
         raise OSError, excs
 
+def fp_remove(dirfp):
+    try:
+        dirfp.remove()
+    except UnlistableError, e:
+        if e.originalException.errno != errno.ENOENT:
+            raise
+
 def rm_dir(dirname):
     # Renamed to be like shutil.rmtree and unlike rmdir.
     return rmtree(dirname)
}
[another temporary patch for sharing work-in-progress
zooko@zooko.com**20110720055918
 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
 
] {
hunk ./src/allmydata/storage/backends/das/core.py 5
 
 from allmydata.interfaces import IStorageBackend
 from allmydata.storage.backends.base import Backend
-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
+from allmydata.storage.common import si_b2a, si_a2b, si_dir
 from allmydata.util.assertutil import precondition
 
 #from foolscap.api import Referenceable
hunk ./src/allmydata/storage/backends/das/core.py 10
 from twisted.application import service
+from twisted.python.filepath import UnlistableError
 
 from zope.interface import implements
 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
hunk ./src/allmydata/storage/backends/das/core.py 17
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.common import si_b2a, si_a2b, si_dir
+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
hunk ./src/allmydata/storage/backends/das/core.py 41
 # $SHARENUM matches this regex:
 NUM_RE=re.compile("^[0-9]+$")
 
+def is_num(fp):
+    return NUM_RE.match(fp.basename)
+
 class DASCore(Backend):
     implements(IStorageBackend)
     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
hunk ./src/allmydata/storage/backends/das/core.py 58
         self.storedir = storedir
         self.readonly = readonly
         self.reserved_space = int(reserved_space)
-        if self.reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umid="0wZ27w", level=log.UNUSUAL)
-
         self.sharedir = self.storedir.child("shares")
         fileutil.fp_make_dirs(self.sharedir)
         self.incomingdir = self.sharedir.child('incoming')
hunk ./src/allmydata/storage/backends/das/core.py 62
         self._clean_incomplete()
+        if self.reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
 
     def _clean_incomplete(self):
         fileutil.fp_remove(self.incomingdir)
hunk ./src/allmydata/storage/backends/das/core.py 87
         self.lease_checker.setServiceParent(self)
 
     def get_incoming_shnums(self, storageindex):
-        """Return the set of incoming shnums."""
+        """ Return a frozenset of the shnum (as ints) of incoming shares. """
+        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
         try:
hunk ./src/allmydata/storage/backends/das/core.py 90
-           
-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
-            return frozenset(incomingshnums)
+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
+            shnums = [ int(fp.basename) for fp in childfps ]
+            return frozenset(shnums)
         except UnlistableError:
             # There is no shares directory at all.
             return frozenset()
hunk ./src/allmydata/storage/backends/das/core.py 98
             
     def get_shares(self, storageindex):
-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
+        """ Generate ImmutableShare objects for shares we have for this
+        storageindex. ("Shares we have" means completed ones, excluding
+        incoming ones.)"""
         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
         try:
hunk ./src/allmydata/storage/backends/das/core.py 103
-            for f in finalstoragedir.listdir():
-                if NUM_RE.match(f.basename):
-                    yield ImmutableShare(f, storageindex, int(f))
+            for fp in finalstoragedir.children():
+                if is_num(fp):
+                    yield ImmutableShare(fp, storageindex)
         except UnlistableError:
             # There is no shares directory at all.
             pass
hunk ./src/allmydata/storage/backends/das/core.py 116
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
+        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
+        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
hunk ./src/allmydata/storage/backends/das/expirer.py 50
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, statefile, historyfile, expiration_policy):
-        self.historyfile = historyfile
+    def __init__(self, statefile, historyfp, expiration_policy):
+        self.historyfp = historyfp
         self.expiration_enabled = expiration_policy['enabled']
         self.mode = expiration_policy['mode']
         self.override_lease_duration = None
hunk ./src/allmydata/storage/backends/das/expirer.py 80
             self.state["cycle-to-date"].setdefault(k, so_far[k])
 
         # initialize history
-        if not os.path.exists(self.historyfile):
+        if not self.historyfp.exists():
             history = {} # cyclenum -> dict
hunk ./src/allmydata/storage/backends/das/expirer.py 82
-            f = open(self.historyfile, "wb")
-            pickle.dump(history, f)
-            f.close()
+            self.historyfp.setContent(pickle.dumps(history))
 
     def create_empty_cycle_dict(self):
         recovered = self.create_empty_recovered_dict()
hunk ./src/allmydata/storage/backends/das/expirer.py 305
         # copy() needs to become a deepcopy
         h["space-recovered"] = s["space-recovered"].copy()
 
-        history = pickle.load(open(self.historyfile, "rb"))
+        history = pickle.load(self.historyfp.getContent())
         history[cycle] = h
         while len(history) > 10:
             oldcycles = sorted(history.keys())
hunk ./src/allmydata/storage/backends/das/expirer.py 310
             del history[oldcycles[0]]
-        f = open(self.historyfile, "wb")
-        pickle.dump(history, f)
-        f.close()
+        self.historyfp.setContent(pickle.dumps(history))
 
     def get_state(self):
         """In addition to the crawler state described in
hunk ./src/allmydata/storage/backends/das/expirer.py 379
         progress = self.get_progress()
 
         state = ShareCrawler.get_state(self) # does a shallow copy
-        history = pickle.load(open(self.historyfile, "rb"))
+        history = pickle.load(self.historyfp.getContent())
         state["history"] = history
 
         if not progress["cycle-in-progress"]:
hunk ./src/allmydata/storage/common.py 19
 def si_a2b(ascii_storageindex):
     return base32.a2b(ascii_storageindex)
 
-def storage_index_to_dir(startfp, storageindex):
+def si_dir(startfp, storageindex):
     sia = si_b2a(storageindex)
hunk ./src/allmydata/storage/common.py 21
-    return os.path.join(sia[:2], sia)
+    return startfp.child(sia[:2]).child(sia)
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, statefname, allowed_cpu_percentage=None):
+    def __init__(self, statefp, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.statefname = statefname
+        self.statefp = statefp
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 192
         #                            of the last bucket to be processed, or
         #                            None if we are sleeping between cycles
         try:
-            f = open(self.statefname, "rb")
-            state = pickle.load(f)
-            f.close()
+            state = pickle.loads(self.statefp.getContent())
         except EnvironmentError:
             state = {"version": 1,
                      "last-cycle-finished": None,
hunk ./src/allmydata/storage/crawler.py 228
         else:
             last_complete_prefix = self.prefixes[lcpi]
         self.state["last-complete-prefix"] = last_complete_prefix
-        tmpfile = self.statefname + ".tmp"
-        f = open(tmpfile, "wb")
-        pickle.dump(self.state, f)
-        f.close()
-        fileutil.move_into_place(tmpfile, self.statefname)
+        self.statefp.setContent(pickle.dumps(self.state))
 
     def startService(self):
         # arrange things to look like we were just sleeping, so
hunk ./src/allmydata/storage/crawler.py 440
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, statefname, num_sample_prefixes=1):
-        FSShareCrawler.__init__(self, statefname)
+    def __init__(self, statefp, num_sample_prefixes=1):
+        FSShareCrawler.__init__(self, statefp)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/server.py 11
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.common import si_b2a, si_a2b, si_dir
+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
hunk ./src/allmydata/storage/server.py 173
         # to a particular owner.
         start = time.time()
         self.count("allocate")
-        alreadygot = set()
         incoming = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
 
hunk ./src/allmydata/storage/server.py 199
             remaining_space -= self.allocated_size()
         # self.readonly_storage causes remaining_space <= 0
 
-        # fill alreadygot with all shares that we have, not just the ones
+        # Fill alreadygot with all shares that we have, not just the ones
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
hunk ./src/allmydata/storage/server.py 202
-        # file, they'll want us to hold leases for this file.
+        # file, they'll want us to hold leases for all the shares of it.
+        alreadygot = set()
         for share in self.backend.get_shares(storageindex):
hunk ./src/allmydata/storage/server.py 205
-            alreadygot.add(share.shnum)
             share.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 206
+            alreadygot.add(share.shnum)
 
hunk ./src/allmydata/storage/server.py 208
-        # fill incoming with all shares that are incoming use a set operation
-        # since there's no need to operate on individual pieces
+        # all share numbers that are incoming
         incoming = self.backend.get_incoming_shnums(storageindex)
 
         for shnum in ((sharenums - alreadygot) - incoming):
hunk ./src/allmydata/storage/server.py 282
             total_space_freed += sf.cancel_lease(cancel_secret)
 
         if found_buckets:
-            storagedir = os.path.join(self.sharedir,
-                                      storage_index_to_dir(storageindex))
-            if not os.listdir(storagedir):
-                os.rmdir(storagedir)
+            storagedir = si_dir(self.sharedir, storageindex)
+            fp_rmdir_if_empty(storagedir)
 
         if self.stats_provider:
             self.stats_provider.count('storage_server.bytes_freed',
hunk ./src/allmydata/test/test_backends.py 52
     subtree. I simulate just the parts of the filesystem that the current
     implementation of DAS backend needs. """
     def call_open(self, fname, mode):
+        assert isinstance(fname, basestring), fname
        fnamefp = FilePath(fname)
         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
hunk ./src/allmydata/test/test_backends.py 104
                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
 
     def call_stat(self, fname):
+        assert isinstance(fname, basestring), fname
         fnamefp = FilePath(fname)
         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
hunk ./src/allmydata/test/test_backends.py 217
 
         mocktime.return_value = 0
         # Inspect incoming and fail unless it's empty.
-        incomingset = self.ss.backend.get_incoming('teststorage_index')
-        self.failUnlessReallyEqual(incomingset, set())
+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, frozenset())
         
         # Populate incoming with the sharenum: 0.
hunk ./src/allmydata/test/test_backends.py 221
-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
 
         # Inspect incoming and fail unless the sharenum: 0 is listed there.
hunk ./src/allmydata/test/test_backends.py 224
-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
         
         # Attempt to create a second share writer with the same sharenum.
hunk ./src/allmydata/test/test_backends.py 227
-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
 
         # Show that no sharewriter results from a remote_allocate_buckets
         # with the same si and sharenum, until BucketWriter.remote_close()
hunk ./src/allmydata/test/test_backends.py 280
         StorageServer object. """
 
         def call_listdir(dirname):
+            precondition(isinstance(dirname, basestring), dirname)
             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
             return ['0']
 
hunk ./src/allmydata/test/test_backends.py 287
         mocklistdir.side_effect = call_listdir
 
         def call_open(fname, mode):
+            precondition(isinstance(fname, basestring), fname)
             self.failUnlessReallyEqual(fname, sharefname)
             self.failUnlessEqual(mode[0], 'r', mode)
             self.failUnless('b' in mode, mode)
hunk ./src/allmydata/test/test_backends.py 297
 
         datalen = len(share_data)
         def call_getsize(fname):
+            precondition(isinstance(fname, basestring), fname)
             self.failUnlessReallyEqual(fname, sharefname)
             return datalen
         mockgetsize.side_effect = call_getsize
hunk ./src/allmydata/test/test_backends.py 303
 
         def call_exists(fname):
+            precondition(isinstance(fname, basestring), fname)
             self.failUnlessReallyEqual(fname, sharefname)
             return True
         mockexists.side_effect = call_exists
hunk ./src/allmydata/test/test_backends.py 321
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
 
-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
-    @mock.patch('time.time')
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
+    def test_create_fs_backend(self):
         """ This tests whether a file system backend instance can be
         constructed. To pass the test, it has to use the
         filesystem in only the prescribed ways. """
hunk ./src/allmydata/test/test_backends.py 327
 
-        def call_open(fname, mode):
-            if fname == os.path.join(storedir,'bucket_counter.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.history'):
-                return StringIO()
-            else:
-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
-        mockopen.side_effect = call_open
-
-        def call_isdir(fname):
-            if fname == os.path.join(storedir,'shares'):
-                return True
-            elif fname == os.path.join(storedir,'shares', 'incoming'):
-                return True
-            else:
-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
-        mockisdir.side_effect = call_isdir
-
-        def call_mkdir(fname, mode):
-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
-            self.failUnlessEqual(0777, mode)
-            if fname == storedir:
-                return None
-            elif fname == os.path.join(storedir,'shares'):
-                return None
-            elif fname == os.path.join(storedir,'shares', 'incoming'):
-                return None
-            else:
-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
-        mockmkdir.side_effect = call_mkdir
-
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 328
-        DASCore('teststoredir', expiration_policy)
-
-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
-
+        DASCore(self.storedir, expiration_policy)
hunk ./src/allmydata/util/fileutil.py 7
 
 import errno, sys, exceptions, os, stat, tempfile, time, binascii
 
+from allmydata.util.assertutil import precondition
+
 from twisted.python import log
hunk ./src/allmydata/util/fileutil.py 10
-from twisted.python.filepath import UnlistableError
+from twisted.python.filepath import FilePath, UnlistableError
 
 from pycryptopp.cipher.aes import AES
 
hunk ./src/allmydata/util/fileutil.py 210
             raise tx
         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
 
+def fp_rmdir_if_empty(dirfp):
+    """ Remove the directory if it is empty. """
+    try:
+        os.rmdir(dirfp.path)
+    except OSError, e:
+        if e.errno != errno.ENOTEMPTY:
+            raise
+    else:
+        dirfp.changed()
+
 def rmtree(dirname):
     """
     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
hunk ./src/allmydata/util/fileutil.py 257
         raise OSError, excs
 
 def fp_remove(dirfp):
+    """
+    An idempotent version of shutil.rmtree().  If the dir is already gone,
+    do nothing and return without raising an exception.  If this call
+    removes the dir, return without raising an exception.  If there is an
+    error that prevents removal or if the directory gets created again by
+    someone else after this deletes it and before this checks that it is
+    gone, raise an exception.
+    """
     try:
         dirfp.remove()
     except UnlistableError, e:
hunk ./src/allmydata/util/fileutil.py 270
         if e.originalException.errno != errno.ENOENT:
             raise
+    except OSError, e:
+        if e.errno != errno.ENOENT:
+            raise
 
 def rm_dir(dirname):
     # Renamed to be like shutil.rmtree and unlike rmdir.
hunk ./src/allmydata/util/fileutil.py 387
         import traceback
         traceback.print_exc()
 
-def get_disk_stats(whichdir, reserved_space=0):
+def get_disk_stats(whichdirfp, reserved_space=0):
     """Return disk statistics for the storage disk, in the form of a dict
     with the following fields.
       total:            total bytes on disk
hunk ./src/allmydata/util/fileutil.py 408
     you can pass how many bytes you would like to leave unused on this
     filesystem as reserved_space.
     """
+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
 
     if have_GetDiskFreeSpaceExW:
         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
hunk ./src/allmydata/util/fileutil.py 419
         n_free_for_nonroot = c_ulonglong(0)
         n_total            = c_ulonglong(0)
         n_free_for_root    = c_ulonglong(0)
-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
                                                byref(n_total),
                                                byref(n_free_for_root))
         if retval == 0:
hunk ./src/allmydata/util/fileutil.py 424
             raise OSError("Windows error %d attempting to get disk statistics for %r"
                           % (GetLastError(), whichdir))
5939+                          % (GetLastError(), whichdirfp.path))
5940         free_for_nonroot = n_free_for_nonroot.value
5941         total            = n_total.value
5942         free_for_root    = n_free_for_root.value
5943hunk ./src/allmydata/util/fileutil.py 433
5944         # <http://docs.python.org/library/os.html#os.statvfs>
5945         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5946         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5947-        s = os.statvfs(whichdir)
5948+        s = os.statvfs(whichdirfp.path)
5949 
5950         # on my mac laptop:
5951         #  statvfs(2) is a wrapper around statfs(2).
5952hunk ./src/allmydata/util/fileutil.py 460
5953              'avail': avail,
5954            }
5955 
5956-def get_available_space(whichdir, reserved_space):
5957+def get_available_space(whichdirfp, reserved_space):
5958     """Returns available space for share storage in bytes, or None if no
5959     API to get this information is available.
5960 
5961hunk ./src/allmydata/util/fileutil.py 472
5962     you can pass how many bytes you would like to leave unused on this
5963     filesystem as reserved_space.
5964     """
5965+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5966     try:
5967hunk ./src/allmydata/util/fileutil.py 474
5968-        return get_disk_stats(whichdir, reserved_space)['avail']
5969+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5970     except AttributeError:
5971         return None
5972hunk ./src/allmydata/util/fileutil.py 477
5973-    except EnvironmentError:
5974-        log.msg("OS call to get disk statistics failed")
5975-        return 0
5976}
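The fileutil hunks above replace path-string helpers with Twisted FilePath-based ones; `fp_rmdir_if_empty` removes a share directory only when it is empty, swallowing ENOTEMPTY. A minimal plain-`os` sketch of that behavior (assumption: this stand-in takes a path string and omits the `dirfp.changed()` cache invalidation the FilePath version needs):

```python
import errno, os, tempfile

def fp_rmdir_if_empty(path):
    # Remove the directory iff it is empty; a non-empty directory is
    # left alone, any other failure propagates.
    try:
        os.rmdir(path)
    except OSError as e:
        if e.errno != errno.ENOTEMPTY:
            raise

d = tempfile.mkdtemp()
fp_rmdir_if_empty(d)          # empty, so it is removed
assert not os.path.exists(d)
```

The FilePath version additionally calls `dirfp.changed()` on success so Twisted's cached stat data for the removed directory is discarded.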
5977[jacp16 or so
5978wilcoxjg@gmail.com**20110722070036
5979 Ignore-this: 7548785cad146056eede9a16b93b569f
5980] {
5981hunk ./src/allmydata/_auto_deps.py 19
5982 
5983     "zope.interface",
5984 
5985-    "Twisted >= 2.4.0",
5986+    "Twisted >= 11.0",
5987 
5988     # foolscap < 0.5.1 had a performance bug which spent
5989     # O(N**2) CPU for transferring large mutable files
5990hunk ./src/allmydata/storage/backends/das/core.py 2
5991 import os, re, weakref, struct, time, stat
5992+from twisted.application import service
5993+from twisted.python.filepath import UnlistableError
5994+from twisted.python.filepath import FilePath
5995+from zope.interface import implements
5996 
5997hunk ./src/allmydata/storage/backends/das/core.py 7
5998+import allmydata # for __full_version__
5999 from allmydata.interfaces import IStorageBackend
6000 from allmydata.storage.backends.base import Backend
6001hunk ./src/allmydata/storage/backends/das/core.py 10
6002-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6003+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6004 from allmydata.util.assertutil import precondition
6005hunk ./src/allmydata/storage/backends/das/core.py 12
6006-
6007-#from foolscap.api import Referenceable
6008-from twisted.application import service
6009-from twisted.python.filepath import UnlistableError
6010-
6011-from zope.interface import implements
6012 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6013 from allmydata.util import fileutil, idlib, log, time_format
6014hunk ./src/allmydata/storage/backends/das/core.py 14
6015-import allmydata # for __full_version__
6016-
6017-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6018-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6019 from allmydata.storage.lease import LeaseInfo
6020 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6021      create_mutable_sharefile
6022hunk ./src/allmydata/storage/backends/das/core.py 21
6023 from allmydata.storage.crawler import FSBucketCountingCrawler
6024 from allmydata.util.hashutil import constant_time_compare
6025 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6026-
6027-from zope.interface import implements
6028+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6029 
6030 # storage/
6031 # storage/shares/incoming
6032hunk ./src/allmydata/storage/backends/das/core.py 49
6033         self._setup_lease_checkerf(expiration_policy)
6034 
6035     def _setup_storage(self, storedir, readonly, reserved_space):
6036+        precondition(isinstance(storedir, FilePath)) 
6037         self.storedir = storedir
6038         self.readonly = readonly
6039         self.reserved_space = int(reserved_space)
6040hunk ./src/allmydata/storage/backends/das/core.py 83
6041 
6042     def get_incoming_shnums(self, storageindex):
6043         """ Return a frozenset of the shnum (as ints) of incoming shares. """
6044-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
6045+        incomingdir = si_si2dir(self.incomingdir, storageindex)
6046         try:
6047             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
6048             shnums = [ int(fp.basename) for fp in childfps ]
6049hunk ./src/allmydata/storage/backends/das/core.py 96
6050         """ Generate ImmutableShare objects for shares we have for this
6051         storageindex. ("Shares we have" means completed ones, excluding
6052         incoming ones.)"""
6053-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
6054+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
6055         try:
6056             for fp in finalstoragedir.children():
6057                 if is_num(fp):
6058hunk ./src/allmydata/storage/backends/das/core.py 111
6059         return fileutil.get_available_space(self.storedir, self.reserved_space)
6060 
6061     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6062-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
6063-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6064+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
6065+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6066         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
6067         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
6068         return bw
6069hunk ./src/allmydata/storage/backends/null/core.py 18
6070         return None
6071 
6072     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6073-       
6074-        immutableshare = ImmutableShare()
6075+        immutableshare = ImmutableShare()
6076         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
6077 
6078     def set_storage_server(self, ss):
6079hunk ./src/allmydata/storage/backends/null/core.py 24
6080         self.ss = ss
6081 
6082-    def get_incoming(self, storageindex):
6083-        return set()
6084+    def get_incoming_shnums(self, storageindex):
6085+        return frozenset()
6086 
6087 class ImmutableShare:
6088     sharetype = "immutable"
6089hunk ./src/allmydata/storage/common.py 19
6090 def si_a2b(ascii_storageindex):
6091     return base32.a2b(ascii_storageindex)
6092 
6093-def si_dir(startfp, storageindex):
6094+def si_si2dir(startfp, storageindex):
6095     sia = si_b2a(storageindex)
6096     return startfp.child(sia[:2]).child(sia)
6097hunk ./src/allmydata/storage/immutable.py 20
6098     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
6099         self.ss = ss
6100         self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
6101-
6102         self._canary = canary
6103         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
6104         self.closed = False
6105hunk ./src/allmydata/storage/lease.py 17
6106 
6107     def get_expiration_time(self):
6108         return self.expiration_time
6109+
6110     def get_grant_renew_time_time(self):
6111         # hack, based upon fixed 31day expiration period
6112         return self.expiration_time - 31*24*60*60
6113hunk ./src/allmydata/storage/lease.py 21
6114+
6115     def get_age(self):
6116         return time.time() - self.get_grant_renew_time_time()
6117 
6118hunk ./src/allmydata/storage/lease.py 32
6119          self.expiration_time) = struct.unpack(">L32s32sL", data)
6120         self.nodeid = None
6121         return self
6122+
6123     def to_immutable_data(self):
6124         return struct.pack(">L32s32sL",
6125                            self.owner_num,
6126hunk ./src/allmydata/storage/lease.py 45
6127                            int(self.expiration_time),
6128                            self.renew_secret, self.cancel_secret,
6129                            self.nodeid)
6130+
6131     def from_mutable_data(self, data):
6132         (self.owner_num,
6133          self.expiration_time,
6134hunk ./src/allmydata/storage/server.py 11
6135 from allmydata.util import fileutil, idlib, log, time_format
6136 import allmydata # for __full_version__
6137 
6138-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6139-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6140+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6141+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6142 from allmydata.storage.lease import LeaseInfo
6143 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6144      create_mutable_sharefile
6145hunk ./src/allmydata/storage/server.py 88
6146             else:
6147                 stats["mean"] = None
6148 
6149-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
6150-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
6151-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
6152+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
6153+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
6154+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
6155                              (0.999, "99_9_percentile", 1000)]
6156 
6157             for percentile, percentilestring, minnumtoobserve in orderstatlist:
6158hunk ./src/allmydata/storage/server.py 231
6159             header = f.read(32)
6160             f.close()
6161             if header[:32] == MutableShareFile.MAGIC:
6162+                # XXX  Can I exploit this code?
6163                 sf = MutableShareFile(filename, self)
6164                 # note: if the share has been migrated, the renew_lease()
6165                 # call will throw an exception, with information to help the
6166hunk ./src/allmydata/storage/server.py 237
6167                 # client update the lease.
6168             elif header[:4] == struct.pack(">L", 1):
6169+                # Check if version number is "1".
6170+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
6171                 sf = ShareFile(filename)
6172             else:
6173                 continue # non-sharefile
6174hunk ./src/allmydata/storage/server.py 285
6175             total_space_freed += sf.cancel_lease(cancel_secret)
6176 
6177         if found_buckets:
6178-            storagedir = si_dir(self.sharedir, storageindex)
6179+            # XXX  Yikes looks like code that shouldn't be in the server!
6180+            storagedir = si_si2dir(self.sharedir, storageindex)
6181             fp_rmdir_if_empty(storagedir)
6182 
6183         if self.stats_provider:
6184hunk ./src/allmydata/storage/server.py 301
6185             self.stats_provider.count('storage_server.bytes_added', consumed_size)
6186         del self._active_writers[bw]
6187 
6188-
6189     def remote_get_buckets(self, storageindex):
6190         start = time.time()
6191         self.count("get")
6192hunk ./src/allmydata/storage/server.py 329
6193         except StopIteration:
6194             return iter([])
6195 
6196+    #  XXX  As far as Zancas' grockery has gotten.
6197     def remote_slot_testv_and_readv_and_writev(self, storageindex,
6198                                                secrets,
6199                                                test_and_write_vectors,
6200hunk ./src/allmydata/storage/server.py 338
6201         self.count("writev")
6202         si_s = si_b2a(storageindex)
6203         log.msg("storage: slot_writev %s" % si_s)
6204-        si_dir = storage_index_to_dir(storageindex)
6205+       
6206         (write_enabler, renew_secret, cancel_secret) = secrets
6207         # shares exist if there is a file for them
6208hunk ./src/allmydata/storage/server.py 341
6209-        bucketdir = os.path.join(self.sharedir, si_dir)
6210+        bucketdir = si_si2dir(self.sharedir, storageindex)
6211         shares = {}
6212         if os.path.isdir(bucketdir):
6213             for sharenum_s in os.listdir(bucketdir):
6214hunk ./src/allmydata/storage/server.py 430
6215         si_s = si_b2a(storageindex)
6216         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
6217                      facility="tahoe.storage", level=log.OPERATIONAL)
6218-        si_dir = storage_index_to_dir(storageindex)
6219         # shares exist if there is a file for them
6220hunk ./src/allmydata/storage/server.py 431
6221-        bucketdir = os.path.join(self.sharedir, si_dir)
6222+        bucketdir = si_si2dir(self.sharedir, storageindex)
6223         if not os.path.isdir(bucketdir):
6224             self.add_latency("readv", time.time() - start)
6225             return {}
6226hunk ./src/allmydata/test/test_backends.py 2
6227 from twisted.trial import unittest
6228-
6229 from twisted.python.filepath import FilePath
6230hunk ./src/allmydata/test/test_backends.py 3
6231-
6232 from allmydata.util.log import msg
6233hunk ./src/allmydata/test/test_backends.py 4
6234-
6235 from StringIO import StringIO
6236hunk ./src/allmydata/test/test_backends.py 5
6237-
6238 from allmydata.test.common_util import ReallyEqualMixin
6239 from allmydata.util.assertutil import _assert
6240hunk ./src/allmydata/test/test_backends.py 7
6241-
6242 import mock
6243 
6244 # This is the code that we're going to be testing.
6245hunk ./src/allmydata/test/test_backends.py 11
6246 from allmydata.storage.server import StorageServer
6247-
6248 from allmydata.storage.backends.das.core import DASCore
6249 from allmydata.storage.backends.null.core import NullCore
6250 
6251hunk ./src/allmydata/test/test_backends.py 14
6252-
6253-# The following share file contents was generated with
6254+# The following share file content was generated with
6255 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
6256hunk ./src/allmydata/test/test_backends.py 16
6257-# with share data == 'a'.
6258+# with share data == 'a'. The total size of this input
6259+# is 85 bytes.
6260 shareversionnumber = '\x00\x00\x00\x01'
6261 sharedatalength = '\x00\x00\x00\x01'
6262 numberofleases = '\x00\x00\x00\x01'
6263hunk ./src/allmydata/test/test_backends.py 21
6264-
6265 shareinputdata = 'a'
6266 ownernumber = '\x00\x00\x00\x00'
6267 renewsecret  = 'x'*32
6268hunk ./src/allmydata/test/test_backends.py 31
6269 client_data = shareinputdata + ownernumber + renewsecret + \
6270     cancelsecret + expirationtime + nextlease
6271 share_data = containerdata + client_data
6272-
6273-
6274 testnodeid = 'testnodeidxxxxxxxxxx'
6275 
6276 class MockStat:
6277hunk ./src/allmydata/test/test_backends.py 105
6278         mstat.st_mode = 16893 # a directory
6279         return mstat
6280 
6281+    def call_get_available_space(self, storedir, reservedspace):
6282+        # The input vector has an input size of 85.
6283+        return 85 - reservedspace
6284+
6285+    def call_exists(self):
6286+        # I'm only called in the ImmutableShareFile constructor.
6287+        return False
6288+
6289     def setUp(self):
6290         msg( "%s.setUp()" % (self,))
6291         self.storedir = FilePath('teststoredir')
6292hunk ./src/allmydata/test/test_backends.py 147
6293         mockfpstat = self.mockfpstatp.__enter__()
6294         mockfpstat.side_effect = self.call_stat
6295 
6296+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6297+        mockget_available_space = self.mockget_available_space.__enter__()
6298+        mockget_available_space.side_effect = self.call_get_available_space
6299+
6300+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6301+        mockfpexists = self.mockfpexists.__enter__()
6302+        mockfpexists.side_effect = self.call_exists
6303+
6304     def tearDown(self):
6305         msg( "%s.tearDown()" % (self,))
6306hunk ./src/allmydata/test/test_backends.py 157
6307+        self.mockfpexists.__exit__()
6308+        self.mockget_available_space.__exit__()
6309         self.mockfpstatp.__exit__()
6310         self.mockstatp.__exit__()
6311         self.mockopenp.__exit__()
6312hunk ./src/allmydata/test/test_backends.py 166
6313         self.mockmkdirp.__exit__()
6314         self.mocklistdirp.__exit__()
6315 
6316+
6317 expiration_policy = {'enabled' : False,
6318                      'mode' : 'age',
6319                      'override_lease_duration' : None,
6320hunk ./src/allmydata/test/test_backends.py 182
6321         self.ss = StorageServer(testnodeid, backend=NullCore())
6322 
6323     @mock.patch('os.mkdir')
6324-
6325     @mock.patch('__builtin__.open')
6326     @mock.patch('os.listdir')
6327     @mock.patch('os.path.isdir')
6328hunk ./src/allmydata/test/test_backends.py 201
6329         filesystem backend. To pass the test, it mustn't use the filesystem
6330         outside of its configured storedir. """
6331 
6332-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
6333+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
6334 
6335 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
6336     """ This tests both the StorageServer and the DAS backend together. """
6337hunk ./src/allmydata/test/test_backends.py 205
6338+   
6339     def setUp(self):
6340         MockFiles.setUp(self)
6341         try:
6342hunk ./src/allmydata/test/test_backends.py 211
6343             self.backend = DASCore(self.storedir, expiration_policy)
6344             self.ss = StorageServer(testnodeid, self.backend)
6345-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6346-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
6347+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6348+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
6349         except:
6350             MockFiles.tearDown(self)
6351             raise
6352hunk ./src/allmydata/test/test_backends.py 233
6353         # Populate incoming with the sharenum: 0.
6354         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6355 
6356-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
6357-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6358+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
6359+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6360         
6361         # Attempt to create a second share writer with the same sharenum.
6362         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6363hunk ./src/allmydata/test/test_backends.py 257
6364 
6365         # Postclose: (Omnibus) failUnless written data is in final.
6366         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
6367-        contents = sharesinfinal[0].read_share_data(0,73)
6368+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
6369+        contents = sharesinfinal[0].read_share_data(0, 73)
6370         self.failUnlessReallyEqual(contents, client_data)
6371 
6372         # Exercise the case that the share we're asking to allocate is
6373hunk ./src/allmydata/test/test_backends.py 276
6374         mockget_available_space.side_effect = call_get_available_space
6375         
6376         
6377-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6378+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6379 
6380     @mock.patch('os.path.exists')
6381     @mock.patch('os.path.getsize')
6382}
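The `si_dir` to `si_si2dir` rename above keeps the same two-level sharding scheme: shares for a storage index live under a directory named by the first two base32 characters of the index. A sketch of that layout using path strings and the stdlib (assumptions: `base64.b32encode` lowercased without padding approximates `allmydata.util.base32.b2a`, and the real helper operates on FilePath objects via `.child()` rather than `os.path.join`):

```python
import base64, os.path

def si_b2a(storageindex):
    # Approximate allmydata.util.base32.b2a: lowercase RFC 3548 base32,
    # no '=' padding (Tahoe uses its own base32 helper).
    return base64.b32encode(storageindex).decode('ascii').lower().rstrip('=')

def si_si2dir(startdir, storageindex):
    # Shard share directories by the first two base32 characters of the
    # storage index, as in storage/common.py above.
    sia = si_b2a(storageindex)
    return os.path.join(startdir, sia[:2], sia)

p = si_si2dir('storage/shares', b'\x00' * 16)
```

Sharding by a two-character prefix keeps any single directory from accumulating all of the server's share directories, which matters on filesystems where huge directories are slow to list.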
6383[jacp17
6384wilcoxjg@gmail.com**20110722203244
6385 Ignore-this: e79a5924fb2eb786ee4e9737a8228f87
6386] {
6387hunk ./src/allmydata/storage/backends/das/core.py 14
6388 from allmydata.util.assertutil import precondition
6389 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6390 from allmydata.util import fileutil, idlib, log, time_format
6391+from allmydata.util.fileutil import fp_make_dirs
6392 from allmydata.storage.lease import LeaseInfo
6393 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6394      create_mutable_sharefile
6395hunk ./src/allmydata/storage/backends/das/core.py 19
6396 from allmydata.storage.immutable import BucketWriter, BucketReader
6397-from allmydata.storage.crawler import FSBucketCountingCrawler
6398+from allmydata.storage.crawler import BucketCountingCrawler
6399 from allmydata.util.hashutil import constant_time_compare
6400hunk ./src/allmydata/storage/backends/das/core.py 21
6401-from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6402+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
6403 _pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6404 
6405 # storage/
6406hunk ./src/allmydata/storage/backends/das/core.py 43
6407     implements(IStorageBackend)
6408     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
6409         Backend.__init__(self)
6410-
6411         self._setup_storage(storedir, readonly, reserved_space)
6412         self._setup_corruption_advisory()
6413         self._setup_bucket_counter()
6414hunk ./src/allmydata/storage/backends/das/core.py 72
6415 
6416     def _setup_bucket_counter(self):
6417         statefname = self.storedir.child("bucket_counter.state")
6418-        self.bucket_counter = FSBucketCountingCrawler(statefname)
6419+        self.bucket_counter = BucketCountingCrawler(statefname)
6420         self.bucket_counter.setServiceParent(self)
6421 
6422     def _setup_lease_checkerf(self, expiration_policy):
6423hunk ./src/allmydata/storage/backends/das/core.py 78
6424         statefile = self.storedir.child("lease_checker.state")
6425         historyfile = self.storedir.child("lease_checker.history")
6426-        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6427+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6428         self.lease_checker.setServiceParent(self)
6429 
6430     def get_incoming_shnums(self, storageindex):
6431hunk ./src/allmydata/storage/backends/das/core.py 168
6432             # it. Also construct the metadata.
6433             assert not finalhome.exists()
6434             fp_make_dirs(self.incominghome)
6435-            f = open(self.incominghome, 'wb')
6436+            f = self.incominghome.child(str(self.shnum))
6437             # The second field -- the four-byte share data length -- is no
6438             # longer used as of Tahoe v1.3.0, but we continue to write it in
6439             # there in case someone downgrades a storage server from >=
6440hunk ./src/allmydata/storage/backends/das/core.py 178
6441             # the largest length that can fit into the field. That way, even
6442             # if this does happen, the old < v1.3.0 server will still allow
6443             # clients to read the first part of the share.
6444-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6445-            f.close()
6446+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6447+            #f.close()
6448             self._lease_offset = max_size + 0x0c
6449             self._num_leases = 0
6450         else:
6451hunk ./src/allmydata/storage/backends/das/core.py 261
6452         f.write(data)
6453         f.close()
6454 
6455-    def _write_lease_record(self, f, lease_number, lease_info):
6456+    def _write_lease_record(self, lease_number, lease_info):
6457         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6458         f.seek(offset)
6459         assert f.tell() == offset
6460hunk ./src/allmydata/storage/backends/das/core.py 290
6461                 yield LeaseInfo().from_immutable_data(data)
6462 
6463     def add_lease(self, lease_info):
6464-        f = open(self.incominghome, 'rb+')
6465+        f = self.incominghome.open('r+')  # FilePath.open() appends 'b' itself
6466         num_leases = self._read_num_leases(f)
6467         self._write_lease_record(f, num_leases, lease_info)
6468         self._write_num_leases(f, num_leases+1)
6469hunk ./src/allmydata/storage/backends/das/expirer.py 1
6470-import time, os, pickle, struct
6471-from allmydata.storage.crawler import FSShareCrawler
6472+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
6473+from allmydata.storage.crawler import ShareCrawler
6474 from allmydata.storage.common import UnknownMutableContainerVersionError, \
6475      UnknownImmutableContainerVersionError
6476 from twisted.python import log as twlog
6477hunk ./src/allmydata/storage/backends/das/expirer.py 7
6478 
6479-class FSLeaseCheckingCrawler(FSShareCrawler):
6480+class LeaseCheckingCrawler(ShareCrawler):
6481     """I examine the leases on all shares, determining which are still valid
6482     and which have expired. I can remove the expired leases (if so
6483     configured), and the share will be deleted when the last lease is
6484hunk ./src/allmydata/storage/backends/das/expirer.py 66
6485         else:
6486             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
6487         self.sharetypes_to_expire = expiration_policy['sharetypes']
6488-        FSShareCrawler.__init__(self, statefile)
6489+        ShareCrawler.__init__(self, statefile)
6490 
6491     def add_initial_state(self):
6492         # we fill ["cycle-to-date"] here (even though they will be reset in
6493hunk ./src/allmydata/storage/crawler.py 1
6494-
6495 import os, time, struct
6496 import cPickle as pickle
6497 from twisted.internet import reactor
6498hunk ./src/allmydata/storage/crawler.py 11
6499 class TimeSliceExceeded(Exception):
6500     pass
6501 
6502-class FSShareCrawler(service.MultiService):
6503-    """A subcless of ShareCrawler is attached to a StorageServer, and
6504+class ShareCrawler(service.MultiService):
6505+    """A subclass of ShareCrawler is attached to a StorageServer, and
6506     periodically walks all of its shares, processing each one in some
6507     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
6508     since large servers can easily have a terabyte of shares, in several
6509hunk ./src/allmydata/storage/crawler.py 426
6510         pass
6511 
6512 
6513-class FSBucketCountingCrawler(FSShareCrawler):
6514+class BucketCountingCrawler(ShareCrawler):
6515     """I keep track of how many buckets are being managed by this server.
6516     This is equivalent to the number of distributed files and directories for
6517     which I am providing storage. The actual number of files+directories in
6518hunk ./src/allmydata/storage/crawler.py 440
6519     minimum_cycle_time = 60*60 # we don't need this more than once an hour
6520 
6521     def __init__(self, statefp, num_sample_prefixes=1):
6522-        FSShareCrawler.__init__(self, statefp)
6523+        ShareCrawler.__init__(self, statefp)
6524         self.num_sample_prefixes = num_sample_prefixes
6525 
6526     def add_initial_state(self):
6527hunk ./src/allmydata/test/test_backends.py 113
6528         # I'm only called in the ImmutableShareFile constructor.
6529         return False
6530 
6531+    def call_setContent(self, inputstring):
6532+        # XXX Good enough for expirer, not sure about elsewhere...
6533+        return True
6534+
6535     def setUp(self):
6536         msg( "%s.setUp()" % (self,))
6537         self.storedir = FilePath('teststoredir')
6538hunk ./src/allmydata/test/test_backends.py 159
6539         mockfpexists = self.mockfpexists.__enter__()
6540         mockfpexists.side_effect = self.call_exists
6541 
6542+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6543+        mocksetContent = self.mocksetContent.__enter__()
6544+        mocksetContent.side_effect = self.call_setContent
6545+
6546     def tearDown(self):
6547         msg( "%s.tearDown()" % (self,))
6548hunk ./src/allmydata/test/test_backends.py 165
6549+        self.mocksetContent.__exit__()
6550         self.mockfpexists.__exit__()
6551         self.mockget_available_space.__exit__()
6552         self.mockfpstatp.__exit__()
6553}
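The `MockFiles.setUp`/`tearDown` pattern in test_backends.py above drives each patched filesystem call through a `side_effect` method, entering the patches manually so they span the whole test. A condensed sketch of that pattern (assumptions: `unittest.mock` stands in for the standalone `mock` package the tests import, and `os.listdir` stands in for the several calls the real fixture patches):

```python
import os
from unittest import mock  # the tests above use the py2-era standalone `mock`

class FakeFS(object):
    def call_listdir(self, fname):
        # side_effect hook, as in MockFiles.setUp: decide what the code
        # under test sees instead of touching the real filesystem.
        return []

fs = FakeFS()
patcher = mock.patch('os.listdir')
mocklistdir = patcher.__enter__()            # setUp
mocklistdir.side_effect = fs.call_listdir
result = os.listdir('teststoredir')          # intercepted; no real disk I/O
patcher.__exit__(None, None, None)           # tearDown
```

Entering the patch in `setUp` and exiting in `tearDown` (rather than using decorators per test method) is what lets constructors like `DASCore(self.storedir, expiration_policy)` run under the mocks in every test of the class.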
6554
6555Context:
6556
6557[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
6558david-sarah@jacaranda.org**20110718005949
6559 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
6560]
6561[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
6562david-sarah@jacaranda.org**20110717194315
6563 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
6564]
6565[README.txt: say that quickstart.rst is in the docs directory.
6566david-sarah@jacaranda.org**20110717192400
6567 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
6568]
6569[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
6570zooko@zooko.com**20110717114226
6571 Ignore-this: df222120d41447ce4102616921626c82
6572 fixes #1383
6573]
6574[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
6575david-sarah@jacaranda.org**20110716181813
6576 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
6577]
6578[docs: add missing link in NEWS.rst
6579zooko@zooko.com**20110712153307
6580 Ignore-this: be7b7eb81c03700b739daa1027d72b35
6581]
6582[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
6583zooko@zooko.com**20110712153229
6584 Ignore-this: 723c4f9e2211027c79d711715d972c5
6585 Also remove a couple of vestigial references to figleaf, which is long gone.
6586 fixes #1409 (remove contrib/fuse)
6587]
6588[add Protovis.js-based download-status timeline visualization
6589Brian Warner <warner@lothar.com>**20110629222606
6590 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
6591 
6592 provide status overlap info on the webapi t=json output, add decode/decrypt
6593 rate tooltips, add zoomin/zoomout buttons
6594]
6595[add more download-status data, fix tests
6596Brian Warner <warner@lothar.com>**20110629222555
6597 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
6598]
6599[prepare for viz: improve DownloadStatus events
6600Brian Warner <warner@lothar.com>**20110629222542
6601 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
6602 
6603 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
6604]
6605[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
6606zooko@zooko.com**20110629185711
6607 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
6608]
6609[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
6610david-sarah@jacaranda.org**20110130235809
6611 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
6612]
6613[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
6614david-sarah@jacaranda.org**20110626054124
6615 Ignore-this: abb864427a1b91bd10d5132b4589fd90
6616]
6617[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
6618david-sarah@jacaranda.org**20110623205528
6619 Ignore-this: c63e23146c39195de52fb17c7c49b2da
6620]
6621[Rename test_package_initialization.py to (much shorter) test_import.py .
6622Brian Warner <warner@lothar.com>**20110611190234
6623 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
6624 
6625 The former name was making my 'ls' listings hard to read, by forcing them
6626 down to just two columns.
6627]
6628[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
6629zooko@zooko.com**20110611163741
6630 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
6631 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
6632 fixes #1412
6633]
6634[wui: right-align the size column in the WUI
6635zooko@zooko.com**20110611153758
6636 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
6637 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
6638 fixes #1412
6639]
6640[docs: three minor fixes
6641zooko@zooko.com**20110610121656
6642 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
6643 CREDITS for arc for stats tweak
6644 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
6645 English usage tweak
6646]
6647[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
6648david-sarah@jacaranda.org**20110609223719
6649 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
6650]
6651[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
6652wilcoxjg@gmail.com**20110527120135
6653 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
6654 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
6655 NEWS.rst, stats.py: documentation of change to get_latencies
6656 stats.rst: now documents percentile modification in get_latencies
6657 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
6658 fixes #1392
6659]
6660[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
6661david-sarah@jacaranda.org**20110517011214
6662 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
6663]
6664[docs: convert NEWS to NEWS.rst and change all references to it.
6665david-sarah@jacaranda.org**20110517010255
6666 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
6667]
6668[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
6669david-sarah@jacaranda.org**20110512140559
6670 Ignore-this: 784548fc5367fac5450df1c46890876d
6671]
6672[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
6673david-sarah@jacaranda.org**20110130164923
6674 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
6675]
6676[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
6677zooko@zooko.com**20110128142006
6678 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
6679 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
6680]
6681[M-x whitespace-cleanup
6682zooko@zooko.com**20110510193653
6683 Ignore-this: dea02f831298c0f65ad096960e7df5c7
6684]
6685[docs: fix typo in running.rst, thanks to arch_o_median
6686zooko@zooko.com**20110510193633
6687 Ignore-this: ca06de166a46abbc61140513918e79e8
6688]
6689[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
6690david-sarah@jacaranda.org**20110204204902
6691 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
6692]
6693[relnotes.txt: forseeable -> foreseeable. refs #1342
6694david-sarah@jacaranda.org**20110204204116
6695 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
6696]
6697[replace remaining .html docs with .rst docs
6698zooko@zooko.com**20110510191650
6699 Ignore-this: d557d960a986d4ac8216d1677d236399
6700 Remove install.html (long since deprecated).
6701 Also replace some obsolete references to install.html with references to quickstart.rst.
6702 Fix some broken internal references within docs/historical/historical_known_issues.txt.
6703 Thanks to Ravi Pinjala and Patrick McDonald.
6704 refs #1227
6705]
6706[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
6707zooko@zooko.com**20110428055232
6708 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
6709]
6710[munin tahoe_files plugin: fix incorrect file count
6711francois@ctrlaltdel.ch**20110428055312
6712 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
6713 fixes #1391
6714]
6715[corrected "k must never be smaller than N" to "k must never be greater than N"
6716secorp@allmydata.org**20110425010308
6717 Ignore-this: 233129505d6c70860087f22541805eac
6718]
6719[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
6720david-sarah@jacaranda.org**20110411190738
6721 Ignore-this: 7847d26bc117c328c679f08a7baee519
6722]
6723[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
6724david-sarah@jacaranda.org**20110410155844
6725 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
6726]
6727[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
6728david-sarah@jacaranda.org**20110410155705
6729 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
6730]
6731[remove unused variable detected by pyflakes
6732zooko@zooko.com**20110407172231
6733 Ignore-this: 7344652d5e0720af822070d91f03daf9
6734]
6735[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
6736david-sarah@jacaranda.org**20110401202750
6737 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
6738]
6739[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
6740Brian Warner <warner@lothar.com>**20110325232511
6741 Ignore-this: d5307faa6900f143193bfbe14e0f01a
6742]
6743[control.py: remove all uses of s.get_serverid()
6744warner@lothar.com**20110227011203
6745 Ignore-this: f80a787953bd7fa3d40e828bde00e855
6746]
6747[web: remove some uses of s.get_serverid(), not all
6748warner@lothar.com**20110227011159
6749 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
6750]
6751[immutable/downloader/fetcher.py: remove all get_serverid() calls
6752warner@lothar.com**20110227011156
6753 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
6754]
6755[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
6756warner@lothar.com**20110227011153
6757 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
6758 
6759 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
6760 _shares_from_server dict was being popped incorrectly (using shnum as the
6761 index instead of serverid). I'm still thinking through the consequences of
6762 this bug. It was probably benign and really hard to detect. I think it would
6763 cause us to incorrectly believe that we're pulling too many shares from a
6764 server, and thus prefer a different server rather than asking for a second
6765 share from the first server. The diversity code is intended to spread out the
6766 number of shares simultaneously being requested from each server, but with
6767 this bug, it might be spreading out the total number of shares requested at
6768 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
6769 segment, so the effect doesn't last very long).
6770]
6771[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
6772warner@lothar.com**20110227011150
6773 Ignore-this: d8d56dd8e7b280792b40105e13664554
6774 
6775 test_download.py: create+check MyShare instances better, make sure they share
6776 Server objects, now that finder.py cares
6777]
6778[immutable/downloader/finder.py: reduce use of get_serverid(), one left
6779warner@lothar.com**20110227011146
6780 Ignore-this: 5785be173b491ae8a78faf5142892020
6781]
6782[immutable/offloaded.py: reduce use of get_serverid() a bit more
6783warner@lothar.com**20110227011142
6784 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
6785]
6786[immutable/upload.py: reduce use of get_serverid()
6787warner@lothar.com**20110227011138
6788 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
6789]
6790[immutable/checker.py: remove some uses of s.get_serverid(), not all
6791warner@lothar.com**20110227011134
6792 Ignore-this: e480a37efa9e94e8016d826c492f626e
6793]
6794[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
6795warner@lothar.com**20110227011132
6796 Ignore-this: 6078279ddf42b179996a4b53bee8c421
6797 MockIServer stubs
6798]
6799[upload.py: rearrange _make_trackers a bit, no behavior changes
6800warner@lothar.com**20110227011128
6801 Ignore-this: 296d4819e2af452b107177aef6ebb40f
6802]
6803[happinessutil.py: finally rename merge_peers to merge_servers
6804warner@lothar.com**20110227011124
6805 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
6806]
6807[test_upload.py: factor out FakeServerTracker
6808warner@lothar.com**20110227011120
6809 Ignore-this: 6c182cba90e908221099472cc159325b
6810]
6811[test_upload.py: server-vs-tracker cleanup
6812warner@lothar.com**20110227011115
6813 Ignore-this: 2915133be1a3ba456e8603885437e03
6814]
6815[happinessutil.py: server-vs-tracker cleanup
6816warner@lothar.com**20110227011111
6817 Ignore-this: b856c84033562d7d718cae7cb01085a9
6818]
6819[upload.py: more tracker-vs-server cleanup
6820warner@lothar.com**20110227011107
6821 Ignore-this: bb75ed2afef55e47c085b35def2de315
6822]
6823[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
6824warner@lothar.com**20110227011103
6825 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
6826]
6827[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
6828warner@lothar.com**20110227011100
6829 Ignore-this: 7ea858755cbe5896ac212a925840fe68
6830 
6831 No behavioral changes, just updating variable/method names and log messages.
6832 The effects outside these three files should be minimal: some exception
6833 messages changed (to say "server" instead of "peer"), and some internal class
6834 names were changed. A few things still use "peer" to minimize external
6835 changes, like UploadResults.timings["peer_selection"] and
6836 happinessutil.merge_peers, which can be changed later.
6837]
6838[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
6839warner@lothar.com**20110227011056
6840 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
6841]
6842[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
6843warner@lothar.com**20110227011051
6844 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
6845]
6846[test: increase timeout on a network test because Francois's ARM machine hit that timeout
6847zooko@zooko.com**20110317165909
6848 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
6849 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
6850]
6851[docs/configuration.rst: add a "Frontend Configuration" section
6852Brian Warner <warner@lothar.com>**20110222014323
6853 Ignore-this: 657018aa501fe4f0efef9851628444ca
6854 
6855 this points to docs/frontends/*.rst, which were previously underlinked
6856]
6857[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
6858"Brian Warner <warner@lothar.com>"**20110221061544
6859 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
6860]
6861[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
6862david-sarah@jacaranda.org**20110221015817
6863 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
6864]
6865[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
6866david-sarah@jacaranda.org**20110221020125
6867 Ignore-this: b0744ed58f161bf188e037bad077fc48
6868]
6869[Refactor StorageFarmBroker handling of servers
6870Brian Warner <warner@lothar.com>**20110221015804
6871 Ignore-this: 842144ed92f5717699b8f580eab32a51
6872 
6873 Pass around IServer instance instead of (peerid, rref) tuple. Replace
6874 "descriptor" with "server". Other replacements:
6875 
6876  get_all_servers -> get_connected_servers/get_known_servers
6877  get_servers_for_index -> get_servers_for_psi (now returns IServers)
6878 
6879 This change still needs to be pushed further down: lots of code is now
6880 getting the IServer and then distributing (peerid, rref) internally.
6881 Instead, it ought to distribute the IServer internally and delay
6882 extracting a serverid or rref until the last moment.
6883 
6884 no_network.py was updated to retain parallelism.
6885]
6886[TAG allmydata-tahoe-1.8.2
6887warner@lothar.com**20110131020101]
6888Patch bundle hash:
6889a4b7ef3c5cefd598ed2d5a2245afd046c0898130