Ticket #999: work-in-progress-2011-07-20_06_05Z.darcs.patch

File work-in-progress-2011-07-20_06_05Z.darcs.patch, 276.7 KB (added by zooko, at 2011-07-20T06:10:25Z)
28 patches for repository /home/zooko/playground/tahoe-lafs/pristine:

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
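  The mocking approach can be sketched in a few lines. This is an illustrative modern-Python sketch only (the patch itself targets Python 2 and uses the third-party `mock` package with `__builtin__.open`); `read_config`, `fake_open`, and the file names are hypothetical stand-ins, not code from the patch:

```python
from io import StringIO
from unittest import mock

def read_config(path):
    # Stand-in for code under test that would normally hit the real filesystem.
    with open(path) as f:
        return f.read()

def fake_open(fname, mode='r'):
    # Simulate a filesystem containing exactly one readable state file.
    if fname == 'testdir/bucket_counter.state':
        raise IOError(2, "No such file or directory")
    return StringIO("contents")

# While the patch is active, no real file I/O happens at all.
with mock.patch('builtins.open', side_effect=fake_open):
    assert read_config('testdir/lease_checker.history') == "contents"
```

  The same `side_effect` trick is what the tests below use to route every `open()` call through a hand-written dispatcher.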

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
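    The "null backend" idea can be sketched independently of the patch: an object that satisfies the backend interface but stores nothing, so a test can simulate a server whose space is effectively unlimited. The class and method names below follow the NullBackend/NullBucketWriter code added later in this bundle, but the bodies here are a simplified, illustrative sketch rather than the patch's exact code:

```python
class NullBucketWriter:
    """Accepts writes and silently discards the data."""
    def remote_write(self, offset, data):
        return  # throw everything away

class NullBackend:
    """A backend with no storage at all: never full, never holds shares."""
    def get_available_space(self):
        return None  # None is treated as "no limit / no way to ask"

    def get_bucket_shares(self, storage_index):
        return set()  # it never has any shares on disk

    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        return NullBucketWriter()
```

    Because `get_available_space()` never reports a limit and writes cost nothing, allocation logic can be exercised without touching a disk.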

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests, work done in pair-programming with Zancas

Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
  * work in progress intended to be unrecorded and never committed to trunk
  switch from os.path.join to filepath
  incomplete refactoring of common "stay in your subtree" tester code into a superclass
 

Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
  In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
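  The os.path.join-to-filepath conversion is a change of style rather than behavior. As a rough illustration, using the stdlib's posixpath and pathlib to stand in for Twisted (twisted.python.filepath.FilePath offers a similar object-style API, e.g. FilePath(storedir).child('shares'); the directory names here are just examples):

```python
import posixpath
from pathlib import PurePosixPath

storedir = 'testdir'

# Before: paths are plain strings, joined by free functions scattered
# through the code.
incomingdir_str = posixpath.join(storedir, 'shares', 'incoming')

# After: a path object that carries its own join/inspect operations,
# analogous to what FilePath.child() gives you in Twisted.
incomingdir = PurePosixPath(storedir) / 'shares' / 'incoming'

assert str(incomingdir) == incomingdir_str == 'testdir/shares/incoming'
```

  Centralizing path logic in an object is what lets the later patches "remove and simplify code by relying on filepath."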

Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
  * another temporary patch for sharing work-in-progress
  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
 

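The get_available_space() misfeature mentioned above is worth a sketch: catching OSError and returning 0 makes a broken or missing storage directory indistinguishable from a full disk. The function bodies below are simplified POSIX-only stand-ins (the real code goes through fileutil.get_available_space), shown only to contrast the two behaviors:

```python
import os

def get_available_space_old(storedir, reserved_space):
    try:
        s = os.statvfs(storedir)  # POSIX-only in this sketch
        return s.f_bavail * s.f_frsize - reserved_space
    except OSError:
        return 0  # misfeature: a bad path looks exactly like a full disk

def get_available_space_new(storedir, reserved_space):
    # Let the OSError propagate so callers can see the real failure.
    s = os.statvfs(storedir)
    return s.f_bavail * s.f_frsize - reserved_space
```

With the old behavior a misconfigured storedir silently turns the server read-only; with the new one the error surfaces where it can be diagnosed.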
New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """Handle a report of corruption."""
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get the as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subcless of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
            bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
730 
731 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
732+    @mock.patch('time.time')
733+    @mock.patch('os.mkdir')
734+    @mock.patch('__builtin__.open')
735+    @mock.patch('os.listdir')
736+    @mock.patch('os.path.isdir')
737+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
738+        """ This tests whether a server instance can be constructed
739+        with a null backend. The server instance fails the test if it
740+        tries to read or write to the file system. """
741+
742+        # Now begin the test.
743+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
744+
745+        self.failIf(mockisdir.called)
746+        self.failIf(mocklistdir.called)
747+        self.failIf(mockopen.called)
748+        self.failIf(mockmkdir.called)
749+
750+        # You passed!
751+
752+    @mock.patch('time.time')
753+    @mock.patch('os.mkdir')
754     @mock.patch('__builtin__.open')
755hunk ./src/allmydata/test/test_backends.py 44
756-    def test_create_server(self, mockopen):
757-        """ This tests whether a server instance can be constructed. """
758+    @mock.patch('os.listdir')
759+    @mock.patch('os.path.isdir')
760+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
761+        """ This tests whether a server instance can be constructed
762+        with a filesystem backend. To pass the test, it has to use the
763+        filesystem in only the prescribed ways. """
764 
765         def call_open(fname, mode):
766             if fname == 'testdir/bucket_counter.state':
767hunk ./src/allmydata/test/test_backends.py 58
768                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
769             elif fname == 'testdir/lease_checker.history':
770                 return StringIO()
771+            else:
772+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
773         mockopen.side_effect = call_open
774 
775         # Now begin the test.
776hunk ./src/allmydata/test/test_backends.py 63
777-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
778+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
779+
780+        self.failIf(mockisdir.called)
781+        self.failIf(mocklistdir.called)
782+        self.failIf(mockopen.called)
783+        self.failIf(mockmkdir.called)
784+        self.failIf(mocktime.called)
785 
786         # You passed!
787 
788hunk ./src/allmydata/test/test_backends.py 73
789-class TestServer(unittest.TestCase, ReallyEqualMixin):
790+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
791+    def setUp(self):
792+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
793+
794+    @mock.patch('os.mkdir')
795+    @mock.patch('__builtin__.open')
796+    @mock.patch('os.listdir')
797+    @mock.patch('os.path.isdir')
798+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
799+        """ Write a new share. """
800+
801+        # Now begin the test.
802+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
803+        bs[0].remote_write(0, 'a')
804+        self.failIf(mockisdir.called)
805+        self.failIf(mocklistdir.called)
806+        self.failIf(mockopen.called)
807+        self.failIf(mockmkdir.called)
808+
809+    @mock.patch('os.path.exists')
810+    @mock.patch('os.path.getsize')
811+    @mock.patch('__builtin__.open')
812+    @mock.patch('os.listdir')
813+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
814+        """ This tests whether the code correctly finds and reads
815+        shares written out by old (Tahoe-LAFS <= v1.8.2)
816+        servers. There is a similar test in test_download, but that one
817+        is from the perspective of the client and exercises a deeper
818+        stack of code. This one is for exercising just the
819+        StorageServer object. """
820+
821+        # Now begin the test.
822+        bs = self.s.remote_get_buckets('teststorage_index')
823+
824+        self.failUnlessEqual(len(bs), 0)
825+        self.failIf(mocklistdir.called)
826+        self.failIf(mockopen.called)
827+        self.failIf(mockgetsize.called)
828+        self.failIf(mockexists.called)
829+
830+
831+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
832     @mock.patch('__builtin__.open')
833     def setUp(self, mockopen):
834         def call_open(fname, mode):
835hunk ./src/allmydata/test/test_backends.py 126
836                 return StringIO()
837         mockopen.side_effect = call_open
838 
839-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
840-
841+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
842 
843     @mock.patch('time.time')
844     @mock.patch('os.mkdir')
845hunk ./src/allmydata/test/test_backends.py 134
846     @mock.patch('os.listdir')
847     @mock.patch('os.path.isdir')
848     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
849-        """Handle a report of corruption."""
850+        """ Write a new share. """
851 
852         def call_listdir(dirname):
853             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
854hunk ./src/allmydata/test/test_backends.py 173
855         mockopen.side_effect = call_open
856         # Now begin the test.
857         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
858-        print bs
859         bs[0].remote_write(0, 'a')
860         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
861 
862hunk ./src/allmydata/test/test_backends.py 176
863-
864     @mock.patch('os.path.exists')
865     @mock.patch('os.path.getsize')
866     @mock.patch('__builtin__.open')
867hunk ./src/allmydata/test/test_backends.py 218
868 
869         self.failUnlessEqual(len(bs), 1)
870         b = bs[0]
871+        # These should match by definition; the next two cases cover reads whose behavior is less obvious.
872         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
873         # If you try to read past the end you get the as much data as is there.
874         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
875hunk ./src/allmydata/test/test_backends.py 224
876         # If you start reading past the end of the file you get the empty string.
877         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
878+
879+
880}
881[a temp patch used as a snapshot
882wilcoxjg@gmail.com**20110626052732
883 Ignore-this: 95f05e314eaec870afa04c76d979aa44
884] {
885hunk ./docs/configuration.rst 637
886   [storage]
887   enabled = True
888   readonly = True
889-  sizelimit = 10000000000
890 
891 
892   [helper]
893hunk ./docs/garbage-collection.rst 16
894 
895 When a file or directory in the virtual filesystem is no longer referenced,
896 the space that its shares occupied on each storage server can be freed,
897-making room for other shares. Tahoe currently uses a garbage collection
898+making room for other shares. Tahoe uses a garbage collection
899 ("GC") mechanism to implement this space-reclamation process. Each share has
900 one or more "leases", which are managed by clients who want the
901 file/directory to be retained. The storage server accepts each share for a
902hunk ./docs/garbage-collection.rst 34
903 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
904 If lease renewal occurs quickly and with 100% reliability, than any renewal
905 time that is shorter than the lease duration will suffice, but a larger ratio
906-of duration-over-renewal-time will be more robust in the face of occasional
907+of lease duration to renewal time will be more robust in the face of occasional
908 delays or failures.
909 
910 The current recommended values for a small Tahoe grid are to renew the leases
911replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
912hunk ./src/allmydata/client.py 260
913             sharetypes.append("mutable")
914         expiration_sharetypes = tuple(sharetypes)
915 
916+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
917+            xyz
918+        xyz
919         ss = StorageServer(storedir, self.nodeid,
920                            reserved_space=reserved,
921                            discard_storage=discard,
922hunk ./src/allmydata/storage/crawler.py 234
923         f = open(tmpfile, "wb")
924         pickle.dump(self.state, f)
925         f.close()
926-        fileutil.move_into_place(tmpfile, self.statefile)
927+        fileutil.move_into_place(tmpfile, self.statefname)
928 
929     def startService(self):
930         # arrange things to look like we were just sleeping, so
931}
932[snapshot of progress on backend implementation (not suitable for trunk)
933wilcoxjg@gmail.com**20110626053244
934 Ignore-this: 50c764af791c2b99ada8289546806a0a
935] {
936adddir ./src/allmydata/storage/backends
937adddir ./src/allmydata/storage/backends/das
938move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
939adddir ./src/allmydata/storage/backends/null
940hunk ./src/allmydata/interfaces.py 270
941         store that on disk.
942         """
943 
944+class IStorageBackend(Interface):
945+    """
946+    Objects of this kind live on the server side and are used by the
947+    storage server object.
948+    """
949+    def get_available_space(self, reserved_space):
950+        """ Returns available space for share storage in bytes, or
951+        None if this information is not available or if the available
952+        space is unlimited.
953+
954+        If the backend is configured for read-only mode then this will
955+        return 0.
956+
957+        reserved_space is the number of bytes to subtract from the answer,
958+        i.e. how many bytes to leave unused on this filesystem. """
960+
961+    def get_bucket_shares(self):
962+        """XXX"""
963+
964+    def get_share(self):
965+        """XXX"""
966+
967+    def make_bucket_writer(self):
968+        """XXX"""
969+
970+class IStorageBackendShare(Interface):
971+    """
972+    This object may hold all of the share data.  It is intended for
973+    lazy evaluation, such that in many use cases substantially less than
974+    all of the share data will actually be accessed.
975+    """
976+    def is_complete(self):
977+        """
978+        Returns whether the share is complete, or None if the share does not exist.
979+        """
980+
981 class IStorageBucketWriter(Interface):
982     """
983     Objects of this kind live on the client side.
984hunk ./src/allmydata/interfaces.py 2492
985 
986 class EmptyPathnameComponentError(Exception):
987     """The webapi disallows empty pathname components."""
988+
989+class IShareStore(Interface):
990+    pass
991+
992addfile ./src/allmydata/storage/backends/__init__.py
993addfile ./src/allmydata/storage/backends/das/__init__.py
994addfile ./src/allmydata/storage/backends/das/core.py
995hunk ./src/allmydata/storage/backends/das/core.py 1
996+from allmydata.interfaces import IStorageBackend
997+from allmydata.storage.backends.base import Backend
999+from allmydata.util.assertutil import precondition
1000+
1001+import os, re, stat, weakref, struct, time
1002+
1003+from foolscap.api import Referenceable
1004+from twisted.application import service
1005+
1006+from zope.interface import implements
1007+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1008+from allmydata.util import fileutil, idlib, log, time_format
1009+import allmydata # for __full_version__
1010+
1011+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir, UnknownImmutableContainerVersionError, DataTooLargeError
1012+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1013+from allmydata.storage.lease import LeaseInfo
1014+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1015+     create_mutable_sharefile
1016+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1017+from allmydata.storage.crawler import FSBucketCountingCrawler
1018+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1019+
1020+from allmydata.util.hashutil import constant_time_compare
1021+
1022+# share filenames within a bucket directory are just integers
1023+NUM_RE = re.compile("^[0-9]+$")
1024+
1022+class DASCore(Backend):
1023+    implements(IStorageBackend)
1024+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1025+        Backend.__init__(self)
1026+
1027+        self._setup_storage(storedir, readonly, reserved_space)
1028+        self._setup_corruption_advisory()
1029+        self._setup_bucket_counter()
1030+        self._setup_lease_checkerf(expiration_policy)
1031+
1032+    def _setup_storage(self, storedir, readonly, reserved_space):
1033+        self.storedir = storedir
1034+        self.readonly = readonly
1035+        self.reserved_space = int(reserved_space)
1036+        if self.reserved_space:
1037+            if self.get_available_space() is None:
1038+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1039+                        umid="0wZ27w", level=log.UNUSUAL)
1040+
1041+        self.sharedir = os.path.join(self.storedir, "shares")
1042+        fileutil.make_dirs(self.sharedir)
1043+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1044+        self._clean_incomplete()
1045+
1046+    def _clean_incomplete(self):
1047+        fileutil.rm_dir(self.incomingdir)
1048+        fileutil.make_dirs(self.incomingdir)
1049+
1050+    def _setup_corruption_advisory(self):
1051+        # we don't actually create the corruption-advisory dir until necessary
1052+        self.corruption_advisory_dir = os.path.join(self.storedir,
1053+                                                    "corruption-advisories")
1054+
1055+    def _setup_bucket_counter(self):
1056+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1057+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1058+        self.bucket_counter.setServiceParent(self)
1059+
1060+    def _setup_lease_checkerf(self, expiration_policy):
1061+        statefile = os.path.join(self.storedir, "lease_checker.state")
1062+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1063+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1064+        self.lease_checker.setServiceParent(self)
1065+
1066+    def get_available_space(self):
1067+        if self.readonly:
1068+            return 0
1069+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1070+
1071+    def get_shares(self, storage_index):
1072+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1073+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1074+        try:
1075+            for f in os.listdir(finalstoragedir):
1076+                if NUM_RE.match(f):
1077+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1079+        except OSError:
1080+            # Commonly caused by there being no buckets at all.
1081+            pass
1082+       
1083+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1084+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1085+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1086+        return bw
1087+       
1088+
1089+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1090+# and share data. The share data is accessed by RIBucketWriter.write and
1091+# RIBucketReader.read . The lease information is not accessible through these
1092+# interfaces.
1093+
1094+# The share file has the following layout:
1095+#  0x00: share file version number, four bytes, current version is 1
1096+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1097+#  0x08: number of leases, four bytes big-endian
1098+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1099+#  A+0x0c = B: first lease. Lease format is:
1100+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1101+#   B+0x04: renew secret, 32 bytes (SHA256)
1102+#   B+0x24: cancel secret, 32 bytes (SHA256)
1103+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1104+#   B+0x48: next lease, or end of record
1105+
1106+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1107+# but it is still filled in by storage servers in case the storage server
1108+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1109+# share file is moved from one storage server to another. The value stored in
1110+# this field is truncated, so if the actual share data length is >= 2**32,
1111+# then the value stored in this field will be the actual share data length
1112+# modulo 2**32.
1113+
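A standalone sketch of the header layout described in the comment above (hypothetical helper, not part of the patch), including the saturation behavior from Footnote 1:

```python
import struct

def pack_header(data_length, num_leases):
    # version 1, saturated share-data length, lease count -- all
    # four-byte big-endian, as in the layout comment above
    return struct.pack(">LLL", 1, min(2**32 - 1, data_length), num_leases)
```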
1114+class ImmutableShare:
1115+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1116+    sharetype = "immutable"
1117+
1118+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1119+        """ If max_size is not None then I won't allow more than
1120+        max_size to be written to me. If create=True then max_size
1121+        must not be None. """
1122+        precondition((max_size is not None) or (not create), max_size, create)
1123+        self.shnum = shnum
1124+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1125+        self._max_size = max_size
1126+        if create:
1127+            # touch the file, so later callers will see that we're working on
1128+            # it. Also construct the metadata.
1129+            assert not os.path.exists(self.fname)
1130+            fileutil.make_dirs(os.path.dirname(self.fname))
1131+            f = open(self.fname, 'wb')
1132+            # The second field -- the four-byte share data length -- is no
1133+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1134+            # there in case someone downgrades a storage server from >=
1135+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1136+            # server to another, etc. We do saturation -- a share data length
1137+            # larger than 2**32-1 (what can fit into the field) is marked as
1138+            # the largest length that can fit into the field. That way, even
1139+            # if this does happen, the old < v1.3.0 server will still allow
1140+            # clients to read the first part of the share.
1141+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1142+            f.close()
1143+            self._lease_offset = max_size + 0x0c
1144+            self._num_leases = 0
1145+        else:
1146+            f = open(self.fname, 'rb')
1147+            filesize = os.path.getsize(self.fname)
1148+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1149+            f.close()
1150+            if version != 1:
1151+                msg = "sharefile %s had version %d but we wanted 1" % \
1152+                      (self.fname, version)
1153+                raise UnknownImmutableContainerVersionError(msg)
1154+            self._num_leases = num_leases
1155+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1156+        self._data_offset = 0xc
1157+
1158+    def unlink(self):
1159+        os.unlink(self.fname)
1160+
1161+    def read_share_data(self, offset, length):
1162+        precondition(offset >= 0)
1163+        # Reads beyond the end of the data are truncated. Reads that start
1164+        # beyond the end of the data return an empty string.
1165+        seekpos = self._data_offset+offset
1166+        fsize = os.path.getsize(self.fname)
1167+        actuallength = max(0, min(length, fsize-seekpos))
1168+        if actuallength == 0:
1169+            return ""
1170+        f = open(self.fname, 'rb')
1171+        f.seek(seekpos)
1172+        return f.read(actuallength)
1173+
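The truncation rules in read_share_data can be captured in a tiny standalone sketch (hypothetical name): reads beyond the end are clipped, and reads that start past the end yield zero bytes:

```python
def clamp_read(offset, length, data_offset, fsize):
    # mirrors read_share_data: seek position is data_offset + offset,
    # and the readable byte count is clipped to what the file holds
    seekpos = data_offset + offset
    return max(0, min(length, fsize - seekpos))
```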
1174+    def write_share_data(self, offset, data):
1175+        length = len(data)
1176+        precondition(offset >= 0, offset)
1177+        if self._max_size is not None and offset+length > self._max_size:
1178+            raise DataTooLargeError(self._max_size, offset, length)
1179+        f = open(self.fname, 'rb+')
1180+        real_offset = self._data_offset+offset
1181+        f.seek(real_offset)
1182+        assert f.tell() == real_offset
1183+        f.write(data)
1184+        f.close()
1185+
1186+    def _write_lease_record(self, f, lease_number, lease_info):
1187+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1188+        f.seek(offset)
1189+        assert f.tell() == offset
1190+        f.write(lease_info.to_immutable_data())
1191+
1192+    def _read_num_leases(self, f):
1193+        f.seek(0x08)
1194+        (num_leases,) = struct.unpack(">L", f.read(4))
1195+        return num_leases
1196+
1197+    def _write_num_leases(self, f, num_leases):
1198+        f.seek(0x08)
1199+        f.write(struct.pack(">L", num_leases))
1200+
1201+    def _truncate_leases(self, f, num_leases):
1202+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1203+
1204+    def get_leases(self):
1205+        """Yields a LeaseInfo instance for all leases."""
1206+        f = open(self.fname, 'rb')
1207+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1208+        f.seek(self._lease_offset)
1209+        for i in range(num_leases):
1210+            data = f.read(self.LEASE_SIZE)
1211+            if data:
1212+                yield LeaseInfo().from_immutable_data(data)
1213+
1214+    def add_lease(self, lease_info):
1215+        f = open(self.fname, 'rb+')
1216+        num_leases = self._read_num_leases(f)
1217+        self._write_lease_record(f, num_leases, lease_info)
1218+        self._write_num_leases(f, num_leases+1)
1219+        f.close()
1220+
1221+    def renew_lease(self, renew_secret, new_expire_time):
1222+        for i,lease in enumerate(self.get_leases()):
1223+            if constant_time_compare(lease.renew_secret, renew_secret):
1224+                # yup. See if we need to update the owner time.
1225+                if new_expire_time > lease.expiration_time:
1226+                    # yes
1227+                    lease.expiration_time = new_expire_time
1228+                    f = open(self.fname, 'rb+')
1229+                    self._write_lease_record(f, i, lease)
1230+                    f.close()
1231+                return
1232+        raise IndexError("unable to renew non-existent lease")
1233+
1234+    def add_or_renew_lease(self, lease_info):
1235+        try:
1236+            self.renew_lease(lease_info.renew_secret,
1237+                             lease_info.expiration_time)
1238+        except IndexError:
1239+            self.add_lease(lease_info)
1240+
1241+
1242+    def cancel_lease(self, cancel_secret):
1243+        """Remove a lease with the given cancel_secret. If the last lease is
1244+        cancelled, the file will be removed. Return the number of bytes that
1245+        were freed (by truncating the list of leases, and possibly by
1246+        deleting the file). Raise IndexError if there was no lease with the
1247+        given cancel_secret.
1248+        """
1249+
1250+        leases = list(self.get_leases())
1251+        num_leases_removed = 0
1252+        for i,lease in enumerate(leases):
1253+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1254+                leases[i] = None
1255+                num_leases_removed += 1
1256+        if not num_leases_removed:
1257+            raise IndexError("unable to find matching lease to cancel")
1258+        if num_leases_removed:
1259+            # pack and write out the remaining leases. We write these out in
1260+            # the same order as they were added, so that if we crash while
1261+            # doing this, we won't lose any non-cancelled leases.
1262+            leases = [l for l in leases if l] # remove the cancelled leases
1263+            f = open(self.fname, 'rb+')
1264+            for i,lease in enumerate(leases):
1265+                self._write_lease_record(f, i, lease)
1266+            self._write_num_leases(f, len(leases))
1267+            self._truncate_leases(f, len(leases))
1268+            f.close()
1269+        space_freed = self.LEASE_SIZE * num_leases_removed
1270+        if not len(leases):
1271+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1272+            self.unlink()
1273+        return space_freed
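The lease record format used throughout this class (">L32s32sL": owner number, renew secret, cancel secret, expiration time) can be exercised standalone; pack_lease below is a hypothetical stand-in for LeaseInfo.to_immutable_data():

```python
import struct

LEASE_FORMAT = ">L32s32sL"  # owner, renew secret, cancel secret, expiry
LEASE_SIZE = struct.calcsize(LEASE_FORMAT)

def pack_lease(owner_num, renew_secret, cancel_secret, expiration_time):
    # hypothetical stand-in for LeaseInfo.to_immutable_data()
    return struct.pack(LEASE_FORMAT, owner_num, renew_secret,
                       cancel_secret, expiration_time)
```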
1274hunk ./src/allmydata/storage/backends/das/expirer.py 2
1275 import time, os, pickle, struct
1276-from allmydata.storage.crawler import ShareCrawler
1277-from allmydata.storage.shares import get_share_file
1278+from allmydata.storage.crawler import FSShareCrawler
1279+from allmydata.storage.mutable import MutableShareFile
1279 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1280      UnknownImmutableContainerVersionError
1281 from twisted.python import log as twlog
1282hunk ./src/allmydata/storage/backends/das/expirer.py 7
1283 
1284-class LeaseCheckingCrawler(ShareCrawler):
1285+class FSLeaseCheckingCrawler(FSShareCrawler):
1286     """I examine the leases on all shares, determining which are still valid
1287     and which have expired. I can remove the expired leases (if so
1288     configured), and the share will be deleted when the last lease is
1289hunk ./src/allmydata/storage/backends/das/expirer.py 50
1290     slow_start = 360 # wait 6 minutes after startup
1291     minimum_cycle_time = 12*60*60 # not more than twice per day
1292 
1293-    def __init__(self, statefile, historyfile,
1294-                 expiration_enabled, mode,
1295-                 override_lease_duration, # used if expiration_mode=="age"
1296-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1297-                 sharetypes):
1298+    def __init__(self, statefile, historyfile, expiration_policy):
1299         self.historyfile = historyfile
1300hunk ./src/allmydata/storage/backends/das/expirer.py 52
1301-        self.expiration_enabled = expiration_enabled
1302-        self.mode = mode
1303+        self.expiration_enabled = expiration_policy['enabled']
1304+        self.mode = expiration_policy['mode']
1305         self.override_lease_duration = None
1306         self.cutoff_date = None
1307         if self.mode == "age":
1308hunk ./src/allmydata/storage/backends/das/expirer.py 57
1309-            assert isinstance(override_lease_duration, (int, type(None)))
1310-            self.override_lease_duration = override_lease_duration # seconds
1311+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1312+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1313         elif self.mode == "cutoff-date":
1314hunk ./src/allmydata/storage/backends/das/expirer.py 60
1315-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1316+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1317-            assert cutoff_date is not None
1318+            assert expiration_policy['cutoff_date'] is not None
1318hunk ./src/allmydata/storage/backends/das/expirer.py 62
1319-            self.cutoff_date = cutoff_date
1320+            self.cutoff_date = expiration_policy['cutoff_date']
1321         else:
1322hunk ./src/allmydata/storage/backends/das/expirer.py 64
1323-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1324-        self.sharetypes_to_expire = sharetypes
1325-        ShareCrawler.__init__(self, statefile)
1326+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1327+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1328+        FSShareCrawler.__init__(self, statefile)
1329 
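The expiration_policy dict consumed above can be validated by a small standalone sketch (hypothetical helper mirroring the __init__ checks):

```python
def check_policy(expiration_policy):
    # mirrors the mode/field checks in FSLeaseCheckingCrawler.__init__
    mode = expiration_policy['mode']
    if mode == "age":
        assert isinstance(expiration_policy['override_lease_duration'],
                          (int, type(None)))
    elif mode == "cutoff-date":
        assert isinstance(expiration_policy['cutoff_date'], int)
    else:
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
    return True
```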
1330     def add_initial_state(self):
1331         # we fill ["cycle-to-date"] here (even though they will be reset in
1332hunk ./src/allmydata/storage/backends/das/expirer.py 156
1333 
1334     def process_share(self, sharefilename):
1335         # first, find out what kind of a share it is
1336-        sf = get_share_file(sharefilename)
1337+        f = open(sharefilename, "rb")
1338+        prefix = f.read(32)
1339+        f.close()
1340+        if prefix == MutableShareFile.MAGIC:
1341+            sf = MutableShareFile(sharefilename)
1342+        else:
1343+            # otherwise assume it's immutable
1344+            sf = FSBShare(sharefilename)
1345         sharetype = sf.sharetype
1346         now = time.time()
1347         s = self.stat(sharefilename)
1348addfile ./src/allmydata/storage/backends/null/__init__.py
1349addfile ./src/allmydata/storage/backends/null/core.py
1350hunk ./src/allmydata/storage/backends/null/core.py 1
1351+from allmydata.storage.backends.base import Backend
1352+
1353+class NullCore(Backend):
1354+    def __init__(self):
1355+        Backend.__init__(self)
1356+
1357+    def get_available_space(self):
1358+        return None
1359+
1360+    def get_shares(self, storage_index):
1361+        return set()
1362+
1363+    def get_share(self, storage_index, sharenum):
1364+        return None
1365+
1366+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1367+        return NullBucketWriter()
1368hunk ./src/allmydata/storage/crawler.py 12
1369 class TimeSliceExceeded(Exception):
1370     pass
1371 
1372-class ShareCrawler(service.MultiService):
1373+class FSShareCrawler(service.MultiService):
1374     """A subclass of ShareCrawler is attached to a StorageServer, and
1375     periodically walks all of its shares, processing each one in some
1376     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1377hunk ./src/allmydata/storage/crawler.py 68
1378     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1379     minimum_cycle_time = 300 # don't run a cycle faster than this
1380 
1381-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1382+    def __init__(self, statefname, allowed_cpu_percentage=None):
1383         service.MultiService.__init__(self)
1384         if allowed_cpu_percentage is not None:
1385             self.allowed_cpu_percentage = allowed_cpu_percentage
1386hunk ./src/allmydata/storage/crawler.py 72
1387-        self.backend = backend
1388+        self.statefname = statefname
1389         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1390                          for i in range(2**10)]
1391         self.prefixes.sort()
1392hunk ./src/allmydata/storage/crawler.py 192
1393         #                            of the last bucket to be processed, or
1394         #                            None if we are sleeping between cycles
1395         try:
1396-            f = open(self.statefile, "rb")
1397+            f = open(self.statefname, "rb")
1398             state = pickle.load(f)
1399             f.close()
1400         except EnvironmentError:
1401hunk ./src/allmydata/storage/crawler.py 230
1402         else:
1403             last_complete_prefix = self.prefixes[lcpi]
1404         self.state["last-complete-prefix"] = last_complete_prefix
1405-        tmpfile = self.statefile + ".tmp"
1406+        tmpfile = self.statefname + ".tmp"
1407         f = open(tmpfile, "wb")
1408         pickle.dump(self.state, f)
1409         f.close()
1410hunk ./src/allmydata/storage/crawler.py 433
1411         pass
1412 
1413 
1414-class BucketCountingCrawler(ShareCrawler):
1415+class FSBucketCountingCrawler(FSShareCrawler):
1416     """I keep track of how many buckets are being managed by this server.
1417     This is equivalent to the number of distributed files and directories for
1418     which I am providing storage. The actual number of files+directories in
1419hunk ./src/allmydata/storage/crawler.py 446
1420 
1421     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1422 
1423-    def __init__(self, statefile, num_sample_prefixes=1):
1424-        ShareCrawler.__init__(self, statefile)
1425+    def __init__(self, statefname, num_sample_prefixes=1):
1426+        FSShareCrawler.__init__(self, statefname)
1427         self.num_sample_prefixes = num_sample_prefixes
1428 
1429     def add_initial_state(self):
1430hunk ./src/allmydata/storage/immutable.py 14
1431 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1432      DataTooLargeError
1433 
1434-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1435-# and share data. The share data is accessed by RIBucketWriter.write and
1436-# RIBucketReader.read . The lease information is not accessible through these
1437-# interfaces.
1438-
1439-# The share file has the following layout:
1440-#  0x00: share file version number, four bytes, current version is 1
1441-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1442-#  0x08: number of leases, four bytes big-endian
1443-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1444-#  A+0x0c = B: first lease. Lease format is:
1445-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1446-#   B+0x04: renew secret, 32 bytes (SHA256)
1447-#   B+0x24: cancel secret, 32 bytes (SHA256)
1448-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1449-#   B+0x48: next lease, or end of record
1450-
1451-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1452-# but it is still filled in by storage servers in case the storage server
1453-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1454-# share file is moved from one storage server to another. The value stored in
1455-# this field is truncated, so if the actual share data length is >= 2**32,
1456-# then the value stored in this field will be the actual share data length
1457-# modulo 2**32.
1458-
1459-class ShareFile:
1460-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1461-    sharetype = "immutable"
1462-
1463-    def __init__(self, filename, max_size=None, create=False):
1464-        """ If max_size is not None then I won't allow more than
1465-        max_size to be written to me. If create=True then max_size
1466-        must not be None. """
1467-        precondition((max_size is not None) or (not create), max_size, create)
1468-        self.home = filename
1469-        self._max_size = max_size
1470-        if create:
1471-            # touch the file, so later callers will see that we're working on
1472-            # it. Also construct the metadata.
1473-            assert not os.path.exists(self.home)
1474-            fileutil.make_dirs(os.path.dirname(self.home))
1475-            f = open(self.home, 'wb')
1476-            # The second field -- the four-byte share data length -- is no
1477-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1478-            # there in case someone downgrades a storage server from >=
1479-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1480-            # server to another, etc. We do saturation -- a share data length
1481-            # larger than 2**32-1 (what can fit into the field) is marked as
1482-            # the largest length that can fit into the field. That way, even
1483-            # if this does happen, the old < v1.3.0 server will still allow
1484-            # clients to read the first part of the share.
1485-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1486-            f.close()
1487-            self._lease_offset = max_size + 0x0c
1488-            self._num_leases = 0
1489-        else:
1490-            f = open(self.home, 'rb')
1491-            filesize = os.path.getsize(self.home)
1492-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1493-            f.close()
1494-            if version != 1:
1495-                msg = "sharefile %s had version %d but we wanted 1" % \
1496-                      (filename, version)
1497-                raise UnknownImmutableContainerVersionError(msg)
1498-            self._num_leases = num_leases
1499-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1500-        self._data_offset = 0xc
1501-
1502-    def unlink(self):
1503-        os.unlink(self.home)
1504-
1505-    def read_share_data(self, offset, length):
1506-        precondition(offset >= 0)
1507-        # Reads beyond the end of the data are truncated. Reads that start
1508-        # beyond the end of the data return an empty string.
1509-        seekpos = self._data_offset+offset
1510-        fsize = os.path.getsize(self.home)
1511-        actuallength = max(0, min(length, fsize-seekpos))
1512-        if actuallength == 0:
1513-            return ""
1514-        f = open(self.home, 'rb')
1515-        f.seek(seekpos)
1516-        return f.read(actuallength)
1517-
1518-    def write_share_data(self, offset, data):
1519-        length = len(data)
1520-        precondition(offset >= 0, offset)
1521-        if self._max_size is not None and offset+length > self._max_size:
1522-            raise DataTooLargeError(self._max_size, offset, length)
1523-        f = open(self.home, 'rb+')
1524-        real_offset = self._data_offset+offset
1525-        f.seek(real_offset)
1526-        assert f.tell() == real_offset
1527-        f.write(data)
1528-        f.close()
1529-
1530-    def _write_lease_record(self, f, lease_number, lease_info):
1531-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1532-        f.seek(offset)
1533-        assert f.tell() == offset
1534-        f.write(lease_info.to_immutable_data())
1535-
1536-    def _read_num_leases(self, f):
1537-        f.seek(0x08)
1538-        (num_leases,) = struct.unpack(">L", f.read(4))
1539-        return num_leases
1540-
1541-    def _write_num_leases(self, f, num_leases):
1542-        f.seek(0x08)
1543-        f.write(struct.pack(">L", num_leases))
1544-
1545-    def _truncate_leases(self, f, num_leases):
1546-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1547-
1548-    def get_leases(self):
1549-        """Yields a LeaseInfo instance for all leases."""
1550-        f = open(self.home, 'rb')
1551-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1552-        f.seek(self._lease_offset)
1553-        for i in range(num_leases):
1554-            data = f.read(self.LEASE_SIZE)
1555-            if data:
1556-                yield LeaseInfo().from_immutable_data(data)
1557-
1558-    def add_lease(self, lease_info):
1559-        f = open(self.home, 'rb+')
1560-        num_leases = self._read_num_leases(f)
1561-        self._write_lease_record(f, num_leases, lease_info)
1562-        self._write_num_leases(f, num_leases+1)
1563-        f.close()
1564-
1565-    def renew_lease(self, renew_secret, new_expire_time):
1566-        for i,lease in enumerate(self.get_leases()):
1567-            if constant_time_compare(lease.renew_secret, renew_secret):
1568-                # yup. See if we need to update the owner time.
1569-                if new_expire_time > lease.expiration_time:
1570-                    # yes
1571-                    lease.expiration_time = new_expire_time
1572-                    f = open(self.home, 'rb+')
1573-                    self._write_lease_record(f, i, lease)
1574-                    f.close()
1575-                return
1576-        raise IndexError("unable to renew non-existent lease")
1577-
1578-    def add_or_renew_lease(self, lease_info):
1579-        try:
1580-            self.renew_lease(lease_info.renew_secret,
1581-                             lease_info.expiration_time)
1582-        except IndexError:
1583-            self.add_lease(lease_info)
1584-
1585-
1586-    def cancel_lease(self, cancel_secret):
1587-        """Remove a lease with the given cancel_secret. If the last lease is
1588-        cancelled, the file will be removed. Return the number of bytes that
1589-        were freed (by truncating the list of leases, and possibly by
1590-        deleting the file. Raise IndexError if there was no lease with the
1591-        given cancel_secret.
1592-        """
1593-
1594-        leases = list(self.get_leases())
1595-        num_leases_removed = 0
1596-        for i,lease in enumerate(leases):
1597-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1598-                leases[i] = None
1599-                num_leases_removed += 1
1600-        if not num_leases_removed:
1601-            raise IndexError("unable to find matching lease to cancel")
1602-        if num_leases_removed:
1603-            # pack and write out the remaining leases. We write these out in
1604-            # the same order as they were added, so that if we crash while
1605-            # doing this, we won't lose any non-cancelled leases.
1606-            leases = [l for l in leases if l] # remove the cancelled leases
1607-            f = open(self.home, 'rb+')
1608-            for i,lease in enumerate(leases):
1609-                self._write_lease_record(f, i, lease)
1610-            self._write_num_leases(f, len(leases))
1611-            self._truncate_leases(f, len(leases))
1612-            f.close()
1613-        space_freed = self.LEASE_SIZE * num_leases_removed
1614-        if not len(leases):
1615-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1616-            self.unlink()
1617-        return space_freed
1618-class NullBucketWriter(Referenceable):
1619-    implements(RIBucketWriter)
1620-
1621-    def remote_write(self, offset, data):
1622-        return
1623-
1624 class BucketWriter(Referenceable):
1625     implements(RIBucketWriter)
1626 
1627hunk ./src/allmydata/storage/immutable.py 17
1628-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1629+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1630         self.ss = ss
1631hunk ./src/allmydata/storage/immutable.py 19
1632-        self.incominghome = incominghome
1633-        self.finalhome = finalhome
1634         self._max_size = max_size # don't allow the client to write more than this
1635         self._canary = canary
1636         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1637hunk ./src/allmydata/storage/immutable.py 24
1638         self.closed = False
1639         self.throw_out_all_data = False
1640-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1641+        self._sharefile = immutableshare
1642         # also, add our lease to the file now, so that other ones can be
1643         # added by simultaneous uploaders
1644         self._sharefile.add_lease(lease_info)
1645hunk ./src/allmydata/storage/server.py 16
1646 from allmydata.storage.lease import LeaseInfo
1647 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1648      create_mutable_sharefile
1649-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1650-from allmydata.storage.crawler import BucketCountingCrawler
1651-from allmydata.storage.expirer import LeaseCheckingCrawler
1652 
1653 from zope.interface import implements
1654 
1655hunk ./src/allmydata/storage/server.py 19
1656-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1657-# be started and stopped.
1658-class Backend(service.MultiService):
1659-    implements(IStatsProducer)
1660-    def __init__(self):
1661-        service.MultiService.__init__(self)
1662-
1663-    def get_bucket_shares(self):
1664-        """XXX"""
1665-        raise NotImplementedError
1666-
1667-    def get_share(self):
1668-        """XXX"""
1669-        raise NotImplementedError
1670-
1671-    def make_bucket_writer(self):
1672-        """XXX"""
1673-        raise NotImplementedError
1674-
1675-class NullBackend(Backend):
1676-    def __init__(self):
1677-        Backend.__init__(self)
1678-
1679-    def get_available_space(self):
1680-        return None
1681-
1682-    def get_bucket_shares(self, storage_index):
1683-        return set()
1684-
1685-    def get_share(self, storage_index, sharenum):
1686-        return None
1687-
1688-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1689-        return NullBucketWriter()
1690-
1691-class FSBackend(Backend):
1692-    def __init__(self, storedir, readonly=False, reserved_space=0):
1693-        Backend.__init__(self)
1694-
1695-        self._setup_storage(storedir, readonly, reserved_space)
1696-        self._setup_corruption_advisory()
1697-        self._setup_bucket_counter()
1698-        self._setup_lease_checkerf()
1699-
1700-    def _setup_storage(self, storedir, readonly, reserved_space):
1701-        self.storedir = storedir
1702-        self.readonly = readonly
1703-        self.reserved_space = int(reserved_space)
1704-        if self.reserved_space:
1705-            if self.get_available_space() is None:
1706-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1707-                        umid="0wZ27w", level=log.UNUSUAL)
1708-
1709-        self.sharedir = os.path.join(self.storedir, "shares")
1710-        fileutil.make_dirs(self.sharedir)
1711-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1712-        self._clean_incomplete()
1713-
1714-    def _clean_incomplete(self):
1715-        fileutil.rm_dir(self.incomingdir)
1716-        fileutil.make_dirs(self.incomingdir)
1717-
1718-    def _setup_corruption_advisory(self):
1719-        # we don't actually create the corruption-advisory dir until necessary
1720-        self.corruption_advisory_dir = os.path.join(self.storedir,
1721-                                                    "corruption-advisories")
1722-
1723-    def _setup_bucket_counter(self):
1724-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1725-        self.bucket_counter = BucketCountingCrawler(statefile)
1726-        self.bucket_counter.setServiceParent(self)
1727-
1728-    def _setup_lease_checkerf(self):
1729-        statefile = os.path.join(self.storedir, "lease_checker.state")
1730-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1731-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1732-                                   expiration_enabled, expiration_mode,
1733-                                   expiration_override_lease_duration,
1734-                                   expiration_cutoff_date,
1735-                                   expiration_sharetypes)
1736-        self.lease_checker.setServiceParent(self)
1737-
1738-    def get_available_space(self):
1739-        if self.readonly:
1740-            return 0
1741-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1742-
1743-    def get_bucket_shares(self, storage_index):
1744-        """Return a list of (shnum, pathname) tuples for files that hold
1745-        shares for this storage_index. In each tuple, 'shnum' will always be
1746-        the integer form of the last component of 'pathname'."""
1747-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1748-        try:
1749-            for f in os.listdir(storagedir):
1750-                if NUM_RE.match(f):
1751-                    filename = os.path.join(storagedir, f)
1752-                    yield (int(f), filename)
1753-        except OSError:
1754-            # Commonly caused by there being no buckets at all.
1755-            pass
1756-
1757 # storage/
1758 # storage/shares/incoming
1759 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1760hunk ./src/allmydata/storage/server.py 32
1761 # $SHARENUM matches this regex:
1762 NUM_RE=re.compile("^[0-9]+$")
1763 
1764-
1765-
1766 class StorageServer(service.MultiService, Referenceable):
1767     implements(RIStorageServer, IStatsProducer)
1768     name = 'storage'
1769hunk ./src/allmydata/storage/server.py 35
1770-    LeaseCheckerClass = LeaseCheckingCrawler
1771 
1772     def __init__(self, nodeid, backend, reserved_space=0,
1773                  readonly_storage=False,
1774hunk ./src/allmydata/storage/server.py 38
1775-                 stats_provider=None,
1776-                 expiration_enabled=False,
1777-                 expiration_mode="age",
1778-                 expiration_override_lease_duration=None,
1779-                 expiration_cutoff_date=None,
1780-                 expiration_sharetypes=("mutable", "immutable")):
1781+                 stats_provider=None):
1782         service.MultiService.__init__(self)
1783         assert isinstance(nodeid, str)
1784         assert len(nodeid) == 20
1785hunk ./src/allmydata/storage/server.py 217
1786         # they asked about: this will save them a lot of work. Add or update
1787         # leases for all of them: if they want us to hold shares for this
1788         # file, they'll want us to hold leases for this file.
1789-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1790-            alreadygot.add(shnum)
1791-            sf = ShareFile(fn)
1792-            sf.add_or_renew_lease(lease_info)
1793-
1794-        for shnum in sharenums:
1795-            share = self.backend.get_share(storage_index, shnum)
1796+        for share in self.backend.get_shares(storage_index):
1797+            alreadygot.add(share.shnum)
1798+            share.add_or_renew_lease(lease_info)
1799 
1800hunk ./src/allmydata/storage/server.py 221
1801-            if not share:
1802-                if (not limited) or (remaining_space >= max_space_per_bucket):
1803-                    # ok! we need to create the new share file.
1804-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1805-                                      max_space_per_bucket, lease_info, canary)
1806-                    bucketwriters[shnum] = bw
1807-                    self._active_writers[bw] = 1
1808-                    if limited:
1809-                        remaining_space -= max_space_per_bucket
1810-                else:
1811-                    # bummer! not enough space to accept this bucket
1812-                    pass
1813+        for shnum in (sharenums - alreadygot):
1814+            if (not limited) or (remaining_space >= max_space_per_bucket):
1815+                #XXX Or should the following line occur in the storage server constructor? OK: we need to create the new share file.
1816+                self.backend.set_storage_server(self)
1817+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1818+                                                     max_space_per_bucket, lease_info, canary)
1819+                bucketwriters[shnum] = bw
1820+                self._active_writers[bw] = 1
1821+                if limited:
1822+                    remaining_space -= max_space_per_bucket
1823 
1824hunk ./src/allmydata/storage/server.py 232
1825-            elif share.is_complete():
1826-                # great! we already have it. easy.
1827-                pass
1828-            elif not share.is_complete():
1829-                # Note that we don't create BucketWriters for shnums that
1830-                # have a partial share (in incoming/), so if a second upload
1831-                # occurs while the first is still in progress, the second
1832-                # uploader will use different storage servers.
1833-                pass
1834+        #XXX We should document this behavior later.
1835 
1836         self.add_latency("allocate", time.time() - start)
1837         return alreadygot, bucketwriters
1838hunk ./src/allmydata/storage/server.py 238
1839 
1840     def _iter_share_files(self, storage_index):
1841-        for shnum, filename in self._get_bucket_shares(storage_index):
1842+        for shnum, filename in self._get_shares(storage_index):
1843             f = open(filename, 'rb')
1844             header = f.read(32)
1845             f.close()
1846hunk ./src/allmydata/storage/server.py 318
1847         si_s = si_b2a(storage_index)
1848         log.msg("storage: get_buckets %s" % si_s)
1849         bucketreaders = {} # k: sharenum, v: BucketReader
1850-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1851+        for shnum, filename in self.backend.get_shares(storage_index):
1852             bucketreaders[shnum] = BucketReader(self, filename,
1853                                                 storage_index, shnum)
1854         self.add_latency("get", time.time() - start)
1855hunk ./src/allmydata/storage/server.py 334
1856         # since all shares get the same lease data, we just grab the leases
1857         # from the first share
1858         try:
1859-            shnum, filename = self._get_bucket_shares(storage_index).next()
1860+            shnum, filename = self._get_shares(storage_index).next()
1861             sf = ShareFile(filename)
1862             return sf.get_leases()
1863         except StopIteration:
1864hunk ./src/allmydata/storage/shares.py 1
1865-#! /usr/bin/python
1866-
1867-from allmydata.storage.mutable import MutableShareFile
1868-from allmydata.storage.immutable import ShareFile
1869-
1870-def get_share_file(filename):
1871-    f = open(filename, "rb")
1872-    prefix = f.read(32)
1873-    f.close()
1874-    if prefix == MutableShareFile.MAGIC:
1875-        return MutableShareFile(filename)
1876-    # otherwise assume it's immutable
1877-    return ShareFile(filename)
1878-
1879rmfile ./src/allmydata/storage/shares.py
1880hunk ./src/allmydata/test/common_util.py 20
1881 
1882 def flip_one_bit(s, offset=0, size=None):
1883     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1884-    than offset+size. """
1885+    than offset+size. Return the new string. """
1886     if size is None:
1887         size=len(s)-offset
1888     i = randrange(offset, offset+size)
1889hunk ./src/allmydata/test/test_backends.py 7
1890 
1891 from allmydata.test.common_util import ReallyEqualMixin
1892 
1893-import mock
1894+import mock, os
1895 
1896 # This is the code that we're going to be testing.
1897hunk ./src/allmydata/test/test_backends.py 10
1898-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1899+from allmydata.storage.server import StorageServer
1900+
1901+from allmydata.storage.backends.das.core import DASCore
1902+from allmydata.storage.backends.null.core import NullCore
1903+
1904 
1905 # The following share file contents was generated with
1906 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1907hunk ./src/allmydata/test/test_backends.py 22
1908 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1909 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1910 
1911-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1912+tempdir = 'teststoredir'
1913+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1914+sharefname = os.path.join(sharedirname, '0')
1915 
1916 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1917     @mock.patch('time.time')
1918hunk ./src/allmydata/test/test_backends.py 58
1919         filesystem in only the prescribed ways. """
1920 
1921         def call_open(fname, mode):
1922-            if fname == 'testdir/bucket_counter.state':
1923-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1924-            elif fname == 'testdir/lease_checker.state':
1925-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1926-            elif fname == 'testdir/lease_checker.history':
1927+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1928+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1929+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1930+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1931+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1932                 return StringIO()
1933             else:
1934                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1935hunk ./src/allmydata/test/test_backends.py 124
1936     @mock.patch('__builtin__.open')
1937     def setUp(self, mockopen):
1938         def call_open(fname, mode):
1939-            if fname == 'testdir/bucket_counter.state':
1940-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1941-            elif fname == 'testdir/lease_checker.state':
1942-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1943-            elif fname == 'testdir/lease_checker.history':
1944+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1945+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1946+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1947+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1948+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1949                 return StringIO()
1950         mockopen.side_effect = call_open
1951hunk ./src/allmydata/test/test_backends.py 131
1952-
1953-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1954+        expiration_policy = {'enabled' : False,
1955+                             'mode' : 'age',
1956+                             'override_lease_duration' : None,
1957+                             'cutoff_date' : None,
1958+                             'sharetypes' : None}
1959+        testbackend = DASCore(tempdir, expiration_policy)
1960+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
1961 
1962     @mock.patch('time.time')
1963     @mock.patch('os.mkdir')
1964hunk ./src/allmydata/test/test_backends.py 148
1965         """ Write a new share. """
1966 
1967         def call_listdir(dirname):
1968-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1969-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1970+            self.failUnlessReallyEqual(dirname, sharedirname)
1971+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1972 
1973         mocklistdir.side_effect = call_listdir
1974 
1975hunk ./src/allmydata/test/test_backends.py 178
1976 
1977         sharefile = MockFile()
1978         def call_open(fname, mode):
1979-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1980+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1981             return sharefile
1982 
1983         mockopen.side_effect = call_open
1984hunk ./src/allmydata/test/test_backends.py 200
1985         StorageServer object. """
1986 
1987         def call_listdir(dirname):
1988-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1989+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1990             return ['0']
1991 
1992         mocklistdir.side_effect = call_listdir
1993}
1994[checkpoint patch
1995wilcoxjg@gmail.com**20110626165715
1996 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1997] {
1998hunk ./src/allmydata/storage/backends/das/core.py 21
1999 from allmydata.storage.lease import LeaseInfo
2000 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2001      create_mutable_sharefile
2002-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2003+from allmydata.storage.immutable import BucketWriter, BucketReader
2004 from allmydata.storage.crawler import FSBucketCountingCrawler
2005 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2006 
2007hunk ./src/allmydata/storage/backends/das/core.py 27
2008 from zope.interface import implements
2009 
2010+# $SHARENUM matches this regex:
2011+NUM_RE=re.compile("^[0-9]+$")
2012+
2013 class DASCore(Backend):
2014     implements(IStorageBackend)
2015     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2016hunk ./src/allmydata/storage/backends/das/core.py 80
2017         return fileutil.get_available_space(self.storedir, self.reserved_space)
2018 
2019     def get_shares(self, storage_index):
2020-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2021+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
2022         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2023         try:
2024             for f in os.listdir(finalstoragedir):
2025hunk ./src/allmydata/storage/backends/das/core.py 86
2026                 if NUM_RE.match(f):
2027                     filename = os.path.join(finalstoragedir, f)
2028-                    yield FSBShare(filename, int(f))
2029+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2030         except OSError:
2031             # Commonly caused by there being no buckets at all.
2032             pass
2033hunk ./src/allmydata/storage/backends/das/core.py 95
2034         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2035         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2036         return bw
2037+
2038+    def set_storage_server(self, ss):
2039+        self.ss = ss
2040         
2041 
2042 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2043hunk ./src/allmydata/storage/server.py 29
2044 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2045 # base-32 chars).
2046 
2047-# $SHARENUM matches this regex:
2048-NUM_RE=re.compile("^[0-9]+$")
2049 
2050 class StorageServer(service.MultiService, Referenceable):
2051     implements(RIStorageServer, IStatsProducer)
2052}
2053[checkpoint4
2054wilcoxjg@gmail.com**20110628202202
2055 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2056] {
2057hunk ./src/allmydata/storage/backends/das/core.py 96
2058         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2059         return bw
2060 
2061+    def make_bucket_reader(self, share):
2062+        return BucketReader(self.ss, share)
2063+
2064     def set_storage_server(self, ss):
2065         self.ss = ss
2066         
2067hunk ./src/allmydata/storage/backends/das/core.py 138
2068         must not be None. """
2069         precondition((max_size is not None) or (not create), max_size, create)
2070         self.shnum = shnum
2071+        self.storage_index = storageindex
2072         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2073         self._max_size = max_size
2074         if create:
2075hunk ./src/allmydata/storage/backends/das/core.py 173
2076             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2077         self._data_offset = 0xc
2078 
2079+    def get_shnum(self):
2080+        return self.shnum
2081+
2082     def unlink(self):
2083         os.unlink(self.fname)
2084 
2085hunk ./src/allmydata/storage/backends/null/core.py 2
2086 from allmydata.storage.backends.base import Backend
2087+from allmydata.storage.immutable import BucketWriter, BucketReader
2088 
2089 class NullCore(Backend):
2090     def __init__(self):
2091hunk ./src/allmydata/storage/backends/null/core.py 17
2092     def get_share(self, storage_index, sharenum):
2093         return None
2094 
2095-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2096-        return NullBucketWriter()
2097+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2098+       
2099+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2100+
2101+    def set_storage_server(self, ss):
2102+        self.ss = ss
2103+
2104+class ImmutableShare:
2105+    sharetype = "immutable"
2106+
2107+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2108+        """ If max_size is not None then I won't allow more than
2109+        max_size to be written to me. If create=True then max_size
2110+        must not be None. """
2111+        precondition((max_size is not None) or (not create), max_size, create)
2112+        self.shnum = shnum
2113+        self.storage_index = storageindex
2114+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2115+        self._max_size = max_size
2116+        if create:
2117+            # touch the file, so later callers will see that we're working on
2118+            # it. Also construct the metadata.
2119+            assert not os.path.exists(self.fname)
2120+            fileutil.make_dirs(os.path.dirname(self.fname))
2121+            f = open(self.fname, 'wb')
2122+            # The second field -- the four-byte share data length -- is no
2123+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2124+            # there in case someone downgrades a storage server from >=
2125+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2126+            # server to another, etc. We do saturation -- a share data length
2127+            # larger than 2**32-1 (what can fit into the field) is marked as
2128+            # the largest length that can fit into the field. That way, even
2129+            # if this does happen, the old < v1.3.0 server will still allow
2130+            # clients to read the first part of the share.
2131+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2132+            f.close()
2133+            self._lease_offset = max_size + 0x0c
2134+            self._num_leases = 0
2135+        else:
2136+            f = open(self.fname, 'rb')
2137+            filesize = os.path.getsize(self.fname)
2138+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2139+            f.close()
2140+            if version != 1:
2141+                msg = "sharefile %s had version %d but we wanted 1" % \
2142+                      (self.fname, version)
2143+                raise UnknownImmutableContainerVersionError(msg)
2144+            self._num_leases = num_leases
2145+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2146+        self._data_offset = 0xc
2147+
2148+    def get_shnum(self):
2149+        return self.shnum
2150+
2151+    def unlink(self):
2152+        os.unlink(self.fname)
2153+
2154+    def read_share_data(self, offset, length):
2155+        precondition(offset >= 0)
2156+        # Reads beyond the end of the data are truncated. Reads that start
2157+        # beyond the end of the data return an empty string.
2158+        seekpos = self._data_offset+offset
2159+        fsize = os.path.getsize(self.fname)
2160+        actuallength = max(0, min(length, fsize-seekpos))
2161+        if actuallength == 0:
2162+            return ""
2163+        f = open(self.fname, 'rb')
2164+        f.seek(seekpos)
2165+        return f.read(actuallength)
2166+
2167+    def write_share_data(self, offset, data):
2168+        length = len(data)
2169+        precondition(offset >= 0, offset)
2170+        if self._max_size is not None and offset+length > self._max_size:
2171+            raise DataTooLargeError(self._max_size, offset, length)
2172+        f = open(self.fname, 'rb+')
2173+        real_offset = self._data_offset+offset
2174+        f.seek(real_offset)
2175+        assert f.tell() == real_offset
2176+        f.write(data)
2177+        f.close()
2178+
2179+    def _write_lease_record(self, f, lease_number, lease_info):
2180+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2181+        f.seek(offset)
2182+        assert f.tell() == offset
2183+        f.write(lease_info.to_immutable_data())
2184+
2185+    def _read_num_leases(self, f):
2186+        f.seek(0x08)
2187+        (num_leases,) = struct.unpack(">L", f.read(4))
2188+        return num_leases
2189+
2190+    def _write_num_leases(self, f, num_leases):
2191+        f.seek(0x08)
2192+        f.write(struct.pack(">L", num_leases))
2193+
2194+    def _truncate_leases(self, f, num_leases):
2195+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2196+
2197+    def get_leases(self):
2198+        """Yields a LeaseInfo instance for all leases."""
2199+        f = open(self.fname, 'rb')
2200+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2201+        f.seek(self._lease_offset)
2202+        for i in range(num_leases):
2203+            data = f.read(self.LEASE_SIZE)
2204+            if data:
2205+                yield LeaseInfo().from_immutable_data(data)
2206+
2207+    def add_lease(self, lease_info):
2208+        f = open(self.fname, 'rb+')
2209+        num_leases = self._read_num_leases(f)
2210+        self._write_lease_record(f, num_leases, lease_info)
2211+        self._write_num_leases(f, num_leases+1)
2212+        f.close()
2213+
2214+    def renew_lease(self, renew_secret, new_expire_time):
2215+        for i,lease in enumerate(self.get_leases()):
2216+            if constant_time_compare(lease.renew_secret, renew_secret):
2217+                # yup. See if we need to update the owner time.
2218+                if new_expire_time > lease.expiration_time:
2219+                    # yes
2220+                    lease.expiration_time = new_expire_time
2221+                    f = open(self.fname, 'rb+')
2222+                    self._write_lease_record(f, i, lease)
2223+                    f.close()
2224+                return
2225+        raise IndexError("unable to renew non-existent lease")
2226+
2227+    def add_or_renew_lease(self, lease_info):
2228+        try:
2229+            self.renew_lease(lease_info.renew_secret,
2230+                             lease_info.expiration_time)
2231+        except IndexError:
2232+            self.add_lease(lease_info)
2233+
2234+
2235+    def cancel_lease(self, cancel_secret):
2236+        """Remove a lease with the given cancel_secret. If the last lease is
2237+        cancelled, the file will be removed. Return the number of bytes that
2238+        were freed (by truncating the list of leases, and possibly by
2239+        deleting the file). Raise IndexError if there was no lease with the
2240+        given cancel_secret.
2241+        """
2242+
2243+        leases = list(self.get_leases())
2244+        num_leases_removed = 0
2245+        for i,lease in enumerate(leases):
2246+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2247+                leases[i] = None
2248+                num_leases_removed += 1
2249+        if not num_leases_removed:
2250+            raise IndexError("unable to find matching lease to cancel")
2251+        if num_leases_removed:
2252+            # pack and write out the remaining leases. We write these out in
2253+            # the same order as they were added, so that if we crash while
2254+            # doing this, we won't lose any non-cancelled leases.
2255+            leases = [l for l in leases if l] # remove the cancelled leases
2256+            f = open(self.fname, 'rb+')
2257+            for i,lease in enumerate(leases):
2258+                self._write_lease_record(f, i, lease)
2259+            self._write_num_leases(f, len(leases))
2260+            self._truncate_leases(f, len(leases))
2261+            f.close()
2262+        space_freed = self.LEASE_SIZE * num_leases_removed
2263+        if not len(leases):
2264+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2265+            self.unlink()
2266+        return space_freed
2267hunk ./src/allmydata/storage/immutable.py 114
2268 class BucketReader(Referenceable):
2269     implements(RIBucketReader)
2270 
2271-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2272+    def __init__(self, ss, share):
2273         self.ss = ss
2274hunk ./src/allmydata/storage/immutable.py 116
2275-        self._share_file = ShareFile(sharefname)
2276-        self.storage_index = storage_index
2277-        self.shnum = shnum
2278+        self._share_file = share
2279+        self.storage_index = share.storage_index
2280+        self.shnum = share.shnum
2281 
2282     def __repr__(self):
2283         return "<%s %s %s>" % (self.__class__.__name__,
2284hunk ./src/allmydata/storage/server.py 316
2285         si_s = si_b2a(storage_index)
2286         log.msg("storage: get_buckets %s" % si_s)
2287         bucketreaders = {} # k: sharenum, v: BucketReader
2288-        for shnum, filename in self.backend.get_shares(storage_index):
2289-            bucketreaders[shnum] = BucketReader(self, filename,
2290-                                                storage_index, shnum)
2291+        self.backend.set_storage_server(self)
2292+        for share in self.backend.get_shares(storage_index):
2293+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2294         self.add_latency("get", time.time() - start)
2295         return bucketreaders
2296 
2297hunk ./src/allmydata/test/test_backends.py 25
2298 tempdir = 'teststoredir'
2299 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2300 sharefname = os.path.join(sharedirname, '0')
2301+expiration_policy = {'enabled' : False,
2302+                     'mode' : 'age',
2303+                     'override_lease_duration' : None,
2304+                     'cutoff_date' : None,
2305+                     'sharetypes' : None}
2306 
2307 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2308     @mock.patch('time.time')
2309hunk ./src/allmydata/test/test_backends.py 43
2310         tries to read or write to the file system. """
2311 
2312         # Now begin the test.
2313-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2314+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2315 
2316         self.failIf(mockisdir.called)
2317         self.failIf(mocklistdir.called)
2318hunk ./src/allmydata/test/test_backends.py 74
2319         mockopen.side_effect = call_open
2320 
2321         # Now begin the test.
2322-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2323+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2324 
2325         self.failIf(mockisdir.called)
2326         self.failIf(mocklistdir.called)
2327hunk ./src/allmydata/test/test_backends.py 86
2328 
2329 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2330     def setUp(self):
2331-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2332+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2333 
2334     @mock.patch('os.mkdir')
2335     @mock.patch('__builtin__.open')
2336hunk ./src/allmydata/test/test_backends.py 136
2337             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2338                 return StringIO()
2339         mockopen.side_effect = call_open
2340-        expiration_policy = {'enabled' : False,
2341-                             'mode' : 'age',
2342-                             'override_lease_duration' : None,
2343-                             'cutoff_date' : None,
2344-                             'sharetypes' : None}
2345         testbackend = DASCore(tempdir, expiration_policy)
2346         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2347 
2348}
2349[checkpoint5
2350wilcoxjg@gmail.com**20110705034626
2351 Ignore-this: 255780bd58299b0aa33c027e9d008262
2352] {
2353addfile ./src/allmydata/storage/backends/base.py
2354hunk ./src/allmydata/storage/backends/base.py 1
2355+from twisted.application import service
2356+
2357+class Backend(service.MultiService):
2358+    def __init__(self):
2359+        service.MultiService.__init__(self)
2360hunk ./src/allmydata/storage/backends/null/core.py 19
2361 
2362     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2363         
2364+        immutableshare = ImmutableShare()
2365         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2366 
2367     def set_storage_server(self, ss):
2368hunk ./src/allmydata/storage/backends/null/core.py 28
2369 class ImmutableShare:
2370     sharetype = "immutable"
2371 
2372-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2373+    def __init__(self):
2374         """ If max_size is not None then I won't allow more than
2375         max_size to be written to me. If create=True then max_size
2376         must not be None. """
2377hunk ./src/allmydata/storage/backends/null/core.py 32
2378-        precondition((max_size is not None) or (not create), max_size, create)
2379-        self.shnum = shnum
2380-        self.storage_index = storageindex
2381-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2382-        self._max_size = max_size
2383-        if create:
2384-            # touch the file, so later callers will see that we're working on
2385-            # it. Also construct the metadata.
2386-            assert not os.path.exists(self.fname)
2387-            fileutil.make_dirs(os.path.dirname(self.fname))
2388-            f = open(self.fname, 'wb')
2389-            # The second field -- the four-byte share data length -- is no
2390-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2391-            # there in case someone downgrades a storage server from >=
2392-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2393-            # server to another, etc. We do saturation -- a share data length
2394-            # larger than 2**32-1 (what can fit into the field) is marked as
2395-            # the largest length that can fit into the field. That way, even
2396-            # if this does happen, the old < v1.3.0 server will still allow
2397-            # clients to read the first part of the share.
2398-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2399-            f.close()
2400-            self._lease_offset = max_size + 0x0c
2401-            self._num_leases = 0
2402-        else:
2403-            f = open(self.fname, 'rb')
2404-            filesize = os.path.getsize(self.fname)
2405-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2406-            f.close()
2407-            if version != 1:
2408-                msg = "sharefile %s had version %d but we wanted 1" % \
2409-                      (self.fname, version)
2410-                raise UnknownImmutableContainerVersionError(msg)
2411-            self._num_leases = num_leases
2412-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2413-        self._data_offset = 0xc
2414+        pass
2415 
2416     def get_shnum(self):
2417         return self.shnum
2418hunk ./src/allmydata/storage/backends/null/core.py 54
2419         return f.read(actuallength)
2420 
2421     def write_share_data(self, offset, data):
2422-        length = len(data)
2423-        precondition(offset >= 0, offset)
2424-        if self._max_size is not None and offset+length > self._max_size:
2425-            raise DataTooLargeError(self._max_size, offset, length)
2426-        f = open(self.fname, 'rb+')
2427-        real_offset = self._data_offset+offset
2428-        f.seek(real_offset)
2429-        assert f.tell() == real_offset
2430-        f.write(data)
2431-        f.close()
2432+        pass
2433 
2434     def _write_lease_record(self, f, lease_number, lease_info):
2435         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2436hunk ./src/allmydata/storage/backends/null/core.py 84
2437             if data:
2438                 yield LeaseInfo().from_immutable_data(data)
2439 
2440-    def add_lease(self, lease_info):
2441-        f = open(self.fname, 'rb+')
2442-        num_leases = self._read_num_leases(f)
2443-        self._write_lease_record(f, num_leases, lease_info)
2444-        self._write_num_leases(f, num_leases+1)
2445-        f.close()
2446+    def add_lease(self, lease):
2447+        pass
2448 
2449     def renew_lease(self, renew_secret, new_expire_time):
2450         for i,lease in enumerate(self.get_leases()):
2451hunk ./src/allmydata/test/test_backends.py 32
2452                      'sharetypes' : None}
2453 
2454 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2455-    @mock.patch('time.time')
2456-    @mock.patch('os.mkdir')
2457-    @mock.patch('__builtin__.open')
2458-    @mock.patch('os.listdir')
2459-    @mock.patch('os.path.isdir')
2460-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2461-        """ This tests whether a server instance can be constructed
2462-        with a null backend. The server instance fails the test if it
2463-        tries to read or write to the file system. """
2464-
2465-        # Now begin the test.
2466-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2467-
2468-        self.failIf(mockisdir.called)
2469-        self.failIf(mocklistdir.called)
2470-        self.failIf(mockopen.called)
2471-        self.failIf(mockmkdir.called)
2472-
2473-        # You passed!
2474-
2475     @mock.patch('time.time')
2476     @mock.patch('os.mkdir')
2477     @mock.patch('__builtin__.open')
2478hunk ./src/allmydata/test/test_backends.py 53
2479                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2480         mockopen.side_effect = call_open
2481 
2482-        # Now begin the test.
2483-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2484-
2485-        self.failIf(mockisdir.called)
2486-        self.failIf(mocklistdir.called)
2487-        self.failIf(mockopen.called)
2488-        self.failIf(mockmkdir.called)
2489-        self.failIf(mocktime.called)
2490-
2491-        # You passed!
2492-
2493-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2494-    def setUp(self):
2495-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2496-
2497-    @mock.patch('os.mkdir')
2498-    @mock.patch('__builtin__.open')
2499-    @mock.patch('os.listdir')
2500-    @mock.patch('os.path.isdir')
2501-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2502-        """ Write a new share. """
2503-
2504-        # Now begin the test.
2505-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2506-        bs[0].remote_write(0, 'a')
2507-        self.failIf(mockisdir.called)
2508-        self.failIf(mocklistdir.called)
2509-        self.failIf(mockopen.called)
2510-        self.failIf(mockmkdir.called)
2511+        def call_isdir(fname):
2512+            if fname == os.path.join(tempdir,'shares'):
2513+                return True
2514+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2515+                return True
2516+            else:
2517+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2518+        mockisdir.side_effect = call_isdir
2519 
2520hunk ./src/allmydata/test/test_backends.py 62
2521-    @mock.patch('os.path.exists')
2522-    @mock.patch('os.path.getsize')
2523-    @mock.patch('__builtin__.open')
2524-    @mock.patch('os.listdir')
2525-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2526-        """ This tests whether the code correctly finds and reads
2527-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2528-        servers. There is a similar test in test_download, but that one
2529-        is from the perspective of the client and exercises a deeper
2530-        stack of code. This one is for exercising just the
2531-        StorageServer object. """
2532+        def call_mkdir(fname, mode):
2533+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2534+            self.failUnlessEqual(0777, mode)
2535+            if fname == tempdir:
2536+                return None
2537+            elif fname == os.path.join(tempdir,'shares'):
2538+                return None
2539+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2540+                return None
2541+            else:
2542+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2543+        mockmkdir.side_effect = call_mkdir
2544 
2545         # Now begin the test.
2546hunk ./src/allmydata/test/test_backends.py 76
2547-        bs = self.s.remote_get_buckets('teststorage_index')
2548+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2549 
2550hunk ./src/allmydata/test/test_backends.py 78
2551-        self.failUnlessEqual(len(bs), 0)
2552-        self.failIf(mocklistdir.called)
2553-        self.failIf(mockopen.called)
2554-        self.failIf(mockgetsize.called)
2555-        self.failIf(mockexists.called)
2556+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2557 
2558 
2559 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2560hunk ./src/allmydata/test/test_backends.py 193
2561         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2562 
2563 
2564+
2565+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2566+    @mock.patch('time.time')
2567+    @mock.patch('os.mkdir')
2568+    @mock.patch('__builtin__.open')
2569+    @mock.patch('os.listdir')
2570+    @mock.patch('os.path.isdir')
2571+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2572+        """ This tests whether a file system backend instance can be
2573+        constructed. To pass the test, it has to use the
2574+        filesystem in only the prescribed ways. """
2575+
2576+        def call_open(fname, mode):
2577+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2578+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2579+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2580+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2581+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2582+                return StringIO()
2583+            else:
2584+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2585+        mockopen.side_effect = call_open
2586+
2587+        def call_isdir(fname):
2588+            if fname == os.path.join(tempdir,'shares'):
2589+                return True
2590+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2591+                return True
2592+            else:
2593+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2594+        mockisdir.side_effect = call_isdir
2595+
2596+        def call_mkdir(fname, mode):
2597+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2598+            self.failUnlessEqual(0777, mode)
2599+            if fname == tempdir:
2600+                return None
2601+            elif fname == os.path.join(tempdir,'shares'):
2602+                return None
2603+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2604+                return None
2605+            else:
2606+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2607+        mockmkdir.side_effect = call_mkdir
2608+
2609+        # Now begin the test.
2610+        DASCore('teststoredir', expiration_policy)
2611+
2612+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2613}
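The tests in this patch route every mocked filesystem call through a checker function via `side_effect`, failing the test if the backend touches an unexpected path. A minimal standalone sketch of that pattern (paths and names here are illustrative, not from the patch, and `unittest.mock` stands in for the standalone `mock` package the tests import):

```python
import os
from unittest import mock

# Paths the backend is allowed to probe; anything else is a test failure.
ALLOWED = set([os.path.join('teststoredir', 'shares'),
               os.path.join('teststoredir', 'shares', 'incoming')])

def guarded_isdir(fname):
    if fname in ALLOWED:
        return True
    raise AssertionError("backend tried to isdir %r" % (fname,))

def probe(path):
    # While patched, any os.path.isdir call is routed through guarded_isdir.
    with mock.patch('os.path.isdir') as mockisdir:
        mockisdir.side_effect = guarded_isdir
        return os.path.isdir(path)
```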
2614[checkpoint 6
2615wilcoxjg@gmail.com**20110706190824
2616 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2617] {
2618hunk ./src/allmydata/interfaces.py 100
2619                          renew_secret=LeaseRenewSecret,
2620                          cancel_secret=LeaseCancelSecret,
2621                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2622-                         allocated_size=Offset, canary=Referenceable):
2623+                         allocated_size=Offset,
2624+                         canary=Referenceable):
2625         """
2626hunk ./src/allmydata/interfaces.py 103
2627-        @param storage_index: the index of the bucket to be created or
2628+        @param storage_index: the index of the shares to be created or
2629                               increfed.
2630hunk ./src/allmydata/interfaces.py 105
2631-        @param sharenums: these are the share numbers (probably between 0 and
2632-                          99) that the sender is proposing to store on this
2633-                          server.
2634-        @param renew_secret: This is the secret used to protect bucket refresh
2635+        @param renew_secret: This is the secret used to protect shares refresh
2636                              This secret is generated by the client and
2637                              stored for later comparison by the server. Each
2638                              server is given a different secret.
2639hunk ./src/allmydata/interfaces.py 109
2640-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2641-        @param canary: If the canary is lost before close(), the bucket is
2642+        @param cancel_secret: Like renew_secret, but protects shares decref.
2643+        @param sharenums: these are the share numbers (probably between 0 and
2644+                          99) that the sender is proposing to store on this
2645+                          server.
2646+        @param allocated_size: XXX The size of the shares the client wishes to store.
2647+        @param canary: If the canary is lost before close(), the shares are
2648                        deleted.
2649hunk ./src/allmydata/interfaces.py 116
2650+
2651         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2652                  already have and allocated is what we hereby agree to accept.
2653                  New leases are added for shares in both lists.
2654hunk ./src/allmydata/interfaces.py 128
2655                   renew_secret=LeaseRenewSecret,
2656                   cancel_secret=LeaseCancelSecret):
2657         """
2658-        Add a new lease on the given bucket. If the renew_secret matches an
2659+        Add a new lease on the given shares. If the renew_secret matches an
2660         existing lease, that lease will be renewed instead. If there is no
2661         bucket for the given storage_index, return silently. (note that in
2662         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2663hunk ./src/allmydata/storage/server.py 17
2664 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2665      create_mutable_sharefile
2666 
2667-from zope.interface import implements
2668-
2669 # storage/
2670 # storage/shares/incoming
2671 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2672hunk ./src/allmydata/test/test_backends.py 6
2673 from StringIO import StringIO
2674 
2675 from allmydata.test.common_util import ReallyEqualMixin
2676+from allmydata.util.assertutil import _assert
2677 
2678 import mock, os
2679 
2680hunk ./src/allmydata/test/test_backends.py 92
2681                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2682             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2683                 return StringIO()
2684+            else:
2685+                _assert(False, "The tester code doesn't recognize this case.") 
2686+
2687         mockopen.side_effect = call_open
2688         testbackend = DASCore(tempdir, expiration_policy)
2689         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2690hunk ./src/allmydata/test/test_backends.py 109
2691 
2692         def call_listdir(dirname):
2693             self.failUnlessReallyEqual(dirname, sharedirname)
2694-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2695+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2696 
2697         mocklistdir.side_effect = call_listdir
2698 
2699hunk ./src/allmydata/test/test_backends.py 113
2700+        def call_isdir(dirname):
2701+            self.failUnlessReallyEqual(dirname, sharedirname)
2702+            return True
2703+
2704+        mockisdir.side_effect = call_isdir
2705+
2706+        def call_mkdir(dirname, permissions):
2707+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2708+                self.fail()
2709+            else:
2710+                return True
2711+
2712+        mockmkdir.side_effect = call_mkdir
2713+
2714         class MockFile:
2715             def __init__(self):
2716                 self.buffer = ''
2717hunk ./src/allmydata/test/test_backends.py 156
2718             return sharefile
2719 
2720         mockopen.side_effect = call_open
2721+
2722         # Now begin the test.
2723         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2724         bs[0].remote_write(0, 'a')
2725hunk ./src/allmydata/test/test_backends.py 161
2726         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2727+       
2728+        # Now test the allocated_size method.
2729+        spaceint = self.s.allocated_size()
2730 
2731     @mock.patch('os.path.exists')
2732     @mock.patch('os.path.getsize')
2733}
2734[checkpoint 7
2735wilcoxjg@gmail.com**20110706200820
2736 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2737] hunk ./src/allmydata/test/test_backends.py 164
2738         
2739         # Now test the allocated_size method.
2740         spaceint = self.s.allocated_size()
2741+        self.failUnlessReallyEqual(spaceint, 1)
2742 
2743     @mock.patch('os.path.exists')
2744     @mock.patch('os.path.getsize')
2745[checkpoint8
2746wilcoxjg@gmail.com**20110706223126
2747 Ignore-this: 97336180883cb798b16f15411179f827
2748   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2749] hunk ./src/allmydata/test/test_backends.py 32
2750                      'cutoff_date' : None,
2751                      'sharetypes' : None}
2752 
2753+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2754+    def setUp(self):
2755+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2756+
2757+    @mock.patch('os.mkdir')
2758+    @mock.patch('__builtin__.open')
2759+    @mock.patch('os.listdir')
2760+    @mock.patch('os.path.isdir')
2761+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2762+        """ Write a new share. """
2763+
2764+        # Now begin the test.
2765+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2766+        bs[0].remote_write(0, 'a')
2767+        self.failIf(mockisdir.called)
2768+        self.failIf(mocklistdir.called)
2769+        self.failIf(mockopen.called)
2770+        self.failIf(mockmkdir.called)
2771+
2772 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2773     @mock.patch('time.time')
2774     @mock.patch('os.mkdir')
2775[checkpoint 9
2776wilcoxjg@gmail.com**20110707042942
2777 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2778] {
2779hunk ./src/allmydata/storage/backends/das/core.py 88
2780                     filename = os.path.join(finalstoragedir, f)
2781                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2782         except OSError:
2783-            # Commonly caused by there being no buckets at all.
2784+            # Commonly caused by there being no shares at all.
2785             pass
2786         
2787     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2788hunk ./src/allmydata/storage/backends/das/core.py 141
2789         self.storage_index = storageindex
2790         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2791         self._max_size = max_size
2792+        self.incomingdir = os.path.join(sharedir, 'incoming')
2793+        si_dir = storage_index_to_dir(storageindex)
2794+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2795+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2796         if create:
2797             # touch the file, so later callers will see that we're working on
2798             # it. Also construct the metadata.
2799hunk ./src/allmydata/storage/backends/das/core.py 177
2800             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2801         self._data_offset = 0xc
2802 
2803+    def close(self):
2804+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2805+        fileutil.rename(self.incominghome, self.finalhome)
2806+        try:
2807+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2808+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2809+            # these directories lying around forever, but the delete might
2810+            # fail if we're working on another share for the same storage
2811+            # index (like ab/abcde/5). The alternative approach would be to
2812+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2813+            # ShareWriter), each of which is responsible for a single
2814+            # directory on disk, and have them use reference counting of
2815+            # their children to know when they should do the rmdir. This
2816+            # approach is simpler, but relies on os.rmdir refusing to delete
2817+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2818+            os.rmdir(os.path.dirname(self.incominghome))
2819+            # we also delete the grandparent (prefix) directory, .../ab ,
2820+            # again to avoid leaving directories lying around. This might
2821+            # fail if there is another bucket open that shares a prefix (like
2822+            # ab/abfff).
2823+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2824+            # we leave the great-grandparent (incoming/) directory in place.
2825+        except EnvironmentError:
2826+            # ignore the "can't rmdir because the directory is not empty"
2827+            # exceptions, those are normal consequences of the
2828+            # above-mentioned conditions.
2829+            pass
2830+        pass
2831+       
2832+    def stat(self):
2833+        return os.stat(self.finalhome)[stat.ST_SIZE]
2834+
2835     def get_shnum(self):
2836         return self.shnum
2837 
2838hunk ./src/allmydata/storage/immutable.py 7
2839 
2840 from zope.interface import implements
2841 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2842-from allmydata.util import base32, fileutil, log
2843+from allmydata.util import base32, log
2844 from allmydata.util.assertutil import precondition
2845 from allmydata.util.hashutil import constant_time_compare
2846 from allmydata.storage.lease import LeaseInfo
2847hunk ./src/allmydata/storage/immutable.py 44
2848     def remote_close(self):
2849         precondition(not self.closed)
2850         start = time.time()
2851-
2852-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2853-        fileutil.rename(self.incominghome, self.finalhome)
2854-        try:
2855-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2856-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2857-            # these directories lying around forever, but the delete might
2858-            # fail if we're working on another share for the same storage
2859-            # index (like ab/abcde/5). The alternative approach would be to
2860-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2861-            # ShareWriter), each of which is responsible for a single
2862-            # directory on disk, and have them use reference counting of
2863-            # their children to know when they should do the rmdir. This
2864-            # approach is simpler, but relies on os.rmdir refusing to delete
2865-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2866-            os.rmdir(os.path.dirname(self.incominghome))
2867-            # we also delete the grandparent (prefix) directory, .../ab ,
2868-            # again to avoid leaving directories lying around. This might
2869-            # fail if there is another bucket open that shares a prefix (like
2870-            # ab/abfff).
2871-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2872-            # we leave the great-grandparent (incoming/) directory in place.
2873-        except EnvironmentError:
2874-            # ignore the "can't rmdir because the directory is not empty"
2875-            # exceptions, those are normal consequences of the
2876-            # above-mentioned conditions.
2877-            pass
2878+        self._sharefile.close()
2879         self._sharefile = None
2880         self.closed = True
2881         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2882hunk ./src/allmydata/storage/immutable.py 49
2883 
2884-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2885+        filelen = self._sharefile.stat()
2886         self.ss.bucket_writer_closed(self, filelen)
2887         self.ss.add_latency("close", time.time() - start)
2888         self.ss.count("close")
2889hunk ./src/allmydata/storage/server.py 45
2890         self._active_writers = weakref.WeakKeyDictionary()
2891         self.backend = backend
2892         self.backend.setServiceParent(self)
2893+        self.backend.set_storage_server(self)
2894         log.msg("StorageServer created", facility="tahoe.storage")
2895 
2896         self.latencies = {"allocate": [], # immutable
2897hunk ./src/allmydata/storage/server.py 220
2898 
2899         for shnum in (sharenums - alreadygot):
2900             if (not limited) or (remaining_space >= max_space_per_bucket):
2901-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2902-                self.backend.set_storage_server(self)
2903                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2904                                                      max_space_per_bucket, lease_info, canary)
2905                 bucketwriters[shnum] = bw
2906hunk ./src/allmydata/test/test_backends.py 117
2907         mockopen.side_effect = call_open
2908         testbackend = DASCore(tempdir, expiration_policy)
2909         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2910-
2911+   
2912+    @mock.patch('allmydata.util.fileutil.get_available_space')
2913     @mock.patch('time.time')
2914     @mock.patch('os.mkdir')
2915     @mock.patch('__builtin__.open')
2916hunk ./src/allmydata/test/test_backends.py 124
2917     @mock.patch('os.listdir')
2918     @mock.patch('os.path.isdir')
2919-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2920+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2921+                             mockget_available_space):
2922         """ Write a new share. """
2923 
2924         def call_listdir(dirname):
2925hunk ./src/allmydata/test/test_backends.py 148
2926 
2927         mockmkdir.side_effect = call_mkdir
2928 
2929+        def call_get_available_space(storedir, reserved_space):
2930+            self.failUnlessReallyEqual(storedir, tempdir)
2931+            return 1
2932+
2933+        mockget_available_space.side_effect = call_get_available_space
2934+
2935         class MockFile:
2936             def __init__(self):
2937                 self.buffer = ''
2938hunk ./src/allmydata/test/test_backends.py 188
2939         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2940         bs[0].remote_write(0, 'a')
2941         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2942-       
2943+
2944+        # What happens when there's not enough space for the client's request?
2945+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2946+
2947         # Now test the allocated_size method.
2948         spaceint = self.s.allocated_size()
2949         self.failUnlessReallyEqual(spaceint, 1)
2950}
2951[checkpoint10
2952wilcoxjg@gmail.com**20110707172049
2953 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2954] {
2955hunk ./src/allmydata/test/test_backends.py 20
2956 # The following share file contents was generated with
2957 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2958 # with share data == 'a'.
2959-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2960+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2961+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2962+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2963 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2964 
2965hunk ./src/allmydata/test/test_backends.py 25
2966+testnodeid = 'testnodeidxxxxxxxxxx'
2967 tempdir = 'teststoredir'
2968 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2969 sharefname = os.path.join(sharedirname, '0')
2970hunk ./src/allmydata/test/test_backends.py 37
2971 
2972 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2973     def setUp(self):
2974-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2975+        self.s = StorageServer(testnodeid, backend=NullCore())
2976 
2977     @mock.patch('os.mkdir')
2978     @mock.patch('__builtin__.open')
2979hunk ./src/allmydata/test/test_backends.py 99
2980         mockmkdir.side_effect = call_mkdir
2981 
2982         # Now begin the test.
2983-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2984+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2985 
2986         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2987 
2988hunk ./src/allmydata/test/test_backends.py 119
2989 
2990         mockopen.side_effect = call_open
2991         testbackend = DASCore(tempdir, expiration_policy)
2992-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2993-   
2994+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2995+       
2996+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2997     @mock.patch('allmydata.util.fileutil.get_available_space')
2998     @mock.patch('time.time')
2999     @mock.patch('os.mkdir')
3000hunk ./src/allmydata/test/test_backends.py 129
3001     @mock.patch('os.listdir')
3002     @mock.patch('os.path.isdir')
3003     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3004-                             mockget_available_space):
3005+                             mockget_available_space, mockget_shares):
3006         """ Write a new share. """
3007 
3008         def call_listdir(dirname):
3009hunk ./src/allmydata/test/test_backends.py 139
3010         mocklistdir.side_effect = call_listdir
3011 
3012         def call_isdir(dirname):
3013+            #XXX Should there be any other tests here?
3014             self.failUnlessReallyEqual(dirname, sharedirname)
3015             return True
3016 
3017hunk ./src/allmydata/test/test_backends.py 159
3018 
3019         mockget_available_space.side_effect = call_get_available_space
3020 
3021+        mocktime.return_value = 0
3022+        class MockShare:
3023+            def __init__(self):
3024+                self.shnum = 1
3025+               
3026+            def add_or_renew_lease(elf, lease_info):
3027+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3028+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3029+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3030+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3031+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3032+               
3033+
3034+        share = MockShare()
3035+        def call_get_shares(storageindex):
3036+            return [share]
3037+
3038+        mockget_shares.side_effect = call_get_shares
3039+
3040         class MockFile:
3041             def __init__(self):
3042                 self.buffer = ''
3043hunk ./src/allmydata/test/test_backends.py 199
3044             def tell(self):
3045                 return self.pos
3046 
3047-        mocktime.return_value = 0
3048 
3049         sharefile = MockFile()
3050         def call_open(fname, mode):
3051}
3052[jacp 11
3053wilcoxjg@gmail.com**20110708213919
3054 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3055] {
3056hunk ./src/allmydata/storage/backends/das/core.py 144
3057         self.incomingdir = os.path.join(sharedir, 'incoming')
3058         si_dir = storage_index_to_dir(storageindex)
3059         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3060+        #XXX  self.fname and self.finalhome need to be resolved/merged.
3061         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3062         if create:
3063             # touch the file, so later callers will see that we're working on
3064hunk ./src/allmydata/storage/backends/das/core.py 208
3065         pass
3066         
3067     def stat(self):
3068-        return os.stat(self.finalhome)[stat.ST_SIZE]
3069+        return os.stat(self.finalhome).st_size
3070 
3071     def get_shnum(self):
3072         return self.shnum
3073hunk ./src/allmydata/storage/immutable.py 44
3074     def remote_close(self):
3075         precondition(not self.closed)
3076         start = time.time()
3077+
3078         self._sharefile.close()
3079hunk ./src/allmydata/storage/immutable.py 46
3080+        filelen = self._sharefile.stat()
3081         self._sharefile = None
3082hunk ./src/allmydata/storage/immutable.py 48
3083+
3084         self.closed = True
3085         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3086 
3087hunk ./src/allmydata/storage/immutable.py 52
3088-        filelen = self._sharefile.stat()
3089         self.ss.bucket_writer_closed(self, filelen)
3090         self.ss.add_latency("close", time.time() - start)
3091         self.ss.count("close")
3092hunk ./src/allmydata/storage/server.py 220
3093 
3094         for shnum in (sharenums - alreadygot):
3095             if (not limited) or (remaining_space >= max_space_per_bucket):
3096-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3097-                                                     max_space_per_bucket, lease_info, canary)
3098+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3099                 bucketwriters[shnum] = bw
3100                 self._active_writers[bw] = 1
3101                 if limited:
3102hunk ./src/allmydata/test/test_backends.py 20
3103 # The following share file contents was generated with
3104 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3105 # with share data == 'a'.
3106-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3107-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3108+renew_secret  = 'x'*32
3109+cancel_secret = 'y'*32
3110 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3111 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3112 
3113hunk ./src/allmydata/test/test_backends.py 27
3114 testnodeid = 'testnodeidxxxxxxxxxx'
3115 tempdir = 'teststoredir'
3116-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3117-sharefname = os.path.join(sharedirname, '0')
3118+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3119+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3120+shareincomingname = os.path.join(sharedirincomingname, '0')
3121+sharefname = os.path.join(sharedirfinalname, '0')
3122+
3123 expiration_policy = {'enabled' : False,
3124                      'mode' : 'age',
3125                      'override_lease_duration' : None,
3126hunk ./src/allmydata/test/test_backends.py 123
3127         mockopen.side_effect = call_open
3128         testbackend = DASCore(tempdir, expiration_policy)
3129         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3130-       
3131+
3132+    @mock.patch('allmydata.util.fileutil.rename')
3133+    @mock.patch('allmydata.util.fileutil.make_dirs')
3134+    @mock.patch('os.path.exists')
3135+    @mock.patch('os.stat')
3136     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3137     @mock.patch('allmydata.util.fileutil.get_available_space')
3138     @mock.patch('time.time')
3139hunk ./src/allmydata/test/test_backends.py 136
3140     @mock.patch('os.listdir')
3141     @mock.patch('os.path.isdir')
3142     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3143-                             mockget_available_space, mockget_shares):
3144+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3145+                             mockmake_dirs, mockrename):
3146         """ Write a new share. """
3147 
3148         def call_listdir(dirname):
3149hunk ./src/allmydata/test/test_backends.py 141
3150-            self.failUnlessReallyEqual(dirname, sharedirname)
3151+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3152             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3153 
3154         mocklistdir.side_effect = call_listdir
3155hunk ./src/allmydata/test/test_backends.py 148
3156 
3157         def call_isdir(dirname):
3158             #XXX Should there be any other tests here?
3159-            self.failUnlessReallyEqual(dirname, sharedirname)
3160+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3161             return True
3162 
3163         mockisdir.side_effect = call_isdir
3164hunk ./src/allmydata/test/test_backends.py 154
3165 
3166         def call_mkdir(dirname, permissions):
3167-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3168+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3169                 self.fail()
3170             else:
3171                 return True
3172hunk ./src/allmydata/test/test_backends.py 208
3173                 return self.pos
3174 
3175 
3176-        sharefile = MockFile()
3177+        fobj = MockFile()
3178         def call_open(fname, mode):
3179             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3180hunk ./src/allmydata/test/test_backends.py 211
3181-            return sharefile
3182+            return fobj
3183 
3184         mockopen.side_effect = call_open
3185 
3186hunk ./src/allmydata/test/test_backends.py 215
3187+        def call_make_dirs(dname):
3188+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3189+           
3190+        mockmake_dirs.side_effect = call_make_dirs
3191+
3192+        def call_rename(src, dst):
3193+           self.failUnlessReallyEqual(src, shareincomingname)
3194+           self.failUnlessReallyEqual(dst, sharefname)
3195+           
3196+        mockrename.side_effect = call_rename
3197+
3198+        def call_exists(fname):
3199+            self.failUnlessReallyEqual(fname, sharefname)
3200+
3201+        mockexists.side_effect = call_exists
3202+
3203         # Now begin the test.
3204         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3205         bs[0].remote_write(0, 'a')
3206hunk ./src/allmydata/test/test_backends.py 234
3207-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3208+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3209+        spaceint = self.s.allocated_size()
3210+        self.failUnlessReallyEqual(spaceint, 1)
3211+
3212+        bs[0].remote_close()
3213 
3214         # What happens when there's not enough space for the client's request?
3215hunk ./src/allmydata/test/test_backends.py 241
3216-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3217+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3218 
3219         # Now test the allocated_size method.
3220hunk ./src/allmydata/test/test_backends.py 244
3221-        spaceint = self.s.allocated_size()
3222-        self.failUnlessReallyEqual(spaceint, 1)
3223+        #self.failIf(mockexists.called, mockexists.call_args_list)
3224+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3225+        #self.failIf(mockrename.called, mockrename.call_args_list)
3226+        #self.failIf(mockstat.called, mockstat.call_args_list)
3227 
3228     @mock.patch('os.path.exists')
3229     @mock.patch('os.path.getsize')
3230}
3231[checkpoint12 testing correct behavior with regard to incoming and final
3232wilcoxjg@gmail.com**20110710191915
3233 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3234] {
3235hunk ./src/allmydata/storage/backends/das/core.py 74
3236         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3237         self.lease_checker.setServiceParent(self)
3238 
3239+    def get_incoming(self, storageindex):
3240+        return set((1,))
3241+
3242     def get_available_space(self):
3243         if self.readonly:
3244             return 0
3245hunk ./src/allmydata/storage/server.py 77
3246         """Return a dict, indexed by category, that contains a dict of
3247         latency numbers for each category. If there are sufficient samples
3248         for unambiguous interpretation, each dict will contain the
3249-        following keys: mean, 01_0_percentile, 10_0_percentile,
3250+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3251         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3252         99_0_percentile, 99_9_percentile.  If there are insufficient
3253         samples for a given percentile to be interpreted unambiguously
3254hunk ./src/allmydata/storage/server.py 120
3255 
3256     def get_stats(self):
3257         # remember: RIStatsProvider requires that our return dict
3258-        # contains numeric values.
3259+        # contains numeric, or None values.
3260         stats = { 'storage_server.allocated': self.allocated_size(), }
3261         stats['storage_server.reserved_space'] = self.reserved_space
3262         for category,ld in self.get_latencies().items():
3263hunk ./src/allmydata/storage/server.py 185
3264         start = time.time()
3265         self.count("allocate")
3266         alreadygot = set()
3267+        incoming = set()
3268         bucketwriters = {} # k: shnum, v: BucketWriter
3269 
3270         si_s = si_b2a(storage_index)
3271hunk ./src/allmydata/storage/server.py 219
3272             alreadygot.add(share.shnum)
3273             share.add_or_renew_lease(lease_info)
3274 
3275-        for shnum in (sharenums - alreadygot):
3276+        # Fill 'incoming' with all shares that are incoming; use a set operation since there's no need to operate on individual pieces.
3277+        incoming = self.backend.get_incoming(storageindex)
3278+
3279+        for shnum in ((sharenums - alreadygot) - incoming):
3280             if (not limited) or (remaining_space >= max_space_per_bucket):
3281                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3282                 bucketwriters[shnum] = bw
3283hunk ./src/allmydata/storage/server.py 229
3284                 self._active_writers[bw] = 1
3285                 if limited:
3286                     remaining_space -= max_space_per_bucket
3287-
3288-        #XXX We SHOULD DOCUMENT LATER.
3289+            else:
3290+                # Bummer: not enough space to accept this share.
3291+                pass
3292 
3293         self.add_latency("allocate", time.time() - start)
3294         return alreadygot, bucketwriters
3295hunk ./src/allmydata/storage/server.py 323
3296         self.add_latency("get", time.time() - start)
3297         return bucketreaders
3298 
3299-    def get_leases(self, storage_index):
3300+    def remote_get_incoming(self, storageindex):
3301+        incoming_share_set = self.backend.get_incoming(storageindex)
3302+        return incoming_share_set
3303+
3304+    def get_leases(self, storageindex):
3305         """Provide an iterator that yields all of the leases attached to this
3306         bucket. Each lease is returned as a LeaseInfo instance.
3307 
3308hunk ./src/allmydata/storage/server.py 337
3309         # since all shares get the same lease data, we just grab the leases
3310         # from the first share
3311         try:
3312-            shnum, filename = self._get_shares(storage_index).next()
3313+            shnum, filename = self._get_shares(storageindex).next()
3314             sf = ShareFile(filename)
3315             return sf.get_leases()
3316         except StopIteration:
3317hunk ./src/allmydata/test/test_backends.py 182
3318 
3319         share = MockShare()
3320         def call_get_shares(storageindex):
3321-            return [share]
3322+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3323+            return []#share]
3324 
3325         mockget_shares.side_effect = call_get_shares
3326 
3327hunk ./src/allmydata/test/test_backends.py 222
3328         mockmake_dirs.side_effect = call_make_dirs
3329 
3330         def call_rename(src, dst):
3331-           self.failUnlessReallyEqual(src, shareincomingname)
3332-           self.failUnlessReallyEqual(dst, sharefname)
3333+            self.failUnlessReallyEqual(src, shareincomingname)
3334+            self.failUnlessReallyEqual(dst, sharefname)
3335             
3336         mockrename.side_effect = call_rename
3337 
3338hunk ./src/allmydata/test/test_backends.py 233
3339         mockexists.side_effect = call_exists
3340 
3341         # Now begin the test.
3342+
3343+        # XXX (0) ???  Fail unless something is not properly set-up?
3344         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3345hunk ./src/allmydata/test/test_backends.py 236
3346+
3347+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3348+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3349+
3350+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3351+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3352+        # with the same si, until BucketWriter.remote_close() has been called.
3353+        # self.failIf(bsa)
3354+
3355+        # XXX (3) Inspect final and fail unless there's nothing there.
3356         bs[0].remote_write(0, 'a')
3357hunk ./src/allmydata/test/test_backends.py 247
3358+        # XXX (4a) Inspect final and fail unless share 0 is there.
3359+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3360         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3361         spaceint = self.s.allocated_size()
3362         self.failUnlessReallyEqual(spaceint, 1)
3363hunk ./src/allmydata/test/test_backends.py 253
3364 
3365+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3366         bs[0].remote_close()
3367 
3368         # What happens when there's not enough space for the client's request?
3369hunk ./src/allmydata/test/test_backends.py 260
3370         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3371 
3372         # Now test the allocated_size method.
3373-        #self.failIf(mockexists.called, mockexists.call_args_list)
3374+        # self.failIf(mockexists.called, mockexists.call_args_list)
3375         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3376         #self.failIf(mockrename.called, mockrename.call_args_list)
3377         #self.failIf(mockstat.called, mockstat.call_args_list)
3378}
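The commented-out `mockexists`/`mockrename` checks above all rely on mock's `side_effect` hook: the tests install a callable that asserts on the arguments the code under test passed. A minimal standalone sketch of that pattern (the paths are illustrative stand-ins for `shareincomingname` and `sharefname`):

```python
try:
    from unittest import mock  # Python 3
except ImportError:
    import mock  # the standalone library these tests import

mockrename = mock.Mock()

def call_rename(src, dst):
    # side_effect runs on every call, so the checks here act as
    # assertions about how the code under test invoked rename()
    assert src == 'incoming/0'
    assert dst == 'final/0'

mockrename.side_effect = call_rename

# the code under test would make this call via @mock.patch
mockrename('incoming/0', 'final/0')
assert mockrename.call_count == 1
```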
3379[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3380wilcoxjg@gmail.com**20110710195139
3381 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3382] {
3383hunk ./src/allmydata/storage/server.py 220
3384             share.add_or_renew_lease(lease_info)
3385 
3386         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3387-        incoming = self.backend.get_incoming(storageindex)
3388+        incoming = self.backend.get_incoming(storage_index)
3389 
3390         for shnum in ((sharenums - alreadygot) - incoming):
3391             if (not limited) or (remaining_space >= max_space_per_bucket):
3392hunk ./src/allmydata/storage/server.py 323
3393         self.add_latency("get", time.time() - start)
3394         return bucketreaders
3395 
3396-    def remote_get_incoming(self, storageindex):
3397-        incoming_share_set = self.backend.get_incoming(storageindex)
3398+    def remote_get_incoming(self, storage_index):
3399+        incoming_share_set = self.backend.get_incoming(storage_index)
3400         return incoming_share_set
3401 
3402hunk ./src/allmydata/storage/server.py 327
3403-    def get_leases(self, storageindex):
3404+    def get_leases(self, storage_index):
3405         """Provide an iterator that yields all of the leases attached to this
3406         bucket. Each lease is returned as a LeaseInfo instance.
3407 
3408hunk ./src/allmydata/storage/server.py 337
3409         # since all shares get the same lease data, we just grab the leases
3410         # from the first share
3411         try:
3412-            shnum, filename = self._get_shares(storageindex).next()
3413+            shnum, filename = self._get_shares(storage_index).next()
3414             sf = ShareFile(filename)
3415             return sf.get_leases()
3416         except StopIteration:
3417replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3418}
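The final `replace` line above is darcs's token-replace primitive: it rewrites `storage_index` to `storageindex` only where it occurs as a whole token drawn from the `[A-Za-z_0-9]` character class, which is why longer identifiers survive the rename. A rough Python sketch of that behavior (the helper name is made up):

```python
import re

def token_replace(text, token_chars, old, new):
    # Only whole tokens match: `storage_index` inside a longer token
    # such as `storage_index_to_dir` is left untouched, because the
    # character after the match is still in the token class.
    pattern = re.compile(
        '(?<![%s])%s(?![%s])' % (token_chars, re.escape(old), token_chars))
    return pattern.sub(new, text)

line = "incoming = self.backend.get_incoming(storage_index)"
print(token_replace(line, "A-Za-z_0-9", "storage_index", "storageindex"))
# -> incoming = self.backend.get_incoming(storageindex)
```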
3419[adding comments to clarify what I'm about to do.
3420wilcoxjg@gmail.com**20110710220623
3421 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3422] {
3423hunk ./src/allmydata/storage/backends/das/core.py 8
3424 
3425 import os, re, weakref, struct, time
3426 
3427-from foolscap.api import Referenceable
3428+#from foolscap.api import Referenceable
3429 from twisted.application import service
3430 
3431 from zope.interface import implements
3432hunk ./src/allmydata/storage/backends/das/core.py 12
3433-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3434+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3435 from allmydata.util import fileutil, idlib, log, time_format
3436 import allmydata # for __full_version__
3437 
3438hunk ./src/allmydata/storage/server.py 219
3439             alreadygot.add(share.shnum)
3440             share.add_or_renew_lease(lease_info)
3441 
3442-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3443+        # fill incoming with all shares that are incoming; use a set operation,
3444+        # since there's no need to operate on individual pieces
3445         incoming = self.backend.get_incoming(storageindex)
3446 
3447         for shnum in ((sharenums - alreadygot) - incoming):
3448hunk ./src/allmydata/test/test_backends.py 245
3449         # with the same si, until BucketWriter.remote_close() has been called.
3450         # self.failIf(bsa)
3451 
3452-        # XXX (3) Inspect final and fail unless there's nothing there.
3453         bs[0].remote_write(0, 'a')
3454hunk ./src/allmydata/test/test_backends.py 246
3455-        # XXX (4a) Inspect final and fail unless share 0 is there.
3456-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3457         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3458         spaceint = self.s.allocated_size()
3459         self.failUnlessReallyEqual(spaceint, 1)
3460hunk ./src/allmydata/test/test_backends.py 250
3461 
3462-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3463+        # XXX (3) Inspect final and fail unless there's nothing there.
3464         bs[0].remote_close()
3465hunk ./src/allmydata/test/test_backends.py 252
3466+        # XXX (4a) Inspect final and fail unless share 0 is there.
3467+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3468 
3469         # What happens when there's not enough space for the client's request?
3470         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3471}
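The `for shnum in ((sharenums - alreadygot) - incoming)` loop above allocates only shares that are neither already on disk nor mid-upload; the set arithmetic can be seen in isolation (values are illustrative):

```python
# names mirror remote_allocate_buckets in storage/server.py
sharenums = set([0, 1, 2, 3])   # what the client asked for
alreadygot = set([1])           # complete shares the backend already holds
incoming = set([2])             # shares some other upload is still writing

# only shares that are neither finished nor in-flight get a BucketWriter
to_allocate = (sharenums - alreadygot) - incoming
print(sorted(to_allocate))  # -> [0, 3]
```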
3472[branching back, no longer attempting to mock inside TestServerFSBackend
3473wilcoxjg@gmail.com**20110711190849
3474 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3475] {
3476hunk ./src/allmydata/storage/backends/das/core.py 75
3477         self.lease_checker.setServiceParent(self)
3478 
3479     def get_incoming(self, storageindex):
3480-        return set((1,))
3481-
3482-    def get_available_space(self):
3483-        if self.readonly:
3484-            return 0
3485-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3486+        """Return the set of incoming shnums."""
3487+        return set(os.listdir(self.incomingdir))
3488 
3489     def get_shares(self, storage_index):
3490         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3491hunk ./src/allmydata/storage/backends/das/core.py 90
3492             # Commonly caused by there being no shares at all.
3493             pass
3494         
3495+    def get_available_space(self):
3496+        if self.readonly:
3497+            return 0
3498+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3499+
3500     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3501         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3502         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3503hunk ./src/allmydata/test/test_backends.py 27
3504 
3505 testnodeid = 'testnodeidxxxxxxxxxx'
3506 tempdir = 'teststoredir'
3507-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3508-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3509+basedir = os.path.join(tempdir, 'shares')
3510+baseincdir = os.path.join(basedir, 'incoming')
3511+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3512+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3513 shareincomingname = os.path.join(sharedirincomingname, '0')
3514 sharefname = os.path.join(sharedirfinalname, '0')
3515 
3516hunk ./src/allmydata/test/test_backends.py 142
3517                              mockmake_dirs, mockrename):
3518         """ Write a new share. """
3519 
3520-        def call_listdir(dirname):
3521-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3522-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3523-
3524-        mocklistdir.side_effect = call_listdir
3525-
3526-        def call_isdir(dirname):
3527-            #XXX Should there be any other tests here?
3528-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3529-            return True
3530-
3531-        mockisdir.side_effect = call_isdir
3532-
3533-        def call_mkdir(dirname, permissions):
3534-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3535-                self.Fail
3536-            else:
3537-                return True
3538-
3539-        mockmkdir.side_effect = call_mkdir
3540-
3541-        def call_get_available_space(storedir, reserved_space):
3542-            self.failUnlessReallyEqual(storedir, tempdir)
3543-            return 1
3544-
3545-        mockget_available_space.side_effect = call_get_available_space
3546-
3547-        mocktime.return_value = 0
3548         class MockShare:
3549             def __init__(self):
3550                 self.shnum = 1
3551hunk ./src/allmydata/test/test_backends.py 152
3552                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3553                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3554                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3555-               
3556 
3557         share = MockShare()
3558hunk ./src/allmydata/test/test_backends.py 154
3559-        def call_get_shares(storageindex):
3560-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3561-            return []#share]
3562-
3563-        mockget_shares.side_effect = call_get_shares
3564 
3565         class MockFile:
3566             def __init__(self):
3567hunk ./src/allmydata/test/test_backends.py 176
3568             def tell(self):
3569                 return self.pos
3570 
3571-
3572         fobj = MockFile()
3573hunk ./src/allmydata/test/test_backends.py 177
3574+
3575+        directories = {}
3576+        def call_listdir(dirname):
3577+            if dirname not in directories:
3578+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3579+            else:
3580+                return directories[dirname].get_contents()
3581+
3582+        mocklistdir.side_effect = call_listdir
3583+
3584+        class MockDir:
3585+            def __init__(self, dirname):
3586+                self.name = dirname
3587+                self.contents = []
3588+   
3589+            def get_contents(self):
3590+                return self.contents
3591+
3592+        def call_isdir(dirname):
3593+            #XXX Should there be any other tests here?
3594+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3595+            return True
3596+
3597+        mockisdir.side_effect = call_isdir
3598+
3599+        def call_mkdir(dirname, permissions):
3600+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3601+                self.Fail
3602+            if dirname in directories:
3603+                raise OSError(17, "File exists: '%s'" % dirname)
3604+                self.Fail
3605+            elif dirname not in directories:
3606+                directories[dirname] = MockDir(dirname)
3607+                return True
3608+
3609+        mockmkdir.side_effect = call_mkdir
3610+
3611+        def call_get_available_space(storedir, reserved_space):
3612+            self.failUnlessReallyEqual(storedir, tempdir)
3613+            return 1
3614+
3615+        mockget_available_space.side_effect = call_get_available_space
3616+
3617+        mocktime.return_value = 0
3618+        def call_get_shares(storageindex):
3619+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3620+            return []#share]
3621+
3622+        mockget_shares.side_effect = call_get_shares
3623+
3624         def call_open(fname, mode):
3625             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3626             return fobj
3627}
3628[checkpoint12 TestServerFSBackend no longer mocks filesystem
3629wilcoxjg@gmail.com**20110711193357
3630 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3631] {
3632hunk ./src/allmydata/storage/backends/das/core.py 23
3633      create_mutable_sharefile
3634 from allmydata.storage.immutable import BucketWriter, BucketReader
3635 from allmydata.storage.crawler import FSBucketCountingCrawler
3636+from allmydata.util.hashutil import constant_time_compare
3637 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3638 
3639 from zope.interface import implements
3640hunk ./src/allmydata/storage/backends/das/core.py 28
3641 
3642+# storage/
3643+# storage/shares/incoming
3644+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3645+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3646+# storage/shares/$START/$STORAGEINDEX
3647+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3648+
3649+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3650+# base-32 chars).
3651 # $SHARENUM matches this regex:
3652 NUM_RE=re.compile("^[0-9]+$")
3653 
3654hunk ./src/allmydata/test/test_backends.py 126
3655         testbackend = DASCore(tempdir, expiration_policy)
3656         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3657 
3658-    @mock.patch('allmydata.util.fileutil.rename')
3659-    @mock.patch('allmydata.util.fileutil.make_dirs')
3660-    @mock.patch('os.path.exists')
3661-    @mock.patch('os.stat')
3662-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3663-    @mock.patch('allmydata.util.fileutil.get_available_space')
3664     @mock.patch('time.time')
3665hunk ./src/allmydata/test/test_backends.py 127
3666-    @mock.patch('os.mkdir')
3667-    @mock.patch('__builtin__.open')
3668-    @mock.patch('os.listdir')
3669-    @mock.patch('os.path.isdir')
3670-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3671-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3672-                             mockmake_dirs, mockrename):
3673+    def test_write_share(self, mocktime):
3674         """ Write a new share. """
3675 
3676         class MockShare:
3677hunk ./src/allmydata/test/test_backends.py 143
3678 
3679         share = MockShare()
3680 
3681-        class MockFile:
3682-            def __init__(self):
3683-                self.buffer = ''
3684-                self.pos = 0
3685-            def write(self, instring):
3686-                begin = self.pos
3687-                padlen = begin - len(self.buffer)
3688-                if padlen > 0:
3689-                    self.buffer += '\x00' * padlen
3690-                end = self.pos + len(instring)
3691-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3692-                self.pos = end
3693-            def close(self):
3694-                pass
3695-            def seek(self, pos):
3696-                self.pos = pos
3697-            def read(self, numberbytes):
3698-                return self.buffer[self.pos:self.pos+numberbytes]
3699-            def tell(self):
3700-                return self.pos
3701-
3702-        fobj = MockFile()
3703-
3704-        directories = {}
3705-        def call_listdir(dirname):
3706-            if dirname not in directories:
3707-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3708-            else:
3709-                return directories[dirname].get_contents()
3710-
3711-        mocklistdir.side_effect = call_listdir
3712-
3713-        class MockDir:
3714-            def __init__(self, dirname):
3715-                self.name = dirname
3716-                self.contents = []
3717-   
3718-            def get_contents(self):
3719-                return self.contents
3720-
3721-        def call_isdir(dirname):
3722-            #XXX Should there be any other tests here?
3723-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3724-            return True
3725-
3726-        mockisdir.side_effect = call_isdir
3727-
3728-        def call_mkdir(dirname, permissions):
3729-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3730-                self.Fail
3731-            if dirname in directories:
3732-                raise OSError(17, "File exists: '%s'" % dirname)
3733-                self.Fail
3734-            elif dirname not in directories:
3735-                directories[dirname] = MockDir(dirname)
3736-                return True
3737-
3738-        mockmkdir.side_effect = call_mkdir
3739-
3740-        def call_get_available_space(storedir, reserved_space):
3741-            self.failUnlessReallyEqual(storedir, tempdir)
3742-            return 1
3743-
3744-        mockget_available_space.side_effect = call_get_available_space
3745-
3746-        mocktime.return_value = 0
3747-        def call_get_shares(storageindex):
3748-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3749-            return []#share]
3750-
3751-        mockget_shares.side_effect = call_get_shares
3752-
3753-        def call_open(fname, mode):
3754-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3755-            return fobj
3756-
3757-        mockopen.side_effect = call_open
3758-
3759-        def call_make_dirs(dname):
3760-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3761-           
3762-        mockmake_dirs.side_effect = call_make_dirs
3763-
3764-        def call_rename(src, dst):
3765-            self.failUnlessReallyEqual(src, shareincomingname)
3766-            self.failUnlessReallyEqual(dst, sharefname)
3767-           
3768-        mockrename.side_effect = call_rename
3769-
3770-        def call_exists(fname):
3771-            self.failUnlessReallyEqual(fname, sharefname)
3772-
3773-        mockexists.side_effect = call_exists
3774-
3775         # Now begin the test.
3776 
3777         # XXX (0) ???  Fail unless something is not properly set-up?
3778}
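The layout comment introduced above keys share directories on "$START", the first 2 base-32 characters (10 bits) of the storage index. A sketch of that mapping, assuming the lowercase unpadded RFC 3548 base-32 encoding that `si_b2a` uses:

```python
import base64, os

def si_b2a(si):
    # assumed encoding: RFC 3548 base-32, lowercased, padding stripped
    return base64.b32encode(si).decode('ascii').rstrip('=').lower()

def storage_index_to_dir(si):
    s = si_b2a(si)
    # the first 2 base-32 chars carry the first 10 bits of the index
    return os.path.join(s[:2], s)

print(storage_index_to_dir(b'teststorage_index'))
# on POSIX: or/orsxg5dtorxxeylhmvpws3temv4a -- the path seen in test_backends.py
```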
3779[JACP
3780wilcoxjg@gmail.com**20110711194407
3781 Ignore-this: b54745de777c4bb58d68d708f010bbb
3782] {
3783hunk ./src/allmydata/storage/backends/das/core.py 86
3784 
3785     def get_incoming(self, storageindex):
3786         """Return the set of incoming shnums."""
3787-        return set(os.listdir(self.incomingdir))
3788+        try:
3789+            incominglist = os.listdir(self.incomingdir)
3790+            print "incominglist: ", incominglist
3791+            return set(incominglist)
3792+        except OSError:
3793+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3794+            pass
3795 
3796     def get_shares(self, storage_index):
3797         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3798hunk ./src/allmydata/storage/server.py 17
3799 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3800      create_mutable_sharefile
3801 
3802-# storage/
3803-# storage/shares/incoming
3804-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3805-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3806-# storage/shares/$START/$STORAGEINDEX
3807-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3808-
3809-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3810-# base-32 chars).
3811-
3812-
3813 class StorageServer(service.MultiService, Referenceable):
3814     implements(RIStorageServer, IStatsProducer)
3815     name = 'storage'
3816}
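As written in the hunk above, the `except OSError: pass` makes `get_incoming` fall off the end and return `None` on a fresh server with no incoming directory, rather than a set; the next patch changes this to `return set()`. A minimal demonstration of the pitfall:

```python
import os

def get_incoming(incomingdir):
    # mirrors the hunk above: when listdir raises OSError the except
    # clause only passes, so the function implicitly returns None
    try:
        return set(os.listdir(incomingdir))
    except OSError:
        pass

print(get_incoming('no-such-incoming-dir'))  # -> None
```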
3817[testing get incoming
3818wilcoxjg@gmail.com**20110711210224
3819 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3820] {
3821hunk ./src/allmydata/storage/backends/das/core.py 87
3822     def get_incoming(self, storageindex):
3823         """Return the set of incoming shnums."""
3824         try:
3825-            incominglist = os.listdir(self.incomingdir)
3826+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3827+            incominglist = os.listdir(incomingsharesdir)
3828             print "incominglist: ", incominglist
3829             return set(incominglist)
3830         except OSError:
3831hunk ./src/allmydata/storage/backends/das/core.py 92
3832-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3833-            pass
3834-
3835+            # XXX I'd like to make this more specific; an OSError here usually means there are no shares at all.
3836+            return set()
3837+           
3838     def get_shares(self, storage_index):
3839         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3840         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3841hunk ./src/allmydata/test/test_backends.py 149
3842         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3843 
3844         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3845+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3846         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3847 
3848hunk ./src/allmydata/test/test_backends.py 152
3849-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3850         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3851         # with the same si, until BucketWriter.remote_close() has been called.
3852         # self.failIf(bsa)
3853}
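checkpoint12 above defines `NUM_RE` for valid share-file names, and the `remote_get_incoming` assertion compares against a set of ints, while `os.listdir` returns strings. A hedged sketch (the helper name is made up) of combining the two:

```python
import os, re, tempfile

NUM_RE = re.compile("^[0-9]+$")  # from das/core.py above

def incoming_shnums(incomingsharesdir):
    # list the per-storage-index incoming directory; an OSError means
    # there are no incoming shares for this index at all
    try:
        names = os.listdir(incomingsharesdir)
    except OSError:
        return set()
    # keep only entries that look like share numbers, as ints, so the
    # result compares equal to sets like set((0,))
    return set(int(n) for n in names if NUM_RE.match(n))

d = tempfile.mkdtemp()
for name in ('0', '2', 'not-a-share'):
    open(os.path.join(d, name), 'wb').close()
print(sorted(incoming_shnums(d)))  # -> [0, 2]
```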
3854[ImmutableShareFile does not know its StorageIndex
3855wilcoxjg@gmail.com**20110711211424
3856 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3857] {
3858hunk ./src/allmydata/storage/backends/das/core.py 112
3859             return 0
3860         return fileutil.get_available_space(self.storedir, self.reserved_space)
3861 
3862-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3863-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3864+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3865+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3866+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3867+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3868         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3869         return bw
3870 
3871hunk ./src/allmydata/storage/backends/das/core.py 155
3872     LEASE_SIZE = struct.calcsize(">L32s32sL")
3873     sharetype = "immutable"
3874 
3875-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3876+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3877         """ If max_size is not None then I won't allow more than
3878         max_size to be written to me. If create=True then max_size
3879         must not be None. """
3880}
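The `make_bucket_writer` hunk above passes the integer `shnum` straight into `os.path.join`, which only accepts string components; the very next patch wraps it in `str(shnum)`. The failure mode, sketched:

```python
import os

shnum = 0  # shnums are ints throughout the backend
try:
    os.path.join('shares', 'incoming', shnum)  # int component
except (TypeError, AttributeError):
    # Python 3 raises TypeError; Python 2's posixpath raised AttributeError
    print('os.path.join rejects the int shnum')

# what the follow-up patch does instead:
print(os.path.join('shares', 'incoming', str(shnum)))
```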
3881[get_incoming correctly reports the 0 share after it has arrived
3882wilcoxjg@gmail.com**20110712025157
3883 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3884] {
3885hunk ./src/allmydata/storage/backends/das/core.py 1
3886+import os, re, weakref, struct, time, stat
3887+
3888 from allmydata.interfaces import IStorageBackend
3889 from allmydata.storage.backends.base import Backend
3890 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3891hunk ./src/allmydata/storage/backends/das/core.py 8
3892 from allmydata.util.assertutil import precondition
3893 
3894-import os, re, weakref, struct, time
3895-
3896 #from foolscap.api import Referenceable
3897 from twisted.application import service
3898 
3899hunk ./src/allmydata/storage/backends/das/core.py 89
3900         try:
3901             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3902             incominglist = os.listdir(incomingsharesdir)
3903-            print "incominglist: ", incominglist
3904-            return set(incominglist)
3905+            incomingshnums = [int(x) for x in incominglist]
3906+            return set(incomingshnums)
3907         except OSError:
3908             # XXX I'd like to make this more specific; an OSError here usually means there are no shares at all.
3909             return set()
3910hunk ./src/allmydata/storage/backends/das/core.py 113
3911         return fileutil.get_available_space(self.storedir, self.reserved_space)
3912 
3913     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3914-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3915-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3916-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3917+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3918+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3919+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3920         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3921         return bw
3922 
3923hunk ./src/allmydata/storage/backends/das/core.py 160
3924         max_size to be written to me. If create=True then max_size
3925         must not be None. """
3926         precondition((max_size is not None) or (not create), max_size, create)
3927-        self.shnum = shnum
3928-        self.storage_index = storageindex
3929-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3930         self._max_size = max_size
3931hunk ./src/allmydata/storage/backends/das/core.py 161
3932-        self.incomingdir = os.path.join(sharedir, 'incoming')
3933-        si_dir = storage_index_to_dir(storageindex)
3934-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3935-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3936-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3937+        self.incominghome = incominghome
3938+        self.finalhome = finalhome
3939         if create:
3940             # touch the file, so later callers will see that we're working on
3941             # it. Also construct the metadata.
3942hunk ./src/allmydata/storage/backends/das/core.py 166
3943-            assert not os.path.exists(self.fname)
3944-            fileutil.make_dirs(os.path.dirname(self.fname))
3945-            f = open(self.fname, 'wb')
3946+            assert not os.path.exists(self.finalhome)
3947+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3948+            f = open(self.incominghome, 'wb')
3949             # The second field -- the four-byte share data length -- is no
3950             # longer used as of Tahoe v1.3.0, but we continue to write it in
3951             # there in case someone downgrades a storage server from >=
3952hunk ./src/allmydata/storage/backends/das/core.py 183
3953             self._lease_offset = max_size + 0x0c
3954             self._num_leases = 0
3955         else:
3956-            f = open(self.fname, 'rb')
3957-            filesize = os.path.getsize(self.fname)
3958+            f = open(self.finalhome, 'rb')
3959+            filesize = os.path.getsize(self.finalhome)
3960             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3961             f.close()
3962             if version != 1:
3963hunk ./src/allmydata/storage/backends/das/core.py 189
3964                 msg = "sharefile %s had version %d but we wanted 1" % \
3965-                      (self.fname, version)
3966+                      (self.finalhome, version)
3967                 raise UnknownImmutableContainerVersionError(msg)
3968             self._num_leases = num_leases
3969             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3970hunk ./src/allmydata/storage/backends/das/core.py 225
3971         pass
3972         
3973     def stat(self):
3974-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3975+        return os.stat(self.finalhome)[stat.ST_SIZE]
3976+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3977 
3978     def get_shnum(self):
3979         return self.shnum
3980hunk ./src/allmydata/storage/backends/das/core.py 232
3981 
3982     def unlink(self):
3983-        os.unlink(self.fname)
3984+        os.unlink(self.finalhome)
3985 
3986     def read_share_data(self, offset, length):
3987         precondition(offset >= 0)
3988hunk ./src/allmydata/storage/backends/das/core.py 239
3989         # Reads beyond the end of the data are truncated. Reads that start
3990         # beyond the end of the data return an empty string.
3991         seekpos = self._data_offset+offset
3992-        fsize = os.path.getsize(self.fname)
3993+        fsize = os.path.getsize(self.finalhome)
3994         actuallength = max(0, min(length, fsize-seekpos))
3995         if actuallength == 0:
3996             return ""
3997hunk ./src/allmydata/storage/backends/das/core.py 243
3998-        f = open(self.fname, 'rb')
3999+        f = open(self.finalhome, 'rb')
4000         f.seek(seekpos)
4001         return f.read(actuallength)
4002 
4003hunk ./src/allmydata/storage/backends/das/core.py 252
4004         precondition(offset >= 0, offset)
4005         if self._max_size is not None and offset+length > self._max_size:
4006             raise DataTooLargeError(self._max_size, offset, length)
4007-        f = open(self.fname, 'rb+')
4008+        f = open(self.incominghome, 'rb+')
4009         real_offset = self._data_offset+offset
4010         f.seek(real_offset)
4011         assert f.tell() == real_offset
4012hunk ./src/allmydata/storage/backends/das/core.py 279
4013 
4014     def get_leases(self):
4015         """Yields a LeaseInfo instance for all leases."""
4016-        f = open(self.fname, 'rb')
4017+        f = open(self.finalhome, 'rb')
4018         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4019         f.seek(self._lease_offset)
4020         for i in range(num_leases):
4021hunk ./src/allmydata/storage/backends/das/core.py 288
4022                 yield LeaseInfo().from_immutable_data(data)
4023 
4024     def add_lease(self, lease_info):
4025-        f = open(self.fname, 'rb+')
4026+        f = open(self.incominghome, 'rb+')
4027         num_leases = self._read_num_leases(f)
4028         self._write_lease_record(f, num_leases, lease_info)
4029         self._write_num_leases(f, num_leases+1)
4030hunk ./src/allmydata/storage/backends/das/core.py 301
4031                 if new_expire_time > lease.expiration_time:
4032                     # yes
4033                     lease.expiration_time = new_expire_time
4034-                    f = open(self.fname, 'rb+')
4035+                    f = open(self.finalhome, 'rb+')
4036                     self._write_lease_record(f, i, lease)
4037                     f.close()
4038                 return
4039hunk ./src/allmydata/storage/backends/das/core.py 336
4040             # the same order as they were added, so that if we crash while
4041             # doing this, we won't lose any non-cancelled leases.
4042             leases = [l for l in leases if l] # remove the cancelled leases
4043-            f = open(self.fname, 'rb+')
4044+            f = open(self.finalhome, 'rb+')
4045             for i,lease in enumerate(leases):
4046                 self._write_lease_record(f, i, lease)
4047             self._write_num_leases(f, len(leases))
4048hunk ./src/allmydata/storage/backends/das/core.py 344
4049             f.close()
4050         space_freed = self.LEASE_SIZE * num_leases_removed
4051         if not len(leases):
4052-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4053+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4054             self.unlink()
4055         return space_freed
4056hunk ./src/allmydata/test/test_backends.py 129
4057     @mock.patch('time.time')
4058     def test_write_share(self, mocktime):
4059         """ Write a new share. """
4060-
4061-        class MockShare:
4062-            def __init__(self):
4063-                self.shnum = 1
4064-               
4065-            def add_or_renew_lease(elf, lease_info):
4066-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4067-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4068-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4069-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4070-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4071-
4072-        share = MockShare()
4073-
4074         # Now begin the test.
4075 
4076         # XXX (0) ???  Fail unless something is not properly set-up?
4077hunk ./src/allmydata/test/test_backends.py 143
4078         # self.failIf(bsa)
4079 
4080         bs[0].remote_write(0, 'a')
4081-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4082+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4083         spaceint = self.s.allocated_size()
4084         self.failUnlessReallyEqual(spaceint, 1)
4085 
4086hunk ./src/allmydata/test/test_backends.py 161
4087         #self.failIf(mockrename.called, mockrename.call_args_list)
4088         #self.failIf(mockstat.called, mockstat.call_args_list)
4089 
4090+    def test_handle_incoming(self):
4091+        incomingset = self.s.backend.get_incoming('teststorage_index')
4092+        self.failUnlessReallyEqual(incomingset, set())
4093+
4094+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4095+       
4096+        incomingset = self.s.backend.get_incoming('teststorage_index')
4097+        self.failUnlessReallyEqual(incomingset, set((0,)))
4098+
4099+        bs[0].remote_close()
4100+        self.failUnlessReallyEqual(incomingset, set())
4101+
4102     @mock.patch('os.path.exists')
4103     @mock.patch('os.path.getsize')
4104     @mock.patch('__builtin__.open')
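The incoming-vs-final behavior that test_handle_incoming checks can be sketched as a plain filesystem lifecycle: a share is written under an incoming directory and only moved to its final home on close. Paths and names below are illustrative, not the actual Tahoe-LAFS share layout:

```python
import os
import tempfile

base = tempfile.mkdtemp()
incoming = os.path.join(base, "incoming", "0")
final = os.path.join(base, "final", "0")
os.makedirs(os.path.dirname(incoming))
os.makedirs(os.path.dirname(final))

# remote_write: share data accumulates in the incoming home.
with open(incoming, "wb") as f:
    f.write(b"a")

# remote_close: the finished share moves atomically to its final home,
# so the storage index no longer appears in the incoming set.
os.rename(incoming, final)
```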
4105hunk ./src/allmydata/test/test_backends.py 223
4106         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4107 
4108 
4109-
4110 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4111     @mock.patch('time.time')
4112     @mock.patch('os.mkdir')
4113hunk ./src/allmydata/test/test_backends.py 271
4114         DASCore('teststoredir', expiration_policy)
4115 
4116         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4117+
4118}
4119[jacp14
4120wilcoxjg@gmail.com**20110712061211
4121 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4122] {
4123hunk ./src/allmydata/storage/backends/das/core.py 95
4124             # XXX I'd like to make this more specific. If there are no shares at all.
4125             return set()
4126             
4127-    def get_shares(self, storage_index):
4128+    def get_shares(self, storageindex):
4129         """Yield the ImmutableShare objects that correspond to the passed storageindex."""
4130hunk ./src/allmydata/storage/backends/das/core.py 97
4131-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4132+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4133         try:
4134             for f in os.listdir(finalstoragedir):
4135                 if NUM_RE.match(f):
4136hunk ./src/allmydata/storage/backends/das/core.py 102
4137                     filename = os.path.join(finalstoragedir, f)
4138-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4139+                    yield ImmutableShare(filename, storageindex, f)
4140         except OSError:
4141             # Commonly caused by there being no shares at all.
4142             pass
4143hunk ./src/allmydata/storage/backends/das/core.py 115
4144     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4145         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4146         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4147-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4148+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4149         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4150         return bw
4151 
4152hunk ./src/allmydata/storage/backends/das/core.py 155
4153     LEASE_SIZE = struct.calcsize(">L32s32sL")
4154     sharetype = "immutable"
4155 
4156-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4157+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4158         """ If max_size is not None then I won't allow more than
4159         max_size to be written to me. If create=True then max_size
4160         must not be None. """
4161hunk ./src/allmydata/storage/backends/das/core.py 160
4162         precondition((max_size is not None) or (not create), max_size, create)
4163+        self.storageindex = storageindex
4164         self._max_size = max_size
4165         self.incominghome = incominghome
4166         self.finalhome = finalhome
4167hunk ./src/allmydata/storage/backends/das/core.py 164
4168+        self.shnum = shnum
4169         if create:
4170             # touch the file, so later callers will see that we're working on
4171             # it. Also construct the metadata.
4172hunk ./src/allmydata/storage/backends/das/core.py 212
4173             # their children to know when they should do the rmdir. This
4174             # approach is simpler, but relies on os.rmdir refusing to delete
4175             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4178             os.rmdir(os.path.dirname(self.incominghome))
4179             # we also delete the grandparent (prefix) directory, .../ab ,
4180             # again to avoid leaving directories lying around. This might
4181hunk ./src/allmydata/storage/immutable.py 93
4182     def __init__(self, ss, share):
4183         self.ss = ss
4184         self._share_file = share
4185-        self.storage_index = share.storage_index
4186+        self.storageindex = share.storageindex
4187         self.shnum = share.shnum
4188 
4189     def __repr__(self):
4190hunk ./src/allmydata/storage/immutable.py 98
4191         return "<%s %s %s>" % (self.__class__.__name__,
4192-                               base32.b2a_l(self.storage_index[:8], 60),
4193+                               base32.b2a_l(self.storageindex[:8], 60),
4194                                self.shnum)
4195 
4196     def remote_read(self, offset, length):
4197hunk ./src/allmydata/storage/immutable.py 110
4198 
4199     def remote_advise_corrupt_share(self, reason):
4200         return self.ss.remote_advise_corrupt_share("immutable",
4201-                                                   self.storage_index,
4202+                                                   self.storageindex,
4203                                                    self.shnum,
4204                                                    reason)
4205hunk ./src/allmydata/test/test_backends.py 20
4206 # The following share file contents were generated with
4207 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4208 # with share data == 'a'.
4209-renew_secret  = 'x'*32
4210-cancel_secret = 'y'*32
4211-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4212-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4213+shareversionnumber = '\x00\x00\x00\x01'
4214+sharedatalength = '\x00\x00\x00\x01'
4215+numberofleases = '\x00\x00\x00\x01'
4216+shareinputdata = 'a'
4217+ownernumber = '\x00\x00\x00\x00'
4218+renewsecret  = 'x'*32
4219+cancelsecret = 'y'*32
4220+expirationtime = '\x00(\xde\x80'
4221+nextlease = ''
4222+containerdata = shareversionnumber + sharedatalength + numberofleases
4223+client_data = shareinputdata + ownernumber + renewsecret + \
4224+    cancelsecret + expirationtime + nextlease
4225+share_data = containerdata + client_data
4226+
4227 
4228 testnodeid = 'testnodeidxxxxxxxxxx'
4229 tempdir = 'teststoredir'
4230hunk ./src/allmydata/test/test_backends.py 52
4231 
4232 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4233     def setUp(self):
4234-        self.s = StorageServer(testnodeid, backend=NullCore())
4235+        self.ss = StorageServer(testnodeid, backend=NullCore())
4236 
4237     @mock.patch('os.mkdir')
4238     @mock.patch('__builtin__.open')
4239hunk ./src/allmydata/test/test_backends.py 62
4240         """ Write a new share. """
4241 
4242         # Now begin the test.
4243-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4244+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4245         bs[0].remote_write(0, 'a')
4246         self.failIf(mockisdir.called)
4247         self.failIf(mocklistdir.called)
4248hunk ./src/allmydata/test/test_backends.py 133
4249                 _assert(False, "The tester code doesn't recognize this case.") 
4250 
4251         mockopen.side_effect = call_open
4252-        testbackend = DASCore(tempdir, expiration_policy)
4253-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4254+        self.backend = DASCore(tempdir, expiration_policy)
4255+        self.ss = StorageServer(testnodeid, self.backend)
4256+        self.ssinf = StorageServer(testnodeid, self.backend)
4257 
4258     @mock.patch('time.time')
4259     def test_write_share(self, mocktime):
4260hunk ./src/allmydata/test/test_backends.py 142
4261         """ Write a new share. """
4262         # Now begin the test.
4263 
4264-        # XXX (0) ???  Fail unless something is not properly set-up?
4265-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4266+        mocktime.return_value = 0
4267+        # Inspect incoming and fail unless it's empty.
4268+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4269+        self.failUnlessReallyEqual(incomingset, set())
4270+       
4271+        # Among other things, populate incoming with the sharenum: 0.
4272+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4273 
4274hunk ./src/allmydata/test/test_backends.py 150
4275-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4276-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4277-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4278+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4279+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4280+       
4281+        # Attempt to create a second share writer with the same share.
4282+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4283 
4284hunk ./src/allmydata/test/test_backends.py 156
4285-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4286+        # Show that no sharewriter results from a remote_allocate_buckets
4287         # with the same si, until BucketWriter.remote_close() has been called.
4288hunk ./src/allmydata/test/test_backends.py 158
4289-        # self.failIf(bsa)
4290+        self.failIf(bsa)
4291 
4292hunk ./src/allmydata/test/test_backends.py 160
4293+        # Write 'a' to shnum 0. Only tested together with close and read.
4294         bs[0].remote_write(0, 'a')
4295hunk ./src/allmydata/test/test_backends.py 162
4296-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4297-        spaceint = self.s.allocated_size()
4298+
4299+        # Test allocated size.
4300+        spaceint = self.ss.allocated_size()
4301         self.failUnlessReallyEqual(spaceint, 1)
4302 
4303         # XXX (3) Inspect final and fail unless there's nothing there.
4304hunk ./src/allmydata/test/test_backends.py 168
4305+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4306         bs[0].remote_close()
4307         # XXX (4a) Inspect final and fail unless share 0 is there.
4308hunk ./src/allmydata/test/test_backends.py 171
4309+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4310+        #contents = sharesinfinal[0].read_share_data(0,999)
4311+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4312         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4313 
4314         # What happens when there's not enough space for the client's request?
4315hunk ./src/allmydata/test/test_backends.py 177
4316-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4317+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4318 
4319         # Now test the allocated_size method.
4320         # self.failIf(mockexists.called, mockexists.call_args_list)
4321hunk ./src/allmydata/test/test_backends.py 185
4322         #self.failIf(mockrename.called, mockrename.call_args_list)
4323         #self.failIf(mockstat.called, mockstat.call_args_list)
4324 
4325-    def test_handle_incoming(self):
4326-        incomingset = self.s.backend.get_incoming('teststorage_index')
4327-        self.failUnlessReallyEqual(incomingset, set())
4328-
4329-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4330-       
4331-        incomingset = self.s.backend.get_incoming('teststorage_index')
4332-        self.failUnlessReallyEqual(incomingset, set((0,)))
4333-
4334-        bs[0].remote_close()
4335-        self.failUnlessReallyEqual(incomingset, set())
4336-
4337     @mock.patch('os.path.exists')
4338     @mock.patch('os.path.getsize')
4339     @mock.patch('__builtin__.open')
4340hunk ./src/allmydata/test/test_backends.py 208
4341             self.failUnless('r' in mode, mode)
4342             self.failUnless('b' in mode, mode)
4343 
4344-            return StringIO(share_file_data)
4345+            return StringIO(share_data)
4346         mockopen.side_effect = call_open
4347 
4348hunk ./src/allmydata/test/test_backends.py 211
4349-        datalen = len(share_file_data)
4350+        datalen = len(share_data)
4351         def call_getsize(fname):
4352             self.failUnlessReallyEqual(fname, sharefname)
4353             return datalen
4354hunk ./src/allmydata/test/test_backends.py 223
4355         mockexists.side_effect = call_exists
4356 
4357         # Now begin the test.
4358-        bs = self.s.remote_get_buckets('teststorage_index')
4359+        bs = self.ss.remote_get_buckets('teststorage_index')
4360 
4361         self.failUnlessEqual(len(bs), 1)
4362hunk ./src/allmydata/test/test_backends.py 226
4363-        b = bs[0]
4364+        b = bs['0']
4365         # These should match by definition; the next two cases cover reads whose behavior is less obvious.
4366hunk ./src/allmydata/test/test_backends.py 228
4367-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4368+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4369         # If you try to read past the end you get as much data as is there.
4370hunk ./src/allmydata/test/test_backends.py 230
4371-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4372+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4373         # If you start reading past the end of the file you get the empty string.
4374         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4375 
4376}
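The byte-string constants jacp14 introduces in test_backends.py (shareversionnumber, renewsecret, expirationtime, ...) spell out the v1 immutable share container one field at a time. As a sketch, the same bytes can be rebuilt with the struct formats the backend code itself uses (">LLL" for the container header, ">L32s32sL" for a lease record); the values here are the test's fixtures, not production data:

```python
import struct

# Container header: version, share-data length, number of leases (12 bytes).
header = struct.pack(">LLL", 1, 1, 1)

# One lease record (72 bytes == ImmutableShare.LEASE_SIZE):
# owner number, renew secret, cancel secret, expiration time (31 days).
lease = struct.pack(">L32s32sL", 0, b"x" * 32, b"y" * 32, 31 * 24 * 60 * 60)

# Header + one byte of share data ('a') + the lease record.
share = header + b"a" + lease
```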
4377[jacp14 or so
4378wilcoxjg@gmail.com**20110713060346
4379 Ignore-this: 7026810f60879d65b525d450e43ff87a
4380] {
4381hunk ./src/allmydata/storage/backends/das/core.py 102
4382             for f in os.listdir(finalstoragedir):
4383                 if NUM_RE.match(f):
4384                     filename = os.path.join(finalstoragedir, f)
4385-                    yield ImmutableShare(filename, storageindex, f)
4386+                    yield ImmutableShare(filename, storageindex, int(f))
4387         except OSError:
4388             # Commonly caused by there being no shares at all.
4389             pass
4390hunk ./src/allmydata/storage/backends/null/core.py 25
4391     def set_storage_server(self, ss):
4392         self.ss = ss
4393 
4394+    def get_incoming(self, storageindex):
4395+        return set()
4396+
4397 class ImmutableShare:
4398     sharetype = "immutable"
4399 
4400hunk ./src/allmydata/storage/immutable.py 19
4401 
4402     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4403         self.ss = ss
4404-        self._max_size = max_size # don't allow the client to write more than this
4405+        self._max_size = max_size # don't allow the client to write more than this
4406+
4407         self._canary = canary
4408         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4409         self.closed = False
4410hunk ./src/allmydata/test/test_backends.py 135
4411         mockopen.side_effect = call_open
4412         self.backend = DASCore(tempdir, expiration_policy)
4413         self.ss = StorageServer(testnodeid, self.backend)
4414-        self.ssinf = StorageServer(testnodeid, self.backend)
4415+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4416+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4417 
4418     @mock.patch('time.time')
4419     def test_write_share(self, mocktime):
4420hunk ./src/allmydata/test/test_backends.py 161
4421         # with the same si, until BucketWriter.remote_close() has been called.
4422         self.failIf(bsa)
4423 
4424-        # Write 'a' to shnum 0. Only tested together with close and read.
4425-        bs[0].remote_write(0, 'a')
4426-
4427         # Test allocated size.
4428         spaceint = self.ss.allocated_size()
4429         self.failUnlessReallyEqual(spaceint, 1)
4430hunk ./src/allmydata/test/test_backends.py 165
4431 
4432-        # XXX (3) Inspect final and fail unless there's nothing there.
4433+        # Write 'a' to shnum 0. Only tested together with close and read.
4434+        bs[0].remote_write(0, 'a')
4435+       
4436+        # Preclose: Inspect final, failUnless nothing there.
4437         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4438         bs[0].remote_close()
4439hunk ./src/allmydata/test/test_backends.py 171
4440-        # XXX (4a) Inspect final and fail unless share 0 is there.
4441-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4442-        #contents = sharesinfinal[0].read_share_data(0,999)
4443-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4444-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4445 
4446hunk ./src/allmydata/test/test_backends.py 172
4447-        # What happens when there's not enough space for the client's request?
4448-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4449+        # Postclose: (Omnibus) failUnless written data is in final.
4450+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4451+        contents = sharesinfinal[0].read_share_data(0,73)
4452+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4453 
4454hunk ./src/allmydata/test/test_backends.py 177
4455-        # Now test the allocated_size method.
4456-        # self.failIf(mockexists.called, mockexists.call_args_list)
4457-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4458-        #self.failIf(mockrename.called, mockrename.call_args_list)
4459-        #self.failIf(mockstat.called, mockstat.call_args_list)
4460+        # Cover interior of for share in get_shares loop.
4461+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4462+       
4463+    @mock.patch('time.time')
4464+    @mock.patch('allmydata.util.fileutil.get_available_space')
4465+    def test_out_of_space(self, mockget_available_space, mocktime):
4466+        mocktime.return_value = 0
4467+       
4468+        def call_get_available_space(dir, reserve):
4469+            return 0
4470+
4471+        mockget_available_space.side_effect = call_get_available_space
4472+       
4473+       
4474+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4475 
4476     @mock.patch('os.path.exists')
4477     @mock.patch('os.path.getsize')
4478hunk ./src/allmydata/test/test_backends.py 234
4479         bs = self.ss.remote_get_buckets('teststorage_index')
4480 
4481         self.failUnlessEqual(len(bs), 1)
4482-        b = bs['0']
4483+        b = bs[0]
4484         # These should match by definition; the next two cases cover reads whose behavior is less obvious.
4485         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4486         # If you try to read past the end you get as much data as is there.
4487}
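The tests above lean on one technique throughout: mock.patch('__builtin__.open') with a side_effect so the server code never touches a real filesystem. A minimal self-contained sketch of the same pattern (written against Python 3's unittest.mock and builtins.open, where the patch targets __builtin__.open under Python 2; file name and contents are illustrative):

```python
from io import StringIO
from unittest import mock

def read_state(fname):
    # Code under test: opens a state file and returns its contents.
    f = open(fname)
    return f.read()

def call_open(fname, mode="r"):
    # Serve a fake file for the one expected path; fail loudly otherwise.
    if fname == "lease_checker.history":
        return StringIO("history-bytes")
    raise IOError(2, "No such file or directory: %r" % fname)

# Patch the builtin so read_state never reaches the real filesystem.
with mock.patch("builtins.open", side_effect=call_open):
    data = read_state("lease_checker.history")
```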
4488[temporary work-in-progress patch to be unrecorded
4489zooko@zooko.com**20110714003008
4490 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4491 tidy up a few tests, work done in pair-programming with Zancas
4492] {
4493hunk ./src/allmydata/storage/backends/das/core.py 65
4494         self._clean_incomplete()
4495 
4496     def _clean_incomplete(self):
4497-        fileutil.rm_dir(self.incomingdir)
4498+        fileutil.rmtree(self.incomingdir)
4499         fileutil.make_dirs(self.incomingdir)
4500 
4501     def _setup_corruption_advisory(self):
4502hunk ./src/allmydata/storage/immutable.py 1
4503-import os, stat, struct, time
4504+import os, time
4505 
4506 from foolscap.api import Referenceable
4507 
4508hunk ./src/allmydata/storage/server.py 1
4509-import os, re, weakref, struct, time
4510+import os, weakref, struct, time
4511 
4512 from foolscap.api import Referenceable
4513 from twisted.application import service
4514hunk ./src/allmydata/storage/server.py 7
4515 
4516 from zope.interface import implements
4517-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4518+from allmydata.interfaces import RIStorageServer, IStatsProducer
4519 from allmydata.util import fileutil, idlib, log, time_format
4520 import allmydata # for __full_version__
4521 
4522hunk ./src/allmydata/storage/server.py 313
4523         self.add_latency("get", time.time() - start)
4524         return bucketreaders
4525 
4526-    def remote_get_incoming(self, storageindex):
4527-        incoming_share_set = self.backend.get_incoming(storageindex)
4528-        return incoming_share_set
4529-
4530     def get_leases(self, storageindex):
4531         """Provide an iterator that yields all of the leases attached to this
4532         bucket. Each lease is returned as a LeaseInfo instance.
4533hunk ./src/allmydata/test/test_backends.py 3
4534 from twisted.trial import unittest
4535 
4536+from twisted.python.filepath import FilePath
4537+
4538 from StringIO import StringIO
4539 
4540 from allmydata.test.common_util import ReallyEqualMixin
4541hunk ./src/allmydata/test/test_backends.py 38
4542 
4543 
4544 testnodeid = 'testnodeidxxxxxxxxxx'
4545-tempdir = 'teststoredir'
4546-basedir = os.path.join(tempdir, 'shares')
4547+storedir = 'teststoredir'
4548+storedirfp = FilePath(storedir)
4549+basedir = os.path.join(storedir, 'shares')
4550 baseincdir = os.path.join(basedir, 'incoming')
4551 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4552 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4553hunk ./src/allmydata/test/test_backends.py 53
4554                      'cutoff_date' : None,
4555                      'sharetypes' : None}
4556 
4557-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4558+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4559+    """ NullBackend is just for testing and executable documentation, so
4560+    this test is actually a test of StorageServer in which we're using
4561+    NullBackend as helper code for the test, rather than a test of
4562+    NullBackend. """
4563     def setUp(self):
4564         self.ss = StorageServer(testnodeid, backend=NullCore())
4565 
4566hunk ./src/allmydata/test/test_backends.py 62
4567     @mock.patch('os.mkdir')
4569     @mock.patch('__builtin__.open')
4570     @mock.patch('os.listdir')
4571     @mock.patch('os.path.isdir')
4572hunk ./src/allmydata/test/test_backends.py 69
4573     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4574         """ Write a new share. """
4575 
4576-        # Now begin the test.
4577         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4578         bs[0].remote_write(0, 'a')
4579         self.failIf(mockisdir.called)
4580hunk ./src/allmydata/test/test_backends.py 83
4581     @mock.patch('os.listdir')
4582     @mock.patch('os.path.isdir')
4583     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4584-        """ This tests whether a server instance can be constructed
4585-        with a filesystem backend. To pass the test, it has to use the
4586-        filesystem in only the prescribed ways. """
4587+        """ This tests whether a server instance can be constructed with a
4588+        filesystem backend. To pass the test, it mustn't use the filesystem
4589+        outside of its configured storedir. """
4590 
4591         def call_open(fname, mode):
4592hunk ./src/allmydata/test/test_backends.py 88
4593-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4594-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4595-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4596-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4597-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4598+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4599+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4600+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4601+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4602+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4603                 return StringIO()
4604             else:
4605hunk ./src/allmydata/test/test_backends.py 95
4606-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4607+                fnamefp = FilePath(fname)
4608+                self.failUnless(storedirfp in fnamefp.parents(),
4609+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4610         mockopen.side_effect = call_open
4611 
4612         def call_isdir(fname):
4613hunk ./src/allmydata/test/test_backends.py 101
4614-            if fname == os.path.join(tempdir,'shares'):
4615+            if fname == os.path.join(storedir, 'shares'):
4616                 return True
4617hunk ./src/allmydata/test/test_backends.py 103
4618-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4619+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4620                 return True
4621             else:
4622                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4623hunk ./src/allmydata/test/test_backends.py 109
4624         mockisdir.side_effect = call_isdir
4625 
4626+        mocklistdir.return_value = []
4627+
4628         def call_mkdir(fname, mode):
4629hunk ./src/allmydata/test/test_backends.py 112
4630-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4631             self.failUnlessEqual(0777, mode)
4632hunk ./src/allmydata/test/test_backends.py 113
4633-            if fname == tempdir:
4634-                return None
4635-            elif fname == os.path.join(tempdir,'shares'):
4636-                return None
4637-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4638-                return None
4639-            else:
4640-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4641+            self.failUnlessIn(fname,
4642+                              [storedir,
4643+                               os.path.join(storedir, 'shares'),
4644+                               os.path.join(storedir, 'shares', 'incoming')],
4645+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4646         mockmkdir.side_effect = call_mkdir
4647 
4648         # Now begin the test.
4649hunk ./src/allmydata/test/test_backends.py 121
4650-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4651+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4652 
4653         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4654 
4655hunk ./src/allmydata/test/test_backends.py 126
4656 
4657-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4658+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4659+    """ This tests both the StorageServer and the FS backend. """
4660     @mock.patch('__builtin__.open')
4661     def setUp(self, mockopen):
4662         def call_open(fname, mode):
4663hunk ./src/allmydata/test/test_backends.py 131
4664-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4665-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4666-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4667-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4668-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4669+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4670+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4671+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4672+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4673+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4674                 return StringIO()
4675             else:
4676                 _assert(False, "The tester code doesn't recognize this case.") 
4677hunk ./src/allmydata/test/test_backends.py 141
4678 
4679         mockopen.side_effect = call_open
4680-        self.backend = DASCore(tempdir, expiration_policy)
4681+        self.backend = DASCore(storedir, expiration_policy)
4682         self.ss = StorageServer(testnodeid, self.backend)
4683hunk ./src/allmydata/test/test_backends.py 143
4684-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4685+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4686         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4687 
4688     @mock.patch('time.time')
4689hunk ./src/allmydata/test/test_backends.py 147
4690-    def test_write_share(self, mocktime):
4691-        """ Write a new share. """
4692-        # Now begin the test.
4693+    def test_write_and_read_share(self, mocktime):
4694+        """
4695+        Write a new share, read it, and test the server's (and FS backend's)
4696+        handling of simultaneous and successive attempts to write the same
4697+        share.
4698+        """
4699 
4700         mocktime.return_value = 0
4701         # Inspect incoming and fail unless it's empty.
4702hunk ./src/allmydata/test/test_backends.py 159
4703         incomingset = self.ss.backend.get_incoming('teststorage_index')
4704         self.failUnlessReallyEqual(incomingset, set())
4705         
4706-        # Among other things, populate incoming with the sharenum: 0.
4707+        # Populate incoming with the sharenum: 0.
4708         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4709 
4710         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4711hunk ./src/allmydata/test/test_backends.py 163
4712-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4713+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4714         
4715hunk ./src/allmydata/test/test_backends.py 165
4716-        # Attempt to create a second share writer with the same share.
4717+        # Attempt to create a second share writer with the same sharenum.
4718         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4719 
4720         # Show that no sharewriter results from a remote_allocate_buckets
4721hunk ./src/allmydata/test/test_backends.py 169
4722-        # with the same si, until BucketWriter.remote_close() has been called.
4723+        # with the same si and sharenum, until BucketWriter.remote_close()
4724+        # has been called.
4725         self.failIf(bsa)
4726 
4727         # Test allocated size.
4728hunk ./src/allmydata/test/test_backends.py 187
4729         # Postclose: (Omnibus) failUnless written data is in final.
4730         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4731         contents = sharesinfinal[0].read_share_data(0,73)
4732-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4733+        self.failUnlessReallyEqual(contents, client_data)
4734 
4735hunk ./src/allmydata/test/test_backends.py 189
4736-        # Cover interior of for share in get_shares loop.
4737-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4738+        # Exercise the case that the share we're asking to allocate is
4739+        # already (completely) uploaded.
4740+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4741         
4742     @mock.patch('time.time')
4743     @mock.patch('allmydata.util.fileutil.get_available_space')
4744hunk ./src/allmydata/test/test_backends.py 210
4745     @mock.patch('os.path.getsize')
4746     @mock.patch('__builtin__.open')
4747     @mock.patch('os.listdir')
4748-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4749+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4750         """ This tests whether the code correctly finds and reads
4751         shares written out by old (Tahoe-LAFS <= v1.8.2)
4752         servers. There is a similar test in test_download, but that one
4753hunk ./src/allmydata/test/test_backends.py 219
4754         StorageServer object. """
4755 
4756         def call_listdir(dirname):
4757-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4758+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4759             return ['0']
4760 
4761         mocklistdir.side_effect = call_listdir
4762hunk ./src/allmydata/test/test_backends.py 226
4763 
4764         def call_open(fname, mode):
4765             self.failUnlessReallyEqual(fname, sharefname)
4766-            self.failUnless('r' in mode, mode)
4767+            self.failUnlessEqual(mode[0], 'r', mode)
4768             self.failUnless('b' in mode, mode)
4769 
4770             return StringIO(share_data)
4771hunk ./src/allmydata/test/test_backends.py 268
4772         filesystem in only the prescribed ways. """
4773 
4774         def call_open(fname, mode):
4775-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4776-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4777-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4778-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4779-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4780+            if fname == os.path.join(storedir,'bucket_counter.state'):
4781+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4782+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4783+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4784+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4785                 return StringIO()
4786             else:
4787                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4788hunk ./src/allmydata/test/test_backends.py 279
4789         mockopen.side_effect = call_open
4790 
4791         def call_isdir(fname):
4792-            if fname == os.path.join(tempdir,'shares'):
4793+            if fname == os.path.join(storedir,'shares'):
4794                 return True
4795hunk ./src/allmydata/test/test_backends.py 281
4796-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4797+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4798                 return True
4799             else:
4800                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4801hunk ./src/allmydata/test/test_backends.py 290
4802         def call_mkdir(fname, mode):
4803             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4804             self.failUnlessEqual(0777, mode)
4805-            if fname == tempdir:
4806+            if fname == storedir:
4807                 return None
4808hunk ./src/allmydata/test/test_backends.py 292
4809-            elif fname == os.path.join(tempdir,'shares'):
4810+            elif fname == os.path.join(storedir,'shares'):
4811                 return None
4812hunk ./src/allmydata/test/test_backends.py 294
4813-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4814+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4815                 return None
4816             else:
4817                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4818hunk ./src/allmydata/util/fileutil.py 5
4819 Futz with files like a pro.
4820 """
4821 
4822-import sys, exceptions, os, stat, tempfile, time, binascii
4823+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4824 
4825 from twisted.python import log
4826 
4827hunk ./src/allmydata/util/fileutil.py 186
4828             raise tx
4829         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4830 
4831-def rm_dir(dirname):
4832+def rmtree(dirname):
4833     """
4834     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4835     already gone, do nothing and return without raising an exception.  If this
4836hunk ./src/allmydata/util/fileutil.py 205
4837             else:
4838                 remove(fullname)
4839         os.rmdir(dirname)
4840-    except Exception, le:
4841-        # Ignore "No such file or directory"
4842-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4843+    except EnvironmentError, le:
4844+        # Ignore "No such file or directory", collect any other exception.
4845+        if le.args[0] != errno.ENOENT:
4846             excs.append(le)
4847hunk ./src/allmydata/util/fileutil.py 209
4848+    except Exception, le:
4849+        excs.append(le)
4850 
4851     # Okay, now we've recursively removed everything, ignoring any "No
4852     # such file or directory" errors, and collecting any other errors.
4853hunk ./src/allmydata/util/fileutil.py 222
4854             raise OSError, "Failed to remove dir for unknown reason."
4855         raise OSError, excs
4856 
4857+def rm_dir(dirname):
4858+    # Renamed to be like shutil.rmtree and unlike rmdir.
4859+    return rmtree(dirname)
4860 
4861 def remove_if_possible(f):
4862     try:
4863}
4864[work in progress intended to be unrecorded and never committed to trunk
4865zooko@zooko.com**20110714212139
4866 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4867 switch from os.path.join to filepath
4868 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4869 
4870] {
4871hunk ./src/allmydata/test/test_backends.py 3
4872 from twisted.trial import unittest
4873 
4874-from twisted.path.filepath import FilePath
4875+from twisted.python.filepath import FilePath
4876 
4877 from StringIO import StringIO
4878 
4879hunk ./src/allmydata/test/test_backends.py 10
4880 from allmydata.test.common_util import ReallyEqualMixin
4881 from allmydata.util.assertutil import _assert
4882 
4883-import mock, os
4884+import mock
4885 
4886 # This is the code that we're going to be testing.
4887 from allmydata.storage.server import StorageServer
4888hunk ./src/allmydata/test/test_backends.py 25
4889 shareversionnumber = '\x00\x00\x00\x01'
4890 sharedatalength = '\x00\x00\x00\x01'
4891 numberofleases = '\x00\x00\x00\x01'
4892+
4893 shareinputdata = 'a'
4894 ownernumber = '\x00\x00\x00\x00'
4895 renewsecret  = 'x'*32
4896hunk ./src/allmydata/test/test_backends.py 39
4897 
4898 
4899 testnodeid = 'testnodeidxxxxxxxxxx'
4900-storedir = 'teststoredir'
4901-storedirfp = FilePath(storedir)
4902-basedir = os.path.join(storedir, 'shares')
4903-baseincdir = os.path.join(basedir, 'incoming')
4904-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4905-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4906-shareincomingname = os.path.join(sharedirincomingname, '0')
4907-sharefname = os.path.join(sharedirfinalname, '0')
4908+
4909+class TestFilesMixin(unittest.TestCase):
4910+    def setUp(self):
4911+        self.storedir = FilePath('teststoredir')
4912+        self.basedir = self.storedir.child('shares')
4913+        self.baseincdir = self.basedir.child('incoming')
4914+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4915+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4916+        self.shareincomingname = self.sharedirincomingname.child('0')
4917+        self.sharefname = self.sharedirfinalname.child('0')
4918+
4919+    def call_open(self, fname, mode):
4920+        fnamefp = FilePath(fname)
4921+        if fnamefp == self.storedir.child('bucket_counter.state'):
4922+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4923+        elif fnamefp == self.storedir.child('lease_checker.state'):
4924+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4925+        elif fnamefp == self.storedir.child('lease_checker.history'):
4926+            return StringIO()
4927+        else:
4928+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4929+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4930+
4931+    def call_isdir(self, fname):
4932+        fnamefp = FilePath(fname)
4933+        if fnamefp == self.storedir.child('shares'):
4934+            return True
4935+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4936+            return True
4937+        else:
4938+            self.failUnless(self.storedir in fnamefp.parents(),
4939+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4940+
4941+    def call_mkdir(self, fname, mode):
4942+        self.failUnlessEqual(0777, mode)
4943+        fnamefp = FilePath(fname)
4944+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4945+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4946+
4947+
4948+    @mock.patch('os.mkdir')
4949+    @mock.patch('__builtin__.open')
4950+    @mock.patch('os.listdir')
4951+    @mock.patch('os.path.isdir')
4952+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4953+        mocklistdir.return_value = []
4954+        mockmkdir.side_effect = self.call_mkdir
4955+        mockisdir.side_effect = self.call_isdir
4956+        mockopen.side_effect = self.call_open
4957+        mocklistdir.return_value = []
4958+       
4959+        test_func()
4960+       
4961+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4962 
4963 expiration_policy = {'enabled' : False,
4964                      'mode' : 'age',
4965hunk ./src/allmydata/test/test_backends.py 123
4966         self.failIf(mockopen.called)
4967         self.failIf(mockmkdir.called)
4968 
4969-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4970-    @mock.patch('time.time')
4971-    @mock.patch('os.mkdir')
4972-    @mock.patch('__builtin__.open')
4973-    @mock.patch('os.listdir')
4974-    @mock.patch('os.path.isdir')
4975-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4976+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
4977+    def test_create_server_fs_backend(self):
4978         """ This tests whether a server instance can be constructed with a
4979         filesystem backend. To pass the test, it mustn't use the filesystem
4980         outside of its configured storedir. """
4981hunk ./src/allmydata/test/test_backends.py 129
4982 
4983-        def call_open(fname, mode):
4984-            if fname == os.path.join(storedir, 'bucket_counter.state'):
4985-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4986-            elif fname == os.path.join(storedir, 'lease_checker.state'):
4987-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4988-            elif fname == os.path.join(storedir, 'lease_checker.history'):
4989-                return StringIO()
4990-            else:
4991-                fnamefp = FilePath(fname)
4992-                self.failUnless(storedirfp in fnamefp.parents(),
4993-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4994-        mockopen.side_effect = call_open
4995+        def _f():
4996+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4997 
4998hunk ./src/allmydata/test/test_backends.py 132
4999-        def call_isdir(fname):
5000-            if fname == os.path.join(storedir, 'shares'):
5001-                return True
5002-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5003-                return True
5004-            else:
5005-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5006-        mockisdir.side_effect = call_isdir
5007-
5008-        mocklistdir.return_value = []
5009-
5010-        def call_mkdir(fname, mode):
5011-            self.failUnlessEqual(0777, mode)
5012-            self.failUnlessIn(fname,
5013-                              [storedir,
5014-                               os.path.join(storedir, 'shares'),
5015-                               os.path.join(storedir, 'shares', 'incoming')],
5016-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5017-        mockmkdir.side_effect = call_mkdir
5018-
5019-        # Now begin the test.
5020-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5021-
5022-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5023+        self._help_test_stay_in_your_subtree(_f)
5024 
5025 
5026 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5027}
5028[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5029zooko@zooko.com**20110715191500
5030 Ignore-this: af33336789041800761e80510ea2f583
5031 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
5032] {
5033hunk ./src/allmydata/storage/backends/das/core.py 59
5034                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5035                         umid="0wZ27w", level=log.UNUSUAL)
5036 
5037-        self.sharedir = os.path.join(self.storedir, "shares")
5038-        fileutil.make_dirs(self.sharedir)
5039-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5040+        self.sharedir = self.storedir.child("shares")
5041+        fileutil.fp_make_dirs(self.sharedir)
5042+        self.incomingdir = self.sharedir.child('incoming')
5043         self._clean_incomplete()
5044 
5045     def _clean_incomplete(self):
5046hunk ./src/allmydata/storage/backends/das/core.py 65
5047-        fileutil.rmtree(self.incomingdir)
5048-        fileutil.make_dirs(self.incomingdir)
5049+        fileutil.fp_remove(self.incomingdir)
5050+        fileutil.fp_make_dirs(self.incomingdir)
5051 
5052     def _setup_corruption_advisory(self):
5053         # we don't actually create the corruption-advisory dir until necessary
5054hunk ./src/allmydata/storage/backends/das/core.py 70
5055-        self.corruption_advisory_dir = os.path.join(self.storedir,
5056-                                                    "corruption-advisories")
5057+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5058 
5059     def _setup_bucket_counter(self):
5060hunk ./src/allmydata/storage/backends/das/core.py 73
5061-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5062+        statefname = self.storedir.child("bucket_counter.state")
5063         self.bucket_counter = FSBucketCountingCrawler(statefname)
5064         self.bucket_counter.setServiceParent(self)
5065 
5066hunk ./src/allmydata/storage/backends/das/core.py 78
5067     def _setup_lease_checkerf(self, expiration_policy):
5068-        statefile = os.path.join(self.storedir, "lease_checker.state")
5069-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5070+        statefile = self.storedir.child("lease_checker.state")
5071+        historyfile = self.storedir.child("lease_checker.history")
5072         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5073         self.lease_checker.setServiceParent(self)
5074 
5075hunk ./src/allmydata/storage/backends/das/core.py 83
5076-    def get_incoming(self, storageindex):
5077+    def get_incoming_shnums(self, storageindex):
5078         """Return the set of incoming shnums."""
5079         try:
5080hunk ./src/allmydata/storage/backends/das/core.py 86
5081-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5082-            incominglist = os.listdir(incomingsharesdir)
5083-            incomingshnums = [int(x) for x in incominglist]
5084-            return set(incomingshnums)
5085-        except OSError:
5086-            # XXX I'd like to make this more specific. If there are no shares at all.
5087-            return set()
5088+           
5089+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5090+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5091+            return frozenset(incomingshnums)
5092+        except UnlistableError:
5093+            # There is no shares directory at all.
5094+            return frozenset()
5095             
5096     def get_shares(self, storageindex):
5097         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5098hunk ./src/allmydata/storage/backends/das/core.py 96
5099-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5100+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5101         try:
5102hunk ./src/allmydata/storage/backends/das/core.py 98
5103-            for f in os.listdir(finalstoragedir):
5104-                if NUM_RE.match(f):
5105-                    filename = os.path.join(finalstoragedir, f)
5106-                    yield ImmutableShare(filename, storageindex, int(f))
5107-        except OSError:
5108-            # Commonly caused by there being no shares at all.
5109+            for f in finalstoragedir.listdir():
5110+                if NUM_RE.match(f.basename()):
5111+                    yield ImmutableShare(f, storageindex, int(f.basename()))
5112+        except UnlistableError:
5113+            # There is no shares directory at all.
5114             pass
5115         
5116     def get_available_space(self):
5117hunk ./src/allmydata/storage/backends/das/core.py 149
5118 # then the value stored in this field will be the actual share data length
5119 # modulo 2**32.
5120 
5121-class ImmutableShare:
5122+class ImmutableShare(object):
5123     LEASE_SIZE = struct.calcsize(">L32s32sL")
5124     sharetype = "immutable"
5125 
5126hunk ./src/allmydata/storage/backends/das/core.py 166
5127         if create:
5128             # touch the file, so later callers will see that we're working on
5129             # it. Also construct the metadata.
5130-            assert not os.path.exists(self.finalhome)
5131-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5132+            assert not self.finalhome.exists()
5133+            fileutil.fp_make_dirs(self.incominghome.parent())
5134             f = open(self.incominghome, 'wb')
5135             # The second field -- the four-byte share data length -- is no
5136             # longer used as of Tahoe v1.3.0, but we continue to write it in
5137hunk ./src/allmydata/storage/backends/das/core.py 316
5138         except IndexError:
5139             self.add_lease(lease_info)
5140 
5141-
5142     def cancel_lease(self, cancel_secret):
5143         """Remove a lease with the given cancel_secret. If the last lease is
5144         cancelled, the file will be removed. Return the number of bytes that
5145hunk ./src/allmydata/storage/common.py 19
5146 def si_a2b(ascii_storageindex):
5147     return base32.a2b(ascii_storageindex)
5148 
5149-def storage_index_to_dir(storageindex):
5150+def storage_index_to_dir(startfp, storageindex):
5151     sia = si_b2a(storageindex)
5152     return os.path.join(sia[:2], sia)
5153hunk ./src/allmydata/storage/server.py 210
5154 
5155         # fill incoming with all shares that are incoming use a set operation
5156         # since there's no need to operate on individual pieces
5157-        incoming = self.backend.get_incoming(storageindex)
5158+        incoming = self.backend.get_incoming_shnums(storageindex)
5159 
5160         for shnum in ((sharenums - alreadygot) - incoming):
5161             if (not limited) or (remaining_space >= max_space_per_bucket):
5162hunk ./src/allmydata/test/test_backends.py 5
5163 
5164 from twisted.python.filepath import FilePath
5165 
5166+from allmydata.util.log import msg
5167+
5168 from StringIO import StringIO
5169 
5170 from allmydata.test.common_util import ReallyEqualMixin
5171hunk ./src/allmydata/test/test_backends.py 42
5172 
5173 testnodeid = 'testnodeidxxxxxxxxxx'
5174 
5175-class TestFilesMixin(unittest.TestCase):
5176-    def setUp(self):
5177-        self.storedir = FilePath('teststoredir')
5178-        self.basedir = self.storedir.child('shares')
5179-        self.baseincdir = self.basedir.child('incoming')
5180-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5181-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5182-        self.shareincomingname = self.sharedirincomingname.child('0')
5183-        self.sharefname = self.sharedirfinalname.child('0')
5184+class MockStat:
5185+    def __init__(self):
5186+        self.st_mode = None
5187 
5188hunk ./src/allmydata/test/test_backends.py 46
5189+class MockFiles(unittest.TestCase):
5190+    """ I simulate a filesystem that the code under test can use. I flag the
5191+    code under test if it reads or writes outside of its prescribed
5192+    subtree. I simulate just the parts of the filesystem that the current
5193+    implementation of DAS backend needs. """
5194     def call_open(self, fname, mode):
5195         fnamefp = FilePath(fname)
5196hunk ./src/allmydata/test/test_backends.py 53
5197+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5198+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5199+
5200         if fnamefp == self.storedir.child('bucket_counter.state'):
5201             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5202         elif fnamefp == self.storedir.child('lease_checker.state'):
5203hunk ./src/allmydata/test/test_backends.py 61
5204             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5205         elif fnamefp == self.storedir.child('lease_checker.history'):
5206+            # This is separated out from the else clause below just because
5207+            # we know this particular file is going to be used by the
5208+            # current implementation of DAS backend, and we might want to
5209+            # use this information in this test in the future...
5210             return StringIO()
5211         else:
5212hunk ./src/allmydata/test/test_backends.py 67
5213-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5214-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5215+            # Anything else you open inside your subtree appears to be an
5216+            # empty file.
5217+            return StringIO()
5218 
5219     def call_isdir(self, fname):
5220         fnamefp = FilePath(fname)
5221hunk ./src/allmydata/test/test_backends.py 73
5222-        if fnamefp == self.storedir.child('shares'):
5223+        return fnamefp.isdir()
5224+
5225+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5226+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5227+
5228+        # The first two cases are separate from the else clause below just
5229+        # because we know that the current implementation of the DAS backend
5230+        # inspects these two directories and we might want to make use of
5231+        # that information in the tests in the future...
5232+        if fnamefp == self.storedir.child('shares'):
5233             return True
5234hunk ./src/allmydata/test/test_backends.py 84
5235-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5236+        elif fnamefp == self.storedir.child('shares').child('incoming'):
5237             return True
5238         else:
5239hunk ./src/allmydata/test/test_backends.py 87
5240-            self.failUnless(self.storedir in fnamefp.parents(),
5241-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5242+            # Anything else you open inside your subtree appears to be a
5243+            # directory.
5244+            return True
5245 
5246     def call_mkdir(self, fname, mode):
5247hunk ./src/allmydata/test/test_backends.py 92
5248-        self.failUnlessEqual(0777, mode)
5249         fnamefp = FilePath(fname)
5250         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5251                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5252hunk ./src/allmydata/test/test_backends.py 95
5253+        self.failUnlessEqual(0777, mode)
5254 
5255hunk ./src/allmydata/test/test_backends.py 97
5256+    def call_listdir(self, fname):
5257+        fnamefp = FilePath(fname)
5258+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5259+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5260 
5261hunk ./src/allmydata/test/test_backends.py 102
5262-    @mock.patch('os.mkdir')
5263-    @mock.patch('__builtin__.open')
5264-    @mock.patch('os.listdir')
5265-    @mock.patch('os.path.isdir')
5266-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5267-        mocklistdir.return_value = []
5268+    def call_stat(self, fname):
5269+        fnamefp = FilePath(fname)
5270+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5271+                        "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5272+
5273+        msg("%s.call_stat(%s)" % (self, fname,))
5274+        mstat = MockStat()
5275+        mstat.st_mode = 16893 # a directory
5276+        return mstat
5277+
5278+    def setUp(self):
5279+        msg( "%s.setUp()" % (self,))
5280+        self.storedir = FilePath('teststoredir')
5281+        self.basedir = self.storedir.child('shares')
5282+        self.baseincdir = self.basedir.child('incoming')
5283+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5284+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5285+        self.shareincomingname = self.sharedirincomingname.child('0')
5286+        self.sharefname = self.sharedirfinalname.child('0')
5287+
5288+        self.mocklistdirp = mock.patch('os.listdir')
5289+        mocklistdir = self.mocklistdirp.__enter__()
5290+        mocklistdir.side_effect = self.call_listdir
5291+
5292+        self.mockmkdirp = mock.patch('os.mkdir')
5293+        mockmkdir = self.mockmkdirp.__enter__()
5294         mockmkdir.side_effect = self.call_mkdir
5295hunk ./src/allmydata/test/test_backends.py 129
5296+
5297+        self.mockisdirp = mock.patch('os.path.isdir')
5298+        mockisdir = self.mockisdirp.__enter__()
5299         mockisdir.side_effect = self.call_isdir
5300hunk ./src/allmydata/test/test_backends.py 133
5301+
5302+        self.mockopenp = mock.patch('__builtin__.open')
5303+        mockopen = self.mockopenp.__enter__()
5304         mockopen.side_effect = self.call_open
5305hunk ./src/allmydata/test/test_backends.py 137
5306-        mocklistdir.return_value = []
5307-       
5308-        test_func()
5309-       
5310-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5311+
5312+        self.mockstatp = mock.patch('os.stat')
5313+        mockstat = self.mockstatp.__enter__()
5314+        mockstat.side_effect = self.call_stat
5315+
5316+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5317+        mockfpstat = self.mockfpstatp.__enter__()
5318+        mockfpstat.side_effect = self.call_stat
5319+
5320+    def tearDown(self):
5321+        msg( "%s.tearDown()" % (self,))
5322+        self.mockfpstatp.__exit__()
5323+        self.mockstatp.__exit__()
5324+        self.mockopenp.__exit__()
5325+        self.mockisdirp.__exit__()
5326+        self.mockmkdirp.__exit__()
5327+        self.mocklistdirp.__exit__()
5328 
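The setUp/tearDown pair above holds on to each mock.patch object and pairs __enter__ with __exit__ by hand. The same patcher lifecycle can be sketched with start()/addCleanup(), which guarantees every started patcher is stopped even if setUp fails partway; this sketch uses today's stdlib unittest.mock purely for illustration (the patch itself targets Python 2 and the standalone mock package, and the MockFS name is mine):

```python
import os
import unittest
from unittest import mock

class MockFS(unittest.TestCase):
    """Start several patchers in setUp and guarantee they are stopped,
    mirroring the manual __enter__/__exit__ pairing in MockFiles."""
    def setUp(self):
        self.patchers = [
            mock.patch('os.listdir', return_value=[]),
            mock.patch('os.path.isdir', return_value=True),
        ]
        self.mocks = [p.start() for p in self.patchers]
        # addCleanup callbacks run even when setUp raises partway through,
        # so no patcher can leak into other tests.
        for p in self.patchers:
            self.addCleanup(p.stop)

    def test_patched(self):
        self.assertEqual(os.listdir('/anywhere'), [])
        self.assertTrue(os.path.isdir('/anywhere'))
```

With start()/addCleanup() the try/except re-raise dance in TestServerAndFSBackend.setUp further down would not be needed.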
5329 expiration_policy = {'enabled' : False,
5330                      'mode' : 'age',
5331hunk ./src/allmydata/test/test_backends.py 184
5332         self.failIf(mockopen.called)
5333         self.failIf(mockmkdir.called)
5334 
5335-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5336+class TestServerConstruction(MockFiles, ReallyEqualMixin):
5337     def test_create_server_fs_backend(self):
5338         """ This tests whether a server instance can be constructed with a
5339         filesystem backend. To pass the test, it mustn't use the filesystem
5340hunk ./src/allmydata/test/test_backends.py 190
5341         outside of its configured storedir. """
5342 
5343-        def _f():
5344-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5345+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5346 
5347hunk ./src/allmydata/test/test_backends.py 192
5348-        self._help_test_stay_in_your_subtree(_f)
5349-
5350-
5351-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5352-    """ This tests both the StorageServer xyz """
5353-    @mock.patch('__builtin__.open')
5354-    def setUp(self, mockopen):
5355-        def call_open(fname, mode):
5356-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5357-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5358-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5359-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5360-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5361-                return StringIO()
5362-            else:
5363-                _assert(False, "The tester code doesn't recognize this case.") 
5364-
5365-        mockopen.side_effect = call_open
5366-        self.backend = DASCore(storedir, expiration_policy)
5367-        self.ss = StorageServer(testnodeid, self.backend)
5368-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
5369-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5370+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
5371+    """ This tests both the StorageServer and the DAS backend together. """
5372+    def setUp(self):
5373+        MockFiles.setUp(self)
5374+        try:
5375+            self.backend = DASCore(self.storedir, expiration_policy)
5376+            self.ss = StorageServer(testnodeid, self.backend)
5377+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
5378+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5379+        except:
5380+            MockFiles.tearDown(self)
5381+            raise
5382 
5383     @mock.patch('time.time')
5384     def test_write_and_read_share(self, mocktime):
5385hunk ./src/allmydata/util/fileutil.py 8
5386 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5387 
5388 from twisted.python import log
5389+from twisted.python.filepath import UnlistableError
5390 
5391 from pycryptopp.cipher.aes import AES
5392 
5393hunk ./src/allmydata/util/fileutil.py 187
5394             raise tx
5395         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5396 
5397+def fp_make_dirs(dirfp):
5398+    """
5399+    An idempotent version of FilePath.makedirs().  If the dir already
5400+    exists, do nothing and return without raising an exception.  If this
5401+    call creates the dir, return without raising an exception.  If there is
5402+    an error that prevents creation or if the directory gets deleted after
5403+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5404+    exists, raise an exception.
5405+    """
5406+    tx = None
5408+    try:
5409+        dirfp.makedirs()
5410+    except OSError, x:
5411+        tx = x
5412+
5413+    if not dirfp.isdir():
5414+        if tx:
5415+            raise tx
5416+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5417+
5418 def rmtree(dirname):
5419     """
5420     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5421hunk ./src/allmydata/util/fileutil.py 244
5422             raise OSError, "Failed to remove dir for unknown reason."
5423         raise OSError, excs
5424 
5425+def fp_remove(dirfp):
5426+    try:
5427+        dirfp.remove()
5428+    except UnlistableError, e:
5429+        if e.originalException.errno != errno.ENOENT:
5430+            raise
5431+
5432 def rm_dir(dirname):
5433     # Renamed to be like shutil.rmtree and unlike rmdir.
5434     return rmtree(dirname)
5435}
5436[another temporary patch for sharing work-in-progress
5437zooko@zooko.com**20110720055918
5438 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
5439 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
5440 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
5441 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units insofar as possible...)
5442 
5443] {
5444hunk ./src/allmydata/storage/backends/das/core.py 5
5445 
5446 from allmydata.interfaces import IStorageBackend
5447 from allmydata.storage.backends.base import Backend
5448-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5449+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5450 from allmydata.util.assertutil import precondition
5451 
5452 #from foolscap.api import Referenceable
5453hunk ./src/allmydata/storage/backends/das/core.py 10
5454 from twisted.application import service
5455+from twisted.python.filepath import UnlistableError
5456 
5457 from zope.interface import implements
5458 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
5459hunk ./src/allmydata/storage/backends/das/core.py 17
5460 from allmydata.util import fileutil, idlib, log, time_format
5461 import allmydata # for __full_version__
5462 
5463-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5464-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5465+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5466+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5467 from allmydata.storage.lease import LeaseInfo
5468 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5469      create_mutable_sharefile
5470hunk ./src/allmydata/storage/backends/das/core.py 41
5471 # $SHARENUM matches this regex:
5472 NUM_RE=re.compile("^[0-9]+$")
5473 
5474+def is_num(fp):
5475+    return NUM_RE.match(fp.basename())
5476+
5477 class DASCore(Backend):
5478     implements(IStorageBackend)
5479     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5480hunk ./src/allmydata/storage/backends/das/core.py 58
5481         self.storedir = storedir
5482         self.readonly = readonly
5483         self.reserved_space = int(reserved_space)
5484-        if self.reserved_space:
5485-            if self.get_available_space() is None:
5486-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5487-                        umid="0wZ27w", level=log.UNUSUAL)
5488-
5489         self.sharedir = self.storedir.child("shares")
5490         fileutil.fp_make_dirs(self.sharedir)
5491         self.incomingdir = self.sharedir.child('incoming')
5492hunk ./src/allmydata/storage/backends/das/core.py 62
5493         self._clean_incomplete()
5494+        if self.reserved_space and (self.get_available_space() is None):
5495+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5496+                    umid="0wZ27w", level=log.UNUSUAL)
5497+
5498 
5499     def _clean_incomplete(self):
5500         fileutil.fp_remove(self.incomingdir)
5501hunk ./src/allmydata/storage/backends/das/core.py 87
5502         self.lease_checker.setServiceParent(self)
5503 
5504     def get_incoming_shnums(self, storageindex):
5505-        """Return the set of incoming shnums."""
5506+        """ Return a frozenset of the shnums (as ints) of incoming shares. """
5507+        incomingdir = si_dir(self.incomingdir, storageindex)
5508         try:
5509hunk ./src/allmydata/storage/backends/das/core.py 90
5510-           
5511-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5512-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5513-            return frozenset(incomingshnums)
5514+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
5515+            shnums = [ int(fp.basename()) for fp in childfps ]
5516+            return frozenset(shnums)
5517         except UnlistableError:
5518             # There is no shares directory at all.
5519             return frozenset()
5520hunk ./src/allmydata/storage/backends/das/core.py 98
5521             
5522     def get_shares(self, storageindex):
5523-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5524+        """ Generate ImmutableShare objects for shares we have for this
5525+        storageindex. ("Shares we have" means completed ones, excluding
5526+        incoming ones.)"""
5527         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5528         try:
5529hunk ./src/allmydata/storage/backends/das/core.py 103
5530-            for f in finalstoragedir.listdir():
5531-                if NUM_RE.match(f.basename):
5532-                    yield ImmutableShare(f, storageindex, int(f))
5533+            for fp in finalstoragedir.children():
5534+                if is_num(fp):
5535+                    yield ImmutableShare(fp, storageindex, int(fp.basename()))
5536         except UnlistableError:
5537             # There is no shares directory at all.
5538             pass
5539hunk ./src/allmydata/storage/backends/das/core.py 116
5540         return fileutil.get_available_space(self.storedir, self.reserved_space)
5541 
5542     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
5543-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
5544-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
5545+        finalhome = si_dir(self.sharedir, storageindex).child(str(shnum))
5546+        incominghome = si_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
5547         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
5548         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
5549         return bw
5550hunk ./src/allmydata/storage/backends/das/expirer.py 50
5551     slow_start = 360 # wait 6 minutes after startup
5552     minimum_cycle_time = 12*60*60 # not more than twice per day
5553 
5554-    def __init__(self, statefile, historyfile, expiration_policy):
5555-        self.historyfile = historyfile
5556+    def __init__(self, statefile, historyfp, expiration_policy):
5557+        self.historyfp = historyfp
5558         self.expiration_enabled = expiration_policy['enabled']
5559         self.mode = expiration_policy['mode']
5560         self.override_lease_duration = None
5561hunk ./src/allmydata/storage/backends/das/expirer.py 80
5562             self.state["cycle-to-date"].setdefault(k, so_far[k])
5563 
5564         # initialize history
5565-        if not os.path.exists(self.historyfile):
5566+        if not self.historyfp.exists():
5567             history = {} # cyclenum -> dict
5568hunk ./src/allmydata/storage/backends/das/expirer.py 82
5569-            f = open(self.historyfile, "wb")
5570-            pickle.dump(history, f)
5571-            f.close()
5572+            self.historyfp.setContent(pickle.dumps(history))
5573 
5574     def create_empty_cycle_dict(self):
5575         recovered = self.create_empty_recovered_dict()
5576hunk ./src/allmydata/storage/backends/das/expirer.py 305
5577         # copy() needs to become a deepcopy
5578         h["space-recovered"] = s["space-recovered"].copy()
5579 
5580-        history = pickle.load(open(self.historyfile, "rb"))
5581+        history = pickle.loads(self.historyfp.getContent())
5582         history[cycle] = h
5583         while len(history) > 10:
5584             oldcycles = sorted(history.keys())
5585hunk ./src/allmydata/storage/backends/das/expirer.py 310
5586             del history[oldcycles[0]]
5587-        f = open(self.historyfile, "wb")
5588-        pickle.dump(history, f)
5589-        f.close()
5590+        self.historyfp.setContent(pickle.dumps(history))
5591 
5592     def get_state(self):
5593         """In addition to the crawler state described in
5594hunk ./src/allmydata/storage/backends/das/expirer.py 379
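The expirer now keeps its cycle history as pickled bytes behind a FilePath, pruning to the ten newest cycles on each write. The read-modify-write step can be sketched like this (record_cycle and the in-memory blob are illustrative stand-ins for historyfp.getContent()/setContent()):

```python
import pickle

def record_cycle(blob, cycle, entry, keep=10):
    """Load the pickled cycle->result dict, add one entry, prune to the
    newest `keep` cycles, and return the re-pickled bytes."""
    history = pickle.loads(blob) if blob else {}
    history[cycle] = entry
    while len(history) > keep:
        oldest = sorted(history.keys())[0]
        del history[oldest]
    return pickle.dumps(history)
```

Since cycle numbers only ever grow, sorting the keys and deleting the first one always drops the oldest cycle.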
5595         progress = self.get_progress()
5596 
5597         state = ShareCrawler.get_state(self) # does a shallow copy
5598-        history = pickle.load(open(self.historyfile, "rb"))
5599+        history = pickle.loads(self.historyfp.getContent())
5600         state["history"] = history
5601 
5602         if not progress["cycle-in-progress"]:
5603hunk ./src/allmydata/storage/common.py 19
5604 def si_a2b(ascii_storageindex):
5605     return base32.a2b(ascii_storageindex)
5606 
5607-def storage_index_to_dir(startfp, storageindex):
5608+def si_dir(startfp, storageindex):
5609     sia = si_b2a(storageindex)
5610hunk ./src/allmydata/storage/common.py 21
5611-    return os.path.join(sia[:2], sia)
5612+    return startfp.child(sia[:2]).child(sia)
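si_dir() above places all shares for one storage index under a two-character prefix directory derived from the base32 form of the index. A sketch of the resulting layout, using RFC 4648 lowercase base32 as a stand-in for si_b2a (which matches the 'or'/'orsxg5dt...' example directories used by the tests):

```python
import base64
import os

def b32(data):
    """Lowercase, unpadded base32 -- a stand-in for si_b2a."""
    return base64.b32encode(data).decode('ascii').rstrip('=').lower()

def si_dir_path(start, storageindex):
    """<start>/<sia[:2]>/<sia>: the two-level share directory layout,
    mirroring si_dir but with plain strings instead of FilePaths."""
    sia = b32(storageindex)
    return os.path.join(start, sia[:2], sia)
```

The two-character prefix level keeps any single directory from accumulating an entry per storage index, which matters for filesystems that degrade with very large directories.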
5613hunk ./src/allmydata/storage/crawler.py 68
5614     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
5615     minimum_cycle_time = 300 # don't run a cycle faster than this
5616 
5617-    def __init__(self, statefname, allowed_cpu_percentage=None):
5618+    def __init__(self, statefp, allowed_cpu_percentage=None):
5619         service.MultiService.__init__(self)
5620         if allowed_cpu_percentage is not None:
5621             self.allowed_cpu_percentage = allowed_cpu_percentage
5622hunk ./src/allmydata/storage/crawler.py 72
5623-        self.statefname = statefname
5624+        self.statefp = statefp
5625         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
5626                          for i in range(2**10)]
5627         self.prefixes.sort()
5628hunk ./src/allmydata/storage/crawler.py 192
5629         #                            of the last bucket to be processed, or
5630         #                            None if we are sleeping between cycles
5631         try:
5632-            f = open(self.statefname, "rb")
5633-            state = pickle.load(f)
5634-            f.close()
5635+            state = pickle.loads(self.statefp.getContent())
5636         except EnvironmentError:
5637             state = {"version": 1,
5638                      "last-cycle-finished": None,
5639hunk ./src/allmydata/storage/crawler.py 228
5640         else:
5641             last_complete_prefix = self.prefixes[lcpi]
5642         self.state["last-complete-prefix"] = last_complete_prefix
5643-        tmpfile = self.statefname + ".tmp"
5644-        f = open(tmpfile, "wb")
5645-        pickle.dump(self.state, f)
5646-        f.close()
5647-        fileutil.move_into_place(tmpfile, self.statefname)
5648+        self.statefp.setContent(pickle.dumps(self.state))
5649 
5650     def startService(self):
5651         # arrange things to look like we were just sleeping, so
5652hunk ./src/allmydata/storage/crawler.py 440
5653 
5654     minimum_cycle_time = 60*60 # we don't need this more than once an hour
5655 
5656-    def __init__(self, statefname, num_sample_prefixes=1):
5657-        FSShareCrawler.__init__(self, statefname)
5658+    def __init__(self, statefp, num_sample_prefixes=1):
5659+        FSShareCrawler.__init__(self, statefp)
5660         self.num_sample_prefixes = num_sample_prefixes
5661 
5662     def add_initial_state(self):
5663hunk ./src/allmydata/storage/server.py 11
5664 from allmydata.util import fileutil, idlib, log, time_format
5665 import allmydata # for __full_version__
5666 
5667-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5668-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5669+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5670+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5671 from allmydata.storage.lease import LeaseInfo
5672 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5673      create_mutable_sharefile
5674hunk ./src/allmydata/storage/server.py 173
5675         # to a particular owner.
5676         start = time.time()
5677         self.count("allocate")
5678-        alreadygot = set()
5679         incoming = set()
5680         bucketwriters = {} # k: shnum, v: BucketWriter
5681 
5682hunk ./src/allmydata/storage/server.py 199
5683             remaining_space -= self.allocated_size()
5684         # self.readonly_storage causes remaining_space <= 0
5685 
5686-        # fill alreadygot with all shares that we have, not just the ones
5687+        # Fill alreadygot with all shares that we have, not just the ones
5688         # they asked about: this will save them a lot of work. Add or update
5689         # leases for all of them: if they want us to hold shares for this
5690hunk ./src/allmydata/storage/server.py 202
5691-        # file, they'll want us to hold leases for this file.
5692+        # file, they'll want us to hold leases for all the shares of it.
5693+        alreadygot = set()
5694         for share in self.backend.get_shares(storageindex):
5695hunk ./src/allmydata/storage/server.py 205
5696-            alreadygot.add(share.shnum)
5697             share.add_or_renew_lease(lease_info)
5698hunk ./src/allmydata/storage/server.py 206
5699+            alreadygot.add(share.shnum)
5700 
5701hunk ./src/allmydata/storage/server.py 208
5702-        # fill incoming with all shares that are incoming use a set operation
5703-        # since there's no need to operate on individual pieces
5704+        # all share numbers that are incoming
5705         incoming = self.backend.get_incoming_shnums(storageindex)
5706 
5707         for shnum in ((sharenums - alreadygot) - incoming):
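The allocation logic above first collects alreadygot and incoming, then only creates writers for what is left. The set arithmetic, with hypothetical share numbers for illustration:

```python
# Hypothetical share numbers, for illustration only.
sharenums  = frozenset(range(10))   # shares the client asked to allocate
alreadygot = frozenset({0, 3})      # complete shares we already hold
incoming   = frozenset({4})         # shares currently being uploaded
# Only allocate writers for shares that are neither complete nor in flight:
to_allocate = (sharenums - alreadygot) - incoming
```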
5708hunk ./src/allmydata/storage/server.py 282
5709             total_space_freed += sf.cancel_lease(cancel_secret)
5710 
5711         if found_buckets:
5712-            storagedir = os.path.join(self.sharedir,
5713-                                      storage_index_to_dir(storageindex))
5714-            if not os.listdir(storagedir):
5715-                os.rmdir(storagedir)
5716+            storagedir = si_dir(self.sharedir, storageindex)
5717+            fp_rmdir_if_empty(storagedir)
5718 
5719         if self.stats_provider:
5720             self.stats_provider.count('storage_server.bytes_freed',
5721hunk ./src/allmydata/test/test_backends.py 52
5722     subtree. I simulate just the parts of the filesystem that the current
5723     implementation of DAS backend needs. """
5724     def call_open(self, fname, mode):
5725+        assert isinstance(fname, basestring), fname
5726         fnamefp = FilePath(fname)
5727         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5728                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5729hunk ./src/allmydata/test/test_backends.py 104
5730                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5731 
5732     def call_stat(self, fname):
5733+        assert isinstance(fname, basestring), fname
5734         fnamefp = FilePath(fname)
5735         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5736                         "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5737hunk ./src/allmydata/test/test_backends.py 217
5738 
5739         mocktime.return_value = 0
5740         # Inspect incoming and fail unless it's empty.
5741-        incomingset = self.ss.backend.get_incoming('teststorage_index')
5742-        self.failUnlessReallyEqual(incomingset, set())
5743+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
5744+        self.failUnlessReallyEqual(incomingset, frozenset())
5745         
5746         # Populate incoming with the sharenum: 0.
5747hunk ./src/allmydata/test/test_backends.py 221
5748-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5749+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5750 
5751         # Inspect incoming and fail unless the sharenum: 0 is listed there.
5752hunk ./src/allmydata/test/test_backends.py 224
5753-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
5754+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
5755         
5756         # Attempt to create a second share writer with the same sharenum.
5757hunk ./src/allmydata/test/test_backends.py 227
5758-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5759+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5760 
5761         # Show that no sharewriter results from a remote_allocate_buckets
5762         # with the same si and sharenum, until BucketWriter.remote_close()
5763hunk ./src/allmydata/test/test_backends.py 280
5764         StorageServer object. """
5765 
5766         def call_listdir(dirname):
5767+            precondition(isinstance(dirname, basestring), dirname)
5768             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
5769             return ['0']
5770 
5771hunk ./src/allmydata/test/test_backends.py 287
5772         mocklistdir.side_effect = call_listdir
5773 
5774         def call_open(fname, mode):
5775+            precondition(isinstance(fname, basestring), fname)
5776             self.failUnlessReallyEqual(fname, sharefname)
5777             self.failUnlessEqual(mode[0], 'r', mode)
5778             self.failUnless('b' in mode, mode)
5779hunk ./src/allmydata/test/test_backends.py 297
5780 
5781         datalen = len(share_data)
5782         def call_getsize(fname):
5783+            precondition(isinstance(fname, basestring), fname)
5784             self.failUnlessReallyEqual(fname, sharefname)
5785             return datalen
5786         mockgetsize.side_effect = call_getsize
5787hunk ./src/allmydata/test/test_backends.py 303
5788 
5789         def call_exists(fname):
5790+            precondition(isinstance(fname, basestring), fname)
5791             self.failUnlessReallyEqual(fname, sharefname)
5792             return True
5793         mockexists.side_effect = call_exists
5794hunk ./src/allmydata/test/test_backends.py 321
5795         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
5796 
5797 
5798-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
5799-    @mock.patch('time.time')
5800-    @mock.patch('os.mkdir')
5801-    @mock.patch('__builtin__.open')
5802-    @mock.patch('os.listdir')
5803-    @mock.patch('os.path.isdir')
5804-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5805+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
5806+    def test_create_fs_backend(self):
5807         """ This tests whether a file system backend instance can be
5808         constructed. To pass the test, it has to use the
5809         filesystem in only the prescribed ways. """
5810hunk ./src/allmydata/test/test_backends.py 327
5811 
5812-        def call_open(fname, mode):
5813-            if fname == os.path.join(storedir,'bucket_counter.state'):
5814-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5815-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5816-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5817-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5818-                return StringIO()
5819-            else:
5820-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5821-        mockopen.side_effect = call_open
5822-
5823-        def call_isdir(fname):
5824-            if fname == os.path.join(storedir,'shares'):
5825-                return True
5826-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5827-                return True
5828-            else:
5829-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5830-        mockisdir.side_effect = call_isdir
5831-
5832-        def call_mkdir(fname, mode):
5833-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
5834-            self.failUnlessEqual(0777, mode)
5835-            if fname == storedir:
5836-                return None
5837-            elif fname == os.path.join(storedir,'shares'):
5838-                return None
5839-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5840-                return None
5841-            else:
5842-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
5843-        mockmkdir.side_effect = call_mkdir
5844-
5845         # Now begin the test.
5846hunk ./src/allmydata/test/test_backends.py 328
5847-        DASCore('teststoredir', expiration_policy)
5848-
5849-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5850-
5851+        DASCore(self.storedir, expiration_policy)
5852hunk ./src/allmydata/util/fileutil.py 7
5853 
5854 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5855 
5856+from allmydata.util.assertutil import precondition
5857+
5858 from twisted.python import log
5859hunk ./src/allmydata/util/fileutil.py 10
5860-from twisted.python.filepath import UnlistableError
5861+from twisted.python.filepath import FilePath, UnlistableError
5862 
5863 from pycryptopp.cipher.aes import AES
5864 
5865hunk ./src/allmydata/util/fileutil.py 210
5866             raise tx
5867         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5868 
5869+def fp_rmdir_if_empty(dirfp):
5870+    """ Remove the directory if it is empty. """
5871+    try:
5872+        os.rmdir(dirfp.path)
5873+    except OSError, e:
5874+        if e.errno != errno.ENOTEMPTY:
5875+            raise
5876+    else:
5877+        dirfp.changed()
5878+
5879 def rmtree(dirname):
5880     """
5881     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5882hunk ./src/allmydata/util/fileutil.py 257
5883         raise OSError, excs
5884 
5885 def fp_remove(dirfp):
5886+    """
5887+    An idempotent version of shutil.rmtree().  If the dir is already gone,
5888+    do nothing and return without raising an exception.  If this call
5889+    removes the dir, return without raising an exception.  If there is an
5890+    error that prevents removal or if the directory gets created again by
5891+    someone else after this deletes it and before this checks that it is
5892+    gone, raise an exception.
5893+    """
5894     try:
5895         dirfp.remove()
5896     except UnlistableError, e:
5897hunk ./src/allmydata/util/fileutil.py 270
5898         if e.originalException.errno != errno.ENOENT:
5899             raise
5900+    except OSError, e:
5901+        if e.errno != errno.ENOENT:
5902+            raise
5903 
5904 def rm_dir(dirname):
5905     # Renamed to be like shutil.rmtree and unlike rmdir.
5906hunk ./src/allmydata/util/fileutil.py 387
5907         import traceback
5908         traceback.print_exc()
5909 
5910-def get_disk_stats(whichdir, reserved_space=0):
5911+def get_disk_stats(whichdirfp, reserved_space=0):
5912     """Return disk statistics for the storage disk, in the form of a dict
5913     with the following fields.
5914       total:            total bytes on disk
5915hunk ./src/allmydata/util/fileutil.py 408
5916     you can pass how many bytes you would like to leave unused on this
5917     filesystem as reserved_space.
5918     """
5919+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5920 
5921     if have_GetDiskFreeSpaceExW:
5922         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5923hunk ./src/allmydata/util/fileutil.py 419
5924         n_free_for_nonroot = c_ulonglong(0)
5925         n_total            = c_ulonglong(0)
5926         n_free_for_root    = c_ulonglong(0)
5927-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5928+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5929                                                byref(n_total),
5930                                                byref(n_free_for_root))
5931         if retval == 0:
5932hunk ./src/allmydata/util/fileutil.py 424
5933             raise OSError("Windows error %d attempting to get disk statistics for %r"
5934-                          % (GetLastError(), whichdir))
5935+                          % (GetLastError(), whichdirfp.path))
5936         free_for_nonroot = n_free_for_nonroot.value
5937         total            = n_total.value
5938         free_for_root    = n_free_for_root.value
5939hunk ./src/allmydata/util/fileutil.py 433
5940         # <http://docs.python.org/library/os.html#os.statvfs>
5941         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5942         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5943-        s = os.statvfs(whichdir)
5944+        s = os.statvfs(whichdirfp.path)
5945 
5946         # on my mac laptop:
5947         #  statvfs(2) is a wrapper around statfs(2).
5948hunk ./src/allmydata/util/fileutil.py 460
5949              'avail': avail,
5950            }
5951 
5952-def get_available_space(whichdir, reserved_space):
5953+def get_available_space(whichdirfp, reserved_space):
5954     """Returns available space for share storage in bytes, or None if no
5955     API to get this information is available.
5956 
5957hunk ./src/allmydata/util/fileutil.py 472
5958     you can pass how many bytes you would like to leave unused on this
5959     filesystem as reserved_space.
5960     """
5961+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5962     try:
5963hunk ./src/allmydata/util/fileutil.py 474
5964-        return get_disk_stats(whichdir, reserved_space)['avail']
5965+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5966     except AttributeError:
5967         return None
5968hunk ./src/allmydata/util/fileutil.py 477
5969-    except EnvironmentError:
5970-        log.msg("OS call to get disk statistics failed")
5971-        return 0
5972}
5973
5974Context:
5975
5976[docs: add missing link in NEWS.rst
5977zooko@zooko.com**20110712153307
5978 Ignore-this: be7b7eb81c03700b739daa1027d72b35
5979]
5980[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
5981zooko@zooko.com**20110712153229
5982 Ignore-this: 723c4f9e2211027c79d711715d972c5
5983 Also remove a couple of vestigial references to figleaf, which is long gone.
5984 fixes #1409 (remove contrib/fuse)
5985]
5986[add Protovis.js-based download-status timeline visualization
5987Brian Warner <warner@lothar.com>**20110629222606
5988 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
5989 
5990 provide status overlap info on the webapi t=json output, add decode/decrypt
5991 rate tooltips, add zoomin/zoomout buttons
5992]
5993[add more download-status data, fix tests
5994Brian Warner <warner@lothar.com>**20110629222555
5995 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
5996]
5997[prepare for viz: improve DownloadStatus events
5998Brian Warner <warner@lothar.com>**20110629222542
5999 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
6000 
6001 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
6002]
6003[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
6004zooko@zooko.com**20110629185711
6005 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
6006]
6007[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
6008david-sarah@jacaranda.org**20110130235809
6009 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
6010]
6011[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
6012david-sarah@jacaranda.org**20110626054124
6013 Ignore-this: abb864427a1b91bd10d5132b4589fd90
6014]
6015[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
6016david-sarah@jacaranda.org**20110623205528
6017 Ignore-this: c63e23146c39195de52fb17c7c49b2da
6018]
6019[Rename test_package_initialization.py to (much shorter) test_import.py .
6020Brian Warner <warner@lothar.com>**20110611190234
6021 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
6022 
6023 The former name was making my 'ls' listings hard to read, by forcing them
6024 down to just two columns.
6025]
[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
6027zooko@zooko.com**20110611163741
6028 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
6029 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
6030 fixes #1412
6031]
6032[wui: right-align the size column in the WUI
6033zooko@zooko.com**20110611153758
6034 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
6035 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
6036 fixes #1412
6037]
6038[docs: three minor fixes
6039zooko@zooko.com**20110610121656
6040 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
6041 CREDITS for arc for stats tweak
6042 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
6043 English usage tweak
6044]
6045[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
6046david-sarah@jacaranda.org**20110609223719
6047 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
6048]
6049[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
6050wilcoxjg@gmail.com**20110527120135
6051 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
6052 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
6053 NEWS.rst, stats.py: documentation of change to get_latencies
6054 stats.rst: now documents percentile modification in get_latencies
6055 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
6056 fixes #1392
6057]
6058[corrected "k must never be smaller than N" to "k must never be greater than N"
6059secorp@allmydata.org**20110425010308
6060 Ignore-this: 233129505d6c70860087f22541805eac
6061]
6062[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
6063david-sarah@jacaranda.org**20110517011214
6064 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
6065]
6066[docs: convert NEWS to NEWS.rst and change all references to it.
6067david-sarah@jacaranda.org**20110517010255
6068 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
6069]
6070[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
6071david-sarah@jacaranda.org**20110512140559
6072 Ignore-this: 784548fc5367fac5450df1c46890876d
6073]
6074[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
6075david-sarah@jacaranda.org**20110130164923
6076 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
6077]
6078[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
6079zooko@zooko.com**20110128142006
6080 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
6081 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
6082]
6083[M-x whitespace-cleanup
6084zooko@zooko.com**20110510193653
6085 Ignore-this: dea02f831298c0f65ad096960e7df5c7
6086]
6087[docs: fix typo in running.rst, thanks to arch_o_median
6088zooko@zooko.com**20110510193633
6089 Ignore-this: ca06de166a46abbc61140513918e79e8
6090]
6091[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
6092david-sarah@jacaranda.org**20110204204902
6093 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
6094]
6095[relnotes.txt: forseeable -> foreseeable. refs #1342
6096david-sarah@jacaranda.org**20110204204116
6097 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
6098]
6099[replace remaining .html docs with .rst docs
6100zooko@zooko.com**20110510191650
6101 Ignore-this: d557d960a986d4ac8216d1677d236399
6102 Remove install.html (long since deprecated).
6103 Also replace some obsolete references to install.html with references to quickstart.rst.
6104 Fix some broken internal references within docs/historical/historical_known_issues.txt.
6105 Thanks to Ravi Pinjala and Patrick McDonald.
6106 refs #1227
6107]
6108[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
6109zooko@zooko.com**20110428055232
6110 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
6111]
6112[munin tahoe_files plugin: fix incorrect file count
6113francois@ctrlaltdel.ch**20110428055312
6114 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
6115 fixes #1391
6116]
6117[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
6118david-sarah@jacaranda.org**20110411190738
6119 Ignore-this: 7847d26bc117c328c679f08a7baee519
6120]
6121[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
6122david-sarah@jacaranda.org**20110410155844
6123 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
6124]
6125[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
6126david-sarah@jacaranda.org**20110410155705
6127 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
6128]
6129[remove unused variable detected by pyflakes
6130zooko@zooko.com**20110407172231
6131 Ignore-this: 7344652d5e0720af822070d91f03daf9
6132]
6133[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
6134david-sarah@jacaranda.org**20110401202750
6135 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
6136]
6137[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
6138Brian Warner <warner@lothar.com>**20110325232511
6139 Ignore-this: d5307faa6900f143193bfbe14e0f01a
6140]
6141[control.py: remove all uses of s.get_serverid()
6142warner@lothar.com**20110227011203
6143 Ignore-this: f80a787953bd7fa3d40e828bde00e855
6144]
6145[web: remove some uses of s.get_serverid(), not all
6146warner@lothar.com**20110227011159
6147 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
6148]
6149[immutable/downloader/fetcher.py: remove all get_serverid() calls
6150warner@lothar.com**20110227011156
6151 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
6152]
6153[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
6154warner@lothar.com**20110227011153
6155 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
6156 
6157 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
6158 _shares_from_server dict was being popped incorrectly (using shnum as the
6159 index instead of serverid). I'm still thinking through the consequences of
6160 this bug. It was probably benign and really hard to detect. I think it would
6161 cause us to incorrectly believe that we're pulling too many shares from a
6162 server, and thus prefer a different server rather than asking for a second
6163 share from the first server. The diversity code is intended to spread out the
6164 number of shares simultaneously being requested from each server, but with
6165 this bug, it might be spreading out the total number of shares requested at
6166 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
6167 segment, so the effect doesn't last very long).
6168]
6169[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
6170warner@lothar.com**20110227011150
6171 Ignore-this: d8d56dd8e7b280792b40105e13664554
6172 
6173 test_download.py: create+check MyShare instances better, make sure they share
6174 Server objects, now that finder.py cares
6175]
6176[immutable/downloader/finder.py: reduce use of get_serverid(), one left
6177warner@lothar.com**20110227011146
6178 Ignore-this: 5785be173b491ae8a78faf5142892020
6179]
6180[immutable/offloaded.py: reduce use of get_serverid() a bit more
6181warner@lothar.com**20110227011142
6182 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
6183]
6184[immutable/upload.py: reduce use of get_serverid()
6185warner@lothar.com**20110227011138
6186 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
6187]
6188[immutable/checker.py: remove some uses of s.get_serverid(), not all
6189warner@lothar.com**20110227011134
6190 Ignore-this: e480a37efa9e94e8016d826c492f626e
6191]
6192[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
6193warner@lothar.com**20110227011132
6194 Ignore-this: 6078279ddf42b179996a4b53bee8c421
6195 MockIServer stubs
6196]
6197[upload.py: rearrange _make_trackers a bit, no behavior changes
6198warner@lothar.com**20110227011128
6199 Ignore-this: 296d4819e2af452b107177aef6ebb40f
6200]
6201[happinessutil.py: finally rename merge_peers to merge_servers
6202warner@lothar.com**20110227011124
6203 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
6204]
6205[test_upload.py: factor out FakeServerTracker
6206warner@lothar.com**20110227011120
6207 Ignore-this: 6c182cba90e908221099472cc159325b
6208]
6209[test_upload.py: server-vs-tracker cleanup
6210warner@lothar.com**20110227011115
6211 Ignore-this: 2915133be1a3ba456e8603885437e03
6212]
6213[happinessutil.py: server-vs-tracker cleanup
6214warner@lothar.com**20110227011111
6215 Ignore-this: b856c84033562d7d718cae7cb01085a9
6216]
6217[upload.py: more tracker-vs-server cleanup
6218warner@lothar.com**20110227011107
6219 Ignore-this: bb75ed2afef55e47c085b35def2de315
6220]
6221[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
6222warner@lothar.com**20110227011103
6223 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
6224]
6225[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
6226warner@lothar.com**20110227011100
6227 Ignore-this: 7ea858755cbe5896ac212a925840fe68
6228 
6229 No behavioral changes, just updating variable/method names and log messages.
6230 The effects outside these three files should be minimal: some exception
6231 messages changed (to say "server" instead of "peer"), and some internal class
6232 names were changed. A few things still use "peer" to minimize external
6233 changes, like UploadResults.timings["peer_selection"] and
6234 happinessutil.merge_peers, which can be changed later.
6235]
6236[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
6237warner@lothar.com**20110227011056
6238 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
6239]
6240[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
6241warner@lothar.com**20110227011051
6242 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
6243]
6244[test: increase timeout on a network test because Francois's ARM machine hit that timeout
6245zooko@zooko.com**20110317165909
6246 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
6247 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
6248]
6249[docs/configuration.rst: add a "Frontend Configuration" section
6250Brian Warner <warner@lothar.com>**20110222014323
6251 Ignore-this: 657018aa501fe4f0efef9851628444ca
6252 
6253 this points to docs/frontends/*.rst, which were previously underlinked
6254]
6255[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
6256"Brian Warner <warner@lothar.com>"**20110221061544
6257 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
6258]
6259[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
6260david-sarah@jacaranda.org**20110221015817
6261 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
6262]
6263[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
6264david-sarah@jacaranda.org**20110221020125
6265 Ignore-this: b0744ed58f161bf188e037bad077fc48
6266]
6267[Refactor StorageFarmBroker handling of servers
6268Brian Warner <warner@lothar.com>**20110221015804
6269 Ignore-this: 842144ed92f5717699b8f580eab32a51
6270 
6271 Pass around IServer instance instead of (peerid, rref) tuple. Replace
6272 "descriptor" with "server". Other replacements:
6273 
6274  get_all_servers -> get_connected_servers/get_known_servers
6275  get_servers_for_index -> get_servers_for_psi (now returns IServers)
6276 
6277 This change still needs to be pushed further down: lots of code is now
6278 getting the IServer and then distributing (peerid, rref) internally.
6279 Instead, it ought to distribute the IServer internally and delay
6280 extracting a serverid or rref until the last moment.
6281 
6282 no_network.py was updated to retain parallelism.
6283]
6284[TAG allmydata-tahoe-1.8.2
6285warner@lothar.com**20110131020101]
6286Patch bundle hash:
6287dcb8d47133bff7a956d701a185b9362e228dc343