Ticket #999: jacp19Zancas20110727.darcs.patch

File jacp19Zancas20110727.darcs.patch, 339.1 KB (added by Zancas, at 2011-07-27T08:05:16Z)
1Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
2  * storage: new mocking tests of storage server read and write
3  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
4
5Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
6  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
7  sloppy; not for production
8
9Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
10  * a temp patch used as a snapshot
11
12Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
13  * snapshot of progress on backend implementation (not suitable for trunk)
14
15Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
16  * checkpoint patch
17
18Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
19  * checkpoint4
20
21Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
22  * checkpoint5
23
24Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
25  * checkpoint 6
26
27Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
28  * checkpoint 7
29
30Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
31  * checkpoint8
32    The null backend is necessary to test unlimited space in a backend. It is a mock-like object.
33
34Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
35  * checkpoint 9
36
37Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
38  * checkpoint10
39
40Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
41  * jacp 11
42
43Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
44  * checkpoint12 testing correct behavior with regard to incoming and final
45
46Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
47  * fix inconsistent naming of storage_index vs storageindex in storage/server.py
48
49Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
50  * adding comments to clarify what I'm about to do.
51
52Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
53  * branching back, no longer attempting to mock inside TestServerFSBackend
54
55Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
56  * checkpoint12 TestServerFSBackend no longer mocks filesystem
57
58Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
59  * JACP
60
61Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
62  * testing get incoming
63
64Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
65  * ImmutableShareFile does not know its StorageIndex
66
67Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
68  * get_incoming correctly reports the 0 share after it has arrived
69
70Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
71  * jacp14
72
73Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
74  * jacp14 or so
75
76Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
77  * temporary work-in-progress patch to be unrecorded
78  tidy up a few tests; work done while pair-programming with Zancas
79
80Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
81  * work in progress intended to be unrecorded and never committed to trunk
82  switch from os.path.join to filepath
83  incomplete refactoring of common "stay in your subtree" tester code into a superclass
84 
85
86Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
87  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
88  In this patch (very incomplete) we started two major changes: the first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests; the second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath (a minimal sketch of this conversion follows the patch list below). The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
89
90Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
91  * another temporary patch for sharing work-in-progress
92  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
93  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
94  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units insofar as possible...)
95 
96
97Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
98  * jacp16 or so
99
100Fri Jul 22 14:32:44 MDT 2011  wilcoxjg@gmail.com
101  * jacp17
102
103Fri Jul 22 21:19:15 MDT 2011  wilcoxjg@gmail.com
104  * jacp18
105
106Sat Jul 23 21:42:30 MDT 2011  wilcoxjg@gmail.com
107  * jacp19orso
108
109Wed Jul 27 02:05:53 MDT 2011  wilcoxjg@gmail.com
110  * jacp19
111
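
The 2011-07-15 and 2011-07-19 entries above describe converting filename manipulation from os.path.join-style string handling to twisted.python.filepath. The following is a minimal, standalone sketch of that kind of conversion; the directory names are illustrative and are not taken from the patches.

import os
from twisted.python.filepath import FilePath

# os.path style: paths are plain strings, joined and checked by hand.
storedir = "teststoredir"
incomingdir = os.path.join(storedir, "shares", "incoming")
if not os.path.isdir(incomingdir):
    os.makedirs(incomingdir)

# filepath style: a FilePath object carries its own path operations,
# which removes most of the string handling and lets code pass path
# objects around instead of bare strings.
incomingfp = FilePath(storedir).child("shares").child("incoming")
if not incomingfp.isdir():
    incomingfp.makedirs()
print incomingfp.path   # the plain string path, for APIs that still need one
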
112New patches:
113
114[storage: new mocking tests of storage server read and write
115wilcoxjg@gmail.com**20110325203514
116 Ignore-this: df65c3c4f061dd1516f88662023fdb41
117 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
118] {
119addfile ./src/allmydata/test/test_server.py
120hunk ./src/allmydata/test/test_server.py 1
121+from twisted.trial import unittest
122+
123+from StringIO import StringIO
124+
125+from allmydata.test.common_util import ReallyEqualMixin
126+
127+import mock
128+
129+# This is the code that we're going to be testing.
130+from allmydata.storage.server import StorageServer
131+
132+# The following share file contents was generated with
133+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
134+# with share data == 'a'.
135+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
136+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
137+
138+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
139+
140+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
141+    @mock.patch('__builtin__.open')
142+    def test_create_server(self, mockopen):
143+        """ This tests whether a server instance can be constructed. """
144+
145+        def call_open(fname, mode):
146+            if fname == 'testdir/bucket_counter.state':
147+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
148+            elif fname == 'testdir/lease_checker.state':
149+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
150+            elif fname == 'testdir/lease_checker.history':
151+                return StringIO()
152+        mockopen.side_effect = call_open
153+
154+        # Now begin the test.
155+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
156+
157+        # You passed!
158+
159+class TestServer(unittest.TestCase, ReallyEqualMixin):
160+    @mock.patch('__builtin__.open')
161+    def setUp(self, mockopen):
162+        def call_open(fname, mode):
163+            if fname == 'testdir/bucket_counter.state':
164+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
165+            elif fname == 'testdir/lease_checker.state':
166+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
167+            elif fname == 'testdir/lease_checker.history':
168+                return StringIO()
169+        mockopen.side_effect = call_open
170+
171+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
172+
173+
174+    @mock.patch('time.time')
175+    @mock.patch('os.mkdir')
176+    @mock.patch('__builtin__.open')
177+    @mock.patch('os.listdir')
178+    @mock.patch('os.path.isdir')
179+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
180+        """Handle a report of corruption."""
181+
182+        def call_listdir(dirname):
183+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
184+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
185+
186+        mocklistdir.side_effect = call_listdir
187+
188+        class MockFile:
189+            def __init__(self):
190+                self.buffer = ''
191+                self.pos = 0
192+            def write(self, instring):
193+                begin = self.pos
194+                padlen = begin - len(self.buffer)
195+                if padlen > 0:
196+                    self.buffer += '\x00' * padlen
197+                end = self.pos + len(instring)
198+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
199+                self.pos = end
200+            def close(self):
201+                pass
202+            def seek(self, pos):
203+                self.pos = pos
204+            def read(self, numberbytes):
205+                return self.buffer[self.pos:self.pos+numberbytes]
206+            def tell(self):
207+                return self.pos
208+
209+        mocktime.return_value = 0
210+
211+        sharefile = MockFile()
212+        def call_open(fname, mode):
213+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
214+            return sharefile
215+
216+        mockopen.side_effect = call_open
217+        # Now begin the test.
218+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
219+        print bs
220+        bs[0].remote_write(0, 'a')
221+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
222+
223+
224+    @mock.patch('os.path.exists')
225+    @mock.patch('os.path.getsize')
226+    @mock.patch('__builtin__.open')
227+    @mock.patch('os.listdir')
228+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
229+        """ This tests whether the code correctly finds and reads
230+        shares written out by old (Tahoe-LAFS <= v1.8.2)
231+        servers. There is a similar test in test_download, but that one
232+        is from the perspective of the client and exercises a deeper
233+        stack of code. This one is for exercising just the
234+        StorageServer object. """
235+
236+        def call_listdir(dirname):
237+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
238+            return ['0']
239+
240+        mocklistdir.side_effect = call_listdir
241+
242+        def call_open(fname, mode):
243+            self.failUnlessReallyEqual(fname, sharefname)
244+            self.failUnless('r' in mode, mode)
245+            self.failUnless('b' in mode, mode)
246+
247+            return StringIO(share_file_data)
248+        mockopen.side_effect = call_open
249+
250+        datalen = len(share_file_data)
251+        def call_getsize(fname):
252+            self.failUnlessReallyEqual(fname, sharefname)
253+            return datalen
254+        mockgetsize.side_effect = call_getsize
255+
256+        def call_exists(fname):
257+            self.failUnlessReallyEqual(fname, sharefname)
258+            return True
259+        mockexists.side_effect = call_exists
260+
261+        # Now begin the test.
262+        bs = self.s.remote_get_buckets('teststorage_index')
263+
264+        self.failUnlessEqual(len(bs), 1)
265+        b = bs[0]
266+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
267+        # If you try to read past the end you get the as much data as is there.
268+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
269+        # If you start reading past the end of the file you get the empty string.
270+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
271}
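
The tests in the patch above isolate StorageServer from the real filesystem by patching the builtin open with mock.patch and routing every call through a side_effect function. A minimal, standalone sketch of that pattern follows; the filenames and file contents here are made up for illustration and are not part of the patch.

import mock
from StringIO import StringIO
from twisted.trial import unittest

class MockOpenExample(unittest.TestCase):
    @mock.patch('__builtin__.open')
    def test_open_is_intercepted(self, mockopen):
        def call_open(fname, mode):
            # Pretend one file is missing and serve fake contents for the rest.
            if fname == 'missing.state':
                raise IOError(2, "No such file or directory: 'missing.state'")
            return StringIO('fake file contents')
        mockopen.side_effect = call_open

        # Code under test calls open() as usual but never touches the disk.
        self.failUnlessEqual(open('anything.history', 'rb').read(),
                             'fake file contents')
        self.failUnlessRaises(IOError, open, 'missing.state', 'rb')
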
272[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
273wilcoxjg@gmail.com**20110624202850
274 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
275 sloppy; not for production
276] {
277move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
278hunk ./src/allmydata/storage/crawler.py 13
279     pass
280 
281 class ShareCrawler(service.MultiService):
282-    """A ShareCrawler subclass is attached to a StorageServer, and
283+    """A subcless of ShareCrawler is attached to a StorageServer, and
284     periodically walks all of its shares, processing each one in some
285     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
286     since large servers can easily have a terabyte of shares, in several
287hunk ./src/allmydata/storage/crawler.py 31
288     We assume that the normal upload/download/get_buckets traffic of a tahoe
289     grid will cause the prefixdir contents to be mostly cached in the kernel,
290     or that the number of buckets in each prefixdir will be small enough to
291-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
292+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
293     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
294     prefix. On this server, each prefixdir took 130ms-200ms to list the first
295     time, and 17ms to list the second time.
296hunk ./src/allmydata/storage/crawler.py 68
297     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
298     minimum_cycle_time = 300 # don't run a cycle faster than this
299 
300-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
301+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
302         service.MultiService.__init__(self)
303         if allowed_cpu_percentage is not None:
304             self.allowed_cpu_percentage = allowed_cpu_percentage
305hunk ./src/allmydata/storage/crawler.py 72
306-        self.server = server
307-        self.sharedir = server.sharedir
308-        self.statefile = statefile
309+        self.backend = backend
310         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
311                          for i in range(2**10)]
312         self.prefixes.sort()
313hunk ./src/allmydata/storage/crawler.py 446
314 
315     minimum_cycle_time = 60*60 # we don't need this more than once an hour
316 
317-    def __init__(self, server, statefile, num_sample_prefixes=1):
318-        ShareCrawler.__init__(self, server, statefile)
319+    def __init__(self, statefile, num_sample_prefixes=1):
320+        ShareCrawler.__init__(self, statefile)
321         self.num_sample_prefixes = num_sample_prefixes
322 
323     def add_initial_state(self):
324hunk ./src/allmydata/storage/expirer.py 15
325     removed.
326 
327     I collect statistics on the leases and make these available to a web
328-    status page, including::
329+    status page, including:
330 
331     Space recovered during this cycle-so-far:
332      actual (only if expiration_enabled=True):
333hunk ./src/allmydata/storage/expirer.py 51
334     slow_start = 360 # wait 6 minutes after startup
335     minimum_cycle_time = 12*60*60 # not more than twice per day
336 
337-    def __init__(self, server, statefile, historyfile,
338+    def __init__(self, statefile, historyfile,
339                  expiration_enabled, mode,
340                  override_lease_duration, # used if expiration_mode=="age"
341                  cutoff_date, # used if expiration_mode=="cutoff-date"
342hunk ./src/allmydata/storage/expirer.py 71
343         else:
344             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
345         self.sharetypes_to_expire = sharetypes
346-        ShareCrawler.__init__(self, server, statefile)
347+        ShareCrawler.__init__(self, statefile)
348 
349     def add_initial_state(self):
350         # we fill ["cycle-to-date"] here (even though they will be reset in
351hunk ./src/allmydata/storage/immutable.py 44
352     sharetype = "immutable"
353 
354     def __init__(self, filename, max_size=None, create=False):
355-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
356+        """ If max_size is not None then I won't allow more than
357+        max_size to be written to me. If create=True then max_size
358+        must not be None. """
359         precondition((max_size is not None) or (not create), max_size, create)
360         self.home = filename
361         self._max_size = max_size
362hunk ./src/allmydata/storage/immutable.py 87
363 
364     def read_share_data(self, offset, length):
365         precondition(offset >= 0)
366-        # reads beyond the end of the data are truncated. Reads that start
367-        # beyond the end of the data return an empty string. I wonder why
368-        # Python doesn't do the following computation for me?
369+        # Reads beyond the end of the data are truncated. Reads that start
370+        # beyond the end of the data return an empty string.
371         seekpos = self._data_offset+offset
372         fsize = os.path.getsize(self.home)
373         actuallength = max(0, min(length, fsize-seekpos))
374hunk ./src/allmydata/storage/immutable.py 198
375             space_freed += os.stat(self.home)[stat.ST_SIZE]
376             self.unlink()
377         return space_freed
378+class NullBucketWriter(Referenceable):
379+    implements(RIBucketWriter)
380 
381hunk ./src/allmydata/storage/immutable.py 201
382+    def remote_write(self, offset, data):
383+        return
384 
385 class BucketWriter(Referenceable):
386     implements(RIBucketWriter)
387hunk ./src/allmydata/storage/server.py 7
388 from twisted.application import service
389 
390 from zope.interface import implements
391-from allmydata.interfaces import RIStorageServer, IStatsProducer
392+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
393 from allmydata.util import fileutil, idlib, log, time_format
394 import allmydata # for __full_version__
395 
396hunk ./src/allmydata/storage/server.py 16
397 from allmydata.storage.lease import LeaseInfo
398 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
399      create_mutable_sharefile
400-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
401+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
402 from allmydata.storage.crawler import BucketCountingCrawler
403 from allmydata.storage.expirer import LeaseCheckingCrawler
404 
405hunk ./src/allmydata/storage/server.py 20
406+from zope.interface import implements
407+
408+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
409+# be started and stopped.
410+class Backend(service.MultiService):
411+    implements(IStatsProducer)
412+    def __init__(self):
413+        service.MultiService.__init__(self)
414+
415+    def get_bucket_shares(self):
416+        """XXX"""
417+        raise NotImplementedError
418+
419+    def get_share(self):
420+        """XXX"""
421+        raise NotImplementedError
422+
423+    def make_bucket_writer(self):
424+        """XXX"""
425+        raise NotImplementedError
426+
427+class NullBackend(Backend):
428+    def __init__(self):
429+        Backend.__init__(self)
430+
431+    def get_available_space(self):
432+        return None
433+
434+    def get_bucket_shares(self, storage_index):
435+        return set()
436+
437+    def get_share(self, storage_index, sharenum):
438+        return None
439+
440+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
441+        return NullBucketWriter()
442+
443+class FSBackend(Backend):
444+    def __init__(self, storedir, readonly=False, reserved_space=0):
445+        Backend.__init__(self)
446+
447+        self._setup_storage(storedir, readonly, reserved_space)
448+        self._setup_corruption_advisory()
449+        self._setup_bucket_counter()
450+        self._setup_lease_checkerf()
451+
452+    def _setup_storage(self, storedir, readonly, reserved_space):
453+        self.storedir = storedir
454+        self.readonly = readonly
455+        self.reserved_space = int(reserved_space)
456+        if self.reserved_space:
457+            if self.get_available_space() is None:
458+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
459+                        umid="0wZ27w", level=log.UNUSUAL)
460+
461+        self.sharedir = os.path.join(self.storedir, "shares")
462+        fileutil.make_dirs(self.sharedir)
463+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
464+        self._clean_incomplete()
465+
466+    def _clean_incomplete(self):
467+        fileutil.rm_dir(self.incomingdir)
468+        fileutil.make_dirs(self.incomingdir)
469+
470+    def _setup_corruption_advisory(self):
471+        # we don't actually create the corruption-advisory dir until necessary
472+        self.corruption_advisory_dir = os.path.join(self.storedir,
473+                                                    "corruption-advisories")
474+
475+    def _setup_bucket_counter(self):
476+        statefile = os.path.join(self.storedir, "bucket_counter.state")
477+        self.bucket_counter = BucketCountingCrawler(statefile)
478+        self.bucket_counter.setServiceParent(self)
479+
480+    def _setup_lease_checkerf(self):
481+        statefile = os.path.join(self.storedir, "lease_checker.state")
482+        historyfile = os.path.join(self.storedir, "lease_checker.history")
483+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
484+                                   expiration_enabled, expiration_mode,
485+                                   expiration_override_lease_duration,
486+                                   expiration_cutoff_date,
487+                                   expiration_sharetypes)
488+        self.lease_checker.setServiceParent(self)
489+
490+    def get_available_space(self):
491+        if self.readonly:
492+            return 0
493+        return fileutil.get_available_space(self.storedir, self.reserved_space)
494+
495+    def get_bucket_shares(self, storage_index):
496+        """Return a list of (shnum, pathname) tuples for files that hold
497+        shares for this storage_index. In each tuple, 'shnum' will always be
498+        the integer form of the last component of 'pathname'."""
499+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
500+        try:
501+            for f in os.listdir(storagedir):
502+                if NUM_RE.match(f):
503+                    filename = os.path.join(storagedir, f)
504+                    yield (int(f), filename)
505+        except OSError:
506+            # Commonly caused by there being no buckets at all.
507+            pass
508+
509 # storage/
510 # storage/shares/incoming
511 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
512hunk ./src/allmydata/storage/server.py 143
513     name = 'storage'
514     LeaseCheckerClass = LeaseCheckingCrawler
515 
516-    def __init__(self, storedir, nodeid, reserved_space=0,
517-                 discard_storage=False, readonly_storage=False,
518+    def __init__(self, nodeid, backend, reserved_space=0,
519+                 readonly_storage=False,
520                  stats_provider=None,
521                  expiration_enabled=False,
522                  expiration_mode="age",
523hunk ./src/allmydata/storage/server.py 155
524         assert isinstance(nodeid, str)
525         assert len(nodeid) == 20
526         self.my_nodeid = nodeid
527-        self.storedir = storedir
528-        sharedir = os.path.join(storedir, "shares")
529-        fileutil.make_dirs(sharedir)
530-        self.sharedir = sharedir
531-        # we don't actually create the corruption-advisory dir until necessary
532-        self.corruption_advisory_dir = os.path.join(storedir,
533-                                                    "corruption-advisories")
534-        self.reserved_space = int(reserved_space)
535-        self.no_storage = discard_storage
536-        self.readonly_storage = readonly_storage
537         self.stats_provider = stats_provider
538         if self.stats_provider:
539             self.stats_provider.register_producer(self)
540hunk ./src/allmydata/storage/server.py 158
541-        self.incomingdir = os.path.join(sharedir, 'incoming')
542-        self._clean_incomplete()
543-        fileutil.make_dirs(self.incomingdir)
544         self._active_writers = weakref.WeakKeyDictionary()
545hunk ./src/allmydata/storage/server.py 159
546+        self.backend = backend
547+        self.backend.setServiceParent(self)
548         log.msg("StorageServer created", facility="tahoe.storage")
549 
550hunk ./src/allmydata/storage/server.py 163
551-        if reserved_space:
552-            if self.get_available_space() is None:
553-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
554-                        umin="0wZ27w", level=log.UNUSUAL)
555-
556         self.latencies = {"allocate": [], # immutable
557                           "write": [],
558                           "close": [],
559hunk ./src/allmydata/storage/server.py 174
560                           "renew": [],
561                           "cancel": [],
562                           }
563-        self.add_bucket_counter()
564-
565-        statefile = os.path.join(self.storedir, "lease_checker.state")
566-        historyfile = os.path.join(self.storedir, "lease_checker.history")
567-        klass = self.LeaseCheckerClass
568-        self.lease_checker = klass(self, statefile, historyfile,
569-                                   expiration_enabled, expiration_mode,
570-                                   expiration_override_lease_duration,
571-                                   expiration_cutoff_date,
572-                                   expiration_sharetypes)
573-        self.lease_checker.setServiceParent(self)
574 
575     def __repr__(self):
576         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
577hunk ./src/allmydata/storage/server.py 178
578 
579-    def add_bucket_counter(self):
580-        statefile = os.path.join(self.storedir, "bucket_counter.state")
581-        self.bucket_counter = BucketCountingCrawler(self, statefile)
582-        self.bucket_counter.setServiceParent(self)
583-
584     def count(self, name, delta=1):
585         if self.stats_provider:
586             self.stats_provider.count("storage_server." + name, delta)
587hunk ./src/allmydata/storage/server.py 233
588             kwargs["facility"] = "tahoe.storage"
589         return log.msg(*args, **kwargs)
590 
591-    def _clean_incomplete(self):
592-        fileutil.rm_dir(self.incomingdir)
593-
594     def get_stats(self):
595         # remember: RIStatsProvider requires that our return dict
596         # contains numeric values.
597hunk ./src/allmydata/storage/server.py 269
598             stats['storage_server.total_bucket_count'] = bucket_count
599         return stats
600 
601-    def get_available_space(self):
602-        """Returns available space for share storage in bytes, or None if no
603-        API to get this information is available."""
604-
605-        if self.readonly_storage:
606-            return 0
607-        return fileutil.get_available_space(self.storedir, self.reserved_space)
608-
609     def allocated_size(self):
610         space = 0
611         for bw in self._active_writers:
612hunk ./src/allmydata/storage/server.py 276
613         return space
614 
615     def remote_get_version(self):
616-        remaining_space = self.get_available_space()
617+        remaining_space = self.backend.get_available_space()
618         if remaining_space is None:
619             # We're on a platform that has no API to get disk stats.
620             remaining_space = 2**64
621hunk ./src/allmydata/storage/server.py 301
622         self.count("allocate")
623         alreadygot = set()
624         bucketwriters = {} # k: shnum, v: BucketWriter
625-        si_dir = storage_index_to_dir(storage_index)
626-        si_s = si_b2a(storage_index)
627 
628hunk ./src/allmydata/storage/server.py 302
629+        si_s = si_b2a(storage_index)
630         log.msg("storage: allocate_buckets %s" % si_s)
631 
632         # in this implementation, the lease information (including secrets)
633hunk ./src/allmydata/storage/server.py 316
634 
635         max_space_per_bucket = allocated_size
636 
637-        remaining_space = self.get_available_space()
638+        remaining_space = self.backend.get_available_space()
639         limited = remaining_space is not None
640         if limited:
641             # this is a bit conservative, since some of this allocated_size()
642hunk ./src/allmydata/storage/server.py 329
643         # they asked about: this will save them a lot of work. Add or update
644         # leases for all of them: if they want us to hold shares for this
645         # file, they'll want us to hold leases for this file.
646-        for (shnum, fn) in self._get_bucket_shares(storage_index):
647+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
648             alreadygot.add(shnum)
649             sf = ShareFile(fn)
650             sf.add_or_renew_lease(lease_info)
651hunk ./src/allmydata/storage/server.py 335
652 
653         for shnum in sharenums:
654-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
655-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
656-            if os.path.exists(finalhome):
657+            share = self.backend.get_share(storage_index, shnum)
658+
659+            if not share:
660+                if (not limited) or (remaining_space >= max_space_per_bucket):
661+                    # ok! we need to create the new share file.
662+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
663+                                      max_space_per_bucket, lease_info, canary)
664+                    bucketwriters[shnum] = bw
665+                    self._active_writers[bw] = 1
666+                    if limited:
667+                        remaining_space -= max_space_per_bucket
668+                else:
669+                    # bummer! not enough space to accept this bucket
670+                    pass
671+
672+            elif share.is_complete():
673                 # great! we already have it. easy.
674                 pass
675hunk ./src/allmydata/storage/server.py 353
676-            elif os.path.exists(incominghome):
677+            elif not share.is_complete():
678                 # Note that we don't create BucketWriters for shnums that
679                 # have a partial share (in incoming/), so if a second upload
680                 # occurs while the first is still in progress, the second
681hunk ./src/allmydata/storage/server.py 359
682                 # uploader will use different storage servers.
683                 pass
684-            elif (not limited) or (remaining_space >= max_space_per_bucket):
685-                # ok! we need to create the new share file.
686-                bw = BucketWriter(self, incominghome, finalhome,
687-                                  max_space_per_bucket, lease_info, canary)
688-                if self.no_storage:
689-                    bw.throw_out_all_data = True
690-                bucketwriters[shnum] = bw
691-                self._active_writers[bw] = 1
692-                if limited:
693-                    remaining_space -= max_space_per_bucket
694-            else:
695-                # bummer! not enough space to accept this bucket
696-                pass
697-
698-        if bucketwriters:
699-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
700 
701         self.add_latency("allocate", time.time() - start)
702         return alreadygot, bucketwriters
703hunk ./src/allmydata/storage/server.py 437
704             self.stats_provider.count('storage_server.bytes_added', consumed_size)
705         del self._active_writers[bw]
706 
707-    def _get_bucket_shares(self, storage_index):
708-        """Return a list of (shnum, pathname) tuples for files that hold
709-        shares for this storage_index. In each tuple, 'shnum' will always be
710-        the integer form of the last component of 'pathname'."""
711-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
712-        try:
713-            for f in os.listdir(storagedir):
714-                if NUM_RE.match(f):
715-                    filename = os.path.join(storagedir, f)
716-                    yield (int(f), filename)
717-        except OSError:
718-            # Commonly caused by there being no buckets at all.
719-            pass
720 
721     def remote_get_buckets(self, storage_index):
722         start = time.time()
723hunk ./src/allmydata/storage/server.py 444
724         si_s = si_b2a(storage_index)
725         log.msg("storage: get_buckets %s" % si_s)
726         bucketreaders = {} # k: sharenum, v: BucketReader
727-        for shnum, filename in self._get_bucket_shares(storage_index):
728+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
729             bucketreaders[shnum] = BucketReader(self, filename,
730                                                 storage_index, shnum)
731         self.add_latency("get", time.time() - start)
732hunk ./src/allmydata/test/test_backends.py 10
733 import mock
734 
735 # This is the code that we're going to be testing.
736-from allmydata.storage.server import StorageServer
737+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
738 
739 # The following share file contents was generated with
740 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
741hunk ./src/allmydata/test/test_backends.py 21
742 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
743 
744 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
745+    @mock.patch('time.time')
746+    @mock.patch('os.mkdir')
747+    @mock.patch('__builtin__.open')
748+    @mock.patch('os.listdir')
749+    @mock.patch('os.path.isdir')
750+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
751+        """ This tests whether a server instance can be constructed
752+        with a null backend. The server instance fails the test if it
753+        tries to read or write to the file system. """
754+
755+        # Now begin the test.
756+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
757+
758+        self.failIf(mockisdir.called)
759+        self.failIf(mocklistdir.called)
760+        self.failIf(mockopen.called)
761+        self.failIf(mockmkdir.called)
762+
763+        # You passed!
764+
765+    @mock.patch('time.time')
766+    @mock.patch('os.mkdir')
767     @mock.patch('__builtin__.open')
768hunk ./src/allmydata/test/test_backends.py 44
769-    def test_create_server(self, mockopen):
770-        """ This tests whether a server instance can be constructed. """
771+    @mock.patch('os.listdir')
772+    @mock.patch('os.path.isdir')
773+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
774+        """ This tests whether a server instance can be constructed
775+        with a filesystem backend. To pass the test, it has to use the
776+        filesystem in only the prescribed ways. """
777 
778         def call_open(fname, mode):
779             if fname == 'testdir/bucket_counter.state':
780hunk ./src/allmydata/test/test_backends.py 58
781                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
782             elif fname == 'testdir/lease_checker.history':
783                 return StringIO()
784+            else:
785+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
786         mockopen.side_effect = call_open
787 
788         # Now begin the test.
789hunk ./src/allmydata/test/test_backends.py 63
790-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
791+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
792+
793+        self.failIf(mockisdir.called)
794+        self.failIf(mocklistdir.called)
795+        self.failIf(mockopen.called)
796+        self.failIf(mockmkdir.called)
797+        self.failIf(mocktime.called)
798 
799         # You passed!
800 
801hunk ./src/allmydata/test/test_backends.py 73
802-class TestServer(unittest.TestCase, ReallyEqualMixin):
803+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
804+    def setUp(self):
805+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
806+
807+    @mock.patch('os.mkdir')
808+    @mock.patch('__builtin__.open')
809+    @mock.patch('os.listdir')
810+    @mock.patch('os.path.isdir')
811+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
812+        """ Write a new share. """
813+
814+        # Now begin the test.
815+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
816+        bs[0].remote_write(0, 'a')
817+        self.failIf(mockisdir.called)
818+        self.failIf(mocklistdir.called)
819+        self.failIf(mockopen.called)
820+        self.failIf(mockmkdir.called)
821+
822+    @mock.patch('os.path.exists')
823+    @mock.patch('os.path.getsize')
824+    @mock.patch('__builtin__.open')
825+    @mock.patch('os.listdir')
826+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
827+        """ This tests whether the code correctly finds and reads
828+        shares written out by old (Tahoe-LAFS <= v1.8.2)
829+        servers. There is a similar test in test_download, but that one
830+        is from the perspective of the client and exercises a deeper
831+        stack of code. This one is for exercising just the
832+        StorageServer object. """
833+
834+        # Now begin the test.
835+        bs = self.s.remote_get_buckets('teststorage_index')
836+
837+        self.failUnlessEqual(len(bs), 0)
838+        self.failIf(mocklistdir.called)
839+        self.failIf(mockopen.called)
840+        self.failIf(mockgetsize.called)
841+        self.failIf(mockexists.called)
842+
843+
844+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
845     @mock.patch('__builtin__.open')
846     def setUp(self, mockopen):
847         def call_open(fname, mode):
848hunk ./src/allmydata/test/test_backends.py 126
849                 return StringIO()
850         mockopen.side_effect = call_open
851 
852-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
853-
854+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
855 
856     @mock.patch('time.time')
857     @mock.patch('os.mkdir')
858hunk ./src/allmydata/test/test_backends.py 134
859     @mock.patch('os.listdir')
860     @mock.patch('os.path.isdir')
861     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
862-        """Handle a report of corruption."""
863+        """ Write a new share. """
864 
865         def call_listdir(dirname):
866             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
867hunk ./src/allmydata/test/test_backends.py 173
868         mockopen.side_effect = call_open
869         # Now begin the test.
870         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
871-        print bs
872         bs[0].remote_write(0, 'a')
873         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
874 
875hunk ./src/allmydata/test/test_backends.py 176
876-
877     @mock.patch('os.path.exists')
878     @mock.patch('os.path.getsize')
879     @mock.patch('__builtin__.open')
880hunk ./src/allmydata/test/test_backends.py 218
881 
882         self.failUnlessEqual(len(bs), 1)
883         b = bs[0]
884+        # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
885         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
886         # If you try to read past the end you get the as much data as is there.
887         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
888hunk ./src/allmydata/test/test_backends.py 224
889         # If you start reading past the end of the file you get the empty string.
890         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
891+
892+
893}
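
The patch above moves filesystem knowledge out of StorageServer and behind a backend object, so that a null backend can stand in for the filesystem during tests. The stripped-down classes below are a hypothetical illustration of that delegation pattern, not the actual Tahoe-LAFS classes.

class MinimalBackend(object):
    """Interface sketch: what a server asks its backend to do."""
    def get_available_space(self):
        raise NotImplementedError
    def get_bucket_shares(self, storage_index):
        raise NotImplementedError

class MinimalNullBackend(MinimalBackend):
    """Claims no shares and unknown/unlimited space; touches no files."""
    def get_available_space(self):
        return None                      # None means "no API / no limit"
    def get_bucket_shares(self, storage_index):
        return set()

class MinimalServer(object):
    def __init__(self, nodeid, backend):
        self.nodeid = nodeid
        self.backend = backend           # all storage decisions are delegated
    def remaining_space(self):
        space = self.backend.get_available_space()
        return 2**64 if space is None else space

s = MinimalServer('testnodeidxxxxxxxxxx', backend=MinimalNullBackend())
assert s.remaining_space() == 2**64
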
894[a temp patch used as a snapshot
895wilcoxjg@gmail.com**20110626052732
896 Ignore-this: 95f05e314eaec870afa04c76d979aa44
897] {
898hunk ./docs/configuration.rst 637
899   [storage]
900   enabled = True
901   readonly = True
902-  sizelimit = 10000000000
903 
904 
905   [helper]
906hunk ./docs/garbage-collection.rst 16
907 
908 When a file or directory in the virtual filesystem is no longer referenced,
909 the space that its shares occupied on each storage server can be freed,
910-making room for other shares. Tahoe currently uses a garbage collection
911+making room for other shares. Tahoe uses a garbage collection
912 ("GC") mechanism to implement this space-reclamation process. Each share has
913 one or more "leases", which are managed by clients who want the
914 file/directory to be retained. The storage server accepts each share for a
915hunk ./docs/garbage-collection.rst 34
916 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
917 If lease renewal occurs quickly and with 100% reliability, than any renewal
918 time that is shorter than the lease duration will suffice, but a larger ratio
919-of duration-over-renewal-time will be more robust in the face of occasional
920+of lease duration to renewal time will be more robust in the face of occasional
921 delays or failures.
922 
923 The current recommended values for a small Tahoe grid are to renew the leases
924replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
925hunk ./src/allmydata/client.py 260
926             sharetypes.append("mutable")
927         expiration_sharetypes = tuple(sharetypes)
928 
929+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
930+            xyz
931+        xyz
932         ss = StorageServer(storedir, self.nodeid,
933                            reserved_space=reserved,
934                            discard_storage=discard,
935hunk ./src/allmydata/storage/crawler.py 234
936         f = open(tmpfile, "wb")
937         pickle.dump(self.state, f)
938         f.close()
939-        fileutil.move_into_place(tmpfile, self.statefile)
940+        fileutil.move_into_place(tmpfile, self.statefname)
941 
942     def startService(self):
943         # arrange things to look like we were just sleeping, so
944}
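
The client.py hunk above leaves the backend selection logic as a bare placeholder (the 'xyz' statements). The function below is one hedged guess at the shape that selection might eventually take, assuming the FSBackend and NullBackend classes introduced by the earlier patch; the configuration value "null" and the constructor arguments are assumptions, not a final API.

# Assumes the patched tree, where server.py defines FSBackend and NullBackend.
from allmydata.storage.server import FSBackend, NullBackend

def make_storage_backend(get_config, storedir):
    # Hypothetical sketch of what the 'xyz' placeholder might become.
    kind = get_config("storage", "backend", "filesystem")
    if kind == "filesystem":
        return FSBackend(storedir)
    elif kind == "null":
        return NullBackend()
    raise ValueError("unknown [storage]backend value: %r" % (kind,))
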
945[snapshot of progress on backend implementation (not suitable for trunk)
946wilcoxjg@gmail.com**20110626053244
947 Ignore-this: 50c764af791c2b99ada8289546806a0a
948] {
949adddir ./src/allmydata/storage/backends
950adddir ./src/allmydata/storage/backends/das
951move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
952adddir ./src/allmydata/storage/backends/null
953hunk ./src/allmydata/interfaces.py 270
954         store that on disk.
955         """
956 
957+class IStorageBackend(Interface):
958+    """
959+    Objects of this kind live on the server side and are used by the
960+    storage server object.
961+    """
962+    def get_available_space(self, reserved_space):
963+        """ Returns available space for share storage in bytes, or
964+        None if this information is not available or if the available
965+        space is unlimited.
966+
967+        If the backend is configured for read-only mode then this will
968+        return 0.
969+
970+        reserved_space is how many bytes to subtract from the answer, so
971+        you can pass how many bytes you would like to leave unused on this
972+        filesystem as reserved_space. """
973+
974+    def get_bucket_shares(self):
975+        """XXX"""
976+
977+    def get_share(self):
978+        """XXX"""
979+
980+    def make_bucket_writer(self):
981+        """XXX"""
982+
983+class IStorageBackendShare(Interface):
984+    """
985+    This object contains as much as all of the share data.  It is intended
986+    for lazy evaluation such that in many use cases substantially less than
987+    all of the share data will be accessed.
988+    """
989+    def is_complete(self):
990+        """
991+        Returns the share state, or None if the share does not exist.
992+        """
993+
994 class IStorageBucketWriter(Interface):
995     """
996     Objects of this kind live on the client side.
997hunk ./src/allmydata/interfaces.py 2492
998 
999 class EmptyPathnameComponentError(Exception):
1000     """The webapi disallows empty pathname components."""
1001+
1002+class IShareStore(Interface):
1003+    pass
1004+
1005addfile ./src/allmydata/storage/backends/__init__.py
1006addfile ./src/allmydata/storage/backends/das/__init__.py
1007addfile ./src/allmydata/storage/backends/das/core.py
1008hunk ./src/allmydata/storage/backends/das/core.py 1
1009+from allmydata.interfaces import IStorageBackend
1010+from allmydata.storage.backends.base import Backend
1011+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1012+from allmydata.util.assertutil import precondition
1013+
1014+import os, re, weakref, struct, time
1015+
1016+from foolscap.api import Referenceable
1017+from twisted.application import service
1018+
1019+from zope.interface import implements
1020+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1021+from allmydata.util import fileutil, idlib, log, time_format
1022+import allmydata # for __full_version__
1023+
1024+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1025+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1026+from allmydata.storage.lease import LeaseInfo
1027+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1028+     create_mutable_sharefile
1029+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1030+from allmydata.storage.crawler import FSBucketCountingCrawler
1031+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1032+
1033+from zope.interface import implements
1034+
1035+class DASCore(Backend):
1036+    implements(IStorageBackend)
1037+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1038+        Backend.__init__(self)
1039+
1040+        self._setup_storage(storedir, readonly, reserved_space)
1041+        self._setup_corruption_advisory()
1042+        self._setup_bucket_counter()
1043+        self._setup_lease_checkerf(expiration_policy)
1044+
1045+    def _setup_storage(self, storedir, readonly, reserved_space):
1046+        self.storedir = storedir
1047+        self.readonly = readonly
1048+        self.reserved_space = int(reserved_space)
1049+        if self.reserved_space:
1050+            if self.get_available_space() is None:
1051+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1052+                        umid="0wZ27w", level=log.UNUSUAL)
1053+
1054+        self.sharedir = os.path.join(self.storedir, "shares")
1055+        fileutil.make_dirs(self.sharedir)
1056+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1057+        self._clean_incomplete()
1058+
1059+    def _clean_incomplete(self):
1060+        fileutil.rm_dir(self.incomingdir)
1061+        fileutil.make_dirs(self.incomingdir)
1062+
1063+    def _setup_corruption_advisory(self):
1064+        # we don't actually create the corruption-advisory dir until necessary
1065+        self.corruption_advisory_dir = os.path.join(self.storedir,
1066+                                                    "corruption-advisories")
1067+
1068+    def _setup_bucket_counter(self):
1069+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1070+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1071+        self.bucket_counter.setServiceParent(self)
1072+
1073+    def _setup_lease_checkerf(self, expiration_policy):
1074+        statefile = os.path.join(self.storedir, "lease_checker.state")
1075+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1076+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1077+        self.lease_checker.setServiceParent(self)
1078+
1079+    def get_available_space(self):
1080+        if self.readonly:
1081+            return 0
1082+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1083+
1084+    def get_shares(self, storage_index):
1085+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1086+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1087+        try:
1088+            for f in os.listdir(finalstoragedir):
1089+                if NUM_RE.match(f):
1090+                    filename = os.path.join(finalstoragedir, f)
1091+                    yield FSBShare(filename, int(f))
1092+        except OSError:
1093+            # Commonly caused by there being no buckets at all.
1094+            pass
1095+       
1096+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1097+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1098+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1099+        return bw
1100+       
1101+
1102+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1103+# and share data. The share data is accessed by RIBucketWriter.write and
1104+# RIBucketReader.read . The lease information is not accessible through these
1105+# interfaces.
1106+
1107+# The share file has the following layout:
1108+#  0x00: share file version number, four bytes, current version is 1
1109+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1110+#  0x08: number of leases, four bytes big-endian
1111+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1112+#  A+0x0c = B: first lease. Lease format is:
1113+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1114+#   B+0x04: renew secret, 32 bytes (SHA256)
1115+#   B+0x24: cancel secret, 32 bytes (SHA256)
1116+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1117+#   B+0x48: next lease, or end of record
1118+
1119+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1120+# but it is still filled in by storage servers in case the storage server
1121+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1122+# share file is moved from one storage server to another. The value stored in
1123+# this field is truncated, so if the actual share data length is >= 2**32,
1124+# then the value stored in this field will be the actual share data length
1125+# modulo 2**32.
1126+
1127+class ImmutableShare:
1128+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1129+    sharetype = "immutable"
1130+
1131+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1132+        """ If max_size is not None then I won't allow more than
1133+        max_size to be written to me. If create=True then max_size
1134+        must not be None. """
1135+        precondition((max_size is not None) or (not create), max_size, create)
1136+        self.shnum = shnum
1137+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1138+        self._max_size = max_size
1139+        if create:
1140+            # touch the file, so later callers will see that we're working on
1141+            # it. Also construct the metadata.
1142+            assert not os.path.exists(self.fname)
1143+            fileutil.make_dirs(os.path.dirname(self.fname))
1144+            f = open(self.fname, 'wb')
1145+            # The second field -- the four-byte share data length -- is no
1146+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1147+            # there in case someone downgrades a storage server from >=
1148+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1149+            # server to another, etc. We do saturation -- a share data length
1150+            # larger than 2**32-1 (what can fit into the field) is marked as
1151+            # the largest length that can fit into the field. That way, even
1152+            # if this does happen, the old < v1.3.0 server will still allow
1153+            # clients to read the first part of the share.
1154+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1155+            f.close()
1156+            self._lease_offset = max_size + 0x0c
1157+            self._num_leases = 0
1158+        else:
1159+            f = open(self.fname, 'rb')
1160+            filesize = os.path.getsize(self.fname)
1161+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1162+            f.close()
1163+            if version != 1:
1164+                msg = "sharefile %s had version %d but we wanted 1" % \
1165+                      (self.fname, version)
1166+                raise UnknownImmutableContainerVersionError(msg)
1167+            self._num_leases = num_leases
1168+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1169+        self._data_offset = 0xc
1170+
1171+    def unlink(self):
1172+        os.unlink(self.fname)
1173+
1174+    def read_share_data(self, offset, length):
1175+        precondition(offset >= 0)
1176+        # Reads beyond the end of the data are truncated. Reads that start
1177+        # beyond the end of the data return an empty string.
1178+        seekpos = self._data_offset+offset
1179+        fsize = os.path.getsize(self.fname)
1180+        actuallength = max(0, min(length, fsize-seekpos))
1181+        if actuallength == 0:
1182+            return ""
1183+        f = open(self.fname, 'rb')
1184+        f.seek(seekpos)
1185+        return f.read(actuallength)
1186+
1187+    def write_share_data(self, offset, data):
1188+        length = len(data)
1189+        precondition(offset >= 0, offset)
1190+        if self._max_size is not None and offset+length > self._max_size:
1191+            raise DataTooLargeError(self._max_size, offset, length)
1192+        f = open(self.fname, 'rb+')
1193+        real_offset = self._data_offset+offset
1194+        f.seek(real_offset)
1195+        assert f.tell() == real_offset
1196+        f.write(data)
1197+        f.close()
1198+
1199+    def _write_lease_record(self, f, lease_number, lease_info):
1200+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1201+        f.seek(offset)
1202+        assert f.tell() == offset
1203+        f.write(lease_info.to_immutable_data())
1204+
1205+    def _read_num_leases(self, f):
1206+        f.seek(0x08)
1207+        (num_leases,) = struct.unpack(">L", f.read(4))
1208+        return num_leases
1209+
1210+    def _write_num_leases(self, f, num_leases):
1211+        f.seek(0x08)
1212+        f.write(struct.pack(">L", num_leases))
1213+
1214+    def _truncate_leases(self, f, num_leases):
1215+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1216+
1217+    def get_leases(self):
1218+        """Yields a LeaseInfo instance for all leases."""
1219+        f = open(self.fname, 'rb')
1220+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1221+        f.seek(self._lease_offset)
1222+        for i in range(num_leases):
1223+            data = f.read(self.LEASE_SIZE)
1224+            if data:
1225+                yield LeaseInfo().from_immutable_data(data)
1226+
1227+    def add_lease(self, lease_info):
1228+        f = open(self.fname, 'rb+')
1229+        num_leases = self._read_num_leases(f)
1230+        self._write_lease_record(f, num_leases, lease_info)
1231+        self._write_num_leases(f, num_leases+1)
1232+        f.close()
1233+
1234+    def renew_lease(self, renew_secret, new_expire_time):
1235+        for i,lease in enumerate(self.get_leases()):
1236+            if constant_time_compare(lease.renew_secret, renew_secret):
1237+                # yup. See if we need to update the owner time.
1238+                if new_expire_time > lease.expiration_time:
1239+                    # yes
1240+                    lease.expiration_time = new_expire_time
1241+                    f = open(self.fname, 'rb+')
1242+                    self._write_lease_record(f, i, lease)
1243+                    f.close()
1244+                return
1245+        raise IndexError("unable to renew non-existent lease")
1246+
1247+    def add_or_renew_lease(self, lease_info):
1248+        try:
1249+            self.renew_lease(lease_info.renew_secret,
1250+                             lease_info.expiration_time)
1251+        except IndexError:
1252+            self.add_lease(lease_info)
1253+
1254+
1255+    def cancel_lease(self, cancel_secret):
1256+        """Remove a lease with the given cancel_secret. If the last lease is
1257+        cancelled, the file will be removed. Return the number of bytes that
1258+        were freed (by truncating the list of leases, and possibly by
1259+        deleting the file). Raise IndexError if there was no lease with the
1260+        given cancel_secret.
1261+        """
1262+
1263+        leases = list(self.get_leases())
1264+        num_leases_removed = 0
1265+        for i,lease in enumerate(leases):
1266+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1267+                leases[i] = None
1268+                num_leases_removed += 1
1269+        if not num_leases_removed:
1270+            raise IndexError("unable to find matching lease to cancel")
1271+        if num_leases_removed:
1272+            # pack and write out the remaining leases. We write these out in
1273+            # the same order as they were added, so that if we crash while
1274+            # doing this, we won't lose any non-cancelled leases.
1275+            leases = [l for l in leases if l] # remove the cancelled leases
1276+            f = open(self.fname, 'rb+')
1277+            for i,lease in enumerate(leases):
1278+                self._write_lease_record(f, i, lease)
1279+            self._write_num_leases(f, len(leases))
1280+            self._truncate_leases(f, len(leases))
1281+            f.close()
1282+        space_freed = self.LEASE_SIZE * num_leases_removed
1283+        if not len(leases):
1284+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1285+            self.unlink()
1286+        return space_freed
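For readers skimming the hunk above: the container format this class reads and writes is a 12-byte header (version, saturated data length, and lease count, all big-endian 32-bit fields), share data starting at offset 0x0c, and fixed-size lease records appended after the data. The following standalone sketch (not code from the patch; the helper names are illustrative) shows just the header handling:

import struct

HEADER_FORMAT = ">LLL"                         # version, saturated data length, lease count
DATA_OFFSET = struct.calcsize(HEADER_FORMAT)   # == 0x0c

def write_container_header(f, max_size):
    # Version 1, data length saturated at 2**32-1, zero leases to start with.
    f.write(struct.pack(HEADER_FORMAT, 1, min(2**32 - 1, max_size), 0))

def read_container_header(f):
    version, saturated_length, num_leases = struct.unpack(HEADER_FORMAT,
                                                          f.read(DATA_OFFSET))
    return version, saturated_length, num_leases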
1287hunk ./src/allmydata/storage/backends/das/expirer.py 2
1288 import time, os, pickle, struct
1289-from allmydata.storage.crawler import ShareCrawler
1290-from allmydata.storage.shares import get_share_file
1291+from allmydata.storage.crawler import FSShareCrawler
1292 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1293      UnknownImmutableContainerVersionError
1294 from twisted.python import log as twlog
1295hunk ./src/allmydata/storage/backends/das/expirer.py 7
1296 
1297-class LeaseCheckingCrawler(ShareCrawler):
1298+class FSLeaseCheckingCrawler(FSShareCrawler):
1299     """I examine the leases on all shares, determining which are still valid
1300     and which have expired. I can remove the expired leases (if so
1301     configured), and the share will be deleted when the last lease is
1302hunk ./src/allmydata/storage/backends/das/expirer.py 50
1303     slow_start = 360 # wait 6 minutes after startup
1304     minimum_cycle_time = 12*60*60 # not more than twice per day
1305 
1306-    def __init__(self, statefile, historyfile,
1307-                 expiration_enabled, mode,
1308-                 override_lease_duration, # used if expiration_mode=="age"
1309-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1310-                 sharetypes):
1311+    def __init__(self, statefile, historyfile, expiration_policy):
1312         self.historyfile = historyfile
1313hunk ./src/allmydata/storage/backends/das/expirer.py 52
1314-        self.expiration_enabled = expiration_enabled
1315-        self.mode = mode
1316+        self.expiration_enabled = expiration_policy['enabled']
1317+        self.mode = expiration_policy['mode']
1318         self.override_lease_duration = None
1319         self.cutoff_date = None
1320         if self.mode == "age":
1321hunk ./src/allmydata/storage/backends/das/expirer.py 57
1322-            assert isinstance(override_lease_duration, (int, type(None)))
1323-            self.override_lease_duration = override_lease_duration # seconds
1324+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1325+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1326         elif self.mode == "cutoff-date":
1327hunk ./src/allmydata/storage/backends/das/expirer.py 60
1328-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1329+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1330             assert expiration_policy['cutoff_date'] is not None
1331hunk ./src/allmydata/storage/backends/das/expirer.py 62
1332-            self.cutoff_date = cutoff_date
1333+            self.cutoff_date = expiration_policy['cutoff_date']
1334         else:
1335hunk ./src/allmydata/storage/backends/das/expirer.py 64
1336-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1337-        self.sharetypes_to_expire = sharetypes
1338-        ShareCrawler.__init__(self, statefile)
1339+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1340+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1341+        FSShareCrawler.__init__(self, statefile)
1342 
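The positional arguments above are replaced by a single expiration_policy dict. Judging from this constructor and from the tests later in this patch, the dict carries at least the following keys; the values shown here are illustrative, not required defaults:

expiration_policy = {
    'enabled': False,                 # actually remove expired leases?
    'mode': 'age',                    # 'age' or 'cutoff-date'
    'override_lease_duration': None,  # seconds; consulted when mode == 'age'
    'cutoff_date': None,              # seconds-since-epoch; consulted when mode == 'cutoff-date'
    'sharetypes': ('mutable', 'immutable'),
}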
1343     def add_initial_state(self):
1344         # we fill ["cycle-to-date"] here (even though they will be reset in
1345hunk ./src/allmydata/storage/backends/das/expirer.py 156
1346 
1347     def process_share(self, sharefilename):
1348         # first, find out what kind of a share it is
1349-        sf = get_share_file(sharefilename)
1350+        f = open(sharefilename, "rb")
1351+        prefix = f.read(32)
1352+        f.close()
1353+        if prefix == MutableShareFile.MAGIC:
1354+            sf = MutableShareFile(sharefilename)
1355+        else:
1356+            # otherwise assume it's immutable
1357+            sf = FSBShare(sharefilename)
1358         sharetype = sf.sharetype
1359         now = time.time()
1360         s = self.stat(sharefilename)
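The prefix check inlined above replaces the get_share_file() helper that this patch removes from storage/shares.py further down. A hedged illustration of the idea, with the share classes passed in so the sketch stays self-contained:

def open_share(sharefilename, mutable_class, immutable_class):
    # Mutable containers are recognized by a 32-byte magic prefix; anything
    # else is assumed to be an immutable share container.
    f = open(sharefilename, "rb")
    prefix = f.read(32)
    f.close()
    if prefix == mutable_class.MAGIC:
        return mutable_class(sharefilename)
    return immutable_class(sharefilename)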
1361addfile ./src/allmydata/storage/backends/null/__init__.py
1362addfile ./src/allmydata/storage/backends/null/core.py
1363hunk ./src/allmydata/storage/backends/null/core.py 1
1364+from allmydata.storage.backends.base import Backend
1365+
1366+class NullCore(Backend):
1367+    def __init__(self):
1368+        Backend.__init__(self)
1369+
1370+    def get_available_space(self):
1371+        return None
1372+
1373+    def get_shares(self, storage_index):
1374+        return set()
1375+
1376+    def get_share(self, storage_index, sharenum):
1377+        return None
1378+
1379+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1380+        return NullBucketWriter()
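The null backend never touches the filesystem: get_available_space() returns None, get_shares() returns an empty set, and get_share() returns None. The tests later in this patch exercise it roughly as follows (a sketch; the nodeid is the 20-byte placeholder string those tests use):

from allmydata.storage.server import StorageServer
from allmydata.storage.backends.null.core import NullCore

# No directories are created and no files are opened; the backend answers
# every query from memory.
s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())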
1381hunk ./src/allmydata/storage/crawler.py 12
1382 class TimeSliceExceeded(Exception):
1383     pass
1384 
1385-class ShareCrawler(service.MultiService):
1386+class FSShareCrawler(service.MultiService):
1387     """A subcless of ShareCrawler is attached to a StorageServer, and
1388     periodically walks all of its shares, processing each one in some
1389     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1390hunk ./src/allmydata/storage/crawler.py 68
1391     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1392     minimum_cycle_time = 300 # don't run a cycle faster than this
1393 
1394-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1395+    def __init__(self, statefname, allowed_cpu_percentage=None):
1396         service.MultiService.__init__(self)
1397         if allowed_cpu_percentage is not None:
1398             self.allowed_cpu_percentage = allowed_cpu_percentage
1399hunk ./src/allmydata/storage/crawler.py 72
1400-        self.backend = backend
1401+        self.statefname = statefname
1402         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1403                          for i in range(2**10)]
1404         self.prefixes.sort()
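For context, the prefix list built above enumerates every possible two-character bucket-directory prefix: each of the 2**10 values is shifted into the top ten bits of a 16-bit integer and base-32 encoded, and two base-32 characters encode exactly ten bits. A standalone sketch (assuming si_b2a, the storage-index base-32 encoder, is importable from allmydata.storage.common like the other storage helpers in this patch):

import struct
from allmydata.storage.common import si_b2a

# 1024 distinct two-character prefixes, one per top-level bucket directory.
prefixes = sorted(si_b2a(struct.pack(">H", i << (16 - 10)))[:2] for i in range(2 ** 10))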
1405hunk ./src/allmydata/storage/crawler.py 192
1406         #                            of the last bucket to be processed, or
1407         #                            None if we are sleeping between cycles
1408         try:
1409-            f = open(self.statefile, "rb")
1410+            f = open(self.statefname, "rb")
1411             state = pickle.load(f)
1412             f.close()
1413         except EnvironmentError:
1414hunk ./src/allmydata/storage/crawler.py 230
1415         else:
1416             last_complete_prefix = self.prefixes[lcpi]
1417         self.state["last-complete-prefix"] = last_complete_prefix
1418-        tmpfile = self.statefile + ".tmp"
1419+        tmpfile = self.statefname + ".tmp"
1420         f = open(tmpfile, "wb")
1421         pickle.dump(self.state, f)
1422         f.close()
1423hunk ./src/allmydata/storage/crawler.py 433
1424         pass
1425 
1426 
1427-class BucketCountingCrawler(ShareCrawler):
1428+class FSBucketCountingCrawler(FSShareCrawler):
1429     """I keep track of how many buckets are being managed by this server.
1430     This is equivalent to the number of distributed files and directories for
1431     which I am providing storage. The actual number of files+directories in
1432hunk ./src/allmydata/storage/crawler.py 446
1433 
1434     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1435 
1436-    def __init__(self, statefile, num_sample_prefixes=1):
1437-        ShareCrawler.__init__(self, statefile)
1438+    def __init__(self, statefname, num_sample_prefixes=1):
1439+        FSShareCrawler.__init__(self, statefname)
1440         self.num_sample_prefixes = num_sample_prefixes
1441 
1442     def add_initial_state(self):
1443hunk ./src/allmydata/storage/immutable.py 14
1444 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1445      DataTooLargeError
1446 
1447-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1448-# and share data. The share data is accessed by RIBucketWriter.write and
1449-# RIBucketReader.read . The lease information is not accessible through these
1450-# interfaces.
1451-
1452-# The share file has the following layout:
1453-#  0x00: share file version number, four bytes, current version is 1
1454-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1455-#  0x08: number of leases, four bytes big-endian
1456-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1457-#  A+0x0c = B: first lease. Lease format is:
1458-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1459-#   B+0x04: renew secret, 32 bytes (SHA256)
1460-#   B+0x24: cancel secret, 32 bytes (SHA256)
1461-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1462-#   B+0x48: next lease, or end of record
1463-
1464-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1465-# but it is still filled in by storage servers in case the storage server
1466-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1467-# share file is moved from one storage server to another. The value stored in
1468-# this field is truncated, so if the actual share data length is >= 2**32,
1469-# then the value stored in this field will be the actual share data length
1470-# modulo 2**32.
1471-
1472-class ShareFile:
1473-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1474-    sharetype = "immutable"
1475-
1476-    def __init__(self, filename, max_size=None, create=False):
1477-        """ If max_size is not None then I won't allow more than
1478-        max_size to be written to me. If create=True then max_size
1479-        must not be None. """
1480-        precondition((max_size is not None) or (not create), max_size, create)
1481-        self.home = filename
1482-        self._max_size = max_size
1483-        if create:
1484-            # touch the file, so later callers will see that we're working on
1485-            # it. Also construct the metadata.
1486-            assert not os.path.exists(self.home)
1487-            fileutil.make_dirs(os.path.dirname(self.home))
1488-            f = open(self.home, 'wb')
1489-            # The second field -- the four-byte share data length -- is no
1490-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1491-            # there in case someone downgrades a storage server from >=
1492-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1493-            # server to another, etc. We do saturation -- a share data length
1494-            # larger than 2**32-1 (what can fit into the field) is marked as
1495-            # the largest length that can fit into the field. That way, even
1496-            # if this does happen, the old < v1.3.0 server will still allow
1497-            # clients to read the first part of the share.
1498-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1499-            f.close()
1500-            self._lease_offset = max_size + 0x0c
1501-            self._num_leases = 0
1502-        else:
1503-            f = open(self.home, 'rb')
1504-            filesize = os.path.getsize(self.home)
1505-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1506-            f.close()
1507-            if version != 1:
1508-                msg = "sharefile %s had version %d but we wanted 1" % \
1509-                      (filename, version)
1510-                raise UnknownImmutableContainerVersionError(msg)
1511-            self._num_leases = num_leases
1512-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1513-        self._data_offset = 0xc
1514-
1515-    def unlink(self):
1516-        os.unlink(self.home)
1517-
1518-    def read_share_data(self, offset, length):
1519-        precondition(offset >= 0)
1520-        # Reads beyond the end of the data are truncated. Reads that start
1521-        # beyond the end of the data return an empty string.
1522-        seekpos = self._data_offset+offset
1523-        fsize = os.path.getsize(self.home)
1524-        actuallength = max(0, min(length, fsize-seekpos))
1525-        if actuallength == 0:
1526-            return ""
1527-        f = open(self.home, 'rb')
1528-        f.seek(seekpos)
1529-        return f.read(actuallength)
1530-
1531-    def write_share_data(self, offset, data):
1532-        length = len(data)
1533-        precondition(offset >= 0, offset)
1534-        if self._max_size is not None and offset+length > self._max_size:
1535-            raise DataTooLargeError(self._max_size, offset, length)
1536-        f = open(self.home, 'rb+')
1537-        real_offset = self._data_offset+offset
1538-        f.seek(real_offset)
1539-        assert f.tell() == real_offset
1540-        f.write(data)
1541-        f.close()
1542-
1543-    def _write_lease_record(self, f, lease_number, lease_info):
1544-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1545-        f.seek(offset)
1546-        assert f.tell() == offset
1547-        f.write(lease_info.to_immutable_data())
1548-
1549-    def _read_num_leases(self, f):
1550-        f.seek(0x08)
1551-        (num_leases,) = struct.unpack(">L", f.read(4))
1552-        return num_leases
1553-
1554-    def _write_num_leases(self, f, num_leases):
1555-        f.seek(0x08)
1556-        f.write(struct.pack(">L", num_leases))
1557-
1558-    def _truncate_leases(self, f, num_leases):
1559-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1560-
1561-    def get_leases(self):
1562-        """Yields a LeaseInfo instance for all leases."""
1563-        f = open(self.home, 'rb')
1564-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1565-        f.seek(self._lease_offset)
1566-        for i in range(num_leases):
1567-            data = f.read(self.LEASE_SIZE)
1568-            if data:
1569-                yield LeaseInfo().from_immutable_data(data)
1570-
1571-    def add_lease(self, lease_info):
1572-        f = open(self.home, 'rb+')
1573-        num_leases = self._read_num_leases(f)
1574-        self._write_lease_record(f, num_leases, lease_info)
1575-        self._write_num_leases(f, num_leases+1)
1576-        f.close()
1577-
1578-    def renew_lease(self, renew_secret, new_expire_time):
1579-        for i,lease in enumerate(self.get_leases()):
1580-            if constant_time_compare(lease.renew_secret, renew_secret):
1581-                # yup. See if we need to update the owner time.
1582-                if new_expire_time > lease.expiration_time:
1583-                    # yes
1584-                    lease.expiration_time = new_expire_time
1585-                    f = open(self.home, 'rb+')
1586-                    self._write_lease_record(f, i, lease)
1587-                    f.close()
1588-                return
1589-        raise IndexError("unable to renew non-existent lease")
1590-
1591-    def add_or_renew_lease(self, lease_info):
1592-        try:
1593-            self.renew_lease(lease_info.renew_secret,
1594-                             lease_info.expiration_time)
1595-        except IndexError:
1596-            self.add_lease(lease_info)
1597-
1598-
1599-    def cancel_lease(self, cancel_secret):
1600-        """Remove a lease with the given cancel_secret. If the last lease is
1601-        cancelled, the file will be removed. Return the number of bytes that
1602-        were freed (by truncating the list of leases, and possibly by
1603-        deleting the file. Raise IndexError if there was no lease with the
1604-        given cancel_secret.
1605-        """
1606-
1607-        leases = list(self.get_leases())
1608-        num_leases_removed = 0
1609-        for i,lease in enumerate(leases):
1610-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1611-                leases[i] = None
1612-                num_leases_removed += 1
1613-        if not num_leases_removed:
1614-            raise IndexError("unable to find matching lease to cancel")
1615-        if num_leases_removed:
1616-            # pack and write out the remaining leases. We write these out in
1617-            # the same order as they were added, so that if we crash while
1618-            # doing this, we won't lose any non-cancelled leases.
1619-            leases = [l for l in leases if l] # remove the cancelled leases
1620-            f = open(self.home, 'rb+')
1621-            for i,lease in enumerate(leases):
1622-                self._write_lease_record(f, i, lease)
1623-            self._write_num_leases(f, len(leases))
1624-            self._truncate_leases(f, len(leases))
1625-            f.close()
1626-        space_freed = self.LEASE_SIZE * num_leases_removed
1627-        if not len(leases):
1628-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1629-            self.unlink()
1630-        return space_freed
1631-class NullBucketWriter(Referenceable):
1632-    implements(RIBucketWriter)
1633-
1634-    def remote_write(self, offset, data):
1635-        return
1636-
1637 class BucketWriter(Referenceable):
1638     implements(RIBucketWriter)
1639 
1640hunk ./src/allmydata/storage/immutable.py 17
1641-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1642+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1643         self.ss = ss
1644hunk ./src/allmydata/storage/immutable.py 19
1645-        self.incominghome = incominghome
1646-        self.finalhome = finalhome
1647         self._max_size = max_size # don't allow the client to write more than this
1648         self._canary = canary
1649         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1650hunk ./src/allmydata/storage/immutable.py 24
1651         self.closed = False
1652         self.throw_out_all_data = False
1653-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1654+        self._sharefile = immutableshare
1655         # also, add our lease to the file now, so that other ones can be
1656         # added by simultaneous uploaders
1657         self._sharefile.add_lease(lease_info)
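With this change BucketWriter no longer creates the share container itself or knows about incoming/final paths; the backend constructs the container and passes it in. A hedged sketch of that division of labour, mirroring DASCore.make_bucket_writer later in this patch (the free-function form is only for illustration):

from allmydata.storage.backends.das.core import ImmutableShare
from allmydata.storage.immutable import BucketWriter

def make_bucket_writer(backend, ss, storage_index, shnum,
                       max_space_per_bucket, lease_info, canary):
    # The backend creates the on-disk container...
    immsh = ImmutableShare(backend.sharedir, storage_index, shnum,
                           max_size=max_space_per_bucket, create=True)
    # ...and the writer only mediates client writes into that object.
    return BucketWriter(ss, immsh, max_space_per_bucket, lease_info, canary)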
1658hunk ./src/allmydata/storage/server.py 16
1659 from allmydata.storage.lease import LeaseInfo
1660 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1661      create_mutable_sharefile
1662-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1663-from allmydata.storage.crawler import BucketCountingCrawler
1664-from allmydata.storage.expirer import LeaseCheckingCrawler
1665 
1666 from zope.interface import implements
1667 
1668hunk ./src/allmydata/storage/server.py 19
1669-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1670-# be started and stopped.
1671-class Backend(service.MultiService):
1672-    implements(IStatsProducer)
1673-    def __init__(self):
1674-        service.MultiService.__init__(self)
1675-
1676-    def get_bucket_shares(self):
1677-        """XXX"""
1678-        raise NotImplementedError
1679-
1680-    def get_share(self):
1681-        """XXX"""
1682-        raise NotImplementedError
1683-
1684-    def make_bucket_writer(self):
1685-        """XXX"""
1686-        raise NotImplementedError
1687-
1688-class NullBackend(Backend):
1689-    def __init__(self):
1690-        Backend.__init__(self)
1691-
1692-    def get_available_space(self):
1693-        return None
1694-
1695-    def get_bucket_shares(self, storage_index):
1696-        return set()
1697-
1698-    def get_share(self, storage_index, sharenum):
1699-        return None
1700-
1701-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1702-        return NullBucketWriter()
1703-
1704-class FSBackend(Backend):
1705-    def __init__(self, storedir, readonly=False, reserved_space=0):
1706-        Backend.__init__(self)
1707-
1708-        self._setup_storage(storedir, readonly, reserved_space)
1709-        self._setup_corruption_advisory()
1710-        self._setup_bucket_counter()
1711-        self._setup_lease_checkerf()
1712-
1713-    def _setup_storage(self, storedir, readonly, reserved_space):
1714-        self.storedir = storedir
1715-        self.readonly = readonly
1716-        self.reserved_space = int(reserved_space)
1717-        if self.reserved_space:
1718-            if self.get_available_space() is None:
1719-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1720-                        umid="0wZ27w", level=log.UNUSUAL)
1721-
1722-        self.sharedir = os.path.join(self.storedir, "shares")
1723-        fileutil.make_dirs(self.sharedir)
1724-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1725-        self._clean_incomplete()
1726-
1727-    def _clean_incomplete(self):
1728-        fileutil.rm_dir(self.incomingdir)
1729-        fileutil.make_dirs(self.incomingdir)
1730-
1731-    def _setup_corruption_advisory(self):
1732-        # we don't actually create the corruption-advisory dir until necessary
1733-        self.corruption_advisory_dir = os.path.join(self.storedir,
1734-                                                    "corruption-advisories")
1735-
1736-    def _setup_bucket_counter(self):
1737-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1738-        self.bucket_counter = BucketCountingCrawler(statefile)
1739-        self.bucket_counter.setServiceParent(self)
1740-
1741-    def _setup_lease_checkerf(self):
1742-        statefile = os.path.join(self.storedir, "lease_checker.state")
1743-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1744-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1745-                                   expiration_enabled, expiration_mode,
1746-                                   expiration_override_lease_duration,
1747-                                   expiration_cutoff_date,
1748-                                   expiration_sharetypes)
1749-        self.lease_checker.setServiceParent(self)
1750-
1751-    def get_available_space(self):
1752-        if self.readonly:
1753-            return 0
1754-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1755-
1756-    def get_bucket_shares(self, storage_index):
1757-        """Return a list of (shnum, pathname) tuples for files that hold
1758-        shares for this storage_index. In each tuple, 'shnum' will always be
1759-        the integer form of the last component of 'pathname'."""
1760-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1761-        try:
1762-            for f in os.listdir(storagedir):
1763-                if NUM_RE.match(f):
1764-                    filename = os.path.join(storagedir, f)
1765-                    yield (int(f), filename)
1766-        except OSError:
1767-            # Commonly caused by there being no buckets at all.
1768-            pass
1769-
1770 # storage/
1771 # storage/shares/incoming
1772 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1773hunk ./src/allmydata/storage/server.py 32
1774 # $SHARENUM matches this regex:
1775 NUM_RE=re.compile("^[0-9]+$")
1776 
1777-
1778-
1779 class StorageServer(service.MultiService, Referenceable):
1780     implements(RIStorageServer, IStatsProducer)
1781     name = 'storage'
1782hunk ./src/allmydata/storage/server.py 35
1783-    LeaseCheckerClass = LeaseCheckingCrawler
1784 
1785     def __init__(self, nodeid, backend, reserved_space=0,
1786                  readonly_storage=False,
1787hunk ./src/allmydata/storage/server.py 38
1788-                 stats_provider=None,
1789-                 expiration_enabled=False,
1790-                 expiration_mode="age",
1791-                 expiration_override_lease_duration=None,
1792-                 expiration_cutoff_date=None,
1793-                 expiration_sharetypes=("mutable", "immutable")):
1794+                 stats_provider=None ):
1795         service.MultiService.__init__(self)
1796         assert isinstance(nodeid, str)
1797         assert len(nodeid) == 20
1798hunk ./src/allmydata/storage/server.py 217
1799         # they asked about: this will save them a lot of work. Add or update
1800         # leases for all of them: if they want us to hold shares for this
1801         # file, they'll want us to hold leases for this file.
1802-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1803-            alreadygot.add(shnum)
1804-            sf = ShareFile(fn)
1805-            sf.add_or_renew_lease(lease_info)
1806-
1807-        for shnum in sharenums:
1808-            share = self.backend.get_share(storage_index, shnum)
1809+        for share in self.backend.get_shares(storage_index):
1810+            alreadygot.add(share.shnum)
1811+            share.add_or_renew_lease(lease_info)
1812 
1813hunk ./src/allmydata/storage/server.py 221
1814-            if not share:
1815-                if (not limited) or (remaining_space >= max_space_per_bucket):
1816-                    # ok! we need to create the new share file.
1817-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1818-                                      max_space_per_bucket, lease_info, canary)
1819-                    bucketwriters[shnum] = bw
1820-                    self._active_writers[bw] = 1
1821-                    if limited:
1822-                        remaining_space -= max_space_per_bucket
1823-                else:
1824-                    # bummer! not enough space to accept this bucket
1825-                    pass
1826+        for shnum in (sharenums - alreadygot):
1827+            if (not limited) or (remaining_space >= max_space_per_bucket):
1828+                #XXX or should the following line occur in the storage server constructor? ok! we need to create the new share file.
1829+                self.backend.set_storage_server(self)
1830+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1831+                                                     max_space_per_bucket, lease_info, canary)
1832+                bucketwriters[shnum] = bw
1833+                self._active_writers[bw] = 1
1834+                if limited:
1835+                    remaining_space -= max_space_per_bucket
1836 
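Restated without the Tahoe-specific plumbing, the allocation loop above first records the shares that already exist, then creates writers only for the requested share numbers that are missing and that still fit in the remaining space. A hedged sketch (a hypothetical helper, not patch code):

def plan_allocation(requested_sharenums, existing_sharenums,
                    remaining_space, max_space_per_bucket, limited=True):
    alreadygot = set(existing_sharenums)
    to_create = []
    for shnum in set(requested_sharenums) - alreadygot:
        if (not limited) or (remaining_space >= max_space_per_bucket):
            to_create.append(shnum)
            if limited:
                remaining_space -= max_space_per_bucket
    return alreadygot, sorted(to_create)

# Example: four shares requested, share 1 already held, room for two more:
# plan_allocation([0, 1, 2, 3], [1], 2000, 1000) -> (set([1]), two new shnums)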
1837hunk ./src/allmydata/storage/server.py 232
1838-            elif share.is_complete():
1839-                # great! we already have it. easy.
1840-                pass
1841-            elif not share.is_complete():
1842-                # Note that we don't create BucketWriters for shnums that
1843-                # have a partial share (in incoming/), so if a second upload
1844-                # occurs while the first is still in progress, the second
1845-                # uploader will use different storage servers.
1846-                pass
1847+        #XXX We SHOULD DOCUMENT the handling of already-complete and partially-uploaded shares LATER.
1848 
1849         self.add_latency("allocate", time.time() - start)
1850         return alreadygot, bucketwriters
1851hunk ./src/allmydata/storage/server.py 238
1852 
1853     def _iter_share_files(self, storage_index):
1854-        for shnum, filename in self._get_bucket_shares(storage_index):
1855+        for shnum, filename in self._get_shares(storage_index):
1856             f = open(filename, 'rb')
1857             header = f.read(32)
1858             f.close()
1859hunk ./src/allmydata/storage/server.py 318
1860         si_s = si_b2a(storage_index)
1861         log.msg("storage: get_buckets %s" % si_s)
1862         bucketreaders = {} # k: sharenum, v: BucketReader
1863-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1864+        for shnum, filename in self.backend.get_shares(storage_index):
1865             bucketreaders[shnum] = BucketReader(self, filename,
1866                                                 storage_index, shnum)
1867         self.add_latency("get", time.time() - start)
1868hunk ./src/allmydata/storage/server.py 334
1869         # since all shares get the same lease data, we just grab the leases
1870         # from the first share
1871         try:
1872-            shnum, filename = self._get_bucket_shares(storage_index).next()
1873+            shnum, filename = self._get_shares(storage_index).next()
1874             sf = ShareFile(filename)
1875             return sf.get_leases()
1876         except StopIteration:
1877hunk ./src/allmydata/storage/shares.py 1
1878-#! /usr/bin/python
1879-
1880-from allmydata.storage.mutable import MutableShareFile
1881-from allmydata.storage.immutable import ShareFile
1882-
1883-def get_share_file(filename):
1884-    f = open(filename, "rb")
1885-    prefix = f.read(32)
1886-    f.close()
1887-    if prefix == MutableShareFile.MAGIC:
1888-        return MutableShareFile(filename)
1889-    # otherwise assume it's immutable
1890-    return ShareFile(filename)
1891-
1892rmfile ./src/allmydata/storage/shares.py
1893hunk ./src/allmydata/test/common_util.py 20
1894 
1895 def flip_one_bit(s, offset=0, size=None):
1896     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1897-    than offset+size. """
1898+    than offset+size. Return the new string. """
1899     if size is None:
1900         size=len(s)-offset
1901     i = randrange(offset, offset+size)
1902hunk ./src/allmydata/test/test_backends.py 7
1903 
1904 from allmydata.test.common_util import ReallyEqualMixin
1905 
1906-import mock
1907+import mock, os
1908 
1909 # This is the code that we're going to be testing.
1910hunk ./src/allmydata/test/test_backends.py 10
1911-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1912+from allmydata.storage.server import StorageServer
1913+
1914+from allmydata.storage.backends.das.core import DASCore
1915+from allmydata.storage.backends.null.core import NullCore
1916+
1917 
1918 # The following share file contents was generated with
1919 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1920hunk ./src/allmydata/test/test_backends.py 22
1921 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1922 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1923 
1924-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1925+tempdir = 'teststoredir'
1926+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1927+sharefname = os.path.join(sharedirname, '0')
1928 
1929 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1930     @mock.patch('time.time')
1931hunk ./src/allmydata/test/test_backends.py 58
1932         filesystem in only the prescribed ways. """
1933 
1934         def call_open(fname, mode):
1935-            if fname == 'testdir/bucket_counter.state':
1936-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1937-            elif fname == 'testdir/lease_checker.state':
1938-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1939-            elif fname == 'testdir/lease_checker.history':
1940+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1941+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1942+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1943+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1944+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1945                 return StringIO()
1946             else:
1947                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
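The pattern above (and in the setUp below) is to patch the builtin open() so that only a known set of state files may be touched, and to fail the test on anything else. A compact, hypothetical illustration of the same idea, using the Python 2 era mock and StringIO modules that the patch itself uses:

import os
import mock
from StringIO import StringIO

ALLOWED = {os.path.join('teststoredir', 'lease_checker.history'): StringIO}

def strict_open(fname, mode):
    # Only whitelisted files may be opened; anything else fails loudly.
    if fname in ALLOWED:
        return ALLOWED[fname]()
    raise AssertionError("unexpected open(%r, %r)" % (fname, mode))

with mock.patch('__builtin__.open', side_effect=strict_open):
    pass  # exercise the code under test here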
1948hunk ./src/allmydata/test/test_backends.py 124
1949     @mock.patch('__builtin__.open')
1950     def setUp(self, mockopen):
1951         def call_open(fname, mode):
1952-            if fname == 'testdir/bucket_counter.state':
1953-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1954-            elif fname == 'testdir/lease_checker.state':
1955-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1956-            elif fname == 'testdir/lease_checker.history':
1957+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1958+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1959+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1960+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1961+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1962                 return StringIO()
1963         mockopen.side_effect = call_open
1964hunk ./src/allmydata/test/test_backends.py 131
1965-
1966-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1967+        expiration_policy = {'enabled' : False,
1968+                             'mode' : 'age',
1969+                             'override_lease_duration' : None,
1970+                             'cutoff_date' : None,
1971+                             'sharetypes' : None}
1972+        testbackend = DASCore(tempdir, expiration_policy)
1973+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy))
1974 
1975     @mock.patch('time.time')
1976     @mock.patch('os.mkdir')
1977hunk ./src/allmydata/test/test_backends.py 148
1978         """ Write a new share. """
1979 
1980         def call_listdir(dirname):
1981-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1982-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1983+            self.failUnlessReallyEqual(dirname, sharedirname)
1984+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1985 
1986         mocklistdir.side_effect = call_listdir
1987 
1988hunk ./src/allmydata/test/test_backends.py 178
1989 
1990         sharefile = MockFile()
1991         def call_open(fname, mode):
1992-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1993+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1994             return sharefile
1995 
1996         mockopen.side_effect = call_open
1997hunk ./src/allmydata/test/test_backends.py 200
1998         StorageServer object. """
1999 
2000         def call_listdir(dirname):
2001-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
2002+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2003             return ['0']
2004 
2005         mocklistdir.side_effect = call_listdir
2006}
2007[checkpoint patch
2008wilcoxjg@gmail.com**20110626165715
2009 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
2010] {
2011hunk ./src/allmydata/storage/backends/das/core.py 21
2012 from allmydata.storage.lease import LeaseInfo
2013 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2014      create_mutable_sharefile
2015-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2016+from allmydata.storage.immutable import BucketWriter, BucketReader
2017 from allmydata.storage.crawler import FSBucketCountingCrawler
2018 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2019 
2020hunk ./src/allmydata/storage/backends/das/core.py 27
2021 from zope.interface import implements
2022 
2023+# $SHARENUM matches this regex:
2024+NUM_RE=re.compile("^[0-9]+$")
2025+
2026 class DASCore(Backend):
2027     implements(IStorageBackend)
2028     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2029hunk ./src/allmydata/storage/backends/das/core.py 80
2030         return fileutil.get_available_space(self.storedir, self.reserved_space)
2031 
2032     def get_shares(self, storage_index):
2033-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2034+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
2035         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2036         try:
2037             for f in os.listdir(finalstoragedir):
2038hunk ./src/allmydata/storage/backends/das/core.py 86
2039                 if NUM_RE.match(f):
2040                     filename = os.path.join(finalstoragedir, f)
2041-                    yield FSBShare(filename, int(f))
2042+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2043         except OSError:
2044             # Commonly caused by there being no buckets at all.
2045             pass
2046hunk ./src/allmydata/storage/backends/das/core.py 95
2047         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2048         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2049         return bw
2050+
2051+    def set_storage_server(self, ss):
2052+        self.ss = ss
2053         
2054 
2055 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2056hunk ./src/allmydata/storage/server.py 29
2057 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2058 # base-32 chars).
2059 
2060-# $SHARENUM matches this regex:
2061-NUM_RE=re.compile("^[0-9]+$")
2062 
2063 class StorageServer(service.MultiService, Referenceable):
2064     implements(RIStorageServer, IStatsProducer)
2065}
2066[checkpoint4
2067wilcoxjg@gmail.com**20110628202202
2068 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2069] {
2070hunk ./src/allmydata/storage/backends/das/core.py 96
2071         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2072         return bw
2073 
2074+    def make_bucket_reader(self, share):
2075+        return BucketReader(self.ss, share)
2076+
2077     def set_storage_server(self, ss):
2078         self.ss = ss
2079         
2080hunk ./src/allmydata/storage/backends/das/core.py 138
2081         must not be None. """
2082         precondition((max_size is not None) or (not create), max_size, create)
2083         self.shnum = shnum
2084+        self.storage_index = storageindex
2085         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2086         self._max_size = max_size
2087         if create:
2088hunk ./src/allmydata/storage/backends/das/core.py 173
2089             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2090         self._data_offset = 0xc
2091 
2092+    def get_shnum(self):
2093+        return self.shnum
2094+
2095     def unlink(self):
2096         os.unlink(self.fname)
2097 
2098hunk ./src/allmydata/storage/backends/null/core.py 2
2099 from allmydata.storage.backends.base import Backend
2100+from allmydata.storage.immutable import BucketWriter, BucketReader
2101 
2102 class NullCore(Backend):
2103     def __init__(self):
2104hunk ./src/allmydata/storage/backends/null/core.py 17
2105     def get_share(self, storage_index, sharenum):
2106         return None
2107 
2108-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2109-        return NullBucketWriter()
2110+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2111+       
2112+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2113+
2114+    def set_storage_server(self, ss):
2115+        self.ss = ss
2116+
2117+class ImmutableShare:
2118+    sharetype = "immutable"
2119+
2120+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2121+        """ If max_size is not None then I won't allow more than
2122+        max_size to be written to me. If create=True then max_size
2123+        must not be None. """
2124+        precondition((max_size is not None) or (not create), max_size, create)
2125+        self.shnum = shnum
2126+        self.storage_index = storageindex
2127+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2128+        self._max_size = max_size
2129+        if create:
2130+            # touch the file, so later callers will see that we're working on
2131+            # it. Also construct the metadata.
2132+            assert not os.path.exists(self.fname)
2133+            fileutil.make_dirs(os.path.dirname(self.fname))
2134+            f = open(self.fname, 'wb')
2135+            # The second field -- the four-byte share data length -- is no
2136+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2137+            # there in case someone downgrades a storage server from >=
2138+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2139+            # server to another, etc. We do saturation -- a share data length
2140+            # larger than 2**32-1 (what can fit into the field) is marked as
2141+            # the largest length that can fit into the field. That way, even
2142+            # if this does happen, the old < v1.3.0 server will still allow
2143+            # clients to read the first part of the share.
2144+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2145+            f.close()
2146+            self._lease_offset = max_size + 0x0c
2147+            self._num_leases = 0
2148+        else:
2149+            f = open(self.fname, 'rb')
2150+            filesize = os.path.getsize(self.fname)
2151+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2152+            f.close()
2153+            if version != 1:
2154+                msg = "sharefile %s had version %d but we wanted 1" % \
2155+                      (self.fname, version)
2156+                raise UnknownImmutableContainerVersionError(msg)
2157+            self._num_leases = num_leases
2158+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2159+        self._data_offset = 0xc
2160+
2161+    def get_shnum(self):
2162+        return self.shnum
2163+
2164+    def unlink(self):
2165+        os.unlink(self.fname)
2166+
2167+    def read_share_data(self, offset, length):
2168+        precondition(offset >= 0)
2169+        # Reads beyond the end of the data are truncated. Reads that start
2170+        # beyond the end of the data return an empty string.
2171+        seekpos = self._data_offset+offset
2172+        fsize = os.path.getsize(self.fname)
2173+        actuallength = max(0, min(length, fsize-seekpos))
2174+        if actuallength == 0:
2175+            return ""
2176+        f = open(self.fname, 'rb')
2177+        f.seek(seekpos)
2178+        return f.read(actuallength)
2179+
2180+    def write_share_data(self, offset, data):
2181+        length = len(data)
2182+        precondition(offset >= 0, offset)
2183+        if self._max_size is not None and offset+length > self._max_size:
2184+            raise DataTooLargeError(self._max_size, offset, length)
2185+        f = open(self.fname, 'rb+')
2186+        real_offset = self._data_offset+offset
2187+        f.seek(real_offset)
2188+        assert f.tell() == real_offset
2189+        f.write(data)
2190+        f.close()
2191+
2192+    def _write_lease_record(self, f, lease_number, lease_info):
2193+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2194+        f.seek(offset)
2195+        assert f.tell() == offset
2196+        f.write(lease_info.to_immutable_data())
2197+
2198+    def _read_num_leases(self, f):
2199+        f.seek(0x08)
2200+        (num_leases,) = struct.unpack(">L", f.read(4))
2201+        return num_leases
2202+
2203+    def _write_num_leases(self, f, num_leases):
2204+        f.seek(0x08)
2205+        f.write(struct.pack(">L", num_leases))
2206+
2207+    def _truncate_leases(self, f, num_leases):
2208+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2209+
2210+    def get_leases(self):
2211+        """Yields a LeaseInfo instance for all leases."""
2212+        f = open(self.fname, 'rb')
2213+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2214+        f.seek(self._lease_offset)
2215+        for i in range(num_leases):
2216+            data = f.read(self.LEASE_SIZE)
2217+            if data:
2218+                yield LeaseInfo().from_immutable_data(data)
2219+
2220+    def add_lease(self, lease_info):
2221+        f = open(self.fname, 'rb+')
2222+        num_leases = self._read_num_leases(f)
2223+        self._write_lease_record(f, num_leases, lease_info)
2224+        self._write_num_leases(f, num_leases+1)
2225+        f.close()
2226+
2227+    def renew_lease(self, renew_secret, new_expire_time):
2228+        for i,lease in enumerate(self.get_leases()):
2229+            if constant_time_compare(lease.renew_secret, renew_secret):
2230+                # yup. See if we need to update the owner time.
2231+                if new_expire_time > lease.expiration_time:
2232+                    # yes
2233+                    lease.expiration_time = new_expire_time
2234+                    f = open(self.fname, 'rb+')
2235+                    self._write_lease_record(f, i, lease)
2236+                    f.close()
2237+                return
2238+        raise IndexError("unable to renew non-existent lease")
2239+
2240+    def add_or_renew_lease(self, lease_info):
2241+        try:
2242+            self.renew_lease(lease_info.renew_secret,
2243+                             lease_info.expiration_time)
2244+        except IndexError:
2245+            self.add_lease(lease_info)
2246+
2247+
2248+    def cancel_lease(self, cancel_secret):
2249+        """Remove a lease with the given cancel_secret. If the last lease is
2250+        cancelled, the file will be removed. Return the number of bytes that
2251+        were freed (by truncating the list of leases, and possibly by
2252+        deleting the file). Raise IndexError if there was no lease with the
2253+        given cancel_secret.
2254+        """
2255+
2256+        leases = list(self.get_leases())
2257+        num_leases_removed = 0
2258+        for i,lease in enumerate(leases):
2259+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2260+                leases[i] = None
2261+                num_leases_removed += 1
2262+        if not num_leases_removed:
2263+            raise IndexError("unable to find matching lease to cancel")
2264+        if num_leases_removed:
2265+            # pack and write out the remaining leases. We write these out in
2266+            # the same order as they were added, so that if we crash while
2267+            # doing this, we won't lose any non-cancelled leases.
2268+            leases = [l for l in leases if l] # remove the cancelled leases
2269+            f = open(self.fname, 'rb+')
2270+            for i,lease in enumerate(leases):
2271+                self._write_lease_record(f, i, lease)
2272+            self._write_num_leases(f, len(leases))
2273+            self._truncate_leases(f, len(leases))
2274+            f.close()
2275+        space_freed = self.LEASE_SIZE * num_leases_removed
2276+        if not len(leases):
2277+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2278+            self.unlink()
2279+        return space_freed
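A note on the lease bookkeeping that this class (like the das version earlier in this patch) carries around: each lease is a fixed 72-byte record appended after the share data, and renew/cancel secrets are matched with a constant-time comparison. A hedged, self-contained sketch of those two details (the comparison mirrors the semantics of the constant_time_compare helper used above, not necessarily its exact implementation):

import struct

LEASE_FORMAT = ">L32s32sL"                   # owner number, renew secret, cancel secret, expiration time
LEASE_SIZE = struct.calcsize(LEASE_FORMAT)   # 72 bytes per lease record

def lease_record_offset(max_size, lease_number, data_offset=0x0c):
    # Immutable containers put lease records immediately after the share data.
    return data_offset + max_size + lease_number * LEASE_SIZE

def constant_time_compare(a, b):
    # Compare secrets without short-circuiting on the first mismatching byte.
    if len(a) != len(b):
        return False
    result = 0
    for x, y in zip(a, b):
        result |= ord(x) ^ ord(y)
    return result == 0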
2280hunk ./src/allmydata/storage/immutable.py 114
2281 class BucketReader(Referenceable):
2282     implements(RIBucketReader)
2283 
2284-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2285+    def __init__(self, ss, share):
2286         self.ss = ss
2287hunk ./src/allmydata/storage/immutable.py 116
2288-        self._share_file = ShareFile(sharefname)
2289-        self.storage_index = storage_index
2290-        self.shnum = shnum
2291+        self._share_file = share
2292+        self.storage_index = share.storage_index
2293+        self.shnum = share.shnum
2294 
2295     def __repr__(self):
2296         return "<%s %s %s>" % (self.__class__.__name__,
2297hunk ./src/allmydata/storage/server.py 316
2298         si_s = si_b2a(storage_index)
2299         log.msg("storage: get_buckets %s" % si_s)
2300         bucketreaders = {} # k: sharenum, v: BucketReader
2301-        for shnum, filename in self.backend.get_shares(storage_index):
2302-            bucketreaders[shnum] = BucketReader(self, filename,
2303-                                                storage_index, shnum)
2304+        self.backend.set_storage_server(self)
2305+        for share in self.backend.get_shares(storage_index):
2306+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2307         self.add_latency("get", time.time() - start)
2308         return bucketreaders
2309 
2310hunk ./src/allmydata/test/test_backends.py 25
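Stripped of the Tahoe types, the reshaped get_buckets above amounts to: tell the backend which server it serves, ask it for the share objects under a storage index, and let it wrap each one in a reader. A hedged sketch (hypothetical free function, not patch code):

def get_buckets(server, backend, storage_index):
    backend.set_storage_server(server)
    return dict((share.get_shnum(), backend.make_bucket_reader(share))
                for share in backend.get_shares(storage_index))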
2311 tempdir = 'teststoredir'
2312 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2313 sharefname = os.path.join(sharedirname, '0')
2314+expiration_policy = {'enabled' : False,
2315+                     'mode' : 'age',
2316+                     'override_lease_duration' : None,
2317+                     'cutoff_date' : None,
2318+                     'sharetypes' : None}
2319 
2320 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2321     @mock.patch('time.time')
2322hunk ./src/allmydata/test/test_backends.py 43
2323         tries to read or write to the file system. """
2324 
2325         # Now begin the test.
2326-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2327+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2328 
2329         self.failIf(mockisdir.called)
2330         self.failIf(mocklistdir.called)
2331hunk ./src/allmydata/test/test_backends.py 74
2332         mockopen.side_effect = call_open
2333 
2334         # Now begin the test.
2335-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2336+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2337 
2338         self.failIf(mockisdir.called)
2339         self.failIf(mocklistdir.called)
2340hunk ./src/allmydata/test/test_backends.py 86
2341 
2342 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2343     def setUp(self):
2344-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2345+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2346 
2347     @mock.patch('os.mkdir')
2348     @mock.patch('__builtin__.open')
2349hunk ./src/allmydata/test/test_backends.py 136
2350             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2351                 return StringIO()
2352         mockopen.side_effect = call_open
2353-        expiration_policy = {'enabled' : False,
2354-                             'mode' : 'age',
2355-                             'override_lease_duration' : None,
2356-                             'cutoff_date' : None,
2357-                             'sharetypes' : None}
2358         testbackend = DASCore(tempdir, expiration_policy)
2359         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy))
2360 
2361}
2362[checkpoint5
2363wilcoxjg@gmail.com**20110705034626
2364 Ignore-this: 255780bd58299b0aa33c027e9d008262
2365] {
2366addfile ./src/allmydata/storage/backends/base.py
2367hunk ./src/allmydata/storage/backends/base.py 1
2368+from twisted.application import service
2369+
2370+class Backend(service.MultiService):
2371+    def __init__(self):
2372+        service.MultiService.__init__(self)
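Because Backend derives from twisted's MultiService, a concrete backend can attach its crawlers as child services and have them start and stop with the backend. A minimal hedged sketch (ExampleBackend and its crawler argument are illustrative names, not part of the patch):

from allmydata.storage.backends.base import Backend

class ExampleBackend(Backend):
    def __init__(self, crawler):
        Backend.__init__(self)
        # The crawler is itself a twisted service; parenting it here ties its
        # startService/stopService to the backend's lifecycle.
        crawler.setServiceParent(self)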
2373hunk ./src/allmydata/storage/backends/null/core.py 19
2374 
2375     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2376         
2377+        immutableshare = ImmutableShare()
2378         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2379 
2380     def set_storage_server(self, ss):
2381hunk ./src/allmydata/storage/backends/null/core.py 28
2382 class ImmutableShare:
2383     sharetype = "immutable"
2384 
2385-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2386+    def __init__(self):
2387         """ If max_size is not None then I won't allow more than
2388         max_size to be written to me. If create=True then max_size
2389         must not be None. """
2390hunk ./src/allmydata/storage/backends/null/core.py 32
2391-        precondition((max_size is not None) or (not create), max_size, create)
2392-        self.shnum = shnum
2393-        self.storage_index = storageindex
2394-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2395-        self._max_size = max_size
2396-        if create:
2397-            # touch the file, so later callers will see that we're working on
2398-            # it. Also construct the metadata.
2399-            assert not os.path.exists(self.fname)
2400-            fileutil.make_dirs(os.path.dirname(self.fname))
2401-            f = open(self.fname, 'wb')
2402-            # The second field -- the four-byte share data length -- is no
2403-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2404-            # there in case someone downgrades a storage server from >=
2405-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2406-            # server to another, etc. We do saturation -- a share data length
2407-            # larger than 2**32-1 (what can fit into the field) is marked as
2408-            # the largest length that can fit into the field. That way, even
2409-            # if this does happen, the old < v1.3.0 server will still allow
2410-            # clients to read the first part of the share.
2411-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2412-            f.close()
2413-            self._lease_offset = max_size + 0x0c
2414-            self._num_leases = 0
2415-        else:
2416-            f = open(self.fname, 'rb')
2417-            filesize = os.path.getsize(self.fname)
2418-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2419-            f.close()
2420-            if version != 1:
2421-                msg = "sharefile %s had version %d but we wanted 1" % \
2422-                      (self.fname, version)
2423-                raise UnknownImmutableContainerVersionError(msg)
2424-            self._num_leases = num_leases
2425-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2426-        self._data_offset = 0xc
2427+        pass
2428 
2429     def get_shnum(self):
2430         return self.shnum
2431hunk ./src/allmydata/storage/backends/null/core.py 54
2432         return f.read(actuallength)
2433 
2434     def write_share_data(self, offset, data):
2435-        length = len(data)
2436-        precondition(offset >= 0, offset)
2437-        if self._max_size is not None and offset+length > self._max_size:
2438-            raise DataTooLargeError(self._max_size, offset, length)
2439-        f = open(self.fname, 'rb+')
2440-        real_offset = self._data_offset+offset
2441-        f.seek(real_offset)
2442-        assert f.tell() == real_offset
2443-        f.write(data)
2444-        f.close()
2445+        pass
2446 
2447     def _write_lease_record(self, f, lease_number, lease_info):
2448         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2449hunk ./src/allmydata/storage/backends/null/core.py 84
2450             if data:
2451                 yield LeaseInfo().from_immutable_data(data)
2452 
2453-    def add_lease(self, lease_info):
2454-        f = open(self.fname, 'rb+')
2455-        num_leases = self._read_num_leases(f)
2456-        self._write_lease_record(f, num_leases, lease_info)
2457-        self._write_num_leases(f, num_leases+1)
2458-        f.close()
2459+    def add_lease(self, lease):
2460+        pass
2461 
2462     def renew_lease(self, renew_secret, new_expire_time):
2463         for i,lease in enumerate(self.get_leases()):
2464hunk ./src/allmydata/test/test_backends.py 32
2465                      'sharetypes' : None}
2466 
2467 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2468-    @mock.patch('time.time')
2469-    @mock.patch('os.mkdir')
2470-    @mock.patch('__builtin__.open')
2471-    @mock.patch('os.listdir')
2472-    @mock.patch('os.path.isdir')
2473-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2474-        """ This tests whether a server instance can be constructed
2475-        with a null backend. The server instance fails the test if it
2476-        tries to read or write to the file system. """
2477-
2478-        # Now begin the test.
2479-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2480-
2481-        self.failIf(mockisdir.called)
2482-        self.failIf(mocklistdir.called)
2483-        self.failIf(mockopen.called)
2484-        self.failIf(mockmkdir.called)
2485-
2486-        # You passed!
2487-
2488     @mock.patch('time.time')
2489     @mock.patch('os.mkdir')
2490     @mock.patch('__builtin__.open')
2491hunk ./src/allmydata/test/test_backends.py 53
2492                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2493         mockopen.side_effect = call_open
2494 
2495-        # Now begin the test.
2496-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2497-
2498-        self.failIf(mockisdir.called)
2499-        self.failIf(mocklistdir.called)
2500-        self.failIf(mockopen.called)
2501-        self.failIf(mockmkdir.called)
2502-        self.failIf(mocktime.called)
2503-
2504-        # You passed!
2505-
2506-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2507-    def setUp(self):
2508-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2509-
2510-    @mock.patch('os.mkdir')
2511-    @mock.patch('__builtin__.open')
2512-    @mock.patch('os.listdir')
2513-    @mock.patch('os.path.isdir')
2514-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2515-        """ Write a new share. """
2516-
2517-        # Now begin the test.
2518-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2519-        bs[0].remote_write(0, 'a')
2520-        self.failIf(mockisdir.called)
2521-        self.failIf(mocklistdir.called)
2522-        self.failIf(mockopen.called)
2523-        self.failIf(mockmkdir.called)
2524+        def call_isdir(fname):
2525+            if fname == os.path.join(tempdir,'shares'):
2526+                return True
2527+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2528+                return True
2529+            else:
2530+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2531+        mockisdir.side_effect = call_isdir
2532 
2533hunk ./src/allmydata/test/test_backends.py 62
2534-    @mock.patch('os.path.exists')
2535-    @mock.patch('os.path.getsize')
2536-    @mock.patch('__builtin__.open')
2537-    @mock.patch('os.listdir')
2538-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2539-        """ This tests whether the code correctly finds and reads
2540-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2541-        servers. There is a similar test in test_download, but that one
2542-        is from the perspective of the client and exercises a deeper
2543-        stack of code. This one is for exercising just the
2544-        StorageServer object. """
2545+        def call_mkdir(fname, mode):
2546+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2547+            self.failUnlessEqual(0777, mode)
2548+            if fname == tempdir:
2549+                return None
2550+            elif fname == os.path.join(tempdir,'shares'):
2551+                return None
2552+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2553+                return None
2554+            else:
2555+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2556+        mockmkdir.side_effect = call_mkdir
2557 
2558         # Now begin the test.
2559hunk ./src/allmydata/test/test_backends.py 76
2560-        bs = self.s.remote_get_buckets('teststorage_index')
2561+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2562 
2563hunk ./src/allmydata/test/test_backends.py 78
2564-        self.failUnlessEqual(len(bs), 0)
2565-        self.failIf(mocklistdir.called)
2566-        self.failIf(mockopen.called)
2567-        self.failIf(mockgetsize.called)
2568-        self.failIf(mockexists.called)
2569+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2570 
2571 
2572 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2573hunk ./src/allmydata/test/test_backends.py 193
2574         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2575 
2576 
2577+
2578+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2579+    @mock.patch('time.time')
2580+    @mock.patch('os.mkdir')
2581+    @mock.patch('__builtin__.open')
2582+    @mock.patch('os.listdir')
2583+    @mock.patch('os.path.isdir')
2584+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2585+        """ This tests whether a file system backend instance can be
2586+        constructed. To pass the test, it has to use the
2587+        filesystem in only the prescribed ways. """
2588+
2589+        def call_open(fname, mode):
2590+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2591+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2592+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2593+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2594+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2595+                return StringIO()
2596+            else:
2597+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2598+        mockopen.side_effect = call_open
2599+
2600+        def call_isdir(fname):
2601+            if fname == os.path.join(tempdir,'shares'):
2602+                return True
2603+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2604+                return True
2605+            else:
2606+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2607+        mockisdir.side_effect = call_isdir
2608+
2609+        def call_mkdir(fname, mode):
2610+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2611+            self.failUnlessEqual(0777, mode)
2612+            if fname == tempdir:
2613+                return None
2614+            elif fname == os.path.join(tempdir,'shares'):
2615+                return None
2616+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2617+                return None
2618+            else:
2619+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2620+        mockmkdir.side_effect = call_mkdir
2621+
2622+        # Now begin the test.
2623+        DASCore('teststoredir', expiration_policy)
2624+
2625+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2626}
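
The hunks above strip every file operation out of the null backend's ImmutableShare while keeping its interface. A minimal sketch of where that leads, assuming an in-memory buffer; the _data attribute is illustrative, not code from this bundle:

    class NullImmutableShare:
        sharetype = "immutable"

        def __init__(self, shnum=0):
            self.shnum = shnum
            self._data = ''          # held in memory, never written to disk

        def get_shnum(self):
            return self.shnum

        def read_share_data(self, offset, length):
            return self._data[offset:offset+length]

        def write_share_data(self, offset, data):
            # pad with NULs if the write starts past the current end of the buffer
            self._data = self._data[:offset].ljust(offset, '\x00') + data

        def add_lease(self, lease):
            pass                     # leases are not persisted by the null backend
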
2627[checkpoint 6
2628wilcoxjg@gmail.com**20110706190824
2629 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2630] {
2631hunk ./src/allmydata/interfaces.py 100
2632                          renew_secret=LeaseRenewSecret,
2633                          cancel_secret=LeaseCancelSecret,
2634                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2635-                         allocated_size=Offset, canary=Referenceable):
2636+                         allocated_size=Offset,
2637+                         canary=Referenceable):
2638         """
2639hunk ./src/allmydata/interfaces.py 103
2640-        @param storage_index: the index of the bucket to be created or
2641+        @param storage_index: the index of the shares to be created or
2642                               increfed.
2643hunk ./src/allmydata/interfaces.py 105
2644-        @param sharenums: these are the share numbers (probably between 0 and
2645-                          99) that the sender is proposing to store on this
2646-                          server.
2647-        @param renew_secret: This is the secret used to protect bucket refresh
2648+        @param renew_secret: This is the secret used to protect share refresh.
2649                              This secret is generated by the client and
2650                              stored for later comparison by the server. Each
2651                              server is given a different secret.
2652hunk ./src/allmydata/interfaces.py 109
2653-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2654-        @param canary: If the canary is lost before close(), the bucket is
2655+        @param cancel_secret: Like renew_secret, but protects share decref.
2656+        @param sharenums: these are the share numbers (probably between 0 and
2657+                          99) that the sender is proposing to store on this
2658+                          server.
2659+        @param allocated_size: XXX The size of the shares the client wishes to store.
2660+        @param canary: If the canary is lost before close(), the shares are
2661                        deleted.
2662hunk ./src/allmydata/interfaces.py 116
2663+
2664         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2665                  already have and allocated is what we hereby agree to accept.
2666                  New leases are added for shares in both lists.
2667hunk ./src/allmydata/interfaces.py 128
2668                   renew_secret=LeaseRenewSecret,
2669                   cancel_secret=LeaseCancelSecret):
2670         """
2671-        Add a new lease on the given bucket. If the renew_secret matches an
2672+        Add a new lease on the given shares. If the renew_secret matches an
2673         existing lease, that lease will be renewed instead. If there is no
2674         bucket for the given storage_index, return silently. (note that in
2675         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2676hunk ./src/allmydata/storage/server.py 17
2677 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2678      create_mutable_sharefile
2679 
2680-from zope.interface import implements
2681-
2682 # storage/
2683 # storage/shares/incoming
2684 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2685hunk ./src/allmydata/test/test_backends.py 6
2686 from StringIO import StringIO
2687 
2688 from allmydata.test.common_util import ReallyEqualMixin
2689+from allmydata.util.assertutil import _assert
2690 
2691 import mock, os
2692 
2693hunk ./src/allmydata/test/test_backends.py 92
2694                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2695             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2696                 return StringIO()
2697+            else:
2698+                _assert(False, "The tester code doesn't recognize this case.") 
2699+
2700         mockopen.side_effect = call_open
2701         testbackend = DASCore(tempdir, expiration_policy)
2702         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2703hunk ./src/allmydata/test/test_backends.py 109
2704 
2705         def call_listdir(dirname):
2706             self.failUnlessReallyEqual(dirname, sharedirname)
2707-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2708+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2709 
2710         mocklistdir.side_effect = call_listdir
2711 
2712hunk ./src/allmydata/test/test_backends.py 113
2713+        def call_isdir(dirname):
2714+            self.failUnlessReallyEqual(dirname, sharedirname)
2715+            return True
2716+
2717+        mockisdir.side_effect = call_isdir
2718+
2719+        def call_mkdir(dirname, permissions):
2720+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2721+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
2722+            else:
2723+                return True
2724+
2725+        mockmkdir.side_effect = call_mkdir
2726+
2727         class MockFile:
2728             def __init__(self):
2729                 self.buffer = ''
2730hunk ./src/allmydata/test/test_backends.py 156
2731             return sharefile
2732 
2733         mockopen.side_effect = call_open
2734+
2735         # Now begin the test.
2736         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2737         bs[0].remote_write(0, 'a')
2738hunk ./src/allmydata/test/test_backends.py 161
2739         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2740+       
2741+        # Now test the allocated_size method.
2742+        spaceint = self.s.allocated_size()
2743 
2744     @mock.patch('os.path.exists')
2745     @mock.patch('os.path.getsize')
2746}
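
The revised allocate_buckets docstring above is easiest to read from the caller's side. A hedged sketch of the call as the tests in this series exercise it; the import path for NullCore is inferred from the backend module layout, and this only runs inside a Tahoe-LAFS tree with these patches applied:

    import mock
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.null.core import NullCore   # path inferred

    ss = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
    alreadygot, writers = ss.remote_allocate_buckets(
        'teststorage_index',   # storage_index: the shares to be created or increfed
        'x'*32,                # renew_secret
        'y'*32,                # cancel_secret
        set((0,)),             # sharenums the client proposes to store
        1,                     # allocated_size, in bytes
        mock.Mock())           # canary: if lost before close(), the shares are deleted
    writers[0].remote_write(0, 'a')
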
2747[checkpoint 7
2748wilcoxjg@gmail.com**20110706200820
2749 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2750] hunk ./src/allmydata/test/test_backends.py 164
2751         
2752         # Now test the allocated_size method.
2753         spaceint = self.s.allocated_size()
2754+        self.failUnlessReallyEqual(spaceint, 1)
2755 
2756     @mock.patch('os.path.exists')
2757     @mock.patch('os.path.getsize')
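
The body of allocated_size() is not shown in these hunks; for the new assertion to hold (spaceint == 1 after allocating a single one-byte bucket), it presumably sums the space promised to the currently active BucketWriters. A sketch under that assumption:

    # Assumed shape of StorageServer.allocated_size(); not taken from this bundle.
    def allocated_size(self):
        space = 0
        for bw in self._active_writers:
            space += bw.allocated_size()
        return space
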
2758[checkpoint8
2759wilcoxjg@gmail.com**20110706223126
2760 Ignore-this: 97336180883cb798b16f15411179f827
2761   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2762] hunk ./src/allmydata/test/test_backends.py 32
2763                      'cutoff_date' : None,
2764                      'sharetypes' : None}
2765 
2766+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2767+    def setUp(self):
2768+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2769+
2770+    @mock.patch('os.mkdir')
2771+    @mock.patch('__builtin__.open')
2772+    @mock.patch('os.listdir')
2773+    @mock.patch('os.path.isdir')
2774+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2775+        """ Write a new share. """
2776+
2777+        # Now begin the test.
2778+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2779+        bs[0].remote_write(0, 'a')
2780+        self.failIf(mockisdir.called)
2781+        self.failIf(mocklistdir.called)
2782+        self.failIf(mockopen.called)
2783+        self.failIf(mockmkdir.called)
2784+
2785 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2786     @mock.patch('time.time')
2787     @mock.patch('os.mkdir')
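
The re-added TestServerNullBackend test uses the same pattern as the other tests here: patch every filesystem entry point, run the code under test, then assert that none of the mocks were called. The same pattern reduced to a self-contained Python 2 sketch (requires the mock library; the computation inside is a stand-in, not Tahoe code):

    import mock, unittest

    class NoFilesystemTouched(unittest.TestCase):
        # Decorators apply bottom-up, so os.path.isdir arrives as the first mock argument.
        @mock.patch('os.mkdir')
        @mock.patch('__builtin__.open')
        @mock.patch('os.listdir')
        @mock.patch('os.path.isdir')
        def test_pure_computation(self, mockisdir, mocklistdir, mockopen, mockmkdir):
            result = sum(range(10))            # stand-in for the code under test
            self.failUnlessEqual(result, 45)
            for m in (mockisdir, mocklistdir, mockopen, mockmkdir):
                self.failIf(m.called)
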
2788[checkpoint 9
2789wilcoxjg@gmail.com**20110707042942
2790 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2791] {
2792hunk ./src/allmydata/storage/backends/das/core.py 88
2793                     filename = os.path.join(finalstoragedir, f)
2794                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2795         except OSError:
2796-            # Commonly caused by there being no buckets at all.
2797+            # Commonly caused by there being no shares at all.
2798             pass
2799         
2800     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2801hunk ./src/allmydata/storage/backends/das/core.py 141
2802         self.storage_index = storageindex
2803         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2804         self._max_size = max_size
2805+        self.incomingdir = os.path.join(sharedir, 'incoming')
2806+        si_dir = storage_index_to_dir(storageindex)
2807+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2808+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2809         if create:
2810             # touch the file, so later callers will see that we're working on
2811             # it. Also construct the metadata.
2812hunk ./src/allmydata/storage/backends/das/core.py 177
2813             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2814         self._data_offset = 0xc
2815 
2816+    def close(self):
2817+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2818+        fileutil.rename(self.incominghome, self.finalhome)
2819+        try:
2820+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2821+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2822+            # these directories lying around forever, but the delete might
2823+            # fail if we're working on another share for the same storage
2824+            # index (like ab/abcde/5). The alternative approach would be to
2825+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2826+            # ShareWriter), each of which is responsible for a single
2827+            # directory on disk, and have them use reference counting of
2828+            # their children to know when they should do the rmdir. This
2829+            # approach is simpler, but relies on os.rmdir refusing to delete
2830+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2831+            os.rmdir(os.path.dirname(self.incominghome))
2832+            # we also delete the grandparent (prefix) directory, .../ab ,
2833+            # again to avoid leaving directories lying around. This might
2834+            # fail if there is another bucket open that shares a prefix (like
2835+            # ab/abfff).
2836+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2837+            # we leave the great-grandparent (incoming/) directory in place.
2838+        except EnvironmentError:
2839+            # ignore the "can't rmdir because the directory is not empty"
2840+            # exceptions, those are normal consequences of the
2841+            # above-mentioned conditions.
2842+            pass
2843+        pass
2844+       
2845+    def stat(self):
2846+        return os.stat(self.finalhome)[stat.ST_SIZE]
2847+
2848     def get_shnum(self):
2849         return self.shnum
2850 
2851hunk ./src/allmydata/storage/immutable.py 7
2852 
2853 from zope.interface import implements
2854 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2855-from allmydata.util import base32, fileutil, log
2856+from allmydata.util import base32, log
2857 from allmydata.util.assertutil import precondition
2858 from allmydata.util.hashutil import constant_time_compare
2859 from allmydata.storage.lease import LeaseInfo
2860hunk ./src/allmydata/storage/immutable.py 44
2861     def remote_close(self):
2862         precondition(not self.closed)
2863         start = time.time()
2864-
2865-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2866-        fileutil.rename(self.incominghome, self.finalhome)
2867-        try:
2868-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2869-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2870-            # these directories lying around forever, but the delete might
2871-            # fail if we're working on another share for the same storage
2872-            # index (like ab/abcde/5). The alternative approach would be to
2873-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2874-            # ShareWriter), each of which is responsible for a single
2875-            # directory on disk, and have them use reference counting of
2876-            # their children to know when they should do the rmdir. This
2877-            # approach is simpler, but relies on os.rmdir refusing to delete
2878-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2879-            os.rmdir(os.path.dirname(self.incominghome))
2880-            # we also delete the grandparent (prefix) directory, .../ab ,
2881-            # again to avoid leaving directories lying around. This might
2882-            # fail if there is another bucket open that shares a prefix (like
2883-            # ab/abfff).
2884-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2885-            # we leave the great-grandparent (incoming/) directory in place.
2886-        except EnvironmentError:
2887-            # ignore the "can't rmdir because the directory is not empty"
2888-            # exceptions, those are normal consequences of the
2889-            # above-mentioned conditions.
2890-            pass
2891+        self._sharefile.close()
2892         self._sharefile = None
2893         self.closed = True
2894         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2895hunk ./src/allmydata/storage/immutable.py 49
2896 
2897-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2898+        filelen = self._sharefile.stat()
2899         self.ss.bucket_writer_closed(self, filelen)
2900         self.ss.add_latency("close", time.time() - start)
2901         self.ss.count("close")
2902hunk ./src/allmydata/storage/server.py 45
2903         self._active_writers = weakref.WeakKeyDictionary()
2904         self.backend = backend
2905         self.backend.setServiceParent(self)
2906+        self.backend.set_storage_server(self)
2907         log.msg("StorageServer created", facility="tahoe.storage")
2908 
2909         self.latencies = {"allocate": [], # immutable
2910hunk ./src/allmydata/storage/server.py 220
2911 
2912         for shnum in (sharenums - alreadygot):
2913             if (not limited) or (remaining_space >= max_space_per_bucket):
2914-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2915-                self.backend.set_storage_server(self)
2916                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2917                                                      max_space_per_bucket, lease_info, canary)
2918                 bucketwriters[shnum] = bw
2919hunk ./src/allmydata/test/test_backends.py 117
2920         mockopen.side_effect = call_open
2921         testbackend = DASCore(tempdir, expiration_policy)
2922         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2923-
2924+   
2925+    @mock.patch('allmydata.util.fileutil.get_available_space')
2926     @mock.patch('time.time')
2927     @mock.patch('os.mkdir')
2928     @mock.patch('__builtin__.open')
2929hunk ./src/allmydata/test/test_backends.py 124
2930     @mock.patch('os.listdir')
2931     @mock.patch('os.path.isdir')
2932-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2933+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2934+                             mockget_available_space):
2935         """ Write a new share. """
2936 
2937         def call_listdir(dirname):
2938hunk ./src/allmydata/test/test_backends.py 148
2939 
2940         mockmkdir.side_effect = call_mkdir
2941 
2942+        def call_get_available_space(storedir, reserved_space):
2943+            self.failUnlessReallyEqual(storedir, tempdir)
2944+            return 1
2945+
2946+        mockget_available_space.side_effect = call_get_available_space
2947+
2948         class MockFile:
2949             def __init__(self):
2950                 self.buffer = ''
2951hunk ./src/allmydata/test/test_backends.py 188
2952         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2953         bs[0].remote_write(0, 'a')
2954         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2955-       
2956+
2957+        # What happens when there's not enough space for the client's request?
2958+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2959+
2960         # Now test the allocated_size method.
2961         spaceint = self.s.allocated_size()
2962         self.failUnlessReallyEqual(spaceint, 1)
2963}
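
The close() method that checkpoint 9 moves into ImmutableShare renames the share from incoming/ to its final home and then prunes any parent directories that have become empty. The same move as a standalone sketch, using plain os calls instead of fileutil (illustrative only):

    import os

    def finalize_share(incominghome, finalhome):
        parent = os.path.dirname(finalhome)
        if not os.path.isdir(parent):
            os.makedirs(parent)
        os.rename(incominghome, finalhome)
        # Try to prune .../ab/abcde and then .../ab; a non-empty directory
        # (another share still in flight) makes rmdir fail, which is expected.
        for d in (os.path.dirname(incominghome),
                  os.path.dirname(os.path.dirname(incominghome))):
            try:
                os.rmdir(d)
            except OSError:
                pass
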
2964[checkpoint10
2965wilcoxjg@gmail.com**20110707172049
2966 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2967] {
2968hunk ./src/allmydata/test/test_backends.py 20
2969 # The following share file contents was generated with
2970 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2971 # with share data == 'a'.
2972-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2973+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2974+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2975+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2976 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2977 
2978hunk ./src/allmydata/test/test_backends.py 25
2979+testnodeid = 'testnodeidxxxxxxxxxx'
2980 tempdir = 'teststoredir'
2981 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2982 sharefname = os.path.join(sharedirname, '0')
2983hunk ./src/allmydata/test/test_backends.py 37
2984 
2985 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2986     def setUp(self):
2987-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2988+        self.s = StorageServer(testnodeid, backend=NullCore())
2989 
2990     @mock.patch('os.mkdir')
2991     @mock.patch('__builtin__.open')
2992hunk ./src/allmydata/test/test_backends.py 99
2993         mockmkdir.side_effect = call_mkdir
2994 
2995         # Now begin the test.
2996-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2997+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2998 
2999         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
3000 
3001hunk ./src/allmydata/test/test_backends.py 119
3002 
3003         mockopen.side_effect = call_open
3004         testbackend = DASCore(tempdir, expiration_policy)
3005-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
3006-   
3007+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3008+       
3009+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3010     @mock.patch('allmydata.util.fileutil.get_available_space')
3011     @mock.patch('time.time')
3012     @mock.patch('os.mkdir')
3013hunk ./src/allmydata/test/test_backends.py 129
3014     @mock.patch('os.listdir')
3015     @mock.patch('os.path.isdir')
3016     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3017-                             mockget_available_space):
3018+                             mockget_available_space, mockget_shares):
3019         """ Write a new share. """
3020 
3021         def call_listdir(dirname):
3022hunk ./src/allmydata/test/test_backends.py 139
3023         mocklistdir.side_effect = call_listdir
3024 
3025         def call_isdir(dirname):
3026+            #XXX Should there be any other tests here?
3027             self.failUnlessReallyEqual(dirname, sharedirname)
3028             return True
3029 
3030hunk ./src/allmydata/test/test_backends.py 159
3031 
3032         mockget_available_space.side_effect = call_get_available_space
3033 
3034+        mocktime.return_value = 0
3035+        class MockShare:
3036+            def __init__(self):
3037+                self.shnum = 1
3038+               
3039+            def add_or_renew_lease(elf, lease_info):
3040+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3041+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3042+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3043+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3044+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3045+               
3046+
3047+        share = MockShare()
3048+        def call_get_shares(storageindex):
3049+            return [share]
3050+
3051+        mockget_shares.side_effect = call_get_shares
3052+
3053         class MockFile:
3054             def __init__(self):
3055                 self.buffer = ''
3056hunk ./src/allmydata/test/test_backends.py 199
3057             def tell(self):
3058                 return self.pos
3059 
3060-        mocktime.return_value = 0
3061 
3062         sharefile = MockFile()
3063         def call_open(fname, mode):
3064}
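
The new constants in checkpoint10 decompose the share_file_data blob into its parts. Reading the 12-byte header the way the original ShareFile code earlier in this bundle does, and decoding the lease expiration that MockShare asserts on, the numbers check out:

    import struct

    header = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01'
    version, length_field, num_leases = struct.unpack(">LLL", header)
    assert (version, length_field, num_leases) == (1, 1, 1)

    # The lease record at the end of share_data finishes with the expiration
    # time; '\x00(\xde\x80' is 2678400 seconds, i.e. the 31-day lease period.
    expiration, = struct.unpack(">L", '\x00(\xde\x80')
    assert expiration == 31*24*60*60
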
3065[jacp 11
3066wilcoxjg@gmail.com**20110708213919
3067 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3068] {
3069hunk ./src/allmydata/storage/backends/das/core.py 144
3070         self.incomingdir = os.path.join(sharedir, 'incoming')
3071         si_dir = storage_index_to_dir(storageindex)
3072         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3073+        #XXX  self.fname and self.finalhome need to be resolved/merged.
3074         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3075         if create:
3076             # touch the file, so later callers will see that we're working on
3077hunk ./src/allmydata/storage/backends/das/core.py 208
3078         pass
3079         
3080     def stat(self):
3081-        return os.stat(self.finalhome)[stat.ST_SIZE]
3082+        return os.stat(self.finalhome).st_size
3083 
3084     def get_shnum(self):
3085         return self.shnum
3086hunk ./src/allmydata/storage/immutable.py 44
3087     def remote_close(self):
3088         precondition(not self.closed)
3089         start = time.time()
3090+
3091         self._sharefile.close()
3092hunk ./src/allmydata/storage/immutable.py 46
3093+        filelen = self._sharefile.stat()
3094         self._sharefile = None
3095hunk ./src/allmydata/storage/immutable.py 48
3096+
3097         self.closed = True
3098         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3099 
3100hunk ./src/allmydata/storage/immutable.py 52
3101-        filelen = self._sharefile.stat()
3102         self.ss.bucket_writer_closed(self, filelen)
3103         self.ss.add_latency("close", time.time() - start)
3104         self.ss.count("close")
3105hunk ./src/allmydata/storage/server.py 220
3106 
3107         for shnum in (sharenums - alreadygot):
3108             if (not limited) or (remaining_space >= max_space_per_bucket):
3109-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3110-                                                     max_space_per_bucket, lease_info, canary)
3111+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3112                 bucketwriters[shnum] = bw
3113                 self._active_writers[bw] = 1
3114                 if limited:
3115hunk ./src/allmydata/test/test_backends.py 20
3116 # The following share file contents was generated with
3117 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3118 # with share data == 'a'.
3119-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3120-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3121+renew_secret  = 'x'*32
3122+cancel_secret = 'y'*32
3123 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3124 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3125 
3126hunk ./src/allmydata/test/test_backends.py 27
3127 testnodeid = 'testnodeidxxxxxxxxxx'
3128 tempdir = 'teststoredir'
3129-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3130-sharefname = os.path.join(sharedirname, '0')
3131+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3132+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3133+shareincomingname = os.path.join(sharedirincomingname, '0')
3134+sharefname = os.path.join(sharedirfinalname, '0')
3135+
3136 expiration_policy = {'enabled' : False,
3137                      'mode' : 'age',
3138                      'override_lease_duration' : None,
3139hunk ./src/allmydata/test/test_backends.py 123
3140         mockopen.side_effect = call_open
3141         testbackend = DASCore(tempdir, expiration_policy)
3142         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3143-       
3144+
3145+    @mock.patch('allmydata.util.fileutil.rename')
3146+    @mock.patch('allmydata.util.fileutil.make_dirs')
3147+    @mock.patch('os.path.exists')
3148+    @mock.patch('os.stat')
3149     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3150     @mock.patch('allmydata.util.fileutil.get_available_space')
3151     @mock.patch('time.time')
3152hunk ./src/allmydata/test/test_backends.py 136
3153     @mock.patch('os.listdir')
3154     @mock.patch('os.path.isdir')
3155     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3156-                             mockget_available_space, mockget_shares):
3157+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3158+                             mockmake_dirs, mockrename):
3159         """ Write a new share. """
3160 
3161         def call_listdir(dirname):
3162hunk ./src/allmydata/test/test_backends.py 141
3163-            self.failUnlessReallyEqual(dirname, sharedirname)
3164+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3165             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3166 
3167         mocklistdir.side_effect = call_listdir
3168hunk ./src/allmydata/test/test_backends.py 148
3169 
3170         def call_isdir(dirname):
3171             #XXX Should there be any other tests here?
3172-            self.failUnlessReallyEqual(dirname, sharedirname)
3173+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3174             return True
3175 
3176         mockisdir.side_effect = call_isdir
3177hunk ./src/allmydata/test/test_backends.py 154
3178 
3179         def call_mkdir(dirname, permissions):
3180-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3181+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3182                 self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
3183             else:
3184                 return True
3185hunk ./src/allmydata/test/test_backends.py 208
3186                 return self.pos
3187 
3188 
3189-        sharefile = MockFile()
3190+        fobj = MockFile()
3191         def call_open(fname, mode):
3192             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3193hunk ./src/allmydata/test/test_backends.py 211
3194-            return sharefile
3195+            return fobj
3196 
3197         mockopen.side_effect = call_open
3198 
3199hunk ./src/allmydata/test/test_backends.py 215
3200+        def call_make_dirs(dname):
3201+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3202+           
3203+        mockmake_dirs.side_effect = call_make_dirs
3204+
3205+        def call_rename(src, dst):
3206+           self.failUnlessReallyEqual(src, shareincomingname)
3207+           self.failUnlessReallyEqual(dst, sharefname)
3208+           
3209+        mockrename.side_effect = call_rename
3210+
3211+        def call_exists(fname):
3212+            self.failUnlessReallyEqual(fname, sharefname)
3213+
3214+        mockexists.side_effect = call_exists
3215+
3216         # Now begin the test.
3217         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3218         bs[0].remote_write(0, 'a')
3219hunk ./src/allmydata/test/test_backends.py 234
3220-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3221+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3222+        spaceint = self.s.allocated_size()
3223+        self.failUnlessReallyEqual(spaceint, 1)
3224+
3225+        bs[0].remote_close()
3226 
3227         # What happens when there's not enough space for the client's request?
3228hunk ./src/allmydata/test/test_backends.py 241
3229-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3230+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3231 
3232         # Now test the allocated_size method.
3233hunk ./src/allmydata/test/test_backends.py 244
3234-        spaceint = self.s.allocated_size()
3235-        self.failUnlessReallyEqual(spaceint, 1)
3236+        #self.failIf(mockexists.called, mockexists.call_args_list)
3237+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3238+        #self.failIf(mockrename.called, mockrename.call_args_list)
3239+        #self.failIf(mockstat.called, mockstat.call_args_list)
3240 
3241     @mock.patch('os.path.exists')
3242     @mock.patch('os.path.getsize')
3243}
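
The path constants introduced in jacp 11 spell out the incoming-versus-final layout that the rename assertion depends on, using the same values as the tests:

    import os

    tempdir = 'teststoredir'
    si_dir  = os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a')

    shareincomingname = os.path.join(tempdir, 'shares', 'incoming', si_dir, '0')
    sharefname        = os.path.join(tempdir, 'shares', si_dir, '0')

    # remote_close() is expected to call rename(shareincomingname, sharefname), i.e.
    #   teststoredir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0
    #   becomes teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0
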
3244[checkpoint12 testing correct behavior with regard to incoming and final
3245wilcoxjg@gmail.com**20110710191915
3246 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3247] {
3248hunk ./src/allmydata/storage/backends/das/core.py 74
3249         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3250         self.lease_checker.setServiceParent(self)
3251 
3252+    def get_incoming(self, storageindex):
3253+        return set((1,))
3254+
3255     def get_available_space(self):
3256         if self.readonly:
3257             return 0
3258hunk ./src/allmydata/storage/server.py 77
3259         """Return a dict, indexed by category, that contains a dict of
3260         latency numbers for each category. If there are sufficient samples
3261         for unambiguous interpretation, each dict will contain the
3262-        following keys: mean, 01_0_percentile, 10_0_percentile,
3263+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3264         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3265         99_0_percentile, 99_9_percentile.  If there are insufficient
3266         samples for a given percentile to be interpreted unambiguously
3267hunk ./src/allmydata/storage/server.py 120
3268 
3269     def get_stats(self):
3270         # remember: RIStatsProvider requires that our return dict
3271-        # contains numeric values.
3272+        # contains numeric or None values.
3273         stats = { 'storage_server.allocated': self.allocated_size(), }
3274         stats['storage_server.reserved_space'] = self.reserved_space
3275         for category,ld in self.get_latencies().items():
3276hunk ./src/allmydata/storage/server.py 185
3277         start = time.time()
3278         self.count("allocate")
3279         alreadygot = set()
3280+        incoming = set()
3281         bucketwriters = {} # k: shnum, v: BucketWriter
3282 
3283         si_s = si_b2a(storage_index)
3284hunk ./src/allmydata/storage/server.py 219
3285             alreadygot.add(share.shnum)
3286             share.add_or_renew_lease(lease_info)
3287 
3288-        for shnum in (sharenums - alreadygot):
3289+        # Fill incoming with all shares that are incoming; use a set operation since there's no need to operate on individual pieces.
3290+        incoming = self.backend.get_incoming(storageindex)
3291+
3292+        for shnum in ((sharenums - alreadygot) - incoming):
3293             if (not limited) or (remaining_space >= max_space_per_bucket):
3294                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3295                 bucketwriters[shnum] = bw
3296hunk ./src/allmydata/storage/server.py 229
3297                 self._active_writers[bw] = 1
3298                 if limited:
3299                     remaining_space -= max_space_per_bucket
3300-
3301-        #XXX We SHOULD DOCUMENT LATER.
3302+            else:
3303+                # Bummer: not enough space to accept this share.
3304+                pass
3305 
3306         self.add_latency("allocate", time.time() - start)
3307         return alreadygot, bucketwriters
3308hunk ./src/allmydata/storage/server.py 323
3309         self.add_latency("get", time.time() - start)
3310         return bucketreaders
3311 
3312-    def get_leases(self, storage_index):
3313+    def remote_get_incoming(self, storageindex):
3314+        incoming_share_set = self.backend.get_incoming(storageindex)
3315+        return incoming_share_set
3316+
3317+    def get_leases(self, storageindex):
3318         """Provide an iterator that yields all of the leases attached to this
3319         bucket. Each lease is returned as a LeaseInfo instance.
3320 
3321hunk ./src/allmydata/storage/server.py 337
3322         # since all shares get the same lease data, we just grab the leases
3323         # from the first share
3324         try:
3325-            shnum, filename = self._get_shares(storage_index).next()
3326+            shnum, filename = self._get_shares(storageindex).next()
3327             sf = ShareFile(filename)
3328             return sf.get_leases()
3329         except StopIteration:
3330hunk ./src/allmydata/test/test_backends.py 182
3331 
3332         share = MockShare()
3333         def call_get_shares(storageindex):
3334-            return [share]
3335+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3336+            return []#share]
3337 
3338         mockget_shares.side_effect = call_get_shares
3339 
3340hunk ./src/allmydata/test/test_backends.py 222
3341         mockmake_dirs.side_effect = call_make_dirs
3342 
3343         def call_rename(src, dst):
3344-           self.failUnlessReallyEqual(src, shareincomingname)
3345-           self.failUnlessReallyEqual(dst, sharefname)
3346+            self.failUnlessReallyEqual(src, shareincomingname)
3347+            self.failUnlessReallyEqual(dst, sharefname)
3348             
3349         mockrename.side_effect = call_rename
3350 
3351hunk ./src/allmydata/test/test_backends.py 233
3352         mockexists.side_effect = call_exists
3353 
3354         # Now begin the test.
3355+
3356+        # XXX (0) ???  Fail unless something is not properly set-up?
3357         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3358hunk ./src/allmydata/test/test_backends.py 236
3359+
3360+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3361+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3362+
3363+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3364+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3365+        # with the same si, until BucketWriter.remote_close() has been called.
3366+        # self.failIf(bsa)
3367+
3368+        # XXX (3) Inspect final and fail unless there's nothing there.
3369         bs[0].remote_write(0, 'a')
3370hunk ./src/allmydata/test/test_backends.py 247
3371+        # XXX (4a) Inspect final and fail unless share 0 is there.
3372+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3373         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3374         spaceint = self.s.allocated_size()
3375         self.failUnlessReallyEqual(spaceint, 1)
3376hunk ./src/allmydata/test/test_backends.py 253
3377 
3378+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3379         bs[0].remote_close()
3380 
3381         # What happens when there's not enough space for the client's request?
3382hunk ./src/allmydata/test/test_backends.py 260
3383         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3384 
3385         # Now test the allocated_size method.
3386-        #self.failIf(mockexists.called, mockexists.call_args_list)
3387+        # self.failIf(mockexists.called, mockexists.call_args_list)
3388         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3389         #self.failIf(mockrename.called, mockrename.call_args_list)
3390         #self.failIf(mockstat.called, mockstat.call_args_list)
3391}
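
The reworked allocation loop relies on plain set arithmetic to skip both shares that are already final and shares that are still incoming. With toy values, not taken from the patch:

    sharenums  = set([0, 1, 2, 3])   # what the client proposes to store
    alreadygot = set([1])            # already present in final storage
    incoming   = set([2])            # currently being uploaded by another writer
    to_create  = (sharenums - alreadygot) - incoming
    assert to_create == set([0, 3])  # only these get fresh BucketWriters
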
3392[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3393wilcoxjg@gmail.com**20110710195139
3394 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3395] {
3396hunk ./src/allmydata/storage/server.py 220
3397             share.add_or_renew_lease(lease_info)
3398 
3399         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3400-        incoming = self.backend.get_incoming(storageindex)
3401+        incoming = self.backend.get_incoming(storage_index)
3402 
3403         for shnum in ((sharenums - alreadygot) - incoming):
3404             if (not limited) or (remaining_space >= max_space_per_bucket):
3405hunk ./src/allmydata/storage/server.py 323
3406         self.add_latency("get", time.time() - start)
3407         return bucketreaders
3408 
3409-    def remote_get_incoming(self, storageindex):
3410-        incoming_share_set = self.backend.get_incoming(storageindex)
3411+    def remote_get_incoming(self, storage_index):
3412+        incoming_share_set = self.backend.get_incoming(storage_index)
3413         return incoming_share_set
3414 
3415hunk ./src/allmydata/storage/server.py 327
3416-    def get_leases(self, storageindex):
3417+    def get_leases(self, storage_index):
3418         """Provide an iterator that yields all of the leases attached to this
3419         bucket. Each lease is returned as a LeaseInfo instance.
3420 
3421hunk ./src/allmydata/storage/server.py 337
3422         # since all shares get the same lease data, we just grab the leases
3423         # from the first share
3424         try:
3425-            shnum, filename = self._get_shares(storageindex).next()
3426+            shnum, filename = self._get_shares(storage_index).next()
3427             sf = ShareFile(filename)
3428             return sf.get_leases()
3429         except StopIteration:
3430replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3431}
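
The darcs replace primitive on the last line above performs a token-wise substitution: each occurrence of storage_index that stands alone as a token (token characters [A-Za-z_0-9]) becomes storageindex, while longer identifiers that merely contain the string, such as storage_index_to_dir, are left untouched. A rough Python sketch of that semantics, a reading of the darcs behaviour rather than anything in this bundle:

    import re

    def token_replace(text, old, new, token_chars='A-Za-z_0-9'):
        pattern = r'(?<![%s])%s(?![%s])' % (token_chars, re.escape(old), token_chars)
        return re.sub(pattern, new, text)

    assert token_replace('self._get_shares(storage_index).next()',
                         'storage_index', 'storageindex') \
           == 'self._get_shares(storageindex).next()'
    # A longer token containing the old name is not touched:
    assert token_replace('storage_index_to_dir(si)',
                         'storage_index', 'storageindex') \
           == 'storage_index_to_dir(si)'
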
3432[adding comments to clarify what I'm about to do.
3433wilcoxjg@gmail.com**20110710220623
3434 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3435] {
3436hunk ./src/allmydata/storage/backends/das/core.py 8
3437 
3438 import os, re, weakref, struct, time
3439 
3440-from foolscap.api import Referenceable
3441+#from foolscap.api import Referenceable
3442 from twisted.application import service
3443 
3444 from zope.interface import implements
3445hunk ./src/allmydata/storage/backends/das/core.py 12
3446-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3447+from allmydata.interfaces import IStatsProducer, IShareStore  # XXX, RIStorageServer
3448 from allmydata.util import fileutil, idlib, log, time_format
3449 import allmydata # for __full_version__
3450 
3451hunk ./src/allmydata/storage/server.py 219
3452             alreadygot.add(share.shnum)
3453             share.add_or_renew_lease(lease_info)
3454 
3455-        # Fill incoming with all shares that are incoming; use a set operation since there's no need to operate on individual pieces.
3456+        # Fill incoming with all shares that are incoming; use a set operation
3457+        # since there's no need to operate on individual pieces.
3458         incoming = self.backend.get_incoming(storageindex)
3459 
3460         for shnum in ((sharenums - alreadygot) - incoming):
3461hunk ./src/allmydata/test/test_backends.py 245
3462         # with the same si, until BucketWriter.remote_close() has been called.
3463         # self.failIf(bsa)
3464 
3465-        # XXX (3) Inspect final and fail unless there's nothing there.
3466         bs[0].remote_write(0, 'a')
3467hunk ./src/allmydata/test/test_backends.py 246
3468-        # XXX (4a) Inspect final and fail unless share 0 is there.
3469-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3470         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3471         spaceint = self.s.allocated_size()
3472         self.failUnlessReallyEqual(spaceint, 1)
3473hunk ./src/allmydata/test/test_backends.py 250
3474 
3475-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3476+        # XXX (3) Inspect final and fail unless there's nothing there.
3477         bs[0].remote_close()
3478hunk ./src/allmydata/test/test_backends.py 252
3479+        # XXX (4a) Inspect final and fail unless share 0 is there.
3480+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3481 
3482         # What happens when there's not enough space for the client's request?
3483         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3484}
3485[branching back, no longer attempting to mock inside TestServerFSBackend
3486wilcoxjg@gmail.com**20110711190849
3487 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3488] {
3489hunk ./src/allmydata/storage/backends/das/core.py 75
3490         self.lease_checker.setServiceParent(self)
3491 
3492     def get_incoming(self, storageindex):
3493-        return set((1,))
3494-
3495-    def get_available_space(self):
3496-        if self.readonly:
3497-            return 0
3498-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3499+        """Return the set of incoming shnums."""
3500+        return set(os.listdir(self.incomingdir))
3501 
3502     def get_shares(self, storage_index):
3503         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3504hunk ./src/allmydata/storage/backends/das/core.py 90
3505             # Commonly caused by there being no shares at all.
3506             pass
3507         
3508+    def get_available_space(self):
3509+        if self.readonly:
3510+            return 0
3511+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3512+
3513     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3514         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3515         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3516hunk ./src/allmydata/test/test_backends.py 27
3517 
3518 testnodeid = 'testnodeidxxxxxxxxxx'
3519 tempdir = 'teststoredir'
3520-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3521-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3522+basedir = os.path.join(tempdir, 'shares')
3523+baseincdir = os.path.join(basedir, 'incoming')
3524+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3525+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3526 shareincomingname = os.path.join(sharedirincomingname, '0')
3527 sharefname = os.path.join(sharedirfinalname, '0')
3528 
3529hunk ./src/allmydata/test/test_backends.py 142
3530                              mockmake_dirs, mockrename):
3531         """ Write a new share. """
3532 
3533-        def call_listdir(dirname):
3534-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3535-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3536-
3537-        mocklistdir.side_effect = call_listdir
3538-
3539-        def call_isdir(dirname):
3540-            #XXX Should there be any other tests here?
3541-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3542-            return True
3543-
3544-        mockisdir.side_effect = call_isdir
3545-
3546-        def call_mkdir(dirname, permissions):
3547-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3548-                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
3549-            else:
3550-                return True
3551-
3552-        mockmkdir.side_effect = call_mkdir
3553-
3554-        def call_get_available_space(storedir, reserved_space):
3555-            self.failUnlessReallyEqual(storedir, tempdir)
3556-            return 1
3557-
3558-        mockget_available_space.side_effect = call_get_available_space
3559-
3560-        mocktime.return_value = 0
3561         class MockShare:
3562             def __init__(self):
3563                 self.shnum = 1
3564hunk ./src/allmydata/test/test_backends.py 152
3565                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3566                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3567                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3568-               
3569 
3570         share = MockShare()
3571hunk ./src/allmydata/test/test_backends.py 154
3572-        def call_get_shares(storageindex):
3573-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3574-            return []#share]
3575-
3576-        mockget_shares.side_effect = call_get_shares
3577 
3578         class MockFile:
3579             def __init__(self):
3580hunk ./src/allmydata/test/test_backends.py 176
3581             def tell(self):
3582                 return self.pos
3583 
3584-
3585         fobj = MockFile()
3586hunk ./src/allmydata/test/test_backends.py 177
3587+
3588+        directories = {}
3589+        def call_listdir(dirname):
3590+            if dirname not in directories:
3591+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3592+            else:
3593+                return directories[dirname].get_contents()
3594+
3595+        mocklistdir.side_effect = call_listdir
3596+
3597+        class MockDir:
3598+            def __init__(self, dirname):
3599+                self.name = dirname
3600+                self.contents = []
3601+   
3602+            def get_contents(self):
3603+                return self.contents
3604+
3605+        def call_isdir(dirname):
3606+            #XXX Should there be any other tests here?
3607+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3608+            return True
3609+
3610+        mockisdir.side_effect = call_isdir
3611+
3612+        def call_mkdir(dirname, permissions):
3613+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3614+                self.Fail
3615+            if dirname in directories:
3616+                raise OSError(17, "File exists: '%s'" % dirname)
3617+                self.Fail
3618+            elif dirname not in directories:
3619+                directories[dirname] = MockDir(dirname)
3620+                return True
3621+
3622+        mockmkdir.side_effect = call_mkdir
3623+
3624+        def call_get_available_space(storedir, reserved_space):
3625+            self.failUnlessReallyEqual(storedir, tempdir)
3626+            return 1
3627+
3628+        mockget_available_space.side_effect = call_get_available_space
3629+
3630+        mocktime.return_value = 0
3631+        def call_get_shares(storageindex):
3632+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3633+            return []#share]
3634+
3635+        mockget_shares.side_effect = call_get_shares
3636+
3637         def call_open(fname, mode):
3638             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3639             return fobj
3640}
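For readers following the hunks above: the directories dict, MockDir, call_listdir and call_mkdir together form a tiny in-memory stand-in for the real filesystem. A minimal, self-contained sketch of the same idea (names here are illustrative, not part of the patch):

    import errno, os

    class FakeDir:
        def __init__(self, dirname):
            self.name = dirname
            self.contents = []

    def make_fake_fs():
        directories = {}
        def fake_listdir(dirname):
            # Mirror os.listdir: unknown directory -> ENOENT.
            if dirname not in directories:
                raise OSError(errno.ENOENT, "No such file or directory: '%s'" % dirname)
            return directories[dirname].contents
        def fake_mkdir(dirname, mode=0777):
            # Mirror os.mkdir: existing directory -> EEXIST.
            if dirname in directories:
                raise OSError(errno.EEXIST, "File exists: '%s'" % dirname)
            directories[dirname] = FakeDir(dirname)
        return directories, fake_listdir, fake_mkdir

    # These functions would be installed as the side_effect of the mocked
    # os.listdir and os.mkdir, the way the test above uses call_listdir and
    # call_mkdir.
    directories, fake_listdir, fake_mkdir = make_fake_fs()
    fake_mkdir(os.path.join('teststoredir', 'shares', 'or'))
    assert fake_listdir(os.path.join('teststoredir', 'shares', 'or')) == []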
3641[checkpoint12 TestServerFSBackend no longer mocks filesystem
3642wilcoxjg@gmail.com**20110711193357
3643 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3644] {
3645hunk ./src/allmydata/storage/backends/das/core.py 23
3646      create_mutable_sharefile
3647 from allmydata.storage.immutable import BucketWriter, BucketReader
3648 from allmydata.storage.crawler import FSBucketCountingCrawler
3649+from allmydata.util.hashutil import constant_time_compare
3650 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3651 
3652 from zope.interface import implements
3653hunk ./src/allmydata/storage/backends/das/core.py 28
3654 
3655+# storage/
3656+# storage/shares/incoming
3657+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3658+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3659+# storage/shares/$START/$STORAGEINDEX
3660+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3661+
3662+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3663+# base-32 chars).
3664 # $SHARENUM matches this regex:
3665 NUM_RE=re.compile("^[0-9]+$")
3666 
3667hunk ./src/allmydata/test/test_backends.py 126
3668         testbackend = DASCore(tempdir, expiration_policy)
3669         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3670 
3671-    @mock.patch('allmydata.util.fileutil.rename')
3672-    @mock.patch('allmydata.util.fileutil.make_dirs')
3673-    @mock.patch('os.path.exists')
3674-    @mock.patch('os.stat')
3675-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3676-    @mock.patch('allmydata.util.fileutil.get_available_space')
3677     @mock.patch('time.time')
3678hunk ./src/allmydata/test/test_backends.py 127
3679-    @mock.patch('os.mkdir')
3680-    @mock.patch('__builtin__.open')
3681-    @mock.patch('os.listdir')
3682-    @mock.patch('os.path.isdir')
3683-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3684-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3685-                             mockmake_dirs, mockrename):
3686+    def test_write_share(self, mocktime):
3687         """ Write a new share. """
3688 
3689         class MockShare:
3690hunk ./src/allmydata/test/test_backends.py 143
3691 
3692         share = MockShare()
3693 
3694-        class MockFile:
3695-            def __init__(self):
3696-                self.buffer = ''
3697-                self.pos = 0
3698-            def write(self, instring):
3699-                begin = self.pos
3700-                padlen = begin - len(self.buffer)
3701-                if padlen > 0:
3702-                    self.buffer += '\x00' * padlen
3703-                end = self.pos + len(instring)
3704-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3705-                self.pos = end
3706-            def close(self):
3707-                pass
3708-            def seek(self, pos):
3709-                self.pos = pos
3710-            def read(self, numberbytes):
3711-                return self.buffer[self.pos:self.pos+numberbytes]
3712-            def tell(self):
3713-                return self.pos
3714-
3715-        fobj = MockFile()
3716-
3717-        directories = {}
3718-        def call_listdir(dirname):
3719-            if dirname not in directories:
3720-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3721-            else:
3722-                return directories[dirname].get_contents()
3723-
3724-        mocklistdir.side_effect = call_listdir
3725-
3726-        class MockDir:
3727-            def __init__(self, dirname):
3728-                self.name = dirname
3729-                self.contents = []
3730-   
3731-            def get_contents(self):
3732-                return self.contents
3733-
3734-        def call_isdir(dirname):
3735-            #XXX Should there be any other tests here?
3736-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3737-            return True
3738-
3739-        mockisdir.side_effect = call_isdir
3740-
3741-        def call_mkdir(dirname, permissions):
3742-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3743-                self.Fail
3744-            if dirname in directories:
3745-                raise OSError(17, "File exists: '%s'" % dirname)
3746-                self.Fail
3747-            elif dirname not in directories:
3748-                directories[dirname] = MockDir(dirname)
3749-                return True
3750-
3751-        mockmkdir.side_effect = call_mkdir
3752-
3753-        def call_get_available_space(storedir, reserved_space):
3754-            self.failUnlessReallyEqual(storedir, tempdir)
3755-            return 1
3756-
3757-        mockget_available_space.side_effect = call_get_available_space
3758-
3759-        mocktime.return_value = 0
3760-        def call_get_shares(storageindex):
3761-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3762-            return []#share]
3763-
3764-        mockget_shares.side_effect = call_get_shares
3765-
3766-        def call_open(fname, mode):
3767-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3768-            return fobj
3769-
3770-        mockopen.side_effect = call_open
3771-
3772-        def call_make_dirs(dname):
3773-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3774-           
3775-        mockmake_dirs.side_effect = call_make_dirs
3776-
3777-        def call_rename(src, dst):
3778-            self.failUnlessReallyEqual(src, shareincomingname)
3779-            self.failUnlessReallyEqual(dst, sharefname)
3780-           
3781-        mockrename.side_effect = call_rename
3782-
3783-        def call_exists(fname):
3784-            self.failUnlessReallyEqual(fname, sharefname)
3785-
3786-        mockexists.side_effect = call_exists
3787-
3788         # Now begin the test.
3789 
3790         # XXX (0) ???  Fail unless something is not properly set-up?
3791}
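The comment block moved into backends/das/core.py above documents the on-disk layout. A small illustrative helper (not part of the patch) showing how a base-32 storage index such as 'orsxg5dtorxxeylhmvpws3temv4a' maps onto that layout, assuming storage_index_to_dir uses the first two base-32 characters as $START:

    import os

    def share_path(sharesdir, si_base32, shnum):
        # $START is the first 10 bits of the storage index, i.e. its first
        # two base-32 characters.
        start = si_base32[:2]
        return os.path.join(sharesdir, start, si_base32, "%d" % shnum)

    # On POSIX, share_path('teststoredir/shares', 'orsxg5dtorxxeylhmvpws3temv4a', 0)
    # == 'teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0', matching the
    # sharedirfinalname/sharefname constants in test_backends.py.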
3792[JACP
3793wilcoxjg@gmail.com**20110711194407
3794 Ignore-this: b54745de777c4bb58d68d708f010bbb
3795] {
3796hunk ./src/allmydata/storage/backends/das/core.py 86
3797 
3798     def get_incoming(self, storageindex):
3799         """Return the set of incoming shnums."""
3800-        return set(os.listdir(self.incomingdir))
3801+        try:
3802+            incominglist = os.listdir(self.incomingdir)
3803+            print "incominglist: ", incominglist
3804+            return set(incominglist)
3805+        except OSError:
3806+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3807+            pass
3808 
3809     def get_shares(self, storage_index):
3810         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3811hunk ./src/allmydata/storage/server.py 17
3812 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3813      create_mutable_sharefile
3814 
3815-# storage/
3816-# storage/shares/incoming
3817-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3818-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3819-# storage/shares/$START/$STORAGEINDEX
3820-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3821-
3822-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3823-# base-32 chars).
3824-
3825-
3826 class StorageServer(service.MultiService, Referenceable):
3827     implements(RIStorageServer, IStatsProducer)
3828     name = 'storage'
3829}
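The XXX comment in get_incoming above asks for a narrower exception check. One hedged way to make the bare "except OSError" specific to the missing-directory case (illustrative only; the conversion of directory names to integer sharenums is what a later patch in this bundle adds):

    import errno, os

    def list_incoming_shnums(incomingsharesdir):
        try:
            return set([int(x) for x in os.listdir(incomingsharesdir)])
        except OSError, e:
            if e.errno == errno.ENOENT:
                # No incoming shares at all for this storage index.
                return set()
            raise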
3830[testing get incoming
3831wilcoxjg@gmail.com**20110711210224
3832 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3833] {
3834hunk ./src/allmydata/storage/backends/das/core.py 87
3835     def get_incoming(self, storageindex):
3836         """Return the set of incoming shnums."""
3837         try:
3838-            incominglist = os.listdir(self.incomingdir)
3839+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3840+            incominglist = os.listdir(incomingsharesdir)
3841             print "incominglist: ", incominglist
3842             return set(incominglist)
3843         except OSError:
3844hunk ./src/allmydata/storage/backends/das/core.py 92
3845-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3846-            pass
3847-
3848+            # XXX I'd like to make this more specific. If there are no shares at all.
3849+            return set()
3850+           
3851     def get_shares(self, storage_index):
3852         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3853         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3854hunk ./src/allmydata/test/test_backends.py 149
3855         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3856 
3857         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3858+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3859         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3860 
3861hunk ./src/allmydata/test/test_backends.py 152
3862-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3863         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3864         # with the same si, until BucketWriter.remote_close() has been called.
3865         # self.failIf(bsa)
3866}
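The XXX (2) comment in the test above is about not handing out a second BucketWriter for a share that is already incoming. A minimal sketch of that filtering rule (hypothetical helper, not code from the patch):

    def sharenums_needing_writers(requested_shnums, incoming_shnums, final_shnums):
        # A sharenum that is already in incoming/ (still being uploaded) or
        # already in final storage should not get a new BucketWriter.
        return set(requested_shnums) - set(incoming_shnums) - set(final_shnums)

    # e.g. after the first remote_allocate_buckets above, sharenum 0 is
    # incoming, so a second request for set([0]) yields set() -- no new
    # bucket writers until BucketWriter.remote_close() has moved the share
    # into final storage.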
3867[ImmutableShareFile does not know its StorageIndex
3868wilcoxjg@gmail.com**20110711211424
3869 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3870] {
3871hunk ./src/allmydata/storage/backends/das/core.py 112
3872             return 0
3873         return fileutil.get_available_space(self.storedir, self.reserved_space)
3874 
3875-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3876-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3877+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3878+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3879+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3880+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3881         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3882         return bw
3883 
3884hunk ./src/allmydata/storage/backends/das/core.py 155
3885     LEASE_SIZE = struct.calcsize(">L32s32sL")
3886     sharetype = "immutable"
3887 
3888-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3889+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3890         """ If max_size is not None then I won't allow more than
3891         max_size to be written to me. If create=True then max_size
3892         must not be None. """
3893}
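With this patch ImmutableShare is handed explicit finalhome and incominghome paths instead of deriving them from (sharedir, storageindex, shnum). A hedged sketch of the incoming-to-final lifecycle those two paths exist for (os.rename stands in for the fileutil.rename used elsewhere; this is illustrative, not the patch's close() implementation):

    import os

    def finalize_share(incominghome, finalhome):
        # A new share is written under shares/incoming/... and only moved to
        # its final location once the upload completes successfully.
        finaldir = os.path.dirname(finalhome)
        if not os.path.isdir(finaldir):
            os.makedirs(finaldir)
        os.rename(incominghome, finalhome)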
3894[get_incoming correctly reports the 0 share after it has arrived
3895wilcoxjg@gmail.com**20110712025157
3896 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3897] {
3898hunk ./src/allmydata/storage/backends/das/core.py 1
3899+import os, re, weakref, struct, time, stat
3900+
3901 from allmydata.interfaces import IStorageBackend
3902 from allmydata.storage.backends.base import Backend
3903 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3904hunk ./src/allmydata/storage/backends/das/core.py 8
3905 from allmydata.util.assertutil import precondition
3906 
3907-import os, re, weakref, struct, time
3908-
3909 #from foolscap.api import Referenceable
3910 from twisted.application import service
3911 
3912hunk ./src/allmydata/storage/backends/das/core.py 89
3913         try:
3914             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3915             incominglist = os.listdir(incomingsharesdir)
3916-            print "incominglist: ", incominglist
3917-            return set(incominglist)
3918+            incomingshnums = [int(x) for x in incominglist]
3919+            return set(incomingshnums)
3920         except OSError:
3921             # XXX I'd like to make this more specific. If there are no shares at all.
3922             return set()
3923hunk ./src/allmydata/storage/backends/das/core.py 113
3924         return fileutil.get_available_space(self.storedir, self.reserved_space)
3925 
3926     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3927-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3928-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3929-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3930+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3931+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3932+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3933         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3934         return bw
3935 
3936hunk ./src/allmydata/storage/backends/das/core.py 160
3937         max_size to be written to me. If create=True then max_size
3938         must not be None. """
3939         precondition((max_size is not None) or (not create), max_size, create)
3940-        self.shnum = shnum
3941-        self.storage_index = storageindex
3942-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3943         self._max_size = max_size
3944hunk ./src/allmydata/storage/backends/das/core.py 161
3945-        self.incomingdir = os.path.join(sharedir, 'incoming')
3946-        si_dir = storage_index_to_dir(storageindex)
3947-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3948-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3949-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3950+        self.incominghome = incominghome
3951+        self.finalhome = finalhome
3952         if create:
3953             # touch the file, so later callers will see that we're working on
3954             # it. Also construct the metadata.
3955hunk ./src/allmydata/storage/backends/das/core.py 166
3956-            assert not os.path.exists(self.fname)
3957-            fileutil.make_dirs(os.path.dirname(self.fname))
3958-            f = open(self.fname, 'wb')
3959+            assert not os.path.exists(self.finalhome)
3960+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3961+            f = open(self.incominghome, 'wb')
3962             # The second field -- the four-byte share data length -- is no
3963             # longer used as of Tahoe v1.3.0, but we continue to write it in
3964             # there in case someone downgrades a storage server from >=
3965hunk ./src/allmydata/storage/backends/das/core.py 183
3966             self._lease_offset = max_size + 0x0c
3967             self._num_leases = 0
3968         else:
3969-            f = open(self.fname, 'rb')
3970-            filesize = os.path.getsize(self.fname)
3971+            f = open(self.finalhome, 'rb')
3972+            filesize = os.path.getsize(self.finalhome)
3973             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3974             f.close()
3975             if version != 1:
3976hunk ./src/allmydata/storage/backends/das/core.py 189
3977                 msg = "sharefile %s had version %d but we wanted 1" % \
3978-                      (self.fname, version)
3979+                      (self.finalhome, version)
3980                 raise UnknownImmutableContainerVersionError(msg)
3981             self._num_leases = num_leases
3982             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3983hunk ./src/allmydata/storage/backends/das/core.py 225
3984         pass
3985         
3986     def stat(self):
3987-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3988+        return os.stat(self.finalhome)[stat.ST_SIZE]
3989+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3990 
3991     def get_shnum(self):
3992         return self.shnum
3993hunk ./src/allmydata/storage/backends/das/core.py 232
3994 
3995     def unlink(self):
3996-        os.unlink(self.fname)
3997+        os.unlink(self.finalhome)
3998 
3999     def read_share_data(self, offset, length):
4000         precondition(offset >= 0)
4001hunk ./src/allmydata/storage/backends/das/core.py 239
4002         # Reads beyond the end of the data are truncated. Reads that start
4003         # beyond the end of the data return an empty string.
4004         seekpos = self._data_offset+offset
4005-        fsize = os.path.getsize(self.fname)
4006+        fsize = os.path.getsize(self.finalhome)
4007         actuallength = max(0, min(length, fsize-seekpos))
4008         if actuallength == 0:
4009             return ""
4010hunk ./src/allmydata/storage/backends/das/core.py 243
4011-        f = open(self.fname, 'rb')
4012+        f = open(self.finalhome, 'rb')
4013         f.seek(seekpos)
4014         return f.read(actuallength)
4015 
4016hunk ./src/allmydata/storage/backends/das/core.py 252
4017         precondition(offset >= 0, offset)
4018         if self._max_size is not None and offset+length > self._max_size:
4019             raise DataTooLargeError(self._max_size, offset, length)
4020-        f = open(self.fname, 'rb+')
4021+        f = open(self.incominghome, 'rb+')
4022         real_offset = self._data_offset+offset
4023         f.seek(real_offset)
4024         assert f.tell() == real_offset
4025hunk ./src/allmydata/storage/backends/das/core.py 279
4026 
4027     def get_leases(self):
4028         """Yields a LeaseInfo instance for all leases."""
4029-        f = open(self.fname, 'rb')
4030+        f = open(self.finalhome, 'rb')
4031         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4032         f.seek(self._lease_offset)
4033         for i in range(num_leases):
4034hunk ./src/allmydata/storage/backends/das/core.py 288
4035                 yield LeaseInfo().from_immutable_data(data)
4036 
4037     def add_lease(self, lease_info):
4038-        f = open(self.fname, 'rb+')
4039+        f = open(self.incominghome, 'rb+')
4040         num_leases = self._read_num_leases(f)
4041         self._write_lease_record(f, num_leases, lease_info)
4042         self._write_num_leases(f, num_leases+1)
4043hunk ./src/allmydata/storage/backends/das/core.py 301
4044                 if new_expire_time > lease.expiration_time:
4045                     # yes
4046                     lease.expiration_time = new_expire_time
4047-                    f = open(self.fname, 'rb+')
4048+                    f = open(self.finalhome, 'rb+')
4049                     self._write_lease_record(f, i, lease)
4050                     f.close()
4051                 return
4052hunk ./src/allmydata/storage/backends/das/core.py 336
4053             # the same order as they were added, so that if we crash while
4054             # doing this, we won't lose any non-cancelled leases.
4055             leases = [l for l in leases if l] # remove the cancelled leases
4056-            f = open(self.fname, 'rb+')
4057+            f = open(self.finalhome, 'rb+')
4058             for i,lease in enumerate(leases):
4059                 self._write_lease_record(f, i, lease)
4060             self._write_num_leases(f, len(leases))
4061hunk ./src/allmydata/storage/backends/das/core.py 344
4062             f.close()
4063         space_freed = self.LEASE_SIZE * num_leases_removed
4064         if not len(leases):
4065-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4066+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4067             self.unlink()
4068         return space_freed
4069hunk ./src/allmydata/test/test_backends.py 129
4070     @mock.patch('time.time')
4071     def test_write_share(self, mocktime):
4072         """ Write a new share. """
4073-
4074-        class MockShare:
4075-            def __init__(self):
4076-                self.shnum = 1
4077-               
4078-            def add_or_renew_lease(elf, lease_info):
4079-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4080-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4081-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4082-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4083-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4084-
4085-        share = MockShare()
4086-
4087         # Now begin the test.
4088 
4089         # XXX (0) ???  Fail unless something is not properly set-up?
4090hunk ./src/allmydata/test/test_backends.py 143
4091         # self.failIf(bsa)
4092 
4093         bs[0].remote_write(0, 'a')
4094-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4095+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4096         spaceint = self.s.allocated_size()
4097         self.failUnlessReallyEqual(spaceint, 1)
4098 
4099hunk ./src/allmydata/test/test_backends.py 161
4100         #self.failIf(mockrename.called, mockrename.call_args_list)
4101         #self.failIf(mockstat.called, mockstat.call_args_list)
4102 
4103+    def test_handle_incoming(self):
4104+        incomingset = self.s.backend.get_incoming('teststorage_index')
4105+        self.failUnlessReallyEqual(incomingset, set())
4106+
4107+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4108+       
4109+        incomingset = self.s.backend.get_incoming('teststorage_index')
4110+        self.failUnlessReallyEqual(incomingset, set((0,)))
4111+
4112+        bs[0].remote_close()
4113+        self.failUnlessReallyEqual(incomingset, set())
4114+
4115     @mock.patch('os.path.exists')
4116     @mock.patch('os.path.getsize')
4117     @mock.patch('__builtin__.open')
4118hunk ./src/allmydata/test/test_backends.py 223
4119         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4120 
4121 
4122-
4123 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4124     @mock.patch('time.time')
4125     @mock.patch('os.mkdir')
4126hunk ./src/allmydata/test/test_backends.py 271
4127         DASCore('teststoredir', expiration_policy)
4128 
4129         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4130+
4131}
4132[jacp14
4133wilcoxjg@gmail.com**20110712061211
4134 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4135] {
4136hunk ./src/allmydata/storage/backends/das/core.py 95
4137             # XXX I'd like to make this more specific. If there are no shares at all.
4138             return set()
4139             
4140-    def get_shares(self, storage_index):
4141+    def get_shares(self, storageindex):
4142         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
4143hunk ./src/allmydata/storage/backends/das/core.py 97
4144-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4145+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4146         try:
4147             for f in os.listdir(finalstoragedir):
4148                 if NUM_RE.match(f):
4149hunk ./src/allmydata/storage/backends/das/core.py 102
4150                     filename = os.path.join(finalstoragedir, f)
4151-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4152+                    yield ImmutableShare(filename, storageindex, f)
4153         except OSError:
4154             # Commonly caused by there being no shares at all.
4155             pass
4156hunk ./src/allmydata/storage/backends/das/core.py 115
4157     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4158         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4159         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4160-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4161+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4162         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4163         return bw
4164 
4165hunk ./src/allmydata/storage/backends/das/core.py 155
4166     LEASE_SIZE = struct.calcsize(">L32s32sL")
4167     sharetype = "immutable"
4168 
4169-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4170+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4171         """ If max_size is not None then I won't allow more than
4172         max_size to be written to me. If create=True then max_size
4173         must not be None. """
4174hunk ./src/allmydata/storage/backends/das/core.py 160
4175         precondition((max_size is not None) or (not create), max_size, create)
4176+        self.storageindex = storageindex
4177         self._max_size = max_size
4178         self.incominghome = incominghome
4179         self.finalhome = finalhome
4180hunk ./src/allmydata/storage/backends/das/core.py 164
4181+        self.shnum = shnum
4182         if create:
4183             # touch the file, so later callers will see that we're working on
4184             # it. Also construct the metadata.
4185hunk ./src/allmydata/storage/backends/das/core.py 212
4186             # their children to know when they should do the rmdir. This
4187             # approach is simpler, but relies on os.rmdir refusing to delete
4188             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4189+            #print "os.path.dirname(self.incominghome): "
4190+            #print os.path.dirname(self.incominghome)
4191             os.rmdir(os.path.dirname(self.incominghome))
4192             # we also delete the grandparent (prefix) directory, .../ab ,
4193             # again to avoid leaving directories lying around. This might
4194hunk ./src/allmydata/storage/immutable.py 93
4195     def __init__(self, ss, share):
4196         self.ss = ss
4197         self._share_file = share
4198-        self.storage_index = share.storage_index
4199+        self.storageindex = share.storageindex
4200         self.shnum = share.shnum
4201 
4202     def __repr__(self):
4203hunk ./src/allmydata/storage/immutable.py 98
4204         return "<%s %s %s>" % (self.__class__.__name__,
4205-                               base32.b2a_l(self.storage_index[:8], 60),
4206+                               base32.b2a_l(self.storageindex[:8], 60),
4207                                self.shnum)
4208 
4209     def remote_read(self, offset, length):
4210hunk ./src/allmydata/storage/immutable.py 110
4211 
4212     def remote_advise_corrupt_share(self, reason):
4213         return self.ss.remote_advise_corrupt_share("immutable",
4214-                                                   self.storage_index,
4215+                                                   self.storageindex,
4216                                                    self.shnum,
4217                                                    reason)
4218hunk ./src/allmydata/test/test_backends.py 20
4219 # The following share file contents was generated with
4220 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4221 # with share data == 'a'.
4222-renew_secret  = 'x'*32
4223-cancel_secret = 'y'*32
4224-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4225-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4226+shareversionnumber = '\x00\x00\x00\x01'
4227+sharedatalength = '\x00\x00\x00\x01'
4228+numberofleases = '\x00\x00\x00\x01'
4229+shareinputdata = 'a'
4230+ownernumber = '\x00\x00\x00\x00'
4231+renewsecret  = 'x'*32
4232+cancelsecret = 'y'*32
4233+expirationtime = '\x00(\xde\x80'
4234+nextlease = ''
4235+containerdata = shareversionnumber + sharedatalength + numberofleases
4236+client_data = shareinputdata + ownernumber + renewsecret + \
4237+    cancelsecret + expirationtime + nextlease
4238+share_data = containerdata + client_data
4239+
4240 
4241 testnodeid = 'testnodeidxxxxxxxxxx'
4242 tempdir = 'teststoredir'
4243hunk ./src/allmydata/test/test_backends.py 52
4244 
4245 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4246     def setUp(self):
4247-        self.s = StorageServer(testnodeid, backend=NullCore())
4248+        self.ss = StorageServer(testnodeid, backend=NullCore())
4249 
4250     @mock.patch('os.mkdir')
4251     @mock.patch('__builtin__.open')
4252hunk ./src/allmydata/test/test_backends.py 62
4253         """ Write a new share. """
4254 
4255         # Now begin the test.
4256-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4257+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4258         bs[0].remote_write(0, 'a')
4259         self.failIf(mockisdir.called)
4260         self.failIf(mocklistdir.called)
4261hunk ./src/allmydata/test/test_backends.py 133
4262                 _assert(False, "The tester code doesn't recognize this case.") 
4263 
4264         mockopen.side_effect = call_open
4265-        testbackend = DASCore(tempdir, expiration_policy)
4266-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4267+        self.backend = DASCore(tempdir, expiration_policy)
4268+        self.ss = StorageServer(testnodeid, self.backend)
4269+        self.ssinf = StorageServer(testnodeid, self.backend)
4270 
4271     @mock.patch('time.time')
4272     def test_write_share(self, mocktime):
4273hunk ./src/allmydata/test/test_backends.py 142
4274         """ Write a new share. """
4275         # Now begin the test.
4276 
4277-        # XXX (0) ???  Fail unless something is not properly set-up?
4278-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4279+        mocktime.return_value = 0
4280+        # Inspect incoming and fail unless it's empty.
4281+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4282+        self.failUnlessReallyEqual(incomingset, set())
4283+       
4284+        # Among other things, populate incoming with the sharenum: 0.
4285+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4286 
4287hunk ./src/allmydata/test/test_backends.py 150
4288-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4289-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4290-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4291+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4292+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4293+       
4294+        # Attempt to create a second share writer with the same share.
4295+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4296 
4297hunk ./src/allmydata/test/test_backends.py 156
4298-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4299+        # Show that no sharewriter results from a remote_allocate_buckets
4300         # with the same si, until BucketWriter.remote_close() has been called.
4301hunk ./src/allmydata/test/test_backends.py 158
4302-        # self.failIf(bsa)
4303+        self.failIf(bsa)
4304 
4305hunk ./src/allmydata/test/test_backends.py 160
4306+        # Write 'a' to shnum 0. Only tested together with close and read.
4307         bs[0].remote_write(0, 'a')
4308hunk ./src/allmydata/test/test_backends.py 162
4309-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4310-        spaceint = self.s.allocated_size()
4311+
4312+        # Test allocated size.
4313+        spaceint = self.ss.allocated_size()
4314         self.failUnlessReallyEqual(spaceint, 1)
4315 
4316         # XXX (3) Inspect final and fail unless there's nothing there.
4317hunk ./src/allmydata/test/test_backends.py 168
4318+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4319         bs[0].remote_close()
4320         # XXX (4a) Inspect final and fail unless share 0 is there.
4321hunk ./src/allmydata/test/test_backends.py 171
4322+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4323+        #contents = sharesinfinal[0].read_share_data(0,999)
4324+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4325         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4326 
4327         # What happens when there's not enough space for the client's request?
4328hunk ./src/allmydata/test/test_backends.py 177
4329-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4330+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4331 
4332         # Now test the allocated_size method.
4333         # self.failIf(mockexists.called, mockexists.call_args_list)
4334hunk ./src/allmydata/test/test_backends.py 185
4335         #self.failIf(mockrename.called, mockrename.call_args_list)
4336         #self.failIf(mockstat.called, mockstat.call_args_list)
4337 
4338-    def test_handle_incoming(self):
4339-        incomingset = self.s.backend.get_incoming('teststorage_index')
4340-        self.failUnlessReallyEqual(incomingset, set())
4341-
4342-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4343-       
4344-        incomingset = self.s.backend.get_incoming('teststorage_index')
4345-        self.failUnlessReallyEqual(incomingset, set((0,)))
4346-
4347-        bs[0].remote_close()
4348-        self.failUnlessReallyEqual(incomingset, set())
4349-
4350     @mock.patch('os.path.exists')
4351     @mock.patch('os.path.getsize')
4352     @mock.patch('__builtin__.open')
4353hunk ./src/allmydata/test/test_backends.py 208
4354             self.failUnless('r' in mode, mode)
4355             self.failUnless('b' in mode, mode)
4356 
4357-            return StringIO(share_file_data)
4358+            return StringIO(share_data)
4359         mockopen.side_effect = call_open
4360 
4361hunk ./src/allmydata/test/test_backends.py 211
4362-        datalen = len(share_file_data)
4363+        datalen = len(share_data)
4364         def call_getsize(fname):
4365             self.failUnlessReallyEqual(fname, sharefname)
4366             return datalen
4367hunk ./src/allmydata/test/test_backends.py 223
4368         mockexists.side_effect = call_exists
4369 
4370         # Now begin the test.
4371-        bs = self.s.remote_get_buckets('teststorage_index')
4372+        bs = self.ss.remote_get_buckets('teststorage_index')
4373 
4374         self.failUnlessEqual(len(bs), 1)
4375hunk ./src/allmydata/test/test_backends.py 226
4376-        b = bs[0]
4377+        b = bs['0']
4378         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4379hunk ./src/allmydata/test/test_backends.py 228
4380-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4381+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4382         # If you try to read past the end you get the as much data as is there.
4383hunk ./src/allmydata/test/test_backends.py 230
4384-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4385+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4386         # If you start reading past the end of the file you get the empty string.
4387         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4388 
4389}
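A worked check of the share_data constants introduced at the top of test_backends.py in this patch: containerdata is the 12-byte header (version, data length, lease count), and client_data is 1 + 4 + 32 + 32 + 4 = 73 bytes, which is why the post-close omnibus check reads 73 bytes with read_share_data(0, 73). The expirationtime bytes also match mocktime() = 0 plus the 31-day lease duration asserted earlier:

    import struct

    containerdata_len = 4 + 4 + 4          # shareversionnumber, sharedatalength, numberofleases
    client_data_len = 1 + 4 + 32 + 32 + 4  # 'a', ownernumber, renewsecret, cancelsecret, expirationtime
    assert containerdata_len == 12
    assert client_data_len == 73
    assert struct.pack(">L", 31*24*60*60) == '\x00(\xde\x80'   # expirationtime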
4390[jacp14 or so
4391wilcoxjg@gmail.com**20110713060346
4392 Ignore-this: 7026810f60879d65b525d450e43ff87a
4393] {
4394hunk ./src/allmydata/storage/backends/das/core.py 102
4395             for f in os.listdir(finalstoragedir):
4396                 if NUM_RE.match(f):
4397                     filename = os.path.join(finalstoragedir, f)
4398-                    yield ImmutableShare(filename, storageindex, f)
4399+                    yield ImmutableShare(filename, storageindex, int(f))
4400         except OSError:
4401             # Commonly caused by there being no shares at all.
4402             pass
4403hunk ./src/allmydata/storage/backends/null/core.py 25
4404     def set_storage_server(self, ss):
4405         self.ss = ss
4406 
4407+    def get_incoming(self, storageindex):
4408+        return set()
4409+
4410 class ImmutableShare:
4411     sharetype = "immutable"
4412 
4413hunk ./src/allmydata/storage/immutable.py 19
4414 
4415     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4416         self.ss = ss
4417-        self._max_size = max_size # don't allow the client to write more than this
4418+        self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
4419+
4420         self._canary = canary
4421         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4422         self.closed = False
4423hunk ./src/allmydata/test/test_backends.py 135
4424         mockopen.side_effect = call_open
4425         self.backend = DASCore(tempdir, expiration_policy)
4426         self.ss = StorageServer(testnodeid, self.backend)
4427-        self.ssinf = StorageServer(testnodeid, self.backend)
4428+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4429+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4430 
4431     @mock.patch('time.time')
4432     def test_write_share(self, mocktime):
4433hunk ./src/allmydata/test/test_backends.py 161
4434         # with the same si, until BucketWriter.remote_close() has been called.
4435         self.failIf(bsa)
4436 
4437-        # Write 'a' to shnum 0. Only tested together with close and read.
4438-        bs[0].remote_write(0, 'a')
4439-
4440         # Test allocated size.
4441         spaceint = self.ss.allocated_size()
4442         self.failUnlessReallyEqual(spaceint, 1)
4443hunk ./src/allmydata/test/test_backends.py 165
4444 
4445-        # XXX (3) Inspect final and fail unless there's nothing there.
4446+        # Write 'a' to shnum 0. Only tested together with close and read.
4447+        bs[0].remote_write(0, 'a')
4448+       
4449+        # Preclose: Inspect final, failUnless nothing there.
4450         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4451         bs[0].remote_close()
4452hunk ./src/allmydata/test/test_backends.py 171
4453-        # XXX (4a) Inspect final and fail unless share 0 is there.
4454-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4455-        #contents = sharesinfinal[0].read_share_data(0,999)
4456-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4457-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4458 
4459hunk ./src/allmydata/test/test_backends.py 172
4460-        # What happens when there's not enough space for the client's request?
4461-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4462+        # Postclose: (Omnibus) failUnless written data is in final.
4463+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4464+        contents = sharesinfinal[0].read_share_data(0,73)
4465+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4466 
4467hunk ./src/allmydata/test/test_backends.py 177
4468-        # Now test the allocated_size method.
4469-        # self.failIf(mockexists.called, mockexists.call_args_list)
4470-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4471-        #self.failIf(mockrename.called, mockrename.call_args_list)
4472-        #self.failIf(mockstat.called, mockstat.call_args_list)
4473+        # Cover interior of for share in get_shares loop.
4474+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4475+       
4476+    @mock.patch('time.time')
4477+    @mock.patch('allmydata.util.fileutil.get_available_space')
4478+    def test_out_of_space(self, mockget_available_space, mocktime):
4479+        mocktime.return_value = 0
4480+       
4481+        def call_get_available_space(dir, reserve):
4482+            return 0
4483+
4484+        mockget_available_space.side_effect = call_get_available_space
4485+       
4486+       
4487+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4488 
4489     @mock.patch('os.path.exists')
4490     @mock.patch('os.path.getsize')
4491hunk ./src/allmydata/test/test_backends.py 234
4492         bs = self.ss.remote_get_buckets('teststorage_index')
4493 
4494         self.failUnlessEqual(len(bs), 1)
4495-        b = bs['0']
4496+        b = bs[0]
4497         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4498         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4499         # If you try to read past the end you get the as much data as is there.
4500}
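test_out_of_space above mocks fileutil.get_available_space to return 0 and runs against a second backend constructed with reserved_space = 1. A hedged sketch of the space rule being exercised, assuming get_available_space reports the filesystem's free space minus the reservation (the readonly short-circuit is the one added to DASCore.get_available_space earlier in this bundle):

    def advertised_space(readonly, disk_free, reserved_space):
        if readonly:
            return 0
        return max(0, disk_free - reserved_space)

    # With the mock reporting 0 available bytes, the ssmallback server has no
    # room for even the 1-byte allocation requested in the test.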
4501[temporary work-in-progress patch to be unrecorded
4502zooko@zooko.com**20110714003008
4503 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4504 tidy up a few tests, work done in pair-programming with Zancas
4505] {
4506hunk ./src/allmydata/storage/backends/das/core.py 65
4507         self._clean_incomplete()
4508 
4509     def _clean_incomplete(self):
4510-        fileutil.rm_dir(self.incomingdir)
4511+        fileutil.rmtree(self.incomingdir)
4512         fileutil.make_dirs(self.incomingdir)
4513 
4514     def _setup_corruption_advisory(self):
4515hunk ./src/allmydata/storage/immutable.py 1
4516-import os, stat, struct, time
4517+import os, time
4518 
4519 from foolscap.api import Referenceable
4520 
4521hunk ./src/allmydata/storage/server.py 1
4522-import os, re, weakref, struct, time
4523+import os, weakref, struct, time
4524 
4525 from foolscap.api import Referenceable
4526 from twisted.application import service
4527hunk ./src/allmydata/storage/server.py 7
4528 
4529 from zope.interface import implements
4530-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4531+from allmydata.interfaces import RIStorageServer, IStatsProducer
4532 from allmydata.util import fileutil, idlib, log, time_format
4533 import allmydata # for __full_version__
4534 
4535hunk ./src/allmydata/storage/server.py 313
4536         self.add_latency("get", time.time() - start)
4537         return bucketreaders
4538 
4539-    def remote_get_incoming(self, storageindex):
4540-        incoming_share_set = self.backend.get_incoming(storageindex)
4541-        return incoming_share_set
4542-
4543     def get_leases(self, storageindex):
4544         """Provide an iterator that yields all of the leases attached to this
4545         bucket. Each lease is returned as a LeaseInfo instance.
4546hunk ./src/allmydata/test/test_backends.py 3
4547 from twisted.trial import unittest
4548 
4549+from twisted.python.filepath import FilePath
4550+
4551 from StringIO import StringIO
4552 
4553 from allmydata.test.common_util import ReallyEqualMixin
4554hunk ./src/allmydata/test/test_backends.py 38
4555 
4556 
4557 testnodeid = 'testnodeidxxxxxxxxxx'
4558-tempdir = 'teststoredir'
4559-basedir = os.path.join(tempdir, 'shares')
4560+storedir = 'teststoredir'
4561+storedirfp = FilePath(storedir)
4562+basedir = os.path.join(storedir, 'shares')
4563 baseincdir = os.path.join(basedir, 'incoming')
4564 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4565 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4566hunk ./src/allmydata/test/test_backends.py 53
4567                      'cutoff_date' : None,
4568                      'sharetypes' : None}
4569 
4570-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4571+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4572+    """ NullBackend is just for testing and executable documentation, so
4573+    this test is actually a test of StorageServer in which we're using
4574+    NullBackend as helper code for the test, rather than a test of
4575+    NullBackend. """
4576     def setUp(self):
4577         self.ss = StorageServer(testnodeid, backend=NullCore())
4578 
4579hunk ./src/allmydata/test/test_backends.py 62
4580     @mock.patch('os.mkdir')
4581+
4582     @mock.patch('__builtin__.open')
4583     @mock.patch('os.listdir')
4584     @mock.patch('os.path.isdir')
4585hunk ./src/allmydata/test/test_backends.py 69
4586     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4587         """ Write a new share. """
4588 
4589-        # Now begin the test.
4590         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4591         bs[0].remote_write(0, 'a')
4592         self.failIf(mockisdir.called)
4593hunk ./src/allmydata/test/test_backends.py 83
4594     @mock.patch('os.listdir')
4595     @mock.patch('os.path.isdir')
4596     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4597-        """ This tests whether a server instance can be constructed
4598-        with a filesystem backend. To pass the test, it has to use the
4599-        filesystem in only the prescribed ways. """
4600+        """ This tests whether a server instance can be constructed with a
4601+        filesystem backend. To pass the test, it mustn't use the filesystem
4602+        outside of its configured storedir. """
4603 
4604         def call_open(fname, mode):
4605hunk ./src/allmydata/test/test_backends.py 88
4606-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4607-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4608-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4609-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4610-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4611+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4612+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4613+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4614+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4615+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4616                 return StringIO()
4617             else:
4618hunk ./src/allmydata/test/test_backends.py 95
4619-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4620+                fnamefp = FilePath(fname)
4621+                self.failUnless(storedirfp in fnamefp.parents(),
4622+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4623         mockopen.side_effect = call_open
4624 
4625         def call_isdir(fname):
4626hunk ./src/allmydata/test/test_backends.py 101
4627-            if fname == os.path.join(tempdir,'shares'):
4628+            if fname == os.path.join(storedir, 'shares'):
4629                 return True
4630hunk ./src/allmydata/test/test_backends.py 103
4631-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4632+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4633                 return True
4634             else:
4635                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4636hunk ./src/allmydata/test/test_backends.py 109
4637         mockisdir.side_effect = call_isdir
4638 
4639+        mocklistdir.return_value = []
4640+
4641         def call_mkdir(fname, mode):
4642hunk ./src/allmydata/test/test_backends.py 112
4643-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4644             self.failUnlessEqual(0777, mode)
4645hunk ./src/allmydata/test/test_backends.py 113
4646-            if fname == tempdir:
4647-                return None
4648-            elif fname == os.path.join(tempdir,'shares'):
4649-                return None
4650-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4651-                return None
4652-            else:
4653-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4654+            self.failUnlessIn(fname,
4655+                              [storedir,
4656+                               os.path.join(storedir, 'shares'),
4657+                               os.path.join(storedir, 'shares', 'incoming')],
4658+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4659         mockmkdir.side_effect = call_mkdir
4660 
4661         # Now begin the test.
4662hunk ./src/allmydata/test/test_backends.py 121
4663-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4664+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4665 
4666         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4667 
4668hunk ./src/allmydata/test/test_backends.py 126
4669 
4670-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4671+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4672+    """ This tests both the StorageServer xyz """
4673     @mock.patch('__builtin__.open')
4674     def setUp(self, mockopen):
4675         def call_open(fname, mode):
4676hunk ./src/allmydata/test/test_backends.py 131
4677-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4678-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4679-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4680-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4681-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4682+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4683+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4684+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4685+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4686+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4687                 return StringIO()
4688             else:
4689                 _assert(False, "The tester code doesn't recognize this case.") 
4690hunk ./src/allmydata/test/test_backends.py 141
4691 
4692         mockopen.side_effect = call_open
4693-        self.backend = DASCore(tempdir, expiration_policy)
4694+        self.backend = DASCore(storedir, expiration_policy)
4695         self.ss = StorageServer(testnodeid, self.backend)
4696hunk ./src/allmydata/test/test_backends.py 143
4697-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4698+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4699         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4700 
4701     @mock.patch('time.time')
4702hunk ./src/allmydata/test/test_backends.py 147
4703-    def test_write_share(self, mocktime):
4704-        """ Write a new share. """
4705-        # Now begin the test.
4706+    def test_write_and_read_share(self, mocktime):
4707+        """
4708+        Write a new share, read it, and test the server's (and FS backend's)
4709+        handling of simultaneous and successive attempts to write the same
4710+        share.
4711+        """
4712 
4713         mocktime.return_value = 0
4714         # Inspect incoming and fail unless it's empty.
4715hunk ./src/allmydata/test/test_backends.py 159
4716         incomingset = self.ss.backend.get_incoming('teststorage_index')
4717         self.failUnlessReallyEqual(incomingset, set())
4718         
4719-        # Among other things, populate incoming with the sharenum: 0.
4720+        # Populate incoming with the sharenum: 0.
4721         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4722 
4723         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4724hunk ./src/allmydata/test/test_backends.py 163
4725-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4726+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4727         
4728hunk ./src/allmydata/test/test_backends.py 165
4729-        # Attempt to create a second share writer with the same share.
4730+        # Attempt to create a second share writer with the same sharenum.
4731         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4732 
4733         # Show that no sharewriter results from a remote_allocate_buckets
4734hunk ./src/allmydata/test/test_backends.py 169
4735-        # with the same si, until BucketWriter.remote_close() has been called.
4736+        # with the same si and sharenum, until BucketWriter.remote_close()
4737+        # has been called.
4738         self.failIf(bsa)
4739 
4740         # Test allocated size.
4741hunk ./src/allmydata/test/test_backends.py 187
4742         # Postclose: (Omnibus) failUnless written data is in final.
4743         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4744         contents = sharesinfinal[0].read_share_data(0,73)
4745-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4746+        self.failUnlessReallyEqual(contents, client_data)
4747 
4748hunk ./src/allmydata/test/test_backends.py 189
4749-        # Cover interior of for share in get_shares loop.
4750-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4751+        # Exercise the case that the share we're asking to allocate is
4752+        # already (completely) uploaded.
4753+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4754         
4755     @mock.patch('time.time')
4756     @mock.patch('allmydata.util.fileutil.get_available_space')
4757hunk ./src/allmydata/test/test_backends.py 210
4758     @mock.patch('os.path.getsize')
4759     @mock.patch('__builtin__.open')
4760     @mock.patch('os.listdir')
4761-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4762+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4763         """ This tests whether the code correctly finds and reads
4764         shares written out by old (Tahoe-LAFS <= v1.8.2)
4765         servers. There is a similar test in test_download, but that one
4766hunk ./src/allmydata/test/test_backends.py 219
4767         StorageServer object. """
4768 
4769         def call_listdir(dirname):
4770-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4771+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4772             return ['0']
4773 
4774         mocklistdir.side_effect = call_listdir
4775hunk ./src/allmydata/test/test_backends.py 226
4776 
4777         def call_open(fname, mode):
4778             self.failUnlessReallyEqual(fname, sharefname)
4779-            self.failUnless('r' in mode, mode)
4780+            self.failUnlessEqual(mode[0], 'r', mode)
4781             self.failUnless('b' in mode, mode)
4782 
4783             return StringIO(share_data)
4784hunk ./src/allmydata/test/test_backends.py 268
4785         filesystem in only the prescribed ways. """
4786 
4787         def call_open(fname, mode):
4788-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4789-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4790-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4791-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4792-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4793+            if fname == os.path.join(storedir,'bucket_counter.state'):
4794+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4795+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4796+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4797+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4798                 return StringIO()
4799             else:
4800                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4801hunk ./src/allmydata/test/test_backends.py 279
4802         mockopen.side_effect = call_open
4803 
4804         def call_isdir(fname):
4805-            if fname == os.path.join(tempdir,'shares'):
4806+            if fname == os.path.join(storedir,'shares'):
4807                 return True
4808hunk ./src/allmydata/test/test_backends.py 281
4809-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4810+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4811                 return True
4812             else:
4813                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4814hunk ./src/allmydata/test/test_backends.py 290
4815         def call_mkdir(fname, mode):
4816             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4817             self.failUnlessEqual(0777, mode)
4818-            if fname == tempdir:
4819+            if fname == storedir:
4820                 return None
4821hunk ./src/allmydata/test/test_backends.py 292
4822-            elif fname == os.path.join(tempdir,'shares'):
4823+            elif fname == os.path.join(storedir,'shares'):
4824                 return None
4825hunk ./src/allmydata/test/test_backends.py 294
4826-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4827+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4828                 return None
4829             else:
4830                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4831hunk ./src/allmydata/util/fileutil.py 5
4832 Futz with files like a pro.
4833 """
4834 
4835-import sys, exceptions, os, stat, tempfile, time, binascii
4836+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4837 
4838 from twisted.python import log
4839 
4840hunk ./src/allmydata/util/fileutil.py 186
4841             raise tx
4842         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4843 
4844-def rm_dir(dirname):
4845+def rmtree(dirname):
4846     """
4847     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4848     already gone, do nothing and return without raising an exception.  If this
4849hunk ./src/allmydata/util/fileutil.py 205
4850             else:
4851                 remove(fullname)
4852         os.rmdir(dirname)
4853-    except Exception, le:
4854-        # Ignore "No such file or directory"
4855-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4856+    except EnvironmentError, le:
4857+        # Ignore "No such file or directory", collect any other exception.
4858+        if le.args[0] not in (errno.ENOENT, 3):
4859             excs.append(le)
4860hunk ./src/allmydata/util/fileutil.py 209
4861+    except Exception, le:
4862+        excs.append(le)
4863 
4864     # Okay, now we've recursively removed everything, ignoring any "No
4865     # such file or directory" errors, and collecting any other errors.
4866hunk ./src/allmydata/util/fileutil.py 222
4867             raise OSError, "Failed to remove dir for unknown reason."
4868         raise OSError, excs
4869 
4870+def rm_dir(dirname):
4871+    # Renamed to be like shutil.rmtree and unlike rmdir.
4872+    return rmtree(dirname)
4873 
4874 def remove_if_possible(f):
4875     try:
4876}
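
The fileutil hunk just above renames rm_dir() to rmtree() and reworks its error handling so that a directory which is already missing counts as success, while any other failure is collected and re-raised. The following is a minimal standalone sketch of that idempotent-removal idiom, not taken from the patch itself: it leans on shutil.rmtree (which the patch's own rmtree() deliberately reimplements for thread safety), and the function name is invented for illustration.

import errno, shutil

def remove_tree_if_present(dirname):
    """Recursively delete dirname; a dirname that does not exist is not an error."""
    try:
        shutil.rmtree(dirname)
    except EnvironmentError, e:
        # ENOENT means the tree is already gone, which is the end state we
        # wanted; anything else is a real failure and propagates.
        if e.errno != errno.ENOENT:
            raise

remove_tree_if_present('teststoredir')   # removes the tree if it exists
remove_tree_if_present('teststoredir')   # second call is a harmless no-op
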
4877[work in progress intended to be unrecorded and never committed to trunk
4878zooko@zooko.com**20110714212139
4879 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4880 switch from os.path.join to filepath
4881 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4882 
4883] {
4884hunk ./src/allmydata/test/test_backends.py 3
4885 from twisted.trial import unittest
4886 
4887-from twisted.path.filepath import FilePath
4888+from twisted.python.filepath import FilePath
4889 
4890 from StringIO import StringIO
4891 
4892hunk ./src/allmydata/test/test_backends.py 10
4893 from allmydata.test.common_util import ReallyEqualMixin
4894 from allmydata.util.assertutil import _assert
4895 
4896-import mock, os
4897+import mock
4898 
4899 # This is the code that we're going to be testing.
4900 from allmydata.storage.server import StorageServer
4901hunk ./src/allmydata/test/test_backends.py 25
4902 shareversionnumber = '\x00\x00\x00\x01'
4903 sharedatalength = '\x00\x00\x00\x01'
4904 numberofleases = '\x00\x00\x00\x01'
4905+
4906 shareinputdata = 'a'
4907 ownernumber = '\x00\x00\x00\x00'
4908 renewsecret  = 'x'*32
4909hunk ./src/allmydata/test/test_backends.py 39
4910 
4911 
4912 testnodeid = 'testnodeidxxxxxxxxxx'
4913-storedir = 'teststoredir'
4914-storedirfp = FilePath(storedir)
4915-basedir = os.path.join(storedir, 'shares')
4916-baseincdir = os.path.join(basedir, 'incoming')
4917-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4918-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4919-shareincomingname = os.path.join(sharedirincomingname, '0')
4920-sharefname = os.path.join(sharedirfinalname, '0')
4921+
4922+class TestFilesMixin(unittest.TestCase):
4923+    def setUp(self):
4924+        self.storedir = FilePath('teststoredir')
4925+        self.basedir = self.storedir.child('shares')
4926+        self.baseincdir = self.basedir.child('incoming')
4927+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4928+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4929+        self.shareincomingname = self.sharedirincomingname.child('0')
4930+        self.sharefname = self.sharedirfinalname.child('0')
4931+
4932+    def call_open(self, fname, mode):
4933+        fnamefp = FilePath(fname)
4934+        if fnamefp == self.storedir.child('bucket_counter.state'):
4935+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4936+        elif fnamefp == self.storedir.child('lease_checker.state'):
4937+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4938+        elif fnamefp == self.storedir.child('lease_checker.history'):
4939+            return StringIO()
4940+        else:
4941+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4942+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4943+
4944+    def call_isdir(self, fname):
4945+        fnamefp = FilePath(fname)
4946+        if fnamefp == self.storedir.child('shares'):
4947+            return True
4948+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4949+            return True
4950+        else:
4951+            self.failUnless(self.storedir in fnamefp.parents(),
4952+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4953+
4954+    def call_mkdir(self, fname, mode):
4955+        self.failUnlessEqual(0777, mode)
4956+        fnamefp = FilePath(fname)
4957+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4958+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4959+
4960+
4961+    @mock.patch('os.mkdir')
4962+    @mock.patch('__builtin__.open')
4963+    @mock.patch('os.listdir')
4964+    @mock.patch('os.path.isdir')
4965+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4966+        mocklistdir.return_value = []
4967+        mockmkdir.side_effect = self.call_mkdir
4968+        mockisdir.side_effect = self.call_isdir
4969+        mockopen.side_effect = self.call_open
4970+        mocklistdir.return_value = []
4971+       
4972+        test_func()
4973+       
4974+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4975 
4976 expiration_policy = {'enabled' : False,
4977                      'mode' : 'age',
4978hunk ./src/allmydata/test/test_backends.py 123
4979         self.failIf(mockopen.called)
4980         self.failIf(mockmkdir.called)
4981 
4982-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4983-    @mock.patch('time.time')
4984-    @mock.patch('os.mkdir')
4985-    @mock.patch('__builtin__.open')
4986-    @mock.patch('os.listdir')
4987-    @mock.patch('os.path.isdir')
4988-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4989+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
4990+    def test_create_server_fs_backend(self):
4991         """ This tests whether a server instance can be constructed with a
4992         filesystem backend. To pass the test, it mustn't use the filesystem
4993         outside of its configured storedir. """
4994hunk ./src/allmydata/test/test_backends.py 129
4995 
4996-        def call_open(fname, mode):
4997-            if fname == os.path.join(storedir, 'bucket_counter.state'):
4998-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4999-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5000-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5001-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5002-                return StringIO()
5003-            else:
5004-                fnamefp = FilePath(fname)
5005-                self.failUnless(storedirfp in fnamefp.parents(),
5006-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5007-        mockopen.side_effect = call_open
5008+        def _f():
5009+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5010 
5011hunk ./src/allmydata/test/test_backends.py 132
5012-        def call_isdir(fname):
5013-            if fname == os.path.join(storedir, 'shares'):
5014-                return True
5015-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5016-                return True
5017-            else:
5018-                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
5019-        mockisdir.side_effect = call_isdir
5020-
5021-        mocklistdir.return_value = []
5022-
5023-        def call_mkdir(fname, mode):
5024-            self.failUnlessEqual(0777, mode)
5025-            self.failUnlessIn(fname,
5026-                              [storedir,
5027-                               os.path.join(storedir, 'shares'),
5028-                               os.path.join(storedir, 'shares', 'incoming')],
5029-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5030-        mockmkdir.side_effect = call_mkdir
5031-
5032-        # Now begin the test.
5033-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5034-
5035-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5036+        self._help_test_stay_in_your_subtree(_f)
5037 
5038 
5039 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5040}
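
The patch recorded above folds the per-test mock wiring into TestFilesMixin, whose call_open/call_isdir/call_mkdir fakes all enforce a single rule: any path the server touches must be the configured store directory itself or live somewhere underneath it, checked with FilePath equality plus parents(). Here is a small self-contained sketch of that containment test; the helper name and the sample paths are made up for illustration.

from twisted.python.filepath import FilePath

def is_inside_storedir(storedir, fname):
    """Return True if fname is storedir itself or lives somewhere under it."""
    fnamefp = FilePath(fname)
    return storedir == fnamefp or storedir in fnamefp.parents()

storedir = FilePath('teststoredir')
assert is_inside_storedir(storedir, 'teststoredir/shares/incoming')
assert not is_inside_storedir(storedir, '/etc/passwd')
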
5041[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5042zooko@zooko.com**20110715191500
5043 Ignore-this: af33336789041800761e80510ea2f583
5044 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete converstion to filepath. The latter has still a lot of work to go.
5045] {
5046hunk ./src/allmydata/storage/backends/das/core.py 59
5047                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5048                         umid="0wZ27w", level=log.UNUSUAL)
5049 
5050-        self.sharedir = os.path.join(self.storedir, "shares")
5051-        fileutil.make_dirs(self.sharedir)
5052-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5053+        self.sharedir = self.storedir.child("shares")
5054+        fileutil.fp_make_dirs(self.sharedir)
5055+        self.incomingdir = self.sharedir.child('incoming')
5056         self._clean_incomplete()
5057 
5058     def _clean_incomplete(self):
5059hunk ./src/allmydata/storage/backends/das/core.py 65
5060-        fileutil.rmtree(self.incomingdir)
5061-        fileutil.make_dirs(self.incomingdir)
5062+        fileutil.fp_remove(self.incomingdir)
5063+        fileutil.fp_make_dirs(self.incomingdir)
5064 
5065     def _setup_corruption_advisory(self):
5066         # we don't actually create the corruption-advisory dir until necessary
5067hunk ./src/allmydata/storage/backends/das/core.py 70
5068-        self.corruption_advisory_dir = os.path.join(self.storedir,
5069-                                                    "corruption-advisories")
5070+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5071 
5072     def _setup_bucket_counter(self):
5073hunk ./src/allmydata/storage/backends/das/core.py 73
5074-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5075+        statefname = self.storedir.child("bucket_counter.state")
5076         self.bucket_counter = FSBucketCountingCrawler(statefname)
5077         self.bucket_counter.setServiceParent(self)
5078 
5079hunk ./src/allmydata/storage/backends/das/core.py 78
5080     def _setup_lease_checkerf(self, expiration_policy):
5081-        statefile = os.path.join(self.storedir, "lease_checker.state")
5082-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5083+        statefile = self.storedir.child("lease_checker.state")
5084+        historyfile = self.storedir.child("lease_checker.history")
5085         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5086         self.lease_checker.setServiceParent(self)
5087 
5088hunk ./src/allmydata/storage/backends/das/core.py 83
5089-    def get_incoming(self, storageindex):
5090+    def get_incoming_shnums(self, storageindex):
5091         """Return the set of incoming shnums."""
5092         try:
5093hunk ./src/allmydata/storage/backends/das/core.py 86
5094-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5095-            incominglist = os.listdir(incomingsharesdir)
5096-            incomingshnums = [int(x) for x in incominglist]
5097-            return set(incomingshnums)
5098-        except OSError:
5099-            # XXX I'd like to make this more specific. If there are no shares at all.
5100-            return set()
5101+           
5102+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5103+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5104+            return frozenset(incomingshnums)
5105+        except UnlistableError:
5106+            # There is no shares directory at all.
5107+            return frozenset()
5108             
5109     def get_shares(self, storageindex):
5110         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5111hunk ./src/allmydata/storage/backends/das/core.py 96
5112-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5113+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5114         try:
5115hunk ./src/allmydata/storage/backends/das/core.py 98
5116-            for f in os.listdir(finalstoragedir):
5117-                if NUM_RE.match(f):
5118-                    filename = os.path.join(finalstoragedir, f)
5119-                    yield ImmutableShare(filename, storageindex, int(f))
5120-        except OSError:
5121-            # Commonly caused by there being no shares at all.
5122+            for f in finalstoragedir.listdir():
5123+                if NUM_RE.match(f.basename):
5124+                    yield ImmutableShare(f, storageindex, int(f))
5125+        except UnlistableError:
5126+            # There is no shares directory at all.
5127             pass
5128         
5129     def get_available_space(self):
5130hunk ./src/allmydata/storage/backends/das/core.py 149
5131 # then the value stored in this field will be the actual share data length
5132 # modulo 2**32.
5133 
5134-class ImmutableShare:
5135+class ImmutableShare(object):
5136     LEASE_SIZE = struct.calcsize(">L32s32sL")
5137     sharetype = "immutable"
5138 
5139hunk ./src/allmydata/storage/backends/das/core.py 166
5140         if create:
5141             # touch the file, so later callers will see that we're working on
5142             # it. Also construct the metadata.
5143-            assert not os.path.exists(self.finalhome)
5144-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5145+            assert not finalhome.exists()
5146+            fp_make_dirs(self.incominghome)
5147             f = open(self.incominghome, 'wb')
5148             # The second field -- the four-byte share data length -- is no
5149             # longer used as of Tahoe v1.3.0, but we continue to write it in
5150hunk ./src/allmydata/storage/backends/das/core.py 316
5151         except IndexError:
5152             self.add_lease(lease_info)
5153 
5154-
5155     def cancel_lease(self, cancel_secret):
5156         """Remove a lease with the given cancel_secret. If the last lease is
5157         cancelled, the file will be removed. Return the number of bytes that
5158hunk ./src/allmydata/storage/common.py 19
5159 def si_a2b(ascii_storageindex):
5160     return base32.a2b(ascii_storageindex)
5161 
5162-def storage_index_to_dir(storageindex):
5163+def storage_index_to_dir(startfp, storageindex):
5164     sia = si_b2a(storageindex)
5165     return os.path.join(sia[:2], sia)
5166hunk ./src/allmydata/storage/server.py 210
5167 
5168         # fill incoming with all shares that are incoming use a set operation
5169         # since there's no need to operate on individual pieces
5170-        incoming = self.backend.get_incoming(storageindex)
5171+        incoming = self.backend.get_incoming_shnums(storageindex)
5172 
5173         for shnum in ((sharenums - alreadygot) - incoming):
5174             if (not limited) or (remaining_space >= max_space_per_bucket):
5175hunk ./src/allmydata/test/test_backends.py 5
5176 
5177 from twisted.python.filepath import FilePath
5178 
5179+from allmydata.util.log import msg
5180+
5181 from StringIO import StringIO
5182 
5183 from allmydata.test.common_util import ReallyEqualMixin
5184hunk ./src/allmydata/test/test_backends.py 42
5185 
5186 testnodeid = 'testnodeidxxxxxxxxxx'
5187 
5188-class TestFilesMixin(unittest.TestCase):
5189-    def setUp(self):
5190-        self.storedir = FilePath('teststoredir')
5191-        self.basedir = self.storedir.child('shares')
5192-        self.baseincdir = self.basedir.child('incoming')
5193-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5194-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5195-        self.shareincomingname = self.sharedirincomingname.child('0')
5196-        self.sharefname = self.sharedirfinalname.child('0')
5197+class MockStat:
5198+    def __init__(self):
5199+        self.st_mode = None
5200 
5201hunk ./src/allmydata/test/test_backends.py 46
5202+class MockFiles(unittest.TestCase):
5203+    """ I simulate a filesystem that the code under test can use. I flag the
5204+    code under test if it reads or writes outside of its prescribed
5205+    subtree. I simulate just the parts of the filesystem that the current
5206+    implementation of DAS backend needs. """
5207     def call_open(self, fname, mode):
5208         fnamefp = FilePath(fname)
5209hunk ./src/allmydata/test/test_backends.py 53
5210+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5211+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5212+
5213         if fnamefp == self.storedir.child('bucket_counter.state'):
5214             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5215         elif fnamefp == self.storedir.child('lease_checker.state'):
5216hunk ./src/allmydata/test/test_backends.py 61
5217             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5218         elif fnamefp == self.storedir.child('lease_checker.history'):
5219+            # This is separated out from the else clause below just because
5220+            # we know this particular file is going to be used by the
5221+            # current implementation of DAS backend, and we might want to
5222+            # use this information in this test in the future...
5223             return StringIO()
5224         else:
5225hunk ./src/allmydata/test/test_backends.py 67
5226-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5227-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5228+            # Anything else you open inside your subtree appears to be an
5229+            # empty file.
5230+            return StringIO()
5231 
5232     def call_isdir(self, fname):
5233         fnamefp = FilePath(fname)
5234hunk ./src/allmydata/test/test_backends.py 73
5235-        if fnamefp == self.storedir.child('shares'):
5236+        return fnamefp.isdir()
5237+
5238+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5239+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5240+
5241+        # The first two cases are separate from the else clause below just
5242+        # because we know that the current implementation of the DAS backend
5243+        # inspects these two directories and we might want to make use of
5244+        # that information in the tests in the future...
5245+        if fnamefp == self.storedir.child('shares'):
5246             return True
5247hunk ./src/allmydata/test/test_backends.py 84
5248-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5249+        elif fnamefp == self.storedir.child('shares').child('incoming'):
5250             return True
5251         else:
5252hunk ./src/allmydata/test/test_backends.py 87
5253-            self.failUnless(self.storedir in fnamefp.parents(),
5254-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5255+            # Anything else you open inside your subtree appears to be a
5256+            # directory.
5257+            return True
5258 
5259     def call_mkdir(self, fname, mode):
5260hunk ./src/allmydata/test/test_backends.py 92
5261-        self.failUnlessEqual(0777, mode)
5262         fnamefp = FilePath(fname)
5263         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5264                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5265hunk ./src/allmydata/test/test_backends.py 95
5266+        self.failUnlessEqual(0777, mode)
5267 
5268hunk ./src/allmydata/test/test_backends.py 97
5269+    def call_listdir(self, fname):
5270+        fnamefp = FilePath(fname)
5271+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5272+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5273 
5274hunk ./src/allmydata/test/test_backends.py 102
5275-    @mock.patch('os.mkdir')
5276-    @mock.patch('__builtin__.open')
5277-    @mock.patch('os.listdir')
5278-    @mock.patch('os.path.isdir')
5279-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5280-        mocklistdir.return_value = []
5281+    def call_stat(self, fname):
5282+        fnamefp = FilePath(fname)
5283+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5284+                        "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5285+
5286+        msg("%s.call_stat(%s)" % (self, fname,))
5287+        mstat = MockStat()
5288+        mstat.st_mode = 16893 # a directory
5289+        return mstat
5290+
5291+    def setUp(self):
5292+        msg( "%s.setUp()" % (self,))
5293+        self.storedir = FilePath('teststoredir')
5294+        self.basedir = self.storedir.child('shares')
5295+        self.baseincdir = self.basedir.child('incoming')
5296+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5297+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5298+        self.shareincomingname = self.sharedirincomingname.child('0')
5299+        self.sharefname = self.sharedirfinalname.child('0')
5300+
5301+        self.mocklistdirp = mock.patch('os.listdir')
5302+        mocklistdir = self.mocklistdirp.__enter__()
5303+        mocklistdir.side_effect = self.call_listdir
5304+
5305+        self.mockmkdirp = mock.patch('os.mkdir')
5306+        mockmkdir = self.mockmkdirp.__enter__()
5307         mockmkdir.side_effect = self.call_mkdir
5308hunk ./src/allmydata/test/test_backends.py 129
5309+
5310+        self.mockisdirp = mock.patch('os.path.isdir')
5311+        mockisdir = self.mockisdirp.__enter__()
5312         mockisdir.side_effect = self.call_isdir
5313hunk ./src/allmydata/test/test_backends.py 133
5314+
5315+        self.mockopenp = mock.patch('__builtin__.open')
5316+        mockopen = self.mockopenp.__enter__()
5317         mockopen.side_effect = self.call_open
5318hunk ./src/allmydata/test/test_backends.py 137
5319-        mocklistdir.return_value = []
5320-       
5321-        test_func()
5322-       
5323-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5324+
5325+        self.mockstatp = mock.patch('os.stat')
5326+        mockstat = self.mockstatp.__enter__()
5327+        mockstat.side_effect = self.call_stat
5328+
5329+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5330+        mockfpstat = self.mockfpstatp.__enter__()
5331+        mockfpstat.side_effect = self.call_stat
5332+
5333+    def tearDown(self):
5334+        msg( "%s.tearDown()" % (self,))
5335+        self.mockfpstatp.__exit__()
5336+        self.mockstatp.__exit__()
5337+        self.mockopenp.__exit__()
5338+        self.mockisdirp.__exit__()
5339+        self.mockmkdirp.__exit__()
5340+        self.mocklistdirp.__exit__()
5341 
5342 expiration_policy = {'enabled' : False,
5343                      'mode' : 'age',
5344hunk ./src/allmydata/test/test_backends.py 184
5345         self.failIf(mockopen.called)
5346         self.failIf(mockmkdir.called)
5347 
5348-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5349+class TestServerConstruction(MockFiles, ReallyEqualMixin):
5350     def test_create_server_fs_backend(self):
5351         """ This tests whether a server instance can be constructed with a
5352         filesystem backend. To pass the test, it mustn't use the filesystem
5353hunk ./src/allmydata/test/test_backends.py 190
5354         outside of its configured storedir. """
5355 
5356-        def _f():
5357-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5358+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5359 
5360hunk ./src/allmydata/test/test_backends.py 192
5361-        self._help_test_stay_in_your_subtree(_f)
5362-
5363-
5364-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5365-    """ This tests both the StorageServer xyz """
5366-    @mock.patch('__builtin__.open')
5367-    def setUp(self, mockopen):
5368-        def call_open(fname, mode):
5369-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5370-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5371-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5372-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5373-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5374-                return StringIO()
5375-            else:
5376-                _assert(False, "The tester code doesn't recognize this case.") 
5377-
5378-        mockopen.side_effect = call_open
5379-        self.backend = DASCore(storedir, expiration_policy)
5380-        self.ss = StorageServer(testnodeid, self.backend)
5381-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
5382-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5383+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
5384+    """ This tests both the StorageServer and the DAS backend together. """
5385+    def setUp(self):
5386+        MockFiles.setUp(self)
5387+        try:
5388+            self.backend = DASCore(self.storedir, expiration_policy)
5389+            self.ss = StorageServer(testnodeid, self.backend)
5390+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
5391+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5392+        except:
5393+            MockFiles.tearDown(self)
5394+            raise
5395 
5396     @mock.patch('time.time')
5397     def test_write_and_read_share(self, mocktime):
5398hunk ./src/allmydata/util/fileutil.py 8
5399 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5400 
5401 from twisted.python import log
5402+from twisted.python.filepath import UnlistableError
5403 
5404 from pycryptopp.cipher.aes import AES
5405 
5406hunk ./src/allmydata/util/fileutil.py 187
5407             raise tx
5408         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5409 
5410+def fp_make_dirs(dirfp):
5411+    """
5412+    An idempotent version of FilePath.makedirs().  If the dir already
5413+    exists, do nothing and return without raising an exception.  If this
5414+    call creates the dir, return without raising an exception.  If there is
5415+    an error that prevents creation or if the directory gets deleted after
5416+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5417+    exists, raise an exception.
5418+    """
5419+    log.msg( "xxx 0 %s" % (dirfp,))
5420+    tx = None
5421+    try:
5422+        dirfp.makedirs()
5423+    except OSError, x:
5424+        tx = x
5425+
5426+    if not dirfp.isdir():
5427+        if tx:
5428+            raise tx
5429+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5430+
5431 def rmtree(dirname):
5432     """
5433     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5434hunk ./src/allmydata/util/fileutil.py 244
5435             raise OSError, "Failed to remove dir for unknown reason."
5436         raise OSError, excs
5437 
5438+def fp_remove(dirfp):
5439+    try:
5440+        dirfp.remove()
5441+    except UnlistableError, e:
5442+        if e.originalException.errno != errno.ENOENT:
5443+            raise
5444+
5445 def rm_dir(dirname):
5446     # Renamed to be like shutil.rmtree and unlike rmdir.
5447     return rmtree(dirname)
5448}
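
The patch above replaces the @mock.patch decorators with patcher objects that MockFiles enters in setUp() and exits in tearDown(), so every test method in its subclasses runs against the simulated filesystem. Below is a stripped-down sketch of that setUp/tearDown patching pattern; the patched target and the fake listdir behaviour are invented for illustration, and newer versions of mock also offer patcher.start()/patcher.stop() for the same job.

import mock
from twisted.trial import unittest

class PatchedListdirTest(unittest.TestCase):
    def _fake_listdir(self, dirname):
        # Pretend every directory contains exactly one share, named '0'.
        return ['0']

    def setUp(self):
        # Enter the patcher here and keep it around so tearDown can undo it;
        # this is the same __enter__/__exit__ dance MockFiles.setUp performs.
        self.listdir_patcher = mock.patch('os.listdir')
        mocked_listdir = self.listdir_patcher.__enter__()
        mocked_listdir.side_effect = self._fake_listdir

    def tearDown(self):
        self.listdir_patcher.__exit__(None, None, None)

    def test_sees_fake_share(self):
        import os
        self.failUnlessEqual(os.listdir('any/directory/at/all'), ['0'])
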
5449[another temporary patch for sharing work-in-progress
5450zooko@zooko.com**20110720055918
5451 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
5452 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
5453 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
5454 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units as much as possible...)
5455 
5456] {
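
The description above says this patch continues the filepathification, and the hunks that follow rely on one equivalence: joining segments under a base directory with os.path.join names the same location as chaining FilePath.child() calls, which is exactly the shape of the si_dir() helper introduced below. A tiny sketch of that equivalence follows; the base directory and storage-index string are made-up examples, and the abspath() on the comparison side is needed because FilePath normalises its argument to an absolute path.

import os
from twisted.python.filepath import FilePath

basedir = 'teststoredir/shares'
sia = 'orsxg5dtorxxeylhmvpws3temv4a'   # a base32 storage index, as produced by si_b2a()

with_os_path = os.path.join(os.path.abspath(basedir), sia[:2], sia)
with_filepath = FilePath(basedir).child(sia[:2]).child(sia)

# Both spellings name the same <basedir>/or/orsxg5dt... directory.
assert with_filepath.path == with_os_path
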
5457hunk ./src/allmydata/storage/backends/das/core.py 5
5458 
5459 from allmydata.interfaces import IStorageBackend
5460 from allmydata.storage.backends.base import Backend
5461-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5462+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5463 from allmydata.util.assertutil import precondition
5464 
5465 #from foolscap.api import Referenceable
5466hunk ./src/allmydata/storage/backends/das/core.py 10
5467 from twisted.application import service
5468+from twisted.python.filepath import UnlistableError
5469 
5470 from zope.interface import implements
5471 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
5472hunk ./src/allmydata/storage/backends/das/core.py 17
5473 from allmydata.util import fileutil, idlib, log, time_format
5474 import allmydata # for __full_version__
5475 
5476-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5477-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5478+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5479+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5480 from allmydata.storage.lease import LeaseInfo
5481 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5482      create_mutable_sharefile
5483hunk ./src/allmydata/storage/backends/das/core.py 41
5484 # $SHARENUM matches this regex:
5485 NUM_RE=re.compile("^[0-9]+$")
5486 
5487+def is_num(fp):
5488+    return NUM_RE.match(fp.basename())
5489+
5490 class DASCore(Backend):
5491     implements(IStorageBackend)
5492     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5493hunk ./src/allmydata/storage/backends/das/core.py 58
5494         self.storedir = storedir
5495         self.readonly = readonly
5496         self.reserved_space = int(reserved_space)
5497-        if self.reserved_space:
5498-            if self.get_available_space() is None:
5499-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5500-                        umid="0wZ27w", level=log.UNUSUAL)
5501-
5502         self.sharedir = self.storedir.child("shares")
5503         fileutil.fp_make_dirs(self.sharedir)
5504         self.incomingdir = self.sharedir.child('incoming')
5505hunk ./src/allmydata/storage/backends/das/core.py 62
5506         self._clean_incomplete()
5507+        if self.reserved_space and (self.get_available_space() is None):
5508+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5509+                    umid="0wZ27w", level=log.UNUSUAL)
5510+
5511 
5512     def _clean_incomplete(self):
5513         fileutil.fp_remove(self.incomingdir)
5514hunk ./src/allmydata/storage/backends/das/core.py 87
5515         self.lease_checker.setServiceParent(self)
5516 
5517     def get_incoming_shnums(self, storageindex):
5518-        """Return the set of incoming shnums."""
5519+        """ Return a frozenset of the shnum (as ints) of incoming shares. """
5520+        incomingdir = si_dir(self.incomingdir, storageindex)
5521         try:
5522hunk ./src/allmydata/storage/backends/das/core.py 90
5523-           
5524-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5525-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5526-            return frozenset(incomingshnums)
5527+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
5528+            shnums = [ int(fp.basename()) for fp in childfps ]
5529+            return frozenset(shnums)
5530         except UnlistableError:
5531             # There is no shares directory at all.
5532             return frozenset()
5533hunk ./src/allmydata/storage/backends/das/core.py 98
5534             
5535     def get_shares(self, storageindex):
5536-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5537+        """ Generate ImmutableShare objects for shares we have for this
5538+        storageindex. ("Shares we have" means completed ones, excluding
5539+        incoming ones.)"""
5540         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5541         try:
5542hunk ./src/allmydata/storage/backends/das/core.py 103
5543-            for f in finalstoragedir.listdir():
5544-                if NUM_RE.match(f.basename):
5545-                    yield ImmutableShare(f, storageindex, int(f))
5546+            for fp in finalstoragedir.children():
5547+                if is_num(fp):
5548+                    yield ImmutableShare(fp, storageindex)
5549         except UnlistableError:
5550             # There is no shares directory at all.
5551             pass
5552hunk ./src/allmydata/storage/backends/das/core.py 116
5553         return fileutil.get_available_space(self.storedir, self.reserved_space)
5554 
5555     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
5556-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
5557-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
5558+        finalhome = si_dir(self.sharedir, storageindex).child(str(shnum))
5559+        incominghome = si_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
5560         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
5561         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
5562         return bw
5563hunk ./src/allmydata/storage/backends/das/expirer.py 50
5564     slow_start = 360 # wait 6 minutes after startup
5565     minimum_cycle_time = 12*60*60 # not more than twice per day
5566 
5567-    def __init__(self, statefile, historyfile, expiration_policy):
5568-        self.historyfile = historyfile
5569+    def __init__(self, statefile, historyfp, expiration_policy):
5570+        self.historyfp = historyfp
5571         self.expiration_enabled = expiration_policy['enabled']
5572         self.mode = expiration_policy['mode']
5573         self.override_lease_duration = None
5574hunk ./src/allmydata/storage/backends/das/expirer.py 80
5575             self.state["cycle-to-date"].setdefault(k, so_far[k])
5576 
5577         # initialize history
5578-        if not os.path.exists(self.historyfile):
5579+        if not self.historyfp.exists():
5580             history = {} # cyclenum -> dict
5581hunk ./src/allmydata/storage/backends/das/expirer.py 82
5582-            f = open(self.historyfile, "wb")
5583-            pickle.dump(history, f)
5584-            f.close()
5585+            self.historyfp.setContent(pickle.dumps(history))
5586 
5587     def create_empty_cycle_dict(self):
5588         recovered = self.create_empty_recovered_dict()
5589hunk ./src/allmydata/storage/backends/das/expirer.py 305
5590         # copy() needs to become a deepcopy
5591         h["space-recovered"] = s["space-recovered"].copy()
5592 
5593-        history = pickle.load(open(self.historyfile, "rb"))
5594+        history = pickle.loads(self.historyfp.getContent())
5595         history[cycle] = h
5596         while len(history) > 10:
5597             oldcycles = sorted(history.keys())
5598hunk ./src/allmydata/storage/backends/das/expirer.py 310
5599             del history[oldcycles[0]]
5600-        f = open(self.historyfile, "wb")
5601-        pickle.dump(history, f)
5602-        f.close()
5603+        self.historyfp.setContent(pickle.dumps(history))
5604 
5605     def get_state(self):
5606         """In addition to the crawler state described in
5607hunk ./src/allmydata/storage/backends/das/expirer.py 379
5608         progress = self.get_progress()
5609 
5610         state = ShareCrawler.get_state(self) # does a shallow copy
5611-        history = pickle.load(open(self.historyfile, "rb"))
5612+        history = pickle.loads(self.historyfp.getContent())
5613         state["history"] = history
5614 
5615         if not progress["cycle-in-progress"]:
5616hunk ./src/allmydata/storage/common.py 19
5617 def si_a2b(ascii_storageindex):
5618     return base32.a2b(ascii_storageindex)
5619 
5620-def storage_index_to_dir(startfp, storageindex):
5621+def si_dir(startfp, storageindex):
5622     sia = si_b2a(storageindex)
5623hunk ./src/allmydata/storage/common.py 21
5624-    return os.path.join(sia[:2], sia)
5625+    return startfp.child(sia[:2]).child(sia)
5626hunk ./src/allmydata/storage/crawler.py 68
5627     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
5628     minimum_cycle_time = 300 # don't run a cycle faster than this
5629 
5630-    def __init__(self, statefname, allowed_cpu_percentage=None):
5631+    def __init__(self, statefp, allowed_cpu_percentage=None):
5632         service.MultiService.__init__(self)
5633         if allowed_cpu_percentage is not None:
5634             self.allowed_cpu_percentage = allowed_cpu_percentage
5635hunk ./src/allmydata/storage/crawler.py 72
5636-        self.statefname = statefname
5637+        self.statefp = statefp
5638         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
5639                          for i in range(2**10)]
5640         self.prefixes.sort()
5641hunk ./src/allmydata/storage/crawler.py 192
5642         #                            of the last bucket to be processed, or
5643         #                            None if we are sleeping between cycles
5644         try:
5645-            f = open(self.statefname, "rb")
5646-            state = pickle.load(f)
5647-            f.close()
5648+            state = pickle.loads(self.statefp.getContent())
5649         except EnvironmentError:
5650             state = {"version": 1,
5651                      "last-cycle-finished": None,
5652hunk ./src/allmydata/storage/crawler.py 228
5653         else:
5654             last_complete_prefix = self.prefixes[lcpi]
5655         self.state["last-complete-prefix"] = last_complete_prefix
5656-        tmpfile = self.statefname + ".tmp"
5657-        f = open(tmpfile, "wb")
5658-        pickle.dump(self.state, f)
5659-        f.close()
5660-        fileutil.move_into_place(tmpfile, self.statefname)
5661+        self.statefp.setContent(pickle.dumps(self.state))
5662 
5663     def startService(self):
5664         # arrange things to look like we were just sleeping, so
5665hunk ./src/allmydata/storage/crawler.py 440
5666 
5667     minimum_cycle_time = 60*60 # we don't need this more than once an hour
5668 
5669-    def __init__(self, statefname, num_sample_prefixes=1):
5670-        FSShareCrawler.__init__(self, statefname)
5671+    def __init__(self, statefp, num_sample_prefixes=1):
5672+        FSShareCrawler.__init__(self, statefp)
5673         self.num_sample_prefixes = num_sample_prefixes
5674 
5675     def add_initial_state(self):
5676hunk ./src/allmydata/storage/server.py 11
5677 from allmydata.util import fileutil, idlib, log, time_format
5678 import allmydata # for __full_version__
5679 
5680-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5681-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5682+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5683+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5684 from allmydata.storage.lease import LeaseInfo
5685 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5686      create_mutable_sharefile
5687hunk ./src/allmydata/storage/server.py 173
5688         # to a particular owner.
5689         start = time.time()
5690         self.count("allocate")
5691-        alreadygot = set()
5692         incoming = set()
5693         bucketwriters = {} # k: shnum, v: BucketWriter
5694 
5695hunk ./src/allmydata/storage/server.py 199
5696             remaining_space -= self.allocated_size()
5697         # self.readonly_storage causes remaining_space <= 0
5698 
5699-        # fill alreadygot with all shares that we have, not just the ones
5700+        # Fill alreadygot with all shares that we have, not just the ones
5701         # they asked about: this will save them a lot of work. Add or update
5702         # leases for all of them: if they want us to hold shares for this
5703hunk ./src/allmydata/storage/server.py 202
5704-        # file, they'll want us to hold leases for this file.
5705+        # file, they'll want us to hold leases for all the shares of it.
5706+        alreadygot = set()
5707         for share in self.backend.get_shares(storageindex):
5708hunk ./src/allmydata/storage/server.py 205
5709-            alreadygot.add(share.shnum)
5710             share.add_or_renew_lease(lease_info)
5711hunk ./src/allmydata/storage/server.py 206
5712+            alreadygot.add(share.shnum)
5713 
5714hunk ./src/allmydata/storage/server.py 208
5715-        # fill incoming with all shares that are incoming use a set operation
5716-        # since there's no need to operate on individual pieces
5717+        # all share numbers that are incoming
5718         incoming = self.backend.get_incoming_shnums(storageindex)
5719 
5720         for shnum in ((sharenums - alreadygot) - incoming):
5721hunk ./src/allmydata/storage/server.py 282
5722             total_space_freed += sf.cancel_lease(cancel_secret)
5723 
5724         if found_buckets:
5725-            storagedir = os.path.join(self.sharedir,
5726-                                      storage_index_to_dir(storageindex))
5727-            if not os.listdir(storagedir):
5728-                os.rmdir(storagedir)
5729+            storagedir = si_dir(self.sharedir, storageindex)
5730+            fileutil.fp_rmdir_if_empty(storagedir)
5731 
5732         if self.stats_provider:
5733             self.stats_provider.count('storage_server.bytes_freed',
5734hunk ./src/allmydata/test/test_backends.py 52
5735     subtree. I simulate just the parts of the filesystem that the current
5736     implementation of DAS backend needs. """
5737     def call_open(self, fname, mode):
5738+        assert isinstance(fname, basestring), fname
5739         fnamefp = FilePath(fname)
5740         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5741                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5742hunk ./src/allmydata/test/test_backends.py 104
5743                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5744 
5745     def call_stat(self, fname):
5746+        assert isinstance(fname, basestring), fname
5747         fnamefp = FilePath(fname)
5748         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5749                         "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5750hunk ./src/allmydata/test/test_backends.py 217
5751 
5752         mocktime.return_value = 0
5753         # Inspect incoming and fail unless it's empty.
5754-        incomingset = self.ss.backend.get_incoming('teststorage_index')
5755-        self.failUnlessReallyEqual(incomingset, set())
5756+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
5757+        self.failUnlessReallyEqual(incomingset, frozenset())
5758         
5759         # Populate incoming with the sharenum: 0.
5760hunk ./src/allmydata/test/test_backends.py 221
5761-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5762+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5763 
5764         # Inspect incoming and fail unless the sharenum: 0 is listed there.
5765hunk ./src/allmydata/test/test_backends.py 224
5766-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
5767+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
5768         
5769         # Attempt to create a second share writer with the same sharenum.
5770hunk ./src/allmydata/test/test_backends.py 227
5771-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5772+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5773 
5774         # Show that no sharewriter results from a remote_allocate_buckets
5775         # with the same si and sharenum, until BucketWriter.remote_close()
5776hunk ./src/allmydata/test/test_backends.py 280
5777         StorageServer object. """
5778 
5779         def call_listdir(dirname):
5780+            precondition(isinstance(dirname, basestring), dirname)
5781             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
5782             return ['0']
5783 
5784hunk ./src/allmydata/test/test_backends.py 287
5785         mocklistdir.side_effect = call_listdir
5786 
5787         def call_open(fname, mode):
5788+            precondition(isinstance(fname, basestring), fname)
5789             self.failUnlessReallyEqual(fname, sharefname)
5790             self.failUnlessEqual(mode[0], 'r', mode)
5791             self.failUnless('b' in mode, mode)
5792hunk ./src/allmydata/test/test_backends.py 297
5793 
5794         datalen = len(share_data)
5795         def call_getsize(fname):
5796+            precondition(isinstance(fname, basestring), fname)
5797             self.failUnlessReallyEqual(fname, sharefname)
5798             return datalen
5799         mockgetsize.side_effect = call_getsize
5800hunk ./src/allmydata/test/test_backends.py 303
5801 
5802         def call_exists(fname):
5803+            precondition(isinstance(fname, basestring), fname)
5804             self.failUnlessReallyEqual(fname, sharefname)
5805             return True
5806         mockexists.side_effect = call_exists
5807hunk ./src/allmydata/test/test_backends.py 321
5808         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
5809 
5810 
5811-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
5812-    @mock.patch('time.time')
5813-    @mock.patch('os.mkdir')
5814-    @mock.patch('__builtin__.open')
5815-    @mock.patch('os.listdir')
5816-    @mock.patch('os.path.isdir')
5817-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5818+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
5819+    def test_create_fs_backend(self):
5820         """ This tests whether a file system backend instance can be
5821         constructed. To pass the test, it has to use the
5822         filesystem in only the prescribed ways. """
5823hunk ./src/allmydata/test/test_backends.py 327
5824 
5825-        def call_open(fname, mode):
5826-            if fname == os.path.join(storedir,'bucket_counter.state'):
5827-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5828-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5829-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5830-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5831-                return StringIO()
5832-            else:
5833-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5834-        mockopen.side_effect = call_open
5835-
5836-        def call_isdir(fname):
5837-            if fname == os.path.join(storedir,'shares'):
5838-                return True
5839-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5840-                return True
5841-            else:
5842-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5843-        mockisdir.side_effect = call_isdir
5844-
5845-        def call_mkdir(fname, mode):
5846-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
5847-            self.failUnlessEqual(0777, mode)
5848-            if fname == storedir:
5849-                return None
5850-            elif fname == os.path.join(storedir,'shares'):
5851-                return None
5852-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5853-                return None
5854-            else:
5855-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
5856-        mockmkdir.side_effect = call_mkdir
5857-
5858         # Now begin the test.
5859hunk ./src/allmydata/test/test_backends.py 328
5860-        DASCore('teststoredir', expiration_policy)
5861-
5862-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5863-
5864+        DASCore(self.storedir, expiration_policy)
5865hunk ./src/allmydata/util/fileutil.py 7
5866 
5867 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5868 
5869+from allmydata.util.assertutil import precondition
5870+
5871 from twisted.python import log
5872hunk ./src/allmydata/util/fileutil.py 10
5873-from twisted.python.filepath import UnlistableError
5874+from twisted.python.filepath import FilePath, UnlistableError
5875 
5876 from pycryptopp.cipher.aes import AES
5877 
5878hunk ./src/allmydata/util/fileutil.py 210
5879             raise tx
5880         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5881 
5882+def fp_rmdir_if_empty(dirfp):
5883+    """ Remove the directory if it is empty. """
5884+    try:
5885+        os.rmdir(dirfp.path)
5886+    except OSError, e:
5887+        if e.errno != errno.ENOTEMPTY:
5888+            raise
5889+    else:
5890+        dirfp.changed()
5891+
5892 def rmtree(dirname):
5893     """
5894     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5895hunk ./src/allmydata/util/fileutil.py 257
5896         raise OSError, excs
5897 
5898 def fp_remove(dirfp):
5899+    """
5900+    An idempotent version of shutil.rmtree().  If the dir is already gone,
5901+    do nothing and return without raising an exception.  If this call
5902+    removes the dir, return without raising an exception.  If there is an
5903+    error that prevents removal or if the directory gets created again by
5904+    someone else after this deletes it and before this checks that it is
5905+    gone, raise an exception.
5906+    """
5907     try:
5908         dirfp.remove()
5909     except UnlistableError, e:
5910hunk ./src/allmydata/util/fileutil.py 270
5911         if e.originalException.errno != errno.ENOENT:
5912             raise
5913+    except OSError, e:
5914+        if e.errno != errno.ENOENT:
5915+            raise
5916 
5917 def rm_dir(dirname):
5918     # Renamed to be like shutil.rmtree and unlike rmdir.
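
The two helpers touched above handle directory removal in FilePath terms: fp_rmdir_if_empty() ignores ENOTEMPTY and calls dirfp.changed() so the FilePath drops any cached stat data, and fp_remove() now also tolerates a plain ENOENT raised as OSError. A small usage sketch, assuming the teststoredir layout used elsewhere in this patch and that the directory exists:

    from twisted.python.filepath import FilePath
    from allmydata.util.fileutil import fp_rmdir_if_empty, fp_remove

    incomingdir = FilePath('teststoredir').child('shares').child('incoming')
    fp_rmdir_if_empty(incomingdir)   # silently does nothing if shares are still arriving
    fp_remove(incomingdir)           # silently does nothing if it is already gone
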
5919hunk ./src/allmydata/util/fileutil.py 387
5920         import traceback
5921         traceback.print_exc()
5922 
5923-def get_disk_stats(whichdir, reserved_space=0):
5924+def get_disk_stats(whichdirfp, reserved_space=0):
5925     """Return disk statistics for the storage disk, in the form of a dict
5926     with the following fields.
5927       total:            total bytes on disk
5928hunk ./src/allmydata/util/fileutil.py 408
5929     you can pass how many bytes you would like to leave unused on this
5930     filesystem as reserved_space.
5931     """
5932+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5933 
5934     if have_GetDiskFreeSpaceExW:
5935         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5936hunk ./src/allmydata/util/fileutil.py 419
5937         n_free_for_nonroot = c_ulonglong(0)
5938         n_total            = c_ulonglong(0)
5939         n_free_for_root    = c_ulonglong(0)
5940-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5941+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5942                                                byref(n_total),
5943                                                byref(n_free_for_root))
5944         if retval == 0:
5945hunk ./src/allmydata/util/fileutil.py 424
5946             raise OSError("Windows error %d attempting to get disk statistics for %r"
5947-                          % (GetLastError(), whichdir))
5948+                          % (GetLastError(), whichdirfp.path))
5949         free_for_nonroot = n_free_for_nonroot.value
5950         total            = n_total.value
5951         free_for_root    = n_free_for_root.value
5952hunk ./src/allmydata/util/fileutil.py 433
5953         # <http://docs.python.org/library/os.html#os.statvfs>
5954         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5955         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5956-        s = os.statvfs(whichdir)
5957+        s = os.statvfs(whichdirfp.path)
5958 
5959         # on my mac laptop:
5960         #  statvfs(2) is a wrapper around statfs(2).
5961hunk ./src/allmydata/util/fileutil.py 460
5962              'avail': avail,
5963            }
5964 
5965-def get_available_space(whichdir, reserved_space):
5966+def get_available_space(whichdirfp, reserved_space):
5967     """Returns available space for share storage in bytes, or None if no
5968     API to get this information is available.
5969 
5970hunk ./src/allmydata/util/fileutil.py 472
5971     you can pass how many bytes you would like to leave unused on this
5972     filesystem as reserved_space.
5973     """
5974+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5975     try:
5976hunk ./src/allmydata/util/fileutil.py 474
5977-        return get_disk_stats(whichdir, reserved_space)['avail']
5978+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5979     except AttributeError:
5980         return None
5981hunk ./src/allmydata/util/fileutil.py 477
5982-    except EnvironmentError:
5983-        log.msg("OS call to get disk statistics failed")
5984-        return 0
5985}
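
This patch record ends by converting get_disk_stats() and get_available_space() to take a FilePath (enforced by the new preconditions) and by dropping the EnvironmentError handler that used to log and return 0, so OS-level failures now propagate to the caller. A minimal usage sketch under those assumptions (the directory name is illustrative):

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    storedirfp = FilePath('teststoredir')
    avail = fileutil.get_available_space(storedirfp, reserved_space=0)
    if avail is None:
        print "no disk-statistics API on this platform"   # neither GetDiskFreeSpaceExW nor statvfs
    else:
        print "bytes available for shares:", avail
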
5986[jacp16 or so
5987wilcoxjg@gmail.com**20110722070036
5988 Ignore-this: 7548785cad146056eede9a16b93b569f
5989] {
5990merger 0.0 (
5991hunk ./src/allmydata/_auto_deps.py 21
5992-    "Twisted >= 2.4.0",
5993+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
5994+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
5995+    # support asynchronous close.
5996+    "Twisted >= 10.1.0",
5997hunk ./src/allmydata/_auto_deps.py 21
5998-    "Twisted >= 2.4.0",
5999+    "Twisted >= 11.0",
6000)
6001hunk ./src/allmydata/storage/backends/das/core.py 2
6002 import os, re, weakref, struct, time, stat
6003+from twisted.application import service
6004+from twisted.python.filepath import UnlistableError
6005+from twisted.python.filepath import FilePath
6006+from zope.interface import implements
6007 
6008hunk ./src/allmydata/storage/backends/das/core.py 7
6009+import allmydata # for __full_version__
6010 from allmydata.interfaces import IStorageBackend
6011 from allmydata.storage.backends.base import Backend
6012hunk ./src/allmydata/storage/backends/das/core.py 10
6013-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6014+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6015 from allmydata.util.assertutil import precondition
6016hunk ./src/allmydata/storage/backends/das/core.py 12
6017-
6018-#from foolscap.api import Referenceable
6019-from twisted.application import service
6020-from twisted.python.filepath import UnlistableError
6021-
6022-from zope.interface import implements
6023 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6024 from allmydata.util import fileutil, idlib, log, time_format
6025hunk ./src/allmydata/storage/backends/das/core.py 14
6026-import allmydata # for __full_version__
6027-
6028-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6029-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6030 from allmydata.storage.lease import LeaseInfo
6031 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6032      create_mutable_sharefile
6033hunk ./src/allmydata/storage/backends/das/core.py 21
6034 from allmydata.storage.crawler import FSBucketCountingCrawler
6035 from allmydata.util.hashutil import constant_time_compare
6036 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6037-
6038-from zope.interface import implements
6039+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6040 
6041 # storage/
6042 # storage/shares/incoming
6043hunk ./src/allmydata/storage/backends/das/core.py 49
6044         self._setup_lease_checkerf(expiration_policy)
6045 
6046     def _setup_storage(self, storedir, readonly, reserved_space):
6047+        precondition(isinstance(storedir, FilePath)) 
6048         self.storedir = storedir
6049         self.readonly = readonly
6050         self.reserved_space = int(reserved_space)
6051hunk ./src/allmydata/storage/backends/das/core.py 83
6052 
6053     def get_incoming_shnums(self, storageindex):
6054         """ Return a frozenset of the shnum (as ints) of incoming shares. """
6055-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
6056+        incomingdir = si_si2dir(self.incomingdir, storageindex)
6057         try:
6058             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
6059             shnums = [ int(fp.basename) for fp in childfps ]
6060hunk ./src/allmydata/storage/backends/das/core.py 96
6061         """ Generate ImmutableShare objects for shares we have for this
6062         storageindex. ("Shares we have" means completed ones, excluding
6063         incoming ones.)"""
6064-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
6065+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
6066         try:
6067             for fp in finalstoragedir.children():
6068                 if is_num(fp):
6069hunk ./src/allmydata/storage/backends/das/core.py 111
6070         return fileutil.get_available_space(self.storedir, self.reserved_space)
6071 
6072     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6073-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
6074-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6075+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
6076+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6077         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
6078         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
6079         return bw
6080hunk ./src/allmydata/storage/backends/null/core.py 18
6081         return None
6082 
6083     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6084-       
6085-        immutableshare = ImmutableShare()
6086+        immutableshare = ImmutableShare()
6087         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
6088 
6089     def set_storage_server(self, ss):
6090hunk ./src/allmydata/storage/backends/null/core.py 24
6091         self.ss = ss
6092 
6093-    def get_incoming(self, storageindex):
6094-        return set()
6095+    def get_incoming_shnums(self, storageindex):
6096+        return frozenset()
6097 
6098 class ImmutableShare:
6099     sharetype = "immutable"
6100hunk ./src/allmydata/storage/common.py 19
6101 def si_a2b(ascii_storageindex):
6102     return base32.a2b(ascii_storageindex)
6103 
6104-def si_dir(startfp, storageindex):
6105+def si_si2dir(startfp, storageindex):
6106     sia = si_b2a(storageindex)
6107     return startfp.child(sia[:2]).child(sia)
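
si_dir() is renamed to si_si2dir() but keeps the same two-level layout: the first two characters of the base32-encoded storage index name the prefix directory. A sketch using the same storage index the tests in this patch use:

    from twisted.python.filepath import FilePath
    from allmydata.storage.common import si_si2dir

    sharedir = FilePath('teststoredir').child('shares')
    # 'teststorage_index' encodes to base32 'orsxg5dtorxxeylhmvpws3temv4a'
    fp = si_si2dir(sharedir, 'teststorage_index')
    # fp is sharedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
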
6108hunk ./src/allmydata/storage/immutable.py 20
6109     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
6110         self.ss = ss
6111         self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
6112-
6113         self._canary = canary
6114         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
6115         self.closed = False
6116hunk ./src/allmydata/storage/lease.py 17
6117 
6118     def get_expiration_time(self):
6119         return self.expiration_time
6120+
6121     def get_grant_renew_time_time(self):
6122         # hack, based upon fixed 31day expiration period
6123         return self.expiration_time - 31*24*60*60
6124hunk ./src/allmydata/storage/lease.py 21
6125+
6126     def get_age(self):
6127         return time.time() - self.get_grant_renew_time_time()
6128 
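
get_grant_renew_time_time() and get_age() above encode the fixed 31-day lease period: the grant/renew time is reconstructed by subtracting 31 days from the stored expiration time. A worked sketch of that arithmetic (the expiration time chosen here is illustrative):

    import time
    THIRTY_ONE_DAYS = 31*24*60*60                   # 2678400 seconds
    expiration_time = time.time() + 10*24*60*60     # suppose the lease expires in 10 days
    grant_renew_time = expiration_time - THIRTY_ONE_DAYS
    age = time.time() - grant_renew_time            # about 21 days, which is what get_age() reports
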
6129hunk ./src/allmydata/storage/lease.py 32
6130          self.expiration_time) = struct.unpack(">L32s32sL", data)
6131         self.nodeid = None
6132         return self
6133+
6134     def to_immutable_data(self):
6135         return struct.pack(">L32s32sL",
6136                            self.owner_num,
6137hunk ./src/allmydata/storage/lease.py 45
6138                            int(self.expiration_time),
6139                            self.renew_secret, self.cancel_secret,
6140                            self.nodeid)
6141+
6142     def from_mutable_data(self, data):
6143         (self.owner_num,
6144          self.expiration_time,
6145hunk ./src/allmydata/storage/server.py 11
6146 from allmydata.util import fileutil, idlib, log, time_format
6147 import allmydata # for __full_version__
6148 
6149-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6150-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6151+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6152+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6153 from allmydata.storage.lease import LeaseInfo
6154 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6155      create_mutable_sharefile
6156hunk ./src/allmydata/storage/server.py 88
6157             else:
6158                 stats["mean"] = None
6159 
6160-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
6161-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
6162-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
6163+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
6164+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
6165+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
6166                              (0.999, "99_9_percentile", 1000)]
6167 
6168             for percentile, percentilestring, minnumtoobserve in orderstatlist:
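
This hunk only reorders the (percentile, label, minimum-observations) triples; the loop body that consumes them is outside the hunk. As a hedged illustration only (not the server's actual loop), the minimum-observations field typically gates whether a percentile is reported at all; latencies and stats below are assumed names:

    samples = sorted(latencies)
    n = len(samples)
    for percentile, percentilestring, minnumtoobserve in orderstatlist:
        if n >= minnumtoobserve:
            stats[percentilestring] = samples[int(percentile * n)]
        else:
            stats[percentilestring] = None   # too few samples for this percentile to mean much
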
6169hunk ./src/allmydata/storage/server.py 231
6170             header = f.read(32)
6171             f.close()
6172             if header[:32] == MutableShareFile.MAGIC:
6173+                # XXX  Can I exploit this code?
6174                 sf = MutableShareFile(filename, self)
6175                 # note: if the share has been migrated, the renew_lease()
6176                 # call will throw an exception, with information to help the
6177hunk ./src/allmydata/storage/server.py 237
6178                 # client update the lease.
6179             elif header[:4] == struct.pack(">L", 1):
6180+                # Check if version number is "1".
6181+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
6182                 sf = ShareFile(filename)
6183             else:
6184                 continue # non-sharefile
6185hunk ./src/allmydata/storage/server.py 285
6186             total_space_freed += sf.cancel_lease(cancel_secret)
6187 
6188         if found_buckets:
6189-            storagedir = si_dir(self.sharedir, storageindex)
6190+            # XXX  Yikes looks like code that shouldn't be in the server!
6191+            storagedir = si_si2dir(self.sharedir, storageindex)
6192             fp_rmdir_if_empty(storagedir)
6193 
6194         if self.stats_provider:
6195hunk ./src/allmydata/storage/server.py 301
6196             self.stats_provider.count('storage_server.bytes_added', consumed_size)
6197         del self._active_writers[bw]
6198 
6199-
6200     def remote_get_buckets(self, storageindex):
6201         start = time.time()
6202         self.count("get")
6203hunk ./src/allmydata/storage/server.py 329
6204         except StopIteration:
6205             return iter([])
6206 
6207+    #  XXX  As far as Zancas' grockery has gotten.
6208     def remote_slot_testv_and_readv_and_writev(self, storageindex,
6209                                                secrets,
6210                                                test_and_write_vectors,
6211hunk ./src/allmydata/storage/server.py 338
6212         self.count("writev")
6213         si_s = si_b2a(storageindex)
6214         log.msg("storage: slot_writev %s" % si_s)
6215-        si_dir = storage_index_to_dir(storageindex)
6216+       
6217         (write_enabler, renew_secret, cancel_secret) = secrets
6218         # shares exist if there is a file for them
6219hunk ./src/allmydata/storage/server.py 341
6220-        bucketdir = os.path.join(self.sharedir, si_dir)
6221+        bucketdir = si_si2dir(self.sharedir, storageindex)
6222         shares = {}
6223         if os.path.isdir(bucketdir):
6224             for sharenum_s in os.listdir(bucketdir):
6225hunk ./src/allmydata/storage/server.py 430
6226         si_s = si_b2a(storageindex)
6227         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
6228                      facility="tahoe.storage", level=log.OPERATIONAL)
6229-        si_dir = storage_index_to_dir(storageindex)
6230         # shares exist if there is a file for them
6231hunk ./src/allmydata/storage/server.py 431
6232-        bucketdir = os.path.join(self.sharedir, si_dir)
6233+        bucketdir = si_si2dir(self.sharedir, storageindex)
6234         if not os.path.isdir(bucketdir):
6235             self.add_latency("readv", time.time() - start)
6236             return {}
6237hunk ./src/allmydata/test/test_backends.py 2
6238 from twisted.trial import unittest
6239-
6240 from twisted.python.filepath import FilePath
6241hunk ./src/allmydata/test/test_backends.py 3
6242-
6243 from allmydata.util.log import msg
6244hunk ./src/allmydata/test/test_backends.py 4
6245-
6246 from StringIO import StringIO
6247hunk ./src/allmydata/test/test_backends.py 5
6248-
6249 from allmydata.test.common_util import ReallyEqualMixin
6250 from allmydata.util.assertutil import _assert
6251hunk ./src/allmydata/test/test_backends.py 7
6252-
6253 import mock
6254 
6255 # This is the code that we're going to be testing.
6256hunk ./src/allmydata/test/test_backends.py 11
6257 from allmydata.storage.server import StorageServer
6258-
6259 from allmydata.storage.backends.das.core import DASCore
6260 from allmydata.storage.backends.null.core import NullCore
6261 
6262hunk ./src/allmydata/test/test_backends.py 14
6263-
6264-# The following share file contents was generated with
6265+# The following share file content was generated with
6266 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
6267hunk ./src/allmydata/test/test_backends.py 16
6268-# with share data == 'a'.
6269+# with share data == 'a'. The total size of this input
6270+# is 85 bytes.
6271 shareversionnumber = '\x00\x00\x00\x01'
6272 sharedatalength = '\x00\x00\x00\x01'
6273 numberofleases = '\x00\x00\x00\x01'
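
The 85-byte figure added to the comment above follows from the field sizes spelled out by these constants (the remaining fields appear in the next hunk): a 12-byte container header, one byte of share data, and one 72-byte lease.

    container_header = 4 + 4 + 4         # version, data length, number of leases
    share_payload    = len('a')          # the single byte of share data
    lease_record     = 4 + 32 + 32 + 4   # owner number, renew secret, cancel secret, expiration time
    assert container_header + share_payload + lease_record == 85
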
6274hunk ./src/allmydata/test/test_backends.py 21
6275-
6276 shareinputdata = 'a'
6277 ownernumber = '\x00\x00\x00\x00'
6278 renewsecret  = 'x'*32
6279hunk ./src/allmydata/test/test_backends.py 31
6280 client_data = shareinputdata + ownernumber + renewsecret + \
6281     cancelsecret + expirationtime + nextlease
6282 share_data = containerdata + client_data
6283-
6284-
6285 testnodeid = 'testnodeidxxxxxxxxxx'
6286 
6287 class MockStat:
6288hunk ./src/allmydata/test/test_backends.py 105
6289         mstat.st_mode = 16893 # a directory
6290         return mstat
6291 
6292+    def call_get_available_space(self, storedir, reservedspace):
6293+        # The input vector has an input size of 85.
6294+        return 85 - reservedspace
6295+
6296+    def call_exists(self):
6297+        # I'm only called in the ImmutableShareFile constructor.
6298+        return False
6299+
6300     def setUp(self):
6301         msg( "%s.setUp()" % (self,))
6302         self.storedir = FilePath('teststoredir')
6303hunk ./src/allmydata/test/test_backends.py 147
6304         mockfpstat = self.mockfpstatp.__enter__()
6305         mockfpstat.side_effect = self.call_stat
6306 
6307+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6308+        mockget_available_space = self.mockget_available_space.__enter__()
6309+        mockget_available_space.side_effect = self.call_get_available_space
6310+
6311+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6312+        mockfpexists = self.mockfpexists.__enter__()
6313+        mockfpexists.side_effect = self.call_exists
6314+
6315     def tearDown(self):
6316         msg( "%s.tearDown()" % (self,))
6317hunk ./src/allmydata/test/test_backends.py 157
6318+        self.mockfpexists.__exit__()
6319+        self.mockget_available_space.__exit__()
6320         self.mockfpstatp.__exit__()
6321         self.mockstatp.__exit__()
6322         self.mockopenp.__exit__()
6323hunk ./src/allmydata/test/test_backends.py 166
6324         self.mockmkdirp.__exit__()
6325         self.mocklistdirp.__exit__()
6326 
6327+
6328 expiration_policy = {'enabled' : False,
6329                      'mode' : 'age',
6330                      'override_lease_duration' : None,
6331hunk ./src/allmydata/test/test_backends.py 182
6332         self.ss = StorageServer(testnodeid, backend=NullCore())
6333 
6334     @mock.patch('os.mkdir')
6335-
6336     @mock.patch('__builtin__.open')
6337     @mock.patch('os.listdir')
6338     @mock.patch('os.path.isdir')
6339hunk ./src/allmydata/test/test_backends.py 201
6340         filesystem backend. To pass the test, it mustn't use the filesystem
6341         outside of its configured storedir. """
6342 
6343-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
6344+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
6345 
6346 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
6347     """ This tests both the StorageServer and the DAS backend together. """
6348hunk ./src/allmydata/test/test_backends.py 205
6349+   
6350     def setUp(self):
6351         MockFiles.setUp(self)
6352         try:
6353hunk ./src/allmydata/test/test_backends.py 211
6354             self.backend = DASCore(self.storedir, expiration_policy)
6355             self.ss = StorageServer(testnodeid, self.backend)
6356-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6357-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
6358+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6359+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
6360         except:
6361             MockFiles.tearDown(self)
6362             raise
6363hunk ./src/allmydata/test/test_backends.py 233
6364         # Populate incoming with the sharenum: 0.
6365         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6366 
6367-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
6368-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6369+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
6370+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6371         
6372         # Attempt to create a second share writer with the same sharenum.
6373         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6374hunk ./src/allmydata/test/test_backends.py 257
6375 
6376         # Postclose: (Omnibus) failUnless written data is in final.
6377         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
6378-        contents = sharesinfinal[0].read_share_data(0,73)
6379+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
6380+        contents = sharesinfinal[0].read_share_data(0, 73)
6381         self.failUnlessReallyEqual(contents, client_data)
6382 
6383         # Exercise the case that the share we're asking to allocate is
6384hunk ./src/allmydata/test/test_backends.py 276
6385         mockget_available_space.side_effect = call_get_available_space
6386         
6387         
6388-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6389+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6390 
6391     @mock.patch('os.path.exists')
6392     @mock.patch('os.path.getsize')
6393}
6394[jacp17
6395wilcoxjg@gmail.com**20110722203244
6396 Ignore-this: e79a5924fb2eb786ee4e9737a8228f87
6397] {
6398hunk ./src/allmydata/storage/backends/das/core.py 14
6399 from allmydata.util.assertutil import precondition
6400 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6401 from allmydata.util import fileutil, idlib, log, time_format
6402+from allmydata.util.fileutil import fp_make_dirs
6403 from allmydata.storage.lease import LeaseInfo
6404 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6405      create_mutable_sharefile
6406hunk ./src/allmydata/storage/backends/das/core.py 19
6407 from allmydata.storage.immutable import BucketWriter, BucketReader
6408-from allmydata.storage.crawler import FSBucketCountingCrawler
6409+from allmydata.storage.crawler import BucketCountingCrawler
6410 from allmydata.util.hashutil import constant_time_compare
6411hunk ./src/allmydata/storage/backends/das/core.py 21
6412-from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6413+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
6414 _pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6415 
6416 # storage/
6417hunk ./src/allmydata/storage/backends/das/core.py 43
6418     implements(IStorageBackend)
6419     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
6420         Backend.__init__(self)
6421-
6422         self._setup_storage(storedir, readonly, reserved_space)
6423         self._setup_corruption_advisory()
6424         self._setup_bucket_counter()
6425hunk ./src/allmydata/storage/backends/das/core.py 72
6426 
6427     def _setup_bucket_counter(self):
6428         statefname = self.storedir.child("bucket_counter.state")
6429-        self.bucket_counter = FSBucketCountingCrawler(statefname)
6430+        self.bucket_counter = BucketCountingCrawler(statefname)
6431         self.bucket_counter.setServiceParent(self)
6432 
6433     def _setup_lease_checkerf(self, expiration_policy):
6434hunk ./src/allmydata/storage/backends/das/core.py 78
6435         statefile = self.storedir.child("lease_checker.state")
6436         historyfile = self.storedir.child("lease_checker.history")
6437-        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6438+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6439         self.lease_checker.setServiceParent(self)
6440 
6441     def get_incoming_shnums(self, storageindex):
6442hunk ./src/allmydata/storage/backends/das/core.py 168
6443             # it. Also construct the metadata.
6444             assert not finalhome.exists()
6445             fp_make_dirs(self.incominghome)
6446-            f = open(self.incominghome, 'wb')
6447+            f = self.incominghome.child(str(self.shnum))
6448             # The second field -- the four-byte share data length -- is no
6449             # longer used as of Tahoe v1.3.0, but we continue to write it in
6450             # there in case someone downgrades a storage server from >=
6451hunk ./src/allmydata/storage/backends/das/core.py 178
6452             # the largest length that can fit into the field. That way, even
6453             # if this does happen, the old < v1.3.0 server will still allow
6454             # clients to read the first part of the share.
6455-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6456-            f.close()
6457+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6458+            #f.close()
6459             self._lease_offset = max_size + 0x0c
6460             self._num_leases = 0
6461         else:
6462hunk ./src/allmydata/storage/backends/das/core.py 261
6463         f.write(data)
6464         f.close()
6465 
6466-    def _write_lease_record(self, f, lease_number, lease_info):
6467+    def _write_lease_record(self, lease_number, lease_info):
6468         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6469         f.seek(offset)
6470         assert f.tell() == offset
6471hunk ./src/allmydata/storage/backends/das/core.py 290
6472                 yield LeaseInfo().from_immutable_data(data)
6473 
6474     def add_lease(self, lease_info):
6475-        f = open(self.incominghome, 'rb+')
6476+        self.incominghome, 'rb+')
6477         num_leases = self._read_num_leases(f)
6478         self._write_lease_record(f, num_leases, lease_info)
6479         self._write_num_leases(f, num_leases+1)
6480hunk ./src/allmydata/storage/backends/das/expirer.py 1
6481-import time, os, pickle, struct
6482-from allmydata.storage.crawler import FSShareCrawler
6483+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
6484+from allmydata.storage.crawler import ShareCrawler
6485 from allmydata.storage.common import UnknownMutableContainerVersionError, \
6486      UnknownImmutableContainerVersionError
6487 from twisted.python import log as twlog
6488hunk ./src/allmydata/storage/backends/das/expirer.py 7
6489 
6490-class FSLeaseCheckingCrawler(FSShareCrawler):
6491+class LeaseCheckingCrawler(ShareCrawler):
6492     """I examine the leases on all shares, determining which are still valid
6493     and which have expired. I can remove the expired leases (if so
6494     configured), and the share will be deleted when the last lease is
6495hunk ./src/allmydata/storage/backends/das/expirer.py 66
6496         else:
6497             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
6498         self.sharetypes_to_expire = expiration_policy['sharetypes']
6499-        FSShareCrawler.__init__(self, statefile)
6500+        ShareCrawler.__init__(self, statefile)
6501 
6502     def add_initial_state(self):
6503         # we fill ["cycle-to-date"] here (even though they will be reset in
6504hunk ./src/allmydata/storage/crawler.py 1
6505-
6506 import os, time, struct
6507 import cPickle as pickle
6508 from twisted.internet import reactor
6509hunk ./src/allmydata/storage/crawler.py 11
6510 class TimeSliceExceeded(Exception):
6511     pass
6512 
6513-class FSShareCrawler(service.MultiService):
6514-    """A subcless of ShareCrawler is attached to a StorageServer, and
6515+class ShareCrawler(service.MultiService):
6516+    """A subclass of ShareCrawler is attached to a StorageServer, and
6517     periodically walks all of its shares, processing each one in some
6518     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
6519     since large servers can easily have a terabyte of shares, in several
6520hunk ./src/allmydata/storage/crawler.py 426
6521         pass
6522 
6523 
6524-class FSBucketCountingCrawler(FSShareCrawler):
6525+class BucketCountingCrawler(ShareCrawler):
6526     """I keep track of how many buckets are being managed by this server.
6527     This is equivalent to the number of distributed files and directories for
6528     which I am providing storage. The actual number of files+directories in
6529hunk ./src/allmydata/storage/crawler.py 440
6530     minimum_cycle_time = 60*60 # we don't need this more than once an hour
6531 
6532     def __init__(self, statefp, num_sample_prefixes=1):
6533-        FSShareCrawler.__init__(self, statefp)
6534+        ShareCrawler.__init__(self, statefp)
6535         self.num_sample_prefixes = num_sample_prefixes
6536 
6537     def add_initial_state(self):
6538hunk ./src/allmydata/test/test_backends.py 113
6539         # I'm only called in the ImmutableShareFile constructor.
6540         return False
6541 
6542+    def call_setContent(self, inputstring):
6543+        # XXX Good enough for expirer, not sure about elsewhere...
6544+        return True
6545+
6546     def setUp(self):
6547         msg( "%s.setUp()" % (self,))
6548         self.storedir = FilePath('teststoredir')
6549hunk ./src/allmydata/test/test_backends.py 159
6550         mockfpexists = self.mockfpexists.__enter__()
6551         mockfpexists.side_effect = self.call_exists
6552 
6553+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6554+        mocksetContent = self.mocksetContent.__enter__()
6555+        mocksetContent.side_effect = self.call_setContent
6556+
6557     def tearDown(self):
6558         msg( "%s.tearDown()" % (self,))
6559hunk ./src/allmydata/test/test_backends.py 165
6560+        self.mocksetContent.__exit__()
6561         self.mockfpexists.__exit__()
6562         self.mockget_available_space.__exit__()
6563         self.mockfpstatp.__exit__()
6564}
6565[jacp18
6566wilcoxjg@gmail.com**20110723031915
6567 Ignore-this: 21e7f22ac20e3f8af22ea2e9b755d6a5
6568] {
6569hunk ./src/allmydata/_auto_deps.py 21
6570     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6571     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6572 
6573-    "Twisted >= 2.4.0",
6574+v v v v v v v
6575+    "Twisted >= 11.0",
6576+*************
6577+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6578+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6579+    # support asynchronous close.
6580+    "Twisted >= 10.1.0",
6581+^ ^ ^ ^ ^ ^ ^
6582 
6583     # foolscap < 0.5.1 had a performance bug which spent
6584     # O(N**2) CPU for transferring large mutable files
6585hunk ./src/allmydata/storage/backends/das/core.py 168
6586             # it. Also construct the metadata.
6587             assert not finalhome.exists()
6588             fp_make_dirs(self.incominghome)
6589-            f = self.incominghome.child(str(self.shnum))
6590+            f = self.incominghome
6591             # The second field -- the four-byte share data length -- is no
6592             # longer used as of Tahoe v1.3.0, but we continue to write it in
6593             # there in case someone downgrades a storage server from >=
6594hunk ./src/allmydata/storage/backends/das/core.py 178
6595             # the largest length that can fit into the field. That way, even
6596             # if this does happen, the old < v1.3.0 server will still allow
6597             # clients to read the first part of the share.
6598-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6599-            #f.close()
6600+            print 'f: ',f
6601+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6602             self._lease_offset = max_size + 0x0c
6603             self._num_leases = 0
6604         else:
6605hunk ./src/allmydata/storage/backends/das/core.py 263
6606 
6607     def _write_lease_record(self, lease_number, lease_info):
6608         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6609-        f.seek(offset)
6610-        assert f.tell() == offset
6611-        f.write(lease_info.to_immutable_data())
6612+        fh = f.open()
6613+        try:
6614+            fh.seek(offset)
6615+            assert fh.tell() == offset
6616+            fh.write(lease_info.to_immutable_data())
6617+        finally:
6618+            fh.close()
6619 
6620     def _read_num_leases(self, f):
6621hunk ./src/allmydata/storage/backends/das/core.py 272
6622-        f.seek(0x08)
6623-        (num_leases,) = struct.unpack(">L", f.read(4))
6624+        fh = f.open()
6625+        try:
6626+            fh.seek(0x08)
6627+            ro = fh.read(4)
6628+            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6629+            (num_leases,) = struct.unpack(">L", ro)
6630+        finally:
6631+            fh.close()
6632         return num_leases
6633 
6634     def _write_num_leases(self, f, num_leases):
6635hunk ./src/allmydata/storage/backends/das/core.py 283
6636-        f.seek(0x08)
6637-        f.write(struct.pack(">L", num_leases))
6638+        fh = f.open()
6639+        try:
6640+            fh.seek(0x08)
6641+            fh.write(struct.pack(">L", num_leases))
6642+        finally:
6643+            fh.close()
6644 
6645     def _truncate_leases(self, f, num_leases):
6646         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
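
Taken together, these hunks preserve the on-disk layout that the new FilePath-based accessors navigate: a 12-byte header whose lease count lives at offset 0x08, max_size bytes reserved for share data, then fixed-size lease records. A sketch of the offset arithmetic, with illustrative values for max_size and the lease number n:

    import struct
    LEASE_SIZE = struct.calcsize(">L32s32sL")   # 72 bytes per lease record
    max_size = 1000                             # illustrative share-data allocation
    data_offset = 0xc                           # share data starts after the 12-byte header
    lease_offset = max_size + 0xc               # first lease record follows the data area
    n = 2                                       # illustrative lease number
    offset_of_lease_n = lease_offset + n * LEASE_SIZE
    num_leases_offset = 0x08                    # where the ">L" lease count is rewritten
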
6647hunk ./src/allmydata/storage/backends/das/core.py 304
6648                 yield LeaseInfo().from_immutable_data(data)
6649 
6650     def add_lease(self, lease_info):
6651-        self.incominghome, 'rb+')
6652-        num_leases = self._read_num_leases(f)
6653+        f = self.incominghome
6654+        num_leases = self._read_num_leases(self.incominghome)
6655         self._write_lease_record(f, num_leases, lease_info)
6656         self._write_num_leases(f, num_leases+1)
6657hunk ./src/allmydata/storage/backends/das/core.py 308
6658-        f.close()
6659-
6660+       
6661     def renew_lease(self, renew_secret, new_expire_time):
6662         for i,lease in enumerate(self.get_leases()):
6663             if constant_time_compare(lease.renew_secret, renew_secret):
6664hunk ./src/allmydata/test/test_backends.py 33
6665 share_data = containerdata + client_data
6666 testnodeid = 'testnodeidxxxxxxxxxx'
6667 
6668+
6669 class MockStat:
6670     def __init__(self):
6671         self.st_mode = None
6672hunk ./src/allmydata/test/test_backends.py 43
6673     code under test if it reads or writes outside of its prescribed
6674     subtree. I simulate just the parts of the filesystem that the current
6675     implementation of DAS backend needs. """
6676+
6677+    def setUp(self):
6678+        msg( "%s.setUp()" % (self,))
6679+        self.storedir = FilePath('teststoredir')
6680+        self.basedir = self.storedir.child('shares')
6681+        self.baseincdir = self.basedir.child('incoming')
6682+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6683+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6684+        self.shareincomingname = self.sharedirincomingname.child('0')
6685+        self.sharefilename = self.sharedirfinalname.child('0')
6686+        self.sharefilecontents = StringIO(share_data)
6687+
6688+        self.mocklistdirp = mock.patch('os.listdir')
6689+        mocklistdir = self.mocklistdirp.__enter__()
6690+        mocklistdir.side_effect = self.call_listdir
6691+
6692+        self.mockmkdirp = mock.patch('os.mkdir')
6693+        mockmkdir = self.mockmkdirp.__enter__()
6694+        mockmkdir.side_effect = self.call_mkdir
6695+
6696+        self.mockisdirp = mock.patch('os.path.isdir')
6697+        mockisdir = self.mockisdirp.__enter__()
6698+        mockisdir.side_effect = self.call_isdir
6699+
6700+        self.mockopenp = mock.patch('__builtin__.open')
6701+        mockopen = self.mockopenp.__enter__()
6702+        mockopen.side_effect = self.call_open
6703+
6704+        self.mockstatp = mock.patch('os.stat')
6705+        mockstat = self.mockstatp.__enter__()
6706+        mockstat.side_effect = self.call_stat
6707+
6708+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6709+        mockfpstat = self.mockfpstatp.__enter__()
6710+        mockfpstat.side_effect = self.call_stat
6711+
6712+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6713+        mockget_available_space = self.mockget_available_space.__enter__()
6714+        mockget_available_space.side_effect = self.call_get_available_space
6715+
6716+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6717+        mockfpexists = self.mockfpexists.__enter__()
6718+        mockfpexists.side_effect = self.call_exists
6719+
6720+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6721+        mocksetContent = self.mocksetContent.__enter__()
6722+        mocksetContent.side_effect = self.call_setContent
6723+
6724     def call_open(self, fname, mode):
6725         assert isinstance(fname, basestring), fname
6726         fnamefp = FilePath(fname)
6727hunk ./src/allmydata/test/test_backends.py 107
6728             # current implementation of DAS backend, and we might want to
6729             # use this information in this test in the future...
6730             return StringIO()
6731+        elif fnamefp == self.shareincomingname:
6732+            print "repr(fnamefp): ", repr(fnamefp)
6733         else:
6734             # Anything else you open inside your subtree appears to be an
6735             # empty file.
6736hunk ./src/allmydata/test/test_backends.py 168
6737         # XXX Good enough for expirer, not sure about elsewhere...
6738         return True
6739 
6740-    def setUp(self):
6741-        msg( "%s.setUp()" % (self,))
6742-        self.storedir = FilePath('teststoredir')
6743-        self.basedir = self.storedir.child('shares')
6744-        self.baseincdir = self.basedir.child('incoming')
6745-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6746-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6747-        self.shareincomingname = self.sharedirincomingname.child('0')
6748-        self.sharefname = self.sharedirfinalname.child('0')
6749-
6750-        self.mocklistdirp = mock.patch('os.listdir')
6751-        mocklistdir = self.mocklistdirp.__enter__()
6752-        mocklistdir.side_effect = self.call_listdir
6753-
6754-        self.mockmkdirp = mock.patch('os.mkdir')
6755-        mockmkdir = self.mockmkdirp.__enter__()
6756-        mockmkdir.side_effect = self.call_mkdir
6757-
6758-        self.mockisdirp = mock.patch('os.path.isdir')
6759-        mockisdir = self.mockisdirp.__enter__()
6760-        mockisdir.side_effect = self.call_isdir
6761-
6762-        self.mockopenp = mock.patch('__builtin__.open')
6763-        mockopen = self.mockopenp.__enter__()
6764-        mockopen.side_effect = self.call_open
6765-
6766-        self.mockstatp = mock.patch('os.stat')
6767-        mockstat = self.mockstatp.__enter__()
6768-        mockstat.side_effect = self.call_stat
6769-
6770-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6771-        mockfpstat = self.mockfpstatp.__enter__()
6772-        mockfpstat.side_effect = self.call_stat
6773-
6774-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6775-        mockget_available_space = self.mockget_available_space.__enter__()
6776-        mockget_available_space.side_effect = self.call_get_available_space
6777-
6778-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6779-        mockfpexists = self.mockfpexists.__enter__()
6780-        mockfpexists.side_effect = self.call_exists
6781-
6782-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6783-        mocksetContent = self.mocksetContent.__enter__()
6784-        mocksetContent.side_effect = self.call_setContent
6785 
6786     def tearDown(self):
6787         msg( "%s.tearDown()" % (self,))
6788hunk ./src/allmydata/test/test_backends.py 239
6789         handling of simultaneous and successive attempts to write the same
6790         share.
6791         """
6792-
6793         mocktime.return_value = 0
6794         # Inspect incoming and fail unless it's empty.
6795         incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
6796}
6797[jacp19orso
6798wilcoxjg@gmail.com**20110724034230
6799 Ignore-this: f001093c467225c289489636a61935fe
6800] {
6801hunk ./src/allmydata/_auto_deps.py 21
6802     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6803     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6804 
6805-v v v v v v v
6806-    "Twisted >= 11.0",
6807-*************
6808+
6809     # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6810     # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6811     # support asynchronous close.
6812hunk ./src/allmydata/_auto_deps.py 26
6813     "Twisted >= 10.1.0",
6814-^ ^ ^ ^ ^ ^ ^
6815+
6816 
6817     # foolscap < 0.5.1 had a performance bug which spent
6818     # O(N**2) CPU for transferring large mutable files
6819hunk ./src/allmydata/storage/backends/das/core.py 153
6820     LEASE_SIZE = struct.calcsize(">L32s32sL")
6821     sharetype = "immutable"
6822 
6823-    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
6824+    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
6825         """ If max_size is not None then I won't allow more than
6826         max_size to be written to me. If create=True then max_size
6827         must not be None. """
6828hunk ./src/allmydata/storage/backends/das/core.py 167
6829             # touch the file, so later callers will see that we're working on
6830             # it. Also construct the metadata.
6831             assert not finalhome.exists()
6832-            fp_make_dirs(self.incominghome)
6833-            f = self.incominghome
6834+            fp_make_dirs(self.incominghome.parent())
6835             # The second field -- the four-byte share data length -- is no
6836             # longer used as of Tahoe v1.3.0, but we continue to write it in
6837             # there in case someone downgrades a storage server from >=
6838hunk ./src/allmydata/storage/backends/das/core.py 177
6839             # the largest length that can fit into the field. That way, even
6840             # if this does happen, the old < v1.3.0 server will still allow
6841             # clients to read the first part of the share.
6842-            print 'f: ',f
6843-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6844+            self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6845             self._lease_offset = max_size + 0x0c
6846             self._num_leases = 0
6847         else:
6848hunk ./src/allmydata/storage/backends/das/core.py 182
6849             f = open(self.finalhome, 'rb')
6850-            filesize = os.path.getsize(self.finalhome)
6851             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
6852             f.close()
6853hunk ./src/allmydata/storage/backends/das/core.py 184
6854+            filesize = self.finalhome.getsize()
6855             if version != 1:
6856                 msg = "sharefile %s had version %d but we wanted 1" % \
6857                       (self.finalhome, version)
6858hunk ./src/allmydata/storage/backends/das/core.py 259
6859         f.write(data)
6860         f.close()
6861 
6862-    def _write_lease_record(self, lease_number, lease_info):
6863+    def _write_lease_record(self, f, lease_number, lease_info):
6864         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6865         fh = f.open()
6866hunk ./src/allmydata/storage/backends/das/core.py 262
6867+        print fh
6868         try:
6869             fh.seek(offset)
6870             assert fh.tell() == offset
6871hunk ./src/allmydata/storage/backends/das/core.py 271
6872             fh.close()
6873 
6874     def _read_num_leases(self, f):
6875-        fh = f.open()
6876+        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
6877         try:
6878             fh.seek(0x08)
6879             ro = fh.read(4)
6880hunk ./src/allmydata/storage/backends/das/core.py 275
6881-            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6882             (num_leases,) = struct.unpack(">L", ro)
6883         finally:
6884             fh.close()
6885hunk ./src/allmydata/storage/backends/das/core.py 302
6886                 yield LeaseInfo().from_immutable_data(data)
6887 
6888     def add_lease(self, lease_info):
6889-        f = self.incominghome
6890         num_leases = self._read_num_leases(self.incominghome)
6891hunk ./src/allmydata/storage/backends/das/core.py 303
6892-        self._write_lease_record(f, num_leases, lease_info)
6893-        self._write_num_leases(f, num_leases+1)
6894+        self._write_lease_record(self.incominghome, num_leases, lease_info)
6895+        self._write_num_leases(self.incominghome, num_leases+1)
6896         
6897     def renew_lease(self, renew_secret, new_expire_time):
6898         for i,lease in enumerate(self.get_leases()):
6899hunk ./src/allmydata/test/test_backends.py 52
6900         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6901         self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6902         self.shareincomingname = self.sharedirincomingname.child('0')
6903-        self.sharefilename = self.sharedirfinalname.child('0')
6904-        self.sharefilecontents = StringIO(share_data)
6905+        self.sharefinalname = self.sharedirfinalname.child('0')
6906 
6907hunk ./src/allmydata/test/test_backends.py 54
6908-        self.mocklistdirp = mock.patch('os.listdir')
6909-        mocklistdir = self.mocklistdirp.__enter__()
6910-        mocklistdir.side_effect = self.call_listdir
6911+        # Make patcher, patch, and make effects for fs using functions.
6912+        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
6913+        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
6914+        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir' is called instead.
6915 
6916hunk ./src/allmydata/test/test_backends.py 59
6917-        self.mockmkdirp = mock.patch('os.mkdir')
6918-        mockmkdir = self.mockmkdirp.__enter__()
6919-        mockmkdir.side_effect = self.call_mkdir
6920+        #self.mockmkdirp = mock.patch('os.mkdir')
6921+        #mockmkdir = self.mockmkdirp.__enter__()
6922+        #mockmkdir.side_effect = self.call_mkdir
6923 
6924hunk ./src/allmydata/test/test_backends.py 63
6925-        self.mockisdirp = mock.patch('os.path.isdir')
6926+        self.mockisdirp = mock.patch('twisted.python.filepath.FilePath.isdir')
6927         mockisdir = self.mockisdirp.__enter__()
6928         mockisdir.side_effect = self.call_isdir
6929 
6930hunk ./src/allmydata/test/test_backends.py 67
6931-        self.mockopenp = mock.patch('__builtin__.open')
6932+        self.mockopenp = mock.patch('twisted.python.filepath.FilePath.open')
6933         mockopen = self.mockopenp.__enter__()
6934         mockopen.side_effect = self.call_open
6935 
6936hunk ./src/allmydata/test/test_backends.py 71
6937-        self.mockstatp = mock.patch('os.stat')
6938+        self.mockstatp = mock.patch('twisted.python.filepath.stat')
6939         mockstat = self.mockstatp.__enter__()
6940         mockstat.side_effect = self.call_stat
6941 
6942hunk ./src/allmydata/test/test_backends.py 91
6943         mocksetContent = self.mocksetContent.__enter__()
6944         mocksetContent.side_effect = self.call_setContent
6945 
6946+    #  The behavior of mocked filesystem using functions
6947     def call_open(self, fname, mode):
6948         assert isinstance(fname, basestring), fname
6949         fnamefp = FilePath(fname)
6950hunk ./src/allmydata/test/test_backends.py 109
6951             # use this information in this test in the future...
6952             return StringIO()
6953         elif fnamefp == self.shareincomingname:
6954-            print "repr(fnamefp): ", repr(fnamefp)
6955+            self.incomingsharefilecontents.closed = False
6956+            return self.incomingsharefilecontents
6957         else:
6958             # Anything else you open inside your subtree appears to be an
6959             # empty file.
6960hunk ./src/allmydata/test/test_backends.py 152
6961         fnamefp = FilePath(fname)
6962         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
6963                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
6964-
6965         msg("%s.call_stat(%s)" % (self, fname,))
6966         mstat = MockStat()
6967         mstat.st_mode = 16893 # a directory
6968hunk ./src/allmydata/test/test_backends.py 166
6969         return False
6970 
6971     def call_setContent(self, inputstring):
6972-        # XXX Good enough for expirer, not sure about elsewhere...
6973-        return True
6974-
6975+        self.incomingsharefilecontents = StringIO(inputstring)
6976 
6977     def tearDown(self):
6978         msg( "%s.tearDown()" % (self,))
6979}
6980[jacp19
6981wilcoxjg@gmail.com**20110727080553
6982 Ignore-this: 851b1ebdeeee712abfbda557af142726
6983] {
6984hunk ./src/allmydata/storage/backends/das/core.py 1
6985-import os, re, weakref, struct, time, stat
6986+import re, weakref, struct, time, stat
6987 from twisted.application import service
6988 from twisted.python.filepath import UnlistableError
6989hunk ./src/allmydata/storage/backends/das/core.py 4
6990+from twisted.python import filepath
6991 from twisted.python.filepath import FilePath
6992 from zope.interface import implements
6993 
6994hunk ./src/allmydata/storage/backends/das/core.py 50
6995         self._setup_lease_checkerf(expiration_policy)
6996 
6997     def _setup_storage(self, storedir, readonly, reserved_space):
6998-        precondition(isinstance(storedir, FilePath)) 
6999+        precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7000         self.storedir = storedir
7001         self.readonly = readonly
7002         self.reserved_space = int(reserved_space)
7003hunk ./src/allmydata/storage/backends/das/core.py 195
7004         self._data_offset = 0xc
7005 
7006     def close(self):
7007-        fileutil.make_dirs(os.path.dirname(self.finalhome))
7008-        fileutil.rename(self.incominghome, self.finalhome)
7009+        fileutil.fp_make_dirs(self.finalhome.parent())
7010+        self.incominghome.moveTo(self.finalhome)
7011         try:
7012             # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
7013             # We try to delete the parent (.../ab/abcde) to avoid leaving
7014hunk ./src/allmydata/storage/backends/das/core.py 209
7015             # their children to know when they should do the rmdir. This
7016             # approach is simpler, but relies on os.rmdir refusing to delete
7017             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
7018-            #print "os.path.dirname(self.incominghome): "
7019-            #print os.path.dirname(self.incominghome)
7020-            os.rmdir(os.path.dirname(self.incominghome))
7021+            fileutil.fp_rmdir_if_empty(self.incominghome.parent())
7022             # we also delete the grandparent (prefix) directory, .../ab ,
7023             # again to avoid leaving directories lying around. This might
7024             # fail if there is another bucket open that shares a prefix (like
7025hunk ./src/allmydata/storage/backends/das/core.py 214
7026             # ab/abfff).
7027-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
7028+            fileutil.fp_rmdir_if_empty(self.incominghome.parent().parent())
7029             # we leave the great-grandparent (incoming/) directory in place.
7030         except EnvironmentError:
7031             # ignore the "can't rmdir because the directory is not empty"
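
close() above now leans on fileutil.fp_make_dirs() and fileutil.fp_rmdir_if_empty(). If those helpers still need to be written, they would presumably be FilePath-flavoured counterparts of make_dirs()/os.rmdir(), something like the following sketch (the implementations are assumptions; only the names come from the patch):

    import os, errno

    def fp_make_dirs(dirfp):
        # Like fileutil.make_dirs(), but for a FilePath: create the directory
        # and tolerate it already existing.
        try:
            dirfp.makedirs()
        except OSError, e:
            if e.errno != errno.EEXIST:
                raise

    def fp_rmdir_if_empty(dirfp):
        # Remove the directory only if it is empty, relying on os.rmdir()
        # refusing to delete a non-empty one (as the comment above notes).
        try:
            os.rmdir(dirfp.path)
        except OSError, e:
            if e.errno not in (errno.ENOTEMPTY, errno.EEXIST):
                raise
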
7032hunk ./src/allmydata/storage/backends/das/core.py 224
7033         pass
7034         
7035     def stat(self):
7036-        return os.stat(self.finalhome)[stat.ST_SIZE]
7037-        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
7038+        return filepath.stat(self.finalhome)[stat.ST_SIZE]
7039 
7040     def get_shnum(self):
7041         return self.shnum
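
In stat() above, filepath.stat is presumably os.stat as re-imported by twisted.python.filepath, and it expects a path string rather than a FilePath, so passing self.finalhome directly would most likely raise. Since finalhome is now a FilePath, the more direct spelling is probably its own getsize() accessor; a sketch with an illustrative function name:

    from twisted.python.filepath import FilePath

    def share_size(finalhome):
        # finalhome is a FilePath, so the size is available directly;
        # no os.stat()/stat.ST_SIZE indexing needed.
        return finalhome.getsize()

    # e.g. share_size(FilePath('/tmp/example-share'))  (path is illustrative)
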
7042hunk ./src/allmydata/storage/backends/das/core.py 230
7043 
7044     def unlink(self):
7045-        os.unlink(self.finalhome)
7046+        self.finalhome.remove()
7047 
7048     def read_share_data(self, offset, length):
7049         precondition(offset >= 0)
7050hunk ./src/allmydata/storage/backends/das/core.py 237
7051         # Reads beyond the end of the data are truncated. Reads that start
7052         # beyond the end of the data return an empty string.
7053         seekpos = self._data_offset+offset
7054-        fsize = os.path.getsize(self.finalhome)
7055+        fsize = self.finalhome.getsize()
7056         actuallength = max(0, min(length, fsize-seekpos))
7057         if actuallength == 0:
7058             return ""
7059hunk ./src/allmydata/storage/backends/das/core.py 241
7060-        f = open(self.finalhome, 'rb')
7061-        f.seek(seekpos)
7062-        return f.read(actuallength)
7063+        try:
7064+            fh = open(self.finalhome, 'rb')
7065+            fh.seek(seekpos)
7066+            sharedata = fh.read(actuallength)
7067+        finally:
7068+            fh.close()
7069+        return sharedata
7070 
7071     def write_share_data(self, offset, data):
7072         length = len(data)
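
read_share_data() above still hands self.finalhome (now a FilePath) to the builtin open(), which expects a string, and binds fh inside the try block, so a failed open would leave the finally clause with an unbound name. A hedged sketch of the same read using the FilePath API, with an illustrative function name:

    from twisted.python.filepath import FilePath

    def read_range(finalhome, seekpos, actuallength):
        # FilePath.open() opens the underlying file in binary mode; open it
        # before entering try/finally so fh is always bound when we close it.
        fh = finalhome.open()
        try:
            fh.seek(seekpos)
            return fh.read(actuallength)
        finally:
            fh.close()
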
7073hunk ./src/allmydata/storage/backends/das/core.py 264
7074     def _write_lease_record(self, f, lease_number, lease_info):
7075         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7076         fh = f.open()
7077-        print fh
7078         try:
7079             fh.seek(offset)
7080             assert fh.tell() == offset
7081hunk ./src/allmydata/storage/backends/das/core.py 269
7082             fh.write(lease_info.to_immutable_data())
7083         finally:
7084+            print dir(fh)
7085             fh.close()
7086 
7087     def _read_num_leases(self, f):
7088hunk ./src/allmydata/storage/backends/das/core.py 273
7089-        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
7090+        fh = f.open() #XXX  Should be mocking FilePath.open()
7091         try:
7092             fh.seek(0x08)
7093             ro = fh.read(4)
7094hunk ./src/allmydata/storage/backends/das/core.py 280
7095             (num_leases,) = struct.unpack(">L", ro)
7096         finally:
7097             fh.close()
7098+            print "end of _read_num_leases"
7099         return num_leases
7100 
7101     def _write_num_leases(self, f, num_leases):
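
_read_num_leases() and _write_num_leases() above manipulate a single header field: a big-endian unsigned 32-bit lease count at offset 0x08 of the share file (the share data itself starts at _data_offset 0xc). A self-contained illustration of that layout using an in-memory buffer:

    import struct
    from StringIO import StringIO

    header = StringIO('\x00' * 0x0c)        # 12-byte header, all zeros
    header.seek(0x08)
    header.write(struct.pack(">L", 3))      # what _write_num_leases() stores
    header.seek(0x08)
    (num_leases,) = struct.unpack(">L", header.read(4))   # what _read_num_leases() reads
    assert num_leases == 3
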
7102hunk ./src/allmydata/storage/crawler.py 6
7103 from twisted.internet import reactor
7104 from twisted.application import service
7105 from allmydata.storage.common import si_b2a
7106-from allmydata.util import fileutil
7107 
7108 class TimeSliceExceeded(Exception):
7109     pass
7110hunk ./src/allmydata/storage/crawler.py 478
7111             old_cycle,buckets = self.state["storage-index-samples"][prefix]
7112             if old_cycle != cycle:
7113                 del self.state["storage-index-samples"][prefix]
7114-
7115hunk ./src/allmydata/test/test_backends.py 1
7116+import os
7117 from twisted.trial import unittest
7118 from twisted.python.filepath import FilePath
7119 from allmydata.util.log import msg
7120hunk ./src/allmydata/test/test_backends.py 9
7121 from allmydata.test.common_util import ReallyEqualMixin
7122 from allmydata.util.assertutil import _assert
7123 import mock
7124+from mock import Mock
7125 
7126 # This is the code that we're going to be testing.
7127 from allmydata.storage.server import StorageServer
7128hunk ./src/allmydata/test/test_backends.py 40
7129     def __init__(self):
7130         self.st_mode = None
7131 
7132+class MockFilePath:
7133+    def __init__(self, PathString):
7134+        self.PathName = PathString
7135+    def child(self, ChildString):
7136+        return MockFilePath(os.path.join(self.PathName, ChildString))
7137+    def parent(self):
7138+        return MockFilePath(os.path.dirname(self.PathName))
7139+    def makedirs(self):
7140+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7141+        pass
7142+    def isdir(self):
7143+        return True
7144+    def remove(self):
7145+        pass
7146+    def children(self):
7147+        return []
7148+    def exists(self):
7149+        return False
7150+    def setContent(self, ContentString):
7151+        self.File = MockFile(ContentString)
7152+    def open(self):
7153+        return self.File.open()
7154+
7155+class MockFile:
7156+    def __init__(self, ContentString):
7157+        self.Contents = ContentString
7158+    def open(self):
7159+        return self
7160+    def close(self):
7161+        pass
7162+    def seek(self, position):
7163+        pass
7164+    def read(self, amount):
7165+        pass
7166+
7167+
7168+class MockBCC:
7169+    def setServiceParent(self, Parent):
7170+        pass
7171+
7172+class MockLCC:
7173+    def setServiceParent(self, Parent):
7174+        pass
7175+
7176 class MockFiles(unittest.TestCase):
7177     """ I simulate a filesystem that the code under test can use. I flag the
7178     code under test if it reads or writes outside of its prescribed
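
MockFile above swallows everything: read() returns None, so any caller that unpacks or compares what it read (for example _read_num_leases(), or the omnibus write/read assertions commented out below) will trip over it. A hedged sketch of a slightly fuller MockFile that remembers its contents and honours seek()/read(); this is an illustration, not part of the patch:

    class MockFile(object):
        def __init__(self, contentstring):
            self.contents = contentstring
            self.pos = 0
        def open(self):
            self.pos = 0
            return self
        def seek(self, position):
            self.pos = position
        def read(self, amount=None):
            # Return real bytes from the stored contents, honouring the
            # current position, instead of silently returning None.
            if amount is None:
                data = self.contents[self.pos:]
            else:
                data = self.contents[self.pos:self.pos + amount]
            self.pos += len(data)
            return data
        def close(self):
            pass
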
7179hunk ./src/allmydata/test/test_backends.py 91
7180     implementation of DAS backend needs. """
7181 
7182     def setUp(self):
7183+        # Make patcher, patch, and make effects for fs using functions.
7184         msg( "%s.setUp()" % (self,))
7185hunk ./src/allmydata/test/test_backends.py 93
7186-        self.storedir = FilePath('teststoredir')
7187+        self.storedir = MockFilePath('teststoredir')
7188         self.basedir = self.storedir.child('shares')
7189         self.baseincdir = self.basedir.child('incoming')
7190         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
7191hunk ./src/allmydata/test/test_backends.py 101
7192         self.shareincomingname = self.sharedirincomingname.child('0')
7193         self.sharefinalname = self.sharedirfinalname.child('0')
7194 
7195-        # Make patcher, patch, and make effects for fs using functions.
7196-        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
7197-        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
7198-        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
7199-
7200-        #self.mockmkdirp = mock.patch('os.mkdir')
7201-        #mockmkdir = self.mockmkdirp.__enter__()
7202-        #mockmkdir.side_effect = self.call_mkdir
7203-
7204-        self.mockisdirp = mock.patch('FilePath.isdir')
7205-        mockisdir = self.mockisdirp.__enter__()
7206-        mockisdir.side_effect = self.call_isdir
7207+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
7208+        FakePath = self.FilePathFake.__enter__()
7209 
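
The FilePathFake patch above swaps in MockFilePath with new= rather than wiring up side_effect handlers one call at a time. The key detail is the target string: it names the binding inside the module under test, so only core.py sees the fake. A sketch of the mechanics (assumes the das backend from this bundle is importable):

    import mock

    class MockFilePath(object):
        # minimal stand-in, as in the test module above
        def __init__(self, pathstring):
            self.path = pathstring

    # core.py does "from twisted.python.filepath import FilePath", so patching
    # 'allmydata.storage.backends.das.core.FilePath' rebinds the name that
    # core.py actually uses; twisted.python.filepath itself is left untouched.
    fake = mock.patch('allmydata.storage.backends.das.core.FilePath',
                      new=MockFilePath)
    fake.__enter__()
    try:
        from allmydata.storage.backends.das import core
        assert core.FilePath is MockFilePath
    finally:
        fake.__exit__()
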
7210hunk ./src/allmydata/test/test_backends.py 104
7211-        self.mockopenp = mock.patch('FilePath.open')
7212-        mockopen = self.mockopenp.__enter__()
7213-        mockopen.side_effect = self.call_open
7214+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
7215+        FakeBCC = self.BCountingCrawler.__enter__()
7216+        FakeBCC.side_effect = self.call_FakeBCC
7217 
7218hunk ./src/allmydata/test/test_backends.py 108
7219-        self.mockstatp = mock.patch('filepath.stat')
7220-        mockstat = self.mockstatp.__enter__()
7221-        mockstat.side_effect = self.call_stat
7222+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
7223+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
7224+        FakeLCC.side_effect = self.call_FakeLCC
7225 
7226hunk ./src/allmydata/test/test_backends.py 112
7227-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
7228-        mockfpstat = self.mockfpstatp.__enter__()
7229-        mockfpstat.side_effect = self.call_stat
7230+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7231+        GetSpace = self.get_available_space.__enter__()
7232+        GetSpace.side_effect = self.call_get_available_space
7233 
7234hunk ./src/allmydata/test/test_backends.py 116
7235-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7236-        mockget_available_space = self.mockget_available_space.__enter__()
7237-        mockget_available_space.side_effect = self.call_get_available_space
7238+    def call_FakeBCC(self, StateFile):
7239+        return MockBCC()
7240 
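
The crawler patches above use side_effect as a factory: when side_effect is a callable, the mock invokes it with the constructor's arguments and hands its return value back to the caller, so server startup gets inert MockBCC/MockLCC stubs whose setServiceParent() does nothing. A self-contained illustration (the target here is a bare MagicMock rather than a patched name):

    import mock

    class MockBCC(object):
        def setServiceParent(self, parent):
            pass

    def call_FakeBCC(statefile):
        # Factory: whatever this returns is what the code under test receives
        # when it "constructs" a BucketCountingCrawler.
        return MockBCC()

    fake_ctor = mock.MagicMock(side_effect=call_FakeBCC)
    crawler = fake_ctor('bucket_counter.state')
    assert isinstance(crawler, MockBCC)
    crawler.setServiceParent(None)   # harmless no-op
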
7241hunk ./src/allmydata/test/test_backends.py 119
7242-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
7243-        mockfpexists = self.mockfpexists.__enter__()
7244-        mockfpexists.side_effect = self.call_exists
7245-
7246-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
7247-        mocksetContent = self.mocksetContent.__enter__()
7248-        mocksetContent.side_effect = self.call_setContent
7249-
7250-    #  The behavior of mocked filesystem using functions
7251-    def call_open(self, fname, mode):
7252-        assert isinstance(fname, basestring), fname
7253-        fnamefp = FilePath(fname)
7254-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7255-                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
7256-
7257-        if fnamefp == self.storedir.child('bucket_counter.state'):
7258-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
7259-        elif fnamefp == self.storedir.child('lease_checker.state'):
7260-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
7261-        elif fnamefp == self.storedir.child('lease_checker.history'):
7262-            # This is separated out from the else clause below just because
7263-            # we know this particular file is going to be used by the
7264-            # current implementation of DAS backend, and we might want to
7265-            # use this information in this test in the future...
7266-            return StringIO()
7267-        elif fnamefp == self.shareincomingname:
7268-            self.incomingsharefilecontents.closed = False
7269-            return self.incomingsharefilecontents
7270-        else:
7271-            # Anything else you open inside your subtree appears to be an
7272-            # empty file.
7273-            return StringIO()
7274-
7275-    def call_isdir(self, fname):
7276-        fnamefp = FilePath(fname)
7277-        return fnamefp.isdir()
7278-
7279-        self.failUnless(self.storedir == self or self.storedir in self.parents(),
7280-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
7281-
7282-        # The first two cases are separate from the else clause below just
7283-        # because we know that the current implementation of the DAS backend
7284-        # inspects these two directories and we might want to make use of
7285-        # that information in the tests in the future...
7286-        if self == self.storedir.child('shares'):
7287-            return True
7288-        elif self == self.storedir.child('shares').child('incoming'):
7289-            return True
7290-        else:
7291-            # Anything else you open inside your subtree appears to be a
7292-            # directory.
7293-            return True
7294-
7295-    def call_mkdir(self, fname, mode):
7296-        fnamefp = FilePath(fname)
7297-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7298-                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
7299-        self.failUnlessEqual(0777, mode)
7300+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
7301+        return MockLCC()
7302 
7303     def call_listdir(self, fname):
7304         fnamefp = FilePath(fname)
7305hunk ./src/allmydata/test/test_backends.py 150
7306 
7307     def tearDown(self):
7308         msg( "%s.tearDown()" % (self,))
7309-        self.mocksetContent.__exit__()
7310-        self.mockfpexists.__exit__()
7311-        self.mockget_available_space.__exit__()
7312-        self.mockfpstatp.__exit__()
7313-        self.mockstatp.__exit__()
7314-        self.mockopenp.__exit__()
7315-        self.mockisdirp.__exit__()
7316-        self.mockmkdirp.__exit__()
7317-        self.mocklistdirp.__exit__()
7318-
7319+        FakePath = self.FilePathFake.__exit__()       
7320+        FakeBCC = self.BCountingCrawler.__exit__()
7321 
7322 expiration_policy = {'enabled' : False,
7323                      'mode' : 'age',
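
tearDown() above exits the FilePathFake and BCountingCrawler patchers but not the LeaseCheckingCrawler or get_available_space ones, so those stay patched after each test, and none of them are undone at all if setUp() itself fails partway. One hedged alternative, assuming trial's addCleanup() (present in the Twisted versions this tree now requires): register each patcher's __exit__ as soon as it is entered, and drop the per-patcher bookkeeping in tearDown. The mixin name and method are illustrative:

    import mock
    from twisted.trial import unittest

    class CleanupPatchingMixin(unittest.TestCase):
        def patch_with_cleanup(self, target, **kwargs):
            # Enter the patcher now; guarantee it is exited when the test
            # finishes, whether it passes, fails, or errors out in setUp.
            patcher = mock.patch(target, **kwargs)
            mocked = patcher.__enter__()
            self.addCleanup(patcher.__exit__)
            return mocked

    # e.g., in MockFiles.setUp():
    #     FakePath = self.patch_with_cleanup(
    #         'allmydata.storage.backends.das.core.FilePath', new=MockFilePath)
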
7324hunk ./src/allmydata/test/test_backends.py 222
7325         # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7326         
7327         # Attempt to create a second share writer with the same sharenum.
7328-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7329+        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7330 
7331         # Show that no sharewriter results from a remote_allocate_buckets
7332         # with the same si and sharenum, until BucketWriter.remote_close()
7333hunk ./src/allmydata/test/test_backends.py 227
7334         # has been called.
7335-        self.failIf(bsa)
7336+        # self.failIf(bsa)
7337 
7338         # Test allocated size.
7339hunk ./src/allmydata/test/test_backends.py 230
7340-        spaceint = self.ss.allocated_size()
7341-        self.failUnlessReallyEqual(spaceint, 1)
7342+        # spaceint = self.ss.allocated_size()
7343+        # self.failUnlessReallyEqual(spaceint, 1)
7344 
7345         # Write 'a' to shnum 0. Only tested together with close and read.
7346hunk ./src/allmydata/test/test_backends.py 234
7347-        bs[0].remote_write(0, 'a')
7348+        # bs[0].remote_write(0, 'a')
7349         
7350         # Preclose: Inspect final, failUnless nothing there.
7351hunk ./src/allmydata/test/test_backends.py 237
7352-        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7353-        bs[0].remote_close()
7354+        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7355+        # bs[0].remote_close()
7356 
7357         # Postclose: (Omnibus) failUnless written data is in final.
7358hunk ./src/allmydata/test/test_backends.py 241
7359-        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7360-        self.failUnlessReallyEqual(len(sharesinfinal), 1)
7361-        contents = sharesinfinal[0].read_share_data(0, 73)
7362-        self.failUnlessReallyEqual(contents, client_data)
7363+        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7364+        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
7365+        # contents = sharesinfinal[0].read_share_data(0, 73)
7366+        # self.failUnlessReallyEqual(contents, client_data)
7367 
7368         # Exercise the case that the share we're asking to allocate is
7369         # already (completely) uploaded.
7370hunk ./src/allmydata/test/test_backends.py 248
7371-        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7372+        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7373         
7374     @mock.patch('time.time')
7375     @mock.patch('allmydata.util.fileutil.get_available_space')
7376}
7377
7378Context:
7379
7380[Update the dependency on zope.interface to fix an incompatibility between Nevow and zope.interface 3.6.4. fixes #1435
7381david-sarah@jacaranda.org**20110721234941
7382 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
7383]
7384[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
7385david-sarah@jacaranda.org**20110722000320
7386 Ignore-this: 55cd558b791526113db3f83c00ec328a
7387]
7388[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
7389david-sarah@jacaranda.org**20110721233658
7390 Ignore-this: 81b41745477163c9b39c0b59db91cc62
7391]
7392[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
7393david-sarah@jacaranda.org**20110722035402
7394 Ignore-this: 5d03f544c4154f088e26c7107494bf39
7395]
7396[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
7397david-sarah@jacaranda.org**20110722024907
7398 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
7399]
7400[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
7401david-sarah@jacaranda.org**20110718005949
7402 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
7403]
7404[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
7405david-sarah@jacaranda.org**20110717194315
7406 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
7407]
7408[README.txt: say that quickstart.rst is in the docs directory.
7409david-sarah@jacaranda.org**20110717192400
7410 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
7411]
7412[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
7413zooko@zooko.com**20110717114226
7414 Ignore-this: df222120d41447ce4102616921626c82
7415 fixes #1383
7416]
7417[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
7418david-sarah@jacaranda.org**20110716181813
7419 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
7420]
7421[docs: add missing link in NEWS.rst
7422zooko@zooko.com**20110712153307
7423 Ignore-this: be7b7eb81c03700b739daa1027d72b35
7424]
7425[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
7426zooko@zooko.com**20110712153229
7427 Ignore-this: 723c4f9e2211027c79d711715d972c5
7428 Also remove a couple of vestigial references to figleaf, which is long gone.
7429 fixes #1409 (remove contrib/fuse)
7430]
7431[add Protovis.js-based download-status timeline visualization
7432Brian Warner <warner@lothar.com>**20110629222606
7433 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
7434 
7435 provide status overlap info on the webapi t=json output, add decode/decrypt
7436 rate tooltips, add zoomin/zoomout buttons
7437]
7438[add more download-status data, fix tests
7439Brian Warner <warner@lothar.com>**20110629222555
7440 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
7441]
7442[prepare for viz: improve DownloadStatus events
7443Brian Warner <warner@lothar.com>**20110629222542
7444 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
7445 
7446 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
7447]
7448[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
7449zooko@zooko.com**20110629185711
7450 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
7451]
7452[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
7453david-sarah@jacaranda.org**20110130235809
7454 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
7455]
7456[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
7457david-sarah@jacaranda.org**20110626054124
7458 Ignore-this: abb864427a1b91bd10d5132b4589fd90
7459]
7460[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
7461david-sarah@jacaranda.org**20110623205528
7462 Ignore-this: c63e23146c39195de52fb17c7c49b2da
7463]
7464[Rename test_package_initialization.py to (much shorter) test_import.py .
7465Brian Warner <warner@lothar.com>**20110611190234
7466 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
7467 
7468 The former name was making my 'ls' listings hard to read, by forcing them
7469 down to just two columns.
7470]
7471[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
7472zooko@zooko.com**20110611163741
7473 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
7474 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
7475 fixes #1412
7476]
7477[wui: right-align the size column in the WUI
7478zooko@zooko.com**20110611153758
7479 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
7480 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
7481 fixes #1412
7482]
7483[docs: three minor fixes
7484zooko@zooko.com**20110610121656
7485 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
7486 CREDITS for arc for stats tweak
7487 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
7488 English usage tweak
7489]
7490[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
7491david-sarah@jacaranda.org**20110609223719
7492 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
7493]
7494[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
7495wilcoxjg@gmail.com**20110527120135
7496 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
7497 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
7498 NEWS.rst, stats.py: documentation of change to get_latencies
7499 stats.rst: now documents percentile modification in get_latencies
7500 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
7501 fixes #1392
7502]
7503[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
7504david-sarah@jacaranda.org**20110517011214
7505 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
7506]
7507[docs: convert NEWS to NEWS.rst and change all references to it.
7508david-sarah@jacaranda.org**20110517010255
7509 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
7510]
7511[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
7512david-sarah@jacaranda.org**20110512140559
7513 Ignore-this: 784548fc5367fac5450df1c46890876d
7514]
7515[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
7516david-sarah@jacaranda.org**20110130164923
7517 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
7518]
7519[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
7520zooko@zooko.com**20110128142006
7521 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
7522 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
7523]
7524[M-x whitespace-cleanup
7525zooko@zooko.com**20110510193653
7526 Ignore-this: dea02f831298c0f65ad096960e7df5c7
7527]
7528[docs: fix typo in running.rst, thanks to arch_o_median
7529zooko@zooko.com**20110510193633
7530 Ignore-this: ca06de166a46abbc61140513918e79e8
7531]
7532[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
7533david-sarah@jacaranda.org**20110204204902
7534 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
7535]
7536[relnotes.txt: forseeable -> foreseeable. refs #1342
7537david-sarah@jacaranda.org**20110204204116
7538 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
7539]
7540[replace remaining .html docs with .rst docs
7541zooko@zooko.com**20110510191650
7542 Ignore-this: d557d960a986d4ac8216d1677d236399
7543 Remove install.html (long since deprecated).
7544 Also replace some obsolete references to install.html with references to quickstart.rst.
7545 Fix some broken internal references within docs/historical/historical_known_issues.txt.
7546 Thanks to Ravi Pinjala and Patrick McDonald.
7547 refs #1227
7548]
7549[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
7550zooko@zooko.com**20110428055232
7551 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
7552]
7553[munin tahoe_files plugin: fix incorrect file count
7554francois@ctrlaltdel.ch**20110428055312
7555 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
7556 fixes #1391
7557]
7558[corrected "k must never be smaller than N" to "k must never be greater than N"
7559secorp@allmydata.org**20110425010308
7560 Ignore-this: 233129505d6c70860087f22541805eac
7561]
7562[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
7563david-sarah@jacaranda.org**20110411190738
7564 Ignore-this: 7847d26bc117c328c679f08a7baee519
7565]
7566[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
7567david-sarah@jacaranda.org**20110410155844
7568 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
7569]
7570[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
7571david-sarah@jacaranda.org**20110410155705
7572 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
7573]
7574[remove unused variable detected by pyflakes
7575zooko@zooko.com**20110407172231
7576 Ignore-this: 7344652d5e0720af822070d91f03daf9
7577]
7578[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
7579david-sarah@jacaranda.org**20110401202750
7580 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
7581]
7582[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
7583Brian Warner <warner@lothar.com>**20110325232511
7584 Ignore-this: d5307faa6900f143193bfbe14e0f01a
7585]
7586[control.py: remove all uses of s.get_serverid()
7587warner@lothar.com**20110227011203
7588 Ignore-this: f80a787953bd7fa3d40e828bde00e855
7589]
7590[web: remove some uses of s.get_serverid(), not all
7591warner@lothar.com**20110227011159
7592 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
7593]
7594[immutable/downloader/fetcher.py: remove all get_serverid() calls
7595warner@lothar.com**20110227011156
7596 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
7597]
7598[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
7599warner@lothar.com**20110227011153
7600 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
7601 
7602 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
7603 _shares_from_server dict was being popped incorrectly (using shnum as the
7604 index instead of serverid). I'm still thinking through the consequences of
7605 this bug. It was probably benign and really hard to detect. I think it would
7606 cause us to incorrectly believe that we're pulling too many shares from a
7607 server, and thus prefer a different server rather than asking for a second
7608 share from the first server. The diversity code is intended to spread out the
7609 number of shares simultaneously being requested from each server, but with
7610 this bug, it might be spreading out the total number of shares requested at
7611 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
7612 segment, so the effect doesn't last very long).
7613]
7614[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
7615warner@lothar.com**20110227011150
7616 Ignore-this: d8d56dd8e7b280792b40105e13664554
7617 
7618 test_download.py: create+check MyShare instances better, make sure they share
7619 Server objects, now that finder.py cares
7620]
7621[immutable/downloader/finder.py: reduce use of get_serverid(), one left
7622warner@lothar.com**20110227011146
7623 Ignore-this: 5785be173b491ae8a78faf5142892020
7624]
7625[immutable/offloaded.py: reduce use of get_serverid() a bit more
7626warner@lothar.com**20110227011142
7627 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
7628]
7629[immutable/upload.py: reduce use of get_serverid()
7630warner@lothar.com**20110227011138
7631 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
7632]
7633[immutable/checker.py: remove some uses of s.get_serverid(), not all
7634warner@lothar.com**20110227011134
7635 Ignore-this: e480a37efa9e94e8016d826c492f626e
7636]
7637[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
7638warner@lothar.com**20110227011132
7639 Ignore-this: 6078279ddf42b179996a4b53bee8c421
7640 MockIServer stubs
7641]
7642[upload.py: rearrange _make_trackers a bit, no behavior changes
7643warner@lothar.com**20110227011128
7644 Ignore-this: 296d4819e2af452b107177aef6ebb40f
7645]
7646[happinessutil.py: finally rename merge_peers to merge_servers
7647warner@lothar.com**20110227011124
7648 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
7649]
7650[test_upload.py: factor out FakeServerTracker
7651warner@lothar.com**20110227011120
7652 Ignore-this: 6c182cba90e908221099472cc159325b
7653]
7654[test_upload.py: server-vs-tracker cleanup
7655warner@lothar.com**20110227011115
7656 Ignore-this: 2915133be1a3ba456e8603885437e03
7657]
7658[happinessutil.py: server-vs-tracker cleanup
7659warner@lothar.com**20110227011111
7660 Ignore-this: b856c84033562d7d718cae7cb01085a9
7661]
7662[upload.py: more tracker-vs-server cleanup
7663warner@lothar.com**20110227011107
7664 Ignore-this: bb75ed2afef55e47c085b35def2de315
7665]
7666[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
7667warner@lothar.com**20110227011103
7668 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
7669]
7670[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
7671warner@lothar.com**20110227011100
7672 Ignore-this: 7ea858755cbe5896ac212a925840fe68
7673 
7674 No behavioral changes, just updating variable/method names and log messages.
7675 The effects outside these three files should be minimal: some exception
7676 messages changed (to say "server" instead of "peer"), and some internal class
7677 names were changed. A few things still use "peer" to minimize external
7678 changes, like UploadResults.timings["peer_selection"] and
7679 happinessutil.merge_peers, which can be changed later.
7680]
7681[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
7682warner@lothar.com**20110227011056
7683 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
7684]
7685[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
7686warner@lothar.com**20110227011051
7687 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
7688]
7689[test: increase timeout on a network test because Francois's ARM machine hit that timeout
7690zooko@zooko.com**20110317165909
7691 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
7692 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
7693]
7694[docs/configuration.rst: add a "Frontend Configuration" section
7695Brian Warner <warner@lothar.com>**20110222014323
7696 Ignore-this: 657018aa501fe4f0efef9851628444ca
7697 
7698 this points to docs/frontends/*.rst, which were previously underlinked
7699]
7700[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
7701"Brian Warner <warner@lothar.com>"**20110221061544
7702 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
7703]
7704[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
7705david-sarah@jacaranda.org**20110221015817
7706 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
7707]
7708[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
7709david-sarah@jacaranda.org**20110221020125
7710 Ignore-this: b0744ed58f161bf188e037bad077fc48
7711]
7712[Refactor StorageFarmBroker handling of servers
7713Brian Warner <warner@lothar.com>**20110221015804
7714 Ignore-this: 842144ed92f5717699b8f580eab32a51
7715 
7716 Pass around IServer instance instead of (peerid, rref) tuple. Replace
7717 "descriptor" with "server". Other replacements:
7718 
7719  get_all_servers -> get_connected_servers/get_known_servers
7720  get_servers_for_index -> get_servers_for_psi (now returns IServers)
7721 
7722 This change still needs to be pushed further down: lots of code is now
7723 getting the IServer and then distributing (peerid, rref) internally.
7724 Instead, it ought to distribute the IServer internally and delay
7725 extracting a serverid or rref until the last moment.
7726 
7727 no_network.py was updated to retain parallelism.
7728]
7729[TAG allmydata-tahoe-1.8.2
7730warner@lothar.com**20110131020101]
7731Patch bundle hash:
77322ccb93cab3f1135f177961223f1e36971ccd4dfc