Ticket #999: JACP20_Zancas20110801.darcs.patch

File JACP20_Zancas20110801.darcs.patch, 405.1 KB (added by Zancas at 2011-08-01T09:47:05Z)

uggg... bugs...

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
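  As an illustrative condensation only (the full test appears in the patch body below, and the class, helper, and argument names here are taken from it), the mocking pattern looks roughly like this:

      import mock
      from StringIO import StringIO
      from twisted.trial import unittest

      from allmydata.storage.server import StorageServer

      class TestServerConstruction(unittest.TestCase):
          @mock.patch('__builtin__.open')
          def test_create_server(self, mockopen):
              # Intercept every open() so the code under test never touches a real file.
              def call_open(fname, mode):
                  if fname.endswith('.state'):
                      # pretend the crawler state files don't exist yet
                      raise IOError(2, "No such file or directory: '%s'" % fname)
                  return StringIO()
              mockopen.side_effect = call_open
              StorageServer('testdir', 'testnodeidxxxxxxxxxx')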

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
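    To make the idea concrete, here is a condensed sketch of that mock-like object, based on the NullBackend/NullBucketWriter classes this patch series adds to server.py and immutable.py (the comments are illustrative; see the hunks below for the recorded code):

        from allmydata.storage.server import Backend
        from allmydata.storage.immutable import NullBucketWriter

        class NullBackend(Backend):
            # Stores nothing and reports "no limit", so the server's space
            # accounting can be exercised without touching a filesystem.
            def get_available_space(self):
                return None   # None means unlimited/unknown, so allocation is never refused
            def get_bucket_shares(self, storage_index):
                return set()  # never holds any shares
            def get_share(self, storage_index, sharenum):
                return None
            def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
                return NullBucketWriter()  # a writer whose remote_write() discards the data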

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests, work done in pair-programming with Zancas

Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
  * work in progress intended to be unrecorded and never committed to trunk
  switch from os.path.join to filepath
  incomplete refactoring of common "stay in your subtree" tester code into a superclass


Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
  In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
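  A minimal before/after sketch of the second change (illustrative only; 'teststoredir' is just a placeholder name, and the FilePath calls shown are the standard twisted.python.filepath API rather than code from this patch):

      import os.path
      from twisted.python.filepath import FilePath

      storedir = 'teststoredir'

      # Standard-library style: paths are strings glued together by hand.
      incomingdir = os.path.join(storedir, 'shares', 'incoming')

      # FilePath style: a path is an object, so children, existence checks and
      # directory creation hang off the path itself.
      incomingfp = FilePath(storedir).child('shares').child('incoming')
      if not incomingfp.exists():
          incomingfp.makedirs()
      incomingdir = incomingfp.path   # .path recovers the plain string when needed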

Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
  * another temporary patch for sharing work-in-progress
  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
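  For the get_available_space() change specifically, a hedged reconstruction of the before/after shape (the function names here are invented for illustration; the "after" body matches FSBackend.get_available_space as added earlier in this patch series, while the "before" is only an approximation of the code being removed):

      from allmydata.util import fileutil

      # Approximate shape of the misfeature: an OSError was swallowed and
      # reported as "0 bytes available", hiding real problems as a full disk.
      def get_available_space_old(storedir, reserved_space, readonly):
          if readonly:
              return 0
          try:
              return fileutil.get_available_space(storedir, reserved_space)
          except OSError:
              return 0

      # After the change: errors propagate to the caller instead.
      def get_available_space_new(storedir, reserved_space, readonly):
          if readonly:
              return 0
          return fileutil.get_available_space(storedir, reserved_space)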


Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
  * jacp16 or so

Fri Jul 22 14:32:44 MDT 2011  wilcoxjg@gmail.com
  * jacp17

Fri Jul 22 21:19:15 MDT 2011  wilcoxjg@gmail.com
  * jacp18

Sat Jul 23 21:42:30 MDT 2011  wilcoxjg@gmail.com
  * jacp19orso

Wed Jul 27 02:05:53 MDT 2011  wilcoxjg@gmail.com
  * jacp19

Thu Jul 28 01:25:14 MDT 2011  wilcoxjg@gmail.com
  * jacp20

Thu Jul 28 22:38:30 MDT 2011  wilcoxjg@gmail.com
  * Completed FilePath based test_write_and_read_share

Fri Jul 29 17:53:56 MDT 2011  wilcoxjg@gmail.com
  * TestServerAndFSBackend.test_read_old_share passes

Fri Jul 29 19:00:25 MDT 2011  wilcoxjg@gmail.com
  * TestServerAndFSBackend passes en total!

Fri Jul 29 21:41:59 MDT 2011  wilcoxjg@gmail.com
  * current test_backend tests pass

Mon Aug  1 03:46:03 MDT 2011  wilcoxjg@gmail.com
  * jacp21Zancas20110801.darcs.patch

New patches:

132[storage: new mocking tests of storage server read and write
133wilcoxjg@gmail.com**20110325203514
134 Ignore-this: df65c3c4f061dd1516f88662023fdb41
135 There are already tests of read and functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
136] {
137addfile ./src/allmydata/test/test_server.py
138hunk ./src/allmydata/test/test_server.py 1
139+from twisted.trial import unittest
140+
141+from StringIO import StringIO
142+
143+from allmydata.test.common_util import ReallyEqualMixin
144+
145+import mock
146+
147+# This is the code that we're going to be testing.
148+from allmydata.storage.server import StorageServer
149+
150+# The following share file contents was generated with
151+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
152+# with share data == 'a'.
153+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
154+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
155+
156+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
157+
158+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
159+    @mock.patch('__builtin__.open')
160+    def test_create_server(self, mockopen):
161+        """ This tests whether a server instance can be constructed. """
162+
163+        def call_open(fname, mode):
164+            if fname == 'testdir/bucket_counter.state':
165+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
166+            elif fname == 'testdir/lease_checker.state':
167+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
168+            elif fname == 'testdir/lease_checker.history':
169+                return StringIO()
170+        mockopen.side_effect = call_open
171+
172+        # Now begin the test.
173+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
174+
175+        # You passed!
176+
177+class TestServer(unittest.TestCase, ReallyEqualMixin):
178+    @mock.patch('__builtin__.open')
179+    def setUp(self, mockopen):
180+        def call_open(fname, mode):
181+            if fname == 'testdir/bucket_counter.state':
182+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
183+            elif fname == 'testdir/lease_checker.state':
184+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
185+            elif fname == 'testdir/lease_checker.history':
186+                return StringIO()
187+        mockopen.side_effect = call_open
188+
189+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
190+
191+
192+    @mock.patch('time.time')
193+    @mock.patch('os.mkdir')
194+    @mock.patch('__builtin__.open')
195+    @mock.patch('os.listdir')
196+    @mock.patch('os.path.isdir')
197+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
198+        """Handle a report of corruption."""
199+
200+        def call_listdir(dirname):
201+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
202+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
203+
204+        mocklistdir.side_effect = call_listdir
205+
206+        class MockFile:
207+            def __init__(self):
208+                self.buffer = ''
209+                self.pos = 0
210+            def write(self, instring):
211+                begin = self.pos
212+                padlen = begin - len(self.buffer)
213+                if padlen > 0:
214+                    self.buffer += '\x00' * padlen
215+                end = self.pos + len(instring)
216+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
217+                self.pos = end
218+            def close(self):
219+                pass
220+            def seek(self, pos):
221+                self.pos = pos
222+            def read(self, numberbytes):
223+                return self.buffer[self.pos:self.pos+numberbytes]
224+            def tell(self):
225+                return self.pos
226+
227+        mocktime.return_value = 0
228+
229+        sharefile = MockFile()
230+        def call_open(fname, mode):
231+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
232+            return sharefile
233+
234+        mockopen.side_effect = call_open
235+        # Now begin the test.
236+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
237+        print bs
238+        bs[0].remote_write(0, 'a')
239+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
240+
241+
242+    @mock.patch('os.path.exists')
243+    @mock.patch('os.path.getsize')
244+    @mock.patch('__builtin__.open')
245+    @mock.patch('os.listdir')
246+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
247+        """ This tests whether the code correctly finds and reads
248+        shares written out by old (Tahoe-LAFS <= v1.8.2)
249+        servers. There is a similar test in test_download, but that one
250+        is from the perspective of the client and exercises a deeper
251+        stack of code. This one is for exercising just the
252+        StorageServer object. """
253+
254+        def call_listdir(dirname):
255+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
256+            return ['0']
257+
258+        mocklistdir.side_effect = call_listdir
259+
260+        def call_open(fname, mode):
261+            self.failUnlessReallyEqual(fname, sharefname)
262+            self.failUnless('r' in mode, mode)
263+            self.failUnless('b' in mode, mode)
264+
265+            return StringIO(share_file_data)
266+        mockopen.side_effect = call_open
267+
268+        datalen = len(share_file_data)
269+        def call_getsize(fname):
270+            self.failUnlessReallyEqual(fname, sharefname)
271+            return datalen
272+        mockgetsize.side_effect = call_getsize
273+
274+        def call_exists(fname):
275+            self.failUnlessReallyEqual(fname, sharefname)
276+            return True
277+        mockexists.side_effect = call_exists
278+
279+        # Now begin the test.
280+        bs = self.s.remote_get_buckets('teststorage_index')
281+
282+        self.failUnlessEqual(len(bs), 1)
283+        b = bs[0]
284+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
285+        # If you try to read past the end you get the as much data as is there.
286+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
287+        # If you start reading past the end of the file you get the empty string.
288+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
289}
290[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
291wilcoxjg@gmail.com**20110624202850
292 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
293 sloppy not for production
294] {
295move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
296hunk ./src/allmydata/storage/crawler.py 13
297     pass
298 
299 class ShareCrawler(service.MultiService):
300-    """A ShareCrawler subclass is attached to a StorageServer, and
301+    """A subcless of ShareCrawler is attached to a StorageServer, and
302     periodically walks all of its shares, processing each one in some
303     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
304     since large servers can easily have a terabyte of shares, in several
305hunk ./src/allmydata/storage/crawler.py 31
306     We assume that the normal upload/download/get_buckets traffic of a tahoe
307     grid will cause the prefixdir contents to be mostly cached in the kernel,
308     or that the number of buckets in each prefixdir will be small enough to
309-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
310+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
311     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
312     prefix. On this server, each prefixdir took 130ms-200ms to list the first
313     time, and 17ms to list the second time.
314hunk ./src/allmydata/storage/crawler.py 68
315     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
316     minimum_cycle_time = 300 # don't run a cycle faster than this
317 
318-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
319+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
320         service.MultiService.__init__(self)
321         if allowed_cpu_percentage is not None:
322             self.allowed_cpu_percentage = allowed_cpu_percentage
323hunk ./src/allmydata/storage/crawler.py 72
324-        self.server = server
325-        self.sharedir = server.sharedir
326-        self.statefile = statefile
327+        self.backend = backend
328         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
329                          for i in range(2**10)]
330         self.prefixes.sort()
331hunk ./src/allmydata/storage/crawler.py 446
332 
333     minimum_cycle_time = 60*60 # we don't need this more than once an hour
334 
335-    def __init__(self, server, statefile, num_sample_prefixes=1):
336-        ShareCrawler.__init__(self, server, statefile)
337+    def __init__(self, statefile, num_sample_prefixes=1):
338+        ShareCrawler.__init__(self, statefile)
339         self.num_sample_prefixes = num_sample_prefixes
340 
341     def add_initial_state(self):
342hunk ./src/allmydata/storage/expirer.py 15
343     removed.
344 
345     I collect statistics on the leases and make these available to a web
346-    status page, including::
347+    status page, including:
348 
349     Space recovered during this cycle-so-far:
350      actual (only if expiration_enabled=True):
351hunk ./src/allmydata/storage/expirer.py 51
352     slow_start = 360 # wait 6 minutes after startup
353     minimum_cycle_time = 12*60*60 # not more than twice per day
354 
355-    def __init__(self, server, statefile, historyfile,
356+    def __init__(self, statefile, historyfile,
357                  expiration_enabled, mode,
358                  override_lease_duration, # used if expiration_mode=="age"
359                  cutoff_date, # used if expiration_mode=="cutoff-date"
360hunk ./src/allmydata/storage/expirer.py 71
361         else:
362             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
363         self.sharetypes_to_expire = sharetypes
364-        ShareCrawler.__init__(self, server, statefile)
365+        ShareCrawler.__init__(self, statefile)
366 
367     def add_initial_state(self):
368         # we fill ["cycle-to-date"] here (even though they will be reset in
369hunk ./src/allmydata/storage/immutable.py 44
370     sharetype = "immutable"
371 
372     def __init__(self, filename, max_size=None, create=False):
373-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
374+        """ If max_size is not None then I won't allow more than
375+        max_size to be written to me. If create=True then max_size
376+        must not be None. """
377         precondition((max_size is not None) or (not create), max_size, create)
378         self.home = filename
379         self._max_size = max_size
380hunk ./src/allmydata/storage/immutable.py 87
381 
382     def read_share_data(self, offset, length):
383         precondition(offset >= 0)
384-        # reads beyond the end of the data are truncated. Reads that start
385-        # beyond the end of the data return an empty string. I wonder why
386-        # Python doesn't do the following computation for me?
387+        # Reads beyond the end of the data are truncated. Reads that start
388+        # beyond the end of the data return an empty string.
389         seekpos = self._data_offset+offset
390         fsize = os.path.getsize(self.home)
391         actuallength = max(0, min(length, fsize-seekpos))
392hunk ./src/allmydata/storage/immutable.py 198
393             space_freed += os.stat(self.home)[stat.ST_SIZE]
394             self.unlink()
395         return space_freed
396+class NullBucketWriter(Referenceable):
397+    implements(RIBucketWriter)
398 
399hunk ./src/allmydata/storage/immutable.py 201
400+    def remote_write(self, offset, data):
401+        return
402 
403 class BucketWriter(Referenceable):
404     implements(RIBucketWriter)
405hunk ./src/allmydata/storage/server.py 7
406 from twisted.application import service
407 
408 from zope.interface import implements
409-from allmydata.interfaces import RIStorageServer, IStatsProducer
410+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
411 from allmydata.util import fileutil, idlib, log, time_format
412 import allmydata # for __full_version__
413 
414hunk ./src/allmydata/storage/server.py 16
415 from allmydata.storage.lease import LeaseInfo
416 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
417      create_mutable_sharefile
418-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
419+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
420 from allmydata.storage.crawler import BucketCountingCrawler
421 from allmydata.storage.expirer import LeaseCheckingCrawler
422 
423hunk ./src/allmydata/storage/server.py 20
424+from zope.interface import implements
425+
426+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
427+# be started and stopped.
428+class Backend(service.MultiService):
429+    implements(IStatsProducer)
430+    def __init__(self):
431+        service.MultiService.__init__(self)
432+
433+    def get_bucket_shares(self):
434+        """XXX"""
435+        raise NotImplementedError
436+
437+    def get_share(self):
438+        """XXX"""
439+        raise NotImplementedError
440+
441+    def make_bucket_writer(self):
442+        """XXX"""
443+        raise NotImplementedError
444+
445+class NullBackend(Backend):
446+    def __init__(self):
447+        Backend.__init__(self)
448+
449+    def get_available_space(self):
450+        return None
451+
452+    def get_bucket_shares(self, storage_index):
453+        return set()
454+
455+    def get_share(self, storage_index, sharenum):
456+        return None
457+
458+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
459+        return NullBucketWriter()
460+
461+class FSBackend(Backend):
462+    def __init__(self, storedir, readonly=False, reserved_space=0):
463+        Backend.__init__(self)
464+
465+        self._setup_storage(storedir, readonly, reserved_space)
466+        self._setup_corruption_advisory()
467+        self._setup_bucket_counter()
468+        self._setup_lease_checkerf()
469+
470+    def _setup_storage(self, storedir, readonly, reserved_space):
471+        self.storedir = storedir
472+        self.readonly = readonly
473+        self.reserved_space = int(reserved_space)
474+        if self.reserved_space:
475+            if self.get_available_space() is None:
476+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
477+                        umid="0wZ27w", level=log.UNUSUAL)
478+
479+        self.sharedir = os.path.join(self.storedir, "shares")
480+        fileutil.make_dirs(self.sharedir)
481+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
482+        self._clean_incomplete()
483+
484+    def _clean_incomplete(self):
485+        fileutil.rm_dir(self.incomingdir)
486+        fileutil.make_dirs(self.incomingdir)
487+
488+    def _setup_corruption_advisory(self):
489+        # we don't actually create the corruption-advisory dir until necessary
490+        self.corruption_advisory_dir = os.path.join(self.storedir,
491+                                                    "corruption-advisories")
492+
493+    def _setup_bucket_counter(self):
494+        statefile = os.path.join(self.storedir, "bucket_counter.state")
495+        self.bucket_counter = BucketCountingCrawler(statefile)
496+        self.bucket_counter.setServiceParent(self)
497+
498+    def _setup_lease_checkerf(self):
499+        statefile = os.path.join(self.storedir, "lease_checker.state")
500+        historyfile = os.path.join(self.storedir, "lease_checker.history")
501+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
502+                                   expiration_enabled, expiration_mode,
503+                                   expiration_override_lease_duration,
504+                                   expiration_cutoff_date,
505+                                   expiration_sharetypes)
506+        self.lease_checker.setServiceParent(self)
507+
508+    def get_available_space(self):
509+        if self.readonly:
510+            return 0
511+        return fileutil.get_available_space(self.storedir, self.reserved_space)
512+
513+    def get_bucket_shares(self, storage_index):
514+        """Return a list of (shnum, pathname) tuples for files that hold
515+        shares for this storage_index. In each tuple, 'shnum' will always be
516+        the integer form of the last component of 'pathname'."""
517+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
518+        try:
519+            for f in os.listdir(storagedir):
520+                if NUM_RE.match(f):
521+                    filename = os.path.join(storagedir, f)
522+                    yield (int(f), filename)
523+        except OSError:
524+            # Commonly caused by there being no buckets at all.
525+            pass
526+
527 # storage/
528 # storage/shares/incoming
529 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
530hunk ./src/allmydata/storage/server.py 143
531     name = 'storage'
532     LeaseCheckerClass = LeaseCheckingCrawler
533 
534-    def __init__(self, storedir, nodeid, reserved_space=0,
535-                 discard_storage=False, readonly_storage=False,
536+    def __init__(self, nodeid, backend, reserved_space=0,
537+                 readonly_storage=False,
538                  stats_provider=None,
539                  expiration_enabled=False,
540                  expiration_mode="age",
541hunk ./src/allmydata/storage/server.py 155
542         assert isinstance(nodeid, str)
543         assert len(nodeid) == 20
544         self.my_nodeid = nodeid
545-        self.storedir = storedir
546-        sharedir = os.path.join(storedir, "shares")
547-        fileutil.make_dirs(sharedir)
548-        self.sharedir = sharedir
549-        # we don't actually create the corruption-advisory dir until necessary
550-        self.corruption_advisory_dir = os.path.join(storedir,
551-                                                    "corruption-advisories")
552-        self.reserved_space = int(reserved_space)
553-        self.no_storage = discard_storage
554-        self.readonly_storage = readonly_storage
555         self.stats_provider = stats_provider
556         if self.stats_provider:
557             self.stats_provider.register_producer(self)
558hunk ./src/allmydata/storage/server.py 158
559-        self.incomingdir = os.path.join(sharedir, 'incoming')
560-        self._clean_incomplete()
561-        fileutil.make_dirs(self.incomingdir)
562         self._active_writers = weakref.WeakKeyDictionary()
563hunk ./src/allmydata/storage/server.py 159
564+        self.backend = backend
565+        self.backend.setServiceParent(self)
566         log.msg("StorageServer created", facility="tahoe.storage")
567 
568hunk ./src/allmydata/storage/server.py 163
569-        if reserved_space:
570-            if self.get_available_space() is None:
571-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
572-                        umin="0wZ27w", level=log.UNUSUAL)
573-
574         self.latencies = {"allocate": [], # immutable
575                           "write": [],
576                           "close": [],
577hunk ./src/allmydata/storage/server.py 174
578                           "renew": [],
579                           "cancel": [],
580                           }
581-        self.add_bucket_counter()
582-
583-        statefile = os.path.join(self.storedir, "lease_checker.state")
584-        historyfile = os.path.join(self.storedir, "lease_checker.history")
585-        klass = self.LeaseCheckerClass
586-        self.lease_checker = klass(self, statefile, historyfile,
587-                                   expiration_enabled, expiration_mode,
588-                                   expiration_override_lease_duration,
589-                                   expiration_cutoff_date,
590-                                   expiration_sharetypes)
591-        self.lease_checker.setServiceParent(self)
592 
593     def __repr__(self):
594         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
595hunk ./src/allmydata/storage/server.py 178
596 
597-    def add_bucket_counter(self):
598-        statefile = os.path.join(self.storedir, "bucket_counter.state")
599-        self.bucket_counter = BucketCountingCrawler(self, statefile)
600-        self.bucket_counter.setServiceParent(self)
601-
602     def count(self, name, delta=1):
603         if self.stats_provider:
604             self.stats_provider.count("storage_server." + name, delta)
605hunk ./src/allmydata/storage/server.py 233
606             kwargs["facility"] = "tahoe.storage"
607         return log.msg(*args, **kwargs)
608 
609-    def _clean_incomplete(self):
610-        fileutil.rm_dir(self.incomingdir)
611-
612     def get_stats(self):
613         # remember: RIStatsProvider requires that our return dict
614         # contains numeric values.
615hunk ./src/allmydata/storage/server.py 269
616             stats['storage_server.total_bucket_count'] = bucket_count
617         return stats
618 
619-    def get_available_space(self):
620-        """Returns available space for share storage in bytes, or None if no
621-        API to get this information is available."""
622-
623-        if self.readonly_storage:
624-            return 0
625-        return fileutil.get_available_space(self.storedir, self.reserved_space)
626-
627     def allocated_size(self):
628         space = 0
629         for bw in self._active_writers:
630hunk ./src/allmydata/storage/server.py 276
631         return space
632 
633     def remote_get_version(self):
634-        remaining_space = self.get_available_space()
635+        remaining_space = self.backend.get_available_space()
636         if remaining_space is None:
637             # We're on a platform that has no API to get disk stats.
638             remaining_space = 2**64
639hunk ./src/allmydata/storage/server.py 301
640         self.count("allocate")
641         alreadygot = set()
642         bucketwriters = {} # k: shnum, v: BucketWriter
643-        si_dir = storage_index_to_dir(storage_index)
644-        si_s = si_b2a(storage_index)
645 
646hunk ./src/allmydata/storage/server.py 302
647+        si_s = si_b2a(storage_index)
648         log.msg("storage: allocate_buckets %s" % si_s)
649 
650         # in this implementation, the lease information (including secrets)
651hunk ./src/allmydata/storage/server.py 316
652 
653         max_space_per_bucket = allocated_size
654 
655-        remaining_space = self.get_available_space()
656+        remaining_space = self.backend.get_available_space()
657         limited = remaining_space is not None
658         if limited:
659             # this is a bit conservative, since some of this allocated_size()
660hunk ./src/allmydata/storage/server.py 329
661         # they asked about: this will save them a lot of work. Add or update
662         # leases for all of them: if they want us to hold shares for this
663         # file, they'll want us to hold leases for this file.
664-        for (shnum, fn) in self._get_bucket_shares(storage_index):
665+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
666             alreadygot.add(shnum)
667             sf = ShareFile(fn)
668             sf.add_or_renew_lease(lease_info)
669hunk ./src/allmydata/storage/server.py 335
670 
671         for shnum in sharenums:
672-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
673-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
674-            if os.path.exists(finalhome):
675+            share = self.backend.get_share(storage_index, shnum)
676+
677+            if not share:
678+                if (not limited) or (remaining_space >= max_space_per_bucket):
679+                    # ok! we need to create the new share file.
680+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
681+                                      max_space_per_bucket, lease_info, canary)
682+                    bucketwriters[shnum] = bw
683+                    self._active_writers[bw] = 1
684+                    if limited:
685+                        remaining_space -= max_space_per_bucket
686+                else:
687+                    # bummer! not enough space to accept this bucket
688+                    pass
689+
690+            elif share.is_complete():
691                 # great! we already have it. easy.
692                 pass
693hunk ./src/allmydata/storage/server.py 353
694-            elif os.path.exists(incominghome):
695+            elif not share.is_complete():
696                 # Note that we don't create BucketWriters for shnums that
697                 # have a partial share (in incoming/), so if a second upload
698                 # occurs while the first is still in progress, the second
699hunk ./src/allmydata/storage/server.py 359
700                 # uploader will use different storage servers.
701                 pass
702-            elif (not limited) or (remaining_space >= max_space_per_bucket):
703-                # ok! we need to create the new share file.
704-                bw = BucketWriter(self, incominghome, finalhome,
705-                                  max_space_per_bucket, lease_info, canary)
706-                if self.no_storage:
707-                    bw.throw_out_all_data = True
708-                bucketwriters[shnum] = bw
709-                self._active_writers[bw] = 1
710-                if limited:
711-                    remaining_space -= max_space_per_bucket
712-            else:
713-                # bummer! not enough space to accept this bucket
714-                pass
715-
716-        if bucketwriters:
717-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
718 
719         self.add_latency("allocate", time.time() - start)
720         return alreadygot, bucketwriters
721hunk ./src/allmydata/storage/server.py 437
722             self.stats_provider.count('storage_server.bytes_added', consumed_size)
723         del self._active_writers[bw]
724 
725-    def _get_bucket_shares(self, storage_index):
726-        """Return a list of (shnum, pathname) tuples for files that hold
727-        shares for this storage_index. In each tuple, 'shnum' will always be
728-        the integer form of the last component of 'pathname'."""
729-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
730-        try:
731-            for f in os.listdir(storagedir):
732-                if NUM_RE.match(f):
733-                    filename = os.path.join(storagedir, f)
734-                    yield (int(f), filename)
735-        except OSError:
736-            # Commonly caused by there being no buckets at all.
737-            pass
738 
739     def remote_get_buckets(self, storage_index):
740         start = time.time()
741hunk ./src/allmydata/storage/server.py 444
742         si_s = si_b2a(storage_index)
743         log.msg("storage: get_buckets %s" % si_s)
744         bucketreaders = {} # k: sharenum, v: BucketReader
745-        for shnum, filename in self._get_bucket_shares(storage_index):
746+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
747             bucketreaders[shnum] = BucketReader(self, filename,
748                                                 storage_index, shnum)
749         self.add_latency("get", time.time() - start)
750hunk ./src/allmydata/test/test_backends.py 10
751 import mock
752 
753 # This is the code that we're going to be testing.
754-from allmydata.storage.server import StorageServer
755+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
756 
757 # The following share file contents was generated with
758 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
759hunk ./src/allmydata/test/test_backends.py 21
760 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
761 
762 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
763+    @mock.patch('time.time')
764+    @mock.patch('os.mkdir')
765+    @mock.patch('__builtin__.open')
766+    @mock.patch('os.listdir')
767+    @mock.patch('os.path.isdir')
768+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
769+        """ This tests whether a server instance can be constructed
770+        with a null backend. The server instance fails the test if it
771+        tries to read or write to the file system. """
772+
773+        # Now begin the test.
774+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
775+
776+        self.failIf(mockisdir.called)
777+        self.failIf(mocklistdir.called)
778+        self.failIf(mockopen.called)
779+        self.failIf(mockmkdir.called)
780+
781+        # You passed!
782+
783+    @mock.patch('time.time')
784+    @mock.patch('os.mkdir')
785     @mock.patch('__builtin__.open')
786hunk ./src/allmydata/test/test_backends.py 44
787-    def test_create_server(self, mockopen):
788-        """ This tests whether a server instance can be constructed. """
789+    @mock.patch('os.listdir')
790+    @mock.patch('os.path.isdir')
791+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
792+        """ This tests whether a server instance can be constructed
793+        with a filesystem backend. To pass the test, it has to use the
794+        filesystem in only the prescribed ways. """
795 
796         def call_open(fname, mode):
797             if fname == 'testdir/bucket_counter.state':
798hunk ./src/allmydata/test/test_backends.py 58
799                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
800             elif fname == 'testdir/lease_checker.history':
801                 return StringIO()
802+            else:
803+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
804         mockopen.side_effect = call_open
805 
806         # Now begin the test.
807hunk ./src/allmydata/test/test_backends.py 63
808-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
809+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
810+
811+        self.failIf(mockisdir.called)
812+        self.failIf(mocklistdir.called)
813+        self.failIf(mockopen.called)
814+        self.failIf(mockmkdir.called)
815+        self.failIf(mocktime.called)
816 
817         # You passed!
818 
819hunk ./src/allmydata/test/test_backends.py 73
820-class TestServer(unittest.TestCase, ReallyEqualMixin):
821+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
822+    def setUp(self):
823+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
824+
825+    @mock.patch('os.mkdir')
826+    @mock.patch('__builtin__.open')
827+    @mock.patch('os.listdir')
828+    @mock.patch('os.path.isdir')
829+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
830+        """ Write a new share. """
831+
832+        # Now begin the test.
833+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
834+        bs[0].remote_write(0, 'a')
835+        self.failIf(mockisdir.called)
836+        self.failIf(mocklistdir.called)
837+        self.failIf(mockopen.called)
838+        self.failIf(mockmkdir.called)
839+
840+    @mock.patch('os.path.exists')
841+    @mock.patch('os.path.getsize')
842+    @mock.patch('__builtin__.open')
843+    @mock.patch('os.listdir')
844+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
845+        """ This tests whether the code correctly finds and reads
846+        shares written out by old (Tahoe-LAFS <= v1.8.2)
847+        servers. There is a similar test in test_download, but that one
848+        is from the perspective of the client and exercises a deeper
849+        stack of code. This one is for exercising just the
850+        StorageServer object. """
851+
852+        # Now begin the test.
853+        bs = self.s.remote_get_buckets('teststorage_index')
854+
855+        self.failUnlessEqual(len(bs), 0)
856+        self.failIf(mocklistdir.called)
857+        self.failIf(mockopen.called)
858+        self.failIf(mockgetsize.called)
859+        self.failIf(mockexists.called)
860+
861+
862+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
863     @mock.patch('__builtin__.open')
864     def setUp(self, mockopen):
865         def call_open(fname, mode):
866hunk ./src/allmydata/test/test_backends.py 126
867                 return StringIO()
868         mockopen.side_effect = call_open
869 
870-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
871-
872+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
873 
874     @mock.patch('time.time')
875     @mock.patch('os.mkdir')
876hunk ./src/allmydata/test/test_backends.py 134
877     @mock.patch('os.listdir')
878     @mock.patch('os.path.isdir')
879     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
880-        """Handle a report of corruption."""
881+        """ Write a new share. """
882 
883         def call_listdir(dirname):
884             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
885hunk ./src/allmydata/test/test_backends.py 173
886         mockopen.side_effect = call_open
887         # Now begin the test.
888         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
889-        print bs
890         bs[0].remote_write(0, 'a')
891         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
892 
893hunk ./src/allmydata/test/test_backends.py 176
894-
895     @mock.patch('os.path.exists')
896     @mock.patch('os.path.getsize')
897     @mock.patch('__builtin__.open')
898hunk ./src/allmydata/test/test_backends.py 218
899 
900         self.failUnlessEqual(len(bs), 1)
901         b = bs[0]
902+        # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
903         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
904         # If you try to read past the end you get the as much data as is there.
905         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
906hunk ./src/allmydata/test/test_backends.py 224
907         # If you start reading past the end of the file you get the empty string.
908         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
909+
910+
911}
912[a temp patch used as a snapshot
913wilcoxjg@gmail.com**20110626052732
914 Ignore-this: 95f05e314eaec870afa04c76d979aa44
915] {
916hunk ./docs/configuration.rst 637
917   [storage]
918   enabled = True
919   readonly = True
920-  sizelimit = 10000000000
921 
922 
923   [helper]
924hunk ./docs/garbage-collection.rst 16
925 
926 When a file or directory in the virtual filesystem is no longer referenced,
927 the space that its shares occupied on each storage server can be freed,
928-making room for other shares. Tahoe currently uses a garbage collection
929+making room for other shares. Tahoe uses a garbage collection
930 ("GC") mechanism to implement this space-reclamation process. Each share has
931 one or more "leases", which are managed by clients who want the
932 file/directory to be retained. The storage server accepts each share for a
933hunk ./docs/garbage-collection.rst 34
934 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
935 If lease renewal occurs quickly and with 100% reliability, than any renewal
936 time that is shorter than the lease duration will suffice, but a larger ratio
937-of duration-over-renewal-time will be more robust in the face of occasional
938+of lease duration to renewal time will be more robust in the face of occasional
939 delays or failures.
940 
941 The current recommended values for a small Tahoe grid are to renew the leases
942replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
943hunk ./src/allmydata/client.py 260
944             sharetypes.append("mutable")
945         expiration_sharetypes = tuple(sharetypes)
946 
947+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
948+            xyz
949+        xyz
950         ss = StorageServer(storedir, self.nodeid,
951                            reserved_space=reserved,
952                            discard_storage=discard,
953hunk ./src/allmydata/storage/crawler.py 234
954         f = open(tmpfile, "wb")
955         pickle.dump(self.state, f)
956         f.close()
957-        fileutil.move_into_place(tmpfile, self.statefile)
958+        fileutil.move_into_place(tmpfile, self.statefname)
959 
960     def startService(self):
961         # arrange things to look like we were just sleeping, so
962}
963[snapshot of progress on backend implementation (not suitable for trunk)
964wilcoxjg@gmail.com**20110626053244
965 Ignore-this: 50c764af791c2b99ada8289546806a0a
966] {
967adddir ./src/allmydata/storage/backends
968adddir ./src/allmydata/storage/backends/das
969move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
970adddir ./src/allmydata/storage/backends/null
971hunk ./src/allmydata/interfaces.py 270
972         store that on disk.
973         """
974 
975+class IStorageBackend(Interface):
976+    """
977+    Objects of this kind live on the server side and are used by the
978+    storage server object.
979+    """
980+    def get_available_space(self, reserved_space):
981+        """ Returns available space for share storage in bytes, or
982+        None if this information is not available or if the available
983+        space is unlimited.
984+
985+        If the backend is configured for read-only mode then this will
986+        return 0.
987+
988+        reserved_space is how many bytes to subtract from the answer, so
989+        you can pass how many bytes you would like to leave unused on this
990+        filesystem as reserved_space. """
991+
992+    def get_bucket_shares(self):
993+        """XXX"""
994+
995+    def get_share(self):
996+        """XXX"""
997+
998+    def make_bucket_writer(self):
999+        """XXX"""
1000+
1001+class IStorageBackendShare(Interface):
1002+    """
1003+    This object contains as much as all of the share data.  It is intended
1004+    for lazy evaluation such that in many use cases substantially less than
1005+    all of the share data will be accessed.
1006+    """
1007+    def is_complete(self):
1008+        """
1009+        Returns the share state, or None if the share does not exist.
1010+        """
1011+
1012 class IStorageBucketWriter(Interface):
1013     """
1014     Objects of this kind live on the client side.
1015hunk ./src/allmydata/interfaces.py 2492
1016 
1017 class EmptyPathnameComponentError(Exception):
1018     """The webapi disallows empty pathname components."""
1019+
1020+class IShareStore(Interface):
1021+    pass
1022+
1023addfile ./src/allmydata/storage/backends/__init__.py
1024addfile ./src/allmydata/storage/backends/das/__init__.py
1025addfile ./src/allmydata/storage/backends/das/core.py
1026hunk ./src/allmydata/storage/backends/das/core.py 1
1027+from allmydata.interfaces import IStorageBackend
1028+from allmydata.storage.backends.base import Backend
1029+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1030+from allmydata.util.assertutil import precondition
1031+
1032+import os, re, weakref, struct, time
1033+
1034+from foolscap.api import Referenceable
1035+from twisted.application import service
1036+
1037+from zope.interface import implements
1038+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1039+from allmydata.util import fileutil, idlib, log, time_format
1040+import allmydata # for __full_version__
1041+
1042+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1043+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1044+from allmydata.storage.lease import LeaseInfo
1045+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1046+     create_mutable_sharefile
1047+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1048+from allmydata.storage.crawler import FSBucketCountingCrawler
1049+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1050+
1051+from zope.interface import implements
1052+
1053+class DASCore(Backend):
1054+    implements(IStorageBackend)
1055+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1056+        Backend.__init__(self)
1057+
1058+        self._setup_storage(storedir, readonly, reserved_space)
1059+        self._setup_corruption_advisory()
1060+        self._setup_bucket_counter()
1061+        self._setup_lease_checkerf(expiration_policy)
1062+
1063+    def _setup_storage(self, storedir, readonly, reserved_space):
1064+        self.storedir = storedir
1065+        self.readonly = readonly
1066+        self.reserved_space = int(reserved_space)
1067+        if self.reserved_space:
1068+            if self.get_available_space() is None:
1069+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1070+                        umid="0wZ27w", level=log.UNUSUAL)
1071+
1072+        self.sharedir = os.path.join(self.storedir, "shares")
1073+        fileutil.make_dirs(self.sharedir)
1074+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1075+        self._clean_incomplete()
1076+
1077+    def _clean_incomplete(self):
1078+        fileutil.rm_dir(self.incomingdir)
1079+        fileutil.make_dirs(self.incomingdir)
1080+
1081+    def _setup_corruption_advisory(self):
1082+        # we don't actually create the corruption-advisory dir until necessary
1083+        self.corruption_advisory_dir = os.path.join(self.storedir,
1084+                                                    "corruption-advisories")
1085+
1086+    def _setup_bucket_counter(self):
1087+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1088+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1089+        self.bucket_counter.setServiceParent(self)
1090+
1091+    def _setup_lease_checkerf(self, expiration_policy):
1092+        statefile = os.path.join(self.storedir, "lease_checker.state")
1093+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1094+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1095+        self.lease_checker.setServiceParent(self)
1096+
1097+    def get_available_space(self):
1098+        if self.readonly:
1099+            return 0
1100+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1101+
1102+    def get_shares(self, storage_index):
1103+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1104+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1105+        try:
1106+            for f in os.listdir(finalstoragedir):
1107+                if NUM_RE.match(f):
1108+                    filename = os.path.join(finalstoragedir, f)
1109+                    yield FSBShare(filename, int(f))
1110+        except OSError:
1111+            # Commonly caused by there being no buckets at all.
1112+            pass
1113+       
1114+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1115+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1116+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1117+        return bw
1118+       
1119+
1120+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1121+# and share data. The share data is accessed by RIBucketWriter.write and
1122+# RIBucketReader.read . The lease information is not accessible through these
1123+# interfaces.
1124+
1125+# The share file has the following layout:
1126+#  0x00: share file version number, four bytes, current version is 1
1127+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1128+#  0x08: number of leases, four bytes big-endian
1129+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1130+#  A+0x0c = B: first lease. Lease format is:
1131+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1132+#   B+0x04: renew secret, 32 bytes (SHA256)
1133+#   B+0x24: cancel secret, 32 bytes (SHA256)
1134+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1135+#   B+0x48: next lease, or end of record
1136+
1137+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1138+# but it is still filled in by storage servers in case the storage server
1139+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1140+# share file is moved from one storage server to another. The value stored in
1141+# this field is truncated, so if the actual share data length is >= 2**32,
1142+# then the value stored in this field will be the actual share data length
1143+# modulo 2**32.
1144+
1145+class ImmutableShare:
1146+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1147+    sharetype = "immutable"
1148+
1149+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1150+        """ If max_size is not None then I won't allow more than
1151+        max_size to be written to me. If create=True then max_size
1152+        must not be None. """
1153+        precondition((max_size is not None) or (not create), max_size, create)
1154+        self.shnum = shnum
1155+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1156+        self._max_size = max_size
1157+        if create:
1158+            # touch the file, so later callers will see that we're working on
1159+            # it. Also construct the metadata.
1160+            assert not os.path.exists(self.fname)
1161+            fileutil.make_dirs(os.path.dirname(self.fname))
1162+            f = open(self.fname, 'wb')
1163+            # The second field -- the four-byte share data length -- is no
1164+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1165+            # there in case someone downgrades a storage server from >=
1166+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1167+            # server to another, etc. We do saturation -- a share data length
1168+            # larger than 2**32-1 (what can fit into the field) is marked as
1169+            # the largest length that can fit into the field. That way, even
1170+            # if this does happen, the old < v1.3.0 server will still allow
1171+            # clients to read the first part of the share.
1172+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1173+            f.close()
1174+            self._lease_offset = max_size + 0x0c
1175+            self._num_leases = 0
1176+        else:
1177+            f = open(self.fname, 'rb')
1178+            filesize = os.path.getsize(self.fname)
1179+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1180+            f.close()
1181+            if version != 1:
1182+                msg = "sharefile %s had version %d but we wanted 1" % \
1183+                      (self.fname, version)
1184+                raise UnknownImmutableContainerVersionError(msg)
1185+            self._num_leases = num_leases
1186+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1187+        self._data_offset = 0xc
1188+
1189+    def unlink(self):
1190+        os.unlink(self.fname)
1191+
1192+    def read_share_data(self, offset, length):
1193+        precondition(offset >= 0)
1194+        # Reads beyond the end of the data are truncated. Reads that start
1195+        # beyond the end of the data return an empty string.
1196+        seekpos = self._data_offset+offset
1197+        fsize = os.path.getsize(self.fname)
1198+        actuallength = max(0, min(length, fsize-seekpos))
1199+        if actuallength == 0:
1200+            return ""
1201+        f = open(self.fname, 'rb')
1202+        f.seek(seekpos)
1203+        return f.read(actuallength)
1204+
1205+    def write_share_data(self, offset, data):
1206+        length = len(data)
1207+        precondition(offset >= 0, offset)
1208+        if self._max_size is not None and offset+length > self._max_size:
1209+            raise DataTooLargeError(self._max_size, offset, length)
1210+        f = open(self.fname, 'rb+')
1211+        real_offset = self._data_offset+offset
1212+        f.seek(real_offset)
1213+        assert f.tell() == real_offset
1214+        f.write(data)
1215+        f.close()
1216+
1217+    def _write_lease_record(self, f, lease_number, lease_info):
1218+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1219+        f.seek(offset)
1220+        assert f.tell() == offset
1221+        f.write(lease_info.to_immutable_data())
1222+
1223+    def _read_num_leases(self, f):
1224+        f.seek(0x08)
1225+        (num_leases,) = struct.unpack(">L", f.read(4))
1226+        return num_leases
1227+
1228+    def _write_num_leases(self, f, num_leases):
1229+        f.seek(0x08)
1230+        f.write(struct.pack(">L", num_leases))
1231+
1232+    def _truncate_leases(self, f, num_leases):
1233+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1234+
1235+    def get_leases(self):
1236+        """Yields a LeaseInfo instance for all leases."""
1237+        f = open(self.fname, 'rb')
1238+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1239+        f.seek(self._lease_offset)
1240+        for i in range(num_leases):
1241+            data = f.read(self.LEASE_SIZE)
1242+            if data:
1243+                yield LeaseInfo().from_immutable_data(data)
1244+
1245+    def add_lease(self, lease_info):
1246+        f = open(self.fname, 'rb+')
1247+        num_leases = self._read_num_leases(f)
1248+        self._write_lease_record(f, num_leases, lease_info)
1249+        self._write_num_leases(f, num_leases+1)
1250+        f.close()
1251+
1252+    def renew_lease(self, renew_secret, new_expire_time):
1253+        for i,lease in enumerate(self.get_leases()):
1254+            if constant_time_compare(lease.renew_secret, renew_secret):
1255+                # yup. See if we need to update the owner time.
1256+                if new_expire_time > lease.expiration_time:
1257+                    # yes
1258+                    lease.expiration_time = new_expire_time
1259+                    f = open(self.fname, 'rb+')
1260+                    self._write_lease_record(f, i, lease)
1261+                    f.close()
1262+                return
1263+        raise IndexError("unable to renew non-existent lease")
1264+
1265+    def add_or_renew_lease(self, lease_info):
1266+        try:
1267+            self.renew_lease(lease_info.renew_secret,
1268+                             lease_info.expiration_time)
1269+        except IndexError:
1270+            self.add_lease(lease_info)
1271+
1272+
1273+    def cancel_lease(self, cancel_secret):
1274+        """Remove a lease with the given cancel_secret. If the last lease is
1275+        cancelled, the file will be removed. Return the number of bytes that
1276+        were freed (by truncating the list of leases, and possibly by
1277+        deleting the file). Raise IndexError if there was no lease with the
1278+        given cancel_secret.
1279+        """
1280+
1281+        leases = list(self.get_leases())
1282+        num_leases_removed = 0
1283+        for i,lease in enumerate(leases):
1284+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1285+                leases[i] = None
1286+                num_leases_removed += 1
1287+        if not num_leases_removed:
1288+            raise IndexError("unable to find matching lease to cancel")
1289+        if num_leases_removed:
1290+            # pack and write out the remaining leases. We write these out in
1291+            # the same order as they were added, so that if we crash while
1292+            # doing this, we won't lose any non-cancelled leases.
1293+            leases = [l for l in leases if l] # remove the cancelled leases
1294+            f = open(self.fname, 'rb+')
1295+            for i,lease in enumerate(leases):
1296+                self._write_lease_record(f, i, lease)
1297+            self._write_num_leases(f, len(leases))
1298+            self._truncate_leases(f, len(leases))
1299+            f.close()
1300+        space_freed = self.LEASE_SIZE * num_leases_removed
1301+        if not len(leases):
1302+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1303+            self.unlink()
1304+        return space_freed
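For reference, the fixed 12-byte header that the constructor above writes (and that every later offset computation assumes) can be sketched on its own; the helper names below are illustrative only:

import struct

# v1 immutable share container header: version, saturated data length,
# lease count -- three big-endian 32-bit fields, 12 bytes (0xc) in total.
HEADER = ">LLL"
HEADER_SIZE = struct.calcsize(HEADER)   # == 0xc, the data offset used above

def pack_header(max_size, num_leases=0):
    # Saturate the legacy length field so a share larger than 2**32-1 bytes
    # still records the largest value that fits in four bytes.
    return struct.pack(HEADER, 1, min(2**32 - 1, max_size), num_leases)

def unpack_header(header_bytes):
    version, saturated_length, num_leases = struct.unpack(HEADER, header_bytes)
    return version, saturated_length, num_leases

# a share bigger than 2**32-1 bytes saturates rather than wrapping:
assert unpack_header(pack_header(2**40))[1] == 2**32 - 1
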
1305hunk ./src/allmydata/storage/backends/das/expirer.py 2
1306 import time, os, pickle, struct
1307-from allmydata.storage.crawler import ShareCrawler
1308-from allmydata.storage.shares import get_share_file
1309+from allmydata.storage.crawler import FSShareCrawler
1310 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1311      UnknownImmutableContainerVersionError
1312 from twisted.python import log as twlog
1313hunk ./src/allmydata/storage/backends/das/expirer.py 7
1314 
1315-class LeaseCheckingCrawler(ShareCrawler):
1316+class FSLeaseCheckingCrawler(FSShareCrawler):
1317     """I examine the leases on all shares, determining which are still valid
1318     and which have expired. I can remove the expired leases (if so
1319     configured), and the share will be deleted when the last lease is
1320hunk ./src/allmydata/storage/backends/das/expirer.py 50
1321     slow_start = 360 # wait 6 minutes after startup
1322     minimum_cycle_time = 12*60*60 # not more than twice per day
1323 
1324-    def __init__(self, statefile, historyfile,
1325-                 expiration_enabled, mode,
1326-                 override_lease_duration, # used if expiration_mode=="age"
1327-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1328-                 sharetypes):
1329+    def __init__(self, statefile, historyfile, expiration_policy):
1330         self.historyfile = historyfile
1331hunk ./src/allmydata/storage/backends/das/expirer.py 52
1332-        self.expiration_enabled = expiration_enabled
1333-        self.mode = mode
1334+        self.expiration_enabled = expiration_policy['enabled']
1335+        self.mode = expiration_policy['mode']
1336         self.override_lease_duration = None
1337         self.cutoff_date = None
1338         if self.mode == "age":
1339hunk ./src/allmydata/storage/backends/das/expirer.py 57
1340-            assert isinstance(override_lease_duration, (int, type(None)))
1341-            self.override_lease_duration = override_lease_duration # seconds
1342+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1343+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1344         elif self.mode == "cutoff-date":
1345hunk ./src/allmydata/storage/backends/das/expirer.py 60
1346-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1347+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1348             assert cutoff_date is not None
1349hunk ./src/allmydata/storage/backends/das/expirer.py 62
1350-            self.cutoff_date = cutoff_date
1351+            self.cutoff_date = expiration_policy['cutoff_date']
1352         else:
1353hunk ./src/allmydata/storage/backends/das/expirer.py 64
1354-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1355-        self.sharetypes_to_expire = sharetypes
1356-        ShareCrawler.__init__(self, statefile)
1357+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1358+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1359+        FSShareCrawler.__init__(self, statefile)
1360 
1361     def add_initial_state(self):
1362         # we fill ["cycle-to-date"] here (even though they will be reset in
1363hunk ./src/allmydata/storage/backends/das/expirer.py 156
1364 
1365     def process_share(self, sharefilename):
1366         # first, find out what kind of a share it is
1367-        sf = get_share_file(sharefilename)
1368+        f = open(sharefilename, "rb")
1369+        prefix = f.read(32)
1370+        f.close()
1371+        if prefix == MutableShareFile.MAGIC:
1372+            sf = MutableShareFile(sharefilename)
1373+        else:
1374+            # otherwise assume it's immutable
1375+            sf = FSBShare(sharefilename)
1376         sharetype = sf.sharetype
1377         now = time.time()
1378         s = self.stat(sharefilename)
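The hunk above inlines the dispatch that allmydata.storage.shares.get_share_file() used to provide (that module is removed later in this patch): the first 32 bytes of the container decide mutable vs. immutable. A standalone sketch of that decision, with the magic string passed in rather than imported:

def classify_share(sharefilename, mutable_magic):
    # Mutable containers start with a fixed 32-byte magic string; anything
    # else is assumed to be an immutable share container.
    f = open(sharefilename, "rb")
    try:
        prefix = f.read(32)
    finally:
        f.close()
    if prefix == mutable_magic:
        return "mutable"
    return "immutable"
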
1379addfile ./src/allmydata/storage/backends/null/__init__.py
1380addfile ./src/allmydata/storage/backends/null/core.py
1381hunk ./src/allmydata/storage/backends/null/core.py 1
1382+from allmydata.storage.backends.base import Backend
1383+
1384+class NullCore(Backend):
1385+    def __init__(self):
1386+        Backend.__init__(self)
1387+
1388+    def get_available_space(self):
1389+        return None
1390+
1391+    def get_shares(self, storage_index):
1392+        return set()
1393+
1394+    def get_share(self, storage_index, sharenum):
1395+        return None
1396+
1397+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1398+        return NullBucketWriter()
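NullCore.get_available_space() returning None is how a backend signals "no space limit". A small sketch of how an allocation check can interpret that (the function name is invented; it mirrors the limited/remaining_space logic in StorageServer.remote_allocate_buckets):

def can_accept_bucket(backend, max_space_per_bucket):
    # None means the backend imposes no limit (as NullCore does); otherwise
    # there must be room left for one full bucket.
    remaining_space = backend.get_available_space()
    limited = remaining_space is not None
    return (not limited) or (remaining_space >= max_space_per_bucket)
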
1399hunk ./src/allmydata/storage/crawler.py 12
1400 class TimeSliceExceeded(Exception):
1401     pass
1402 
1403-class ShareCrawler(service.MultiService):
1404+class FSShareCrawler(service.MultiService):
1405     """A subclass of ShareCrawler is attached to a StorageServer, and
1406     periodically walks all of its shares, processing each one in some
1407     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1408hunk ./src/allmydata/storage/crawler.py 68
1409     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1410     minimum_cycle_time = 300 # don't run a cycle faster than this
1411 
1412-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1413+    def __init__(self, statefname, allowed_cpu_percentage=None):
1414         service.MultiService.__init__(self)
1415         if allowed_cpu_percentage is not None:
1416             self.allowed_cpu_percentage = allowed_cpu_percentage
1417hunk ./src/allmydata/storage/crawler.py 72
1418-        self.backend = backend
1419+        self.statefname = statefname
1420         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1421                          for i in range(2**10)]
1422         self.prefixes.sort()
1423hunk ./src/allmydata/storage/crawler.py 192
1424         #                            of the last bucket to be processed, or
1425         #                            None if we are sleeping between cycles
1426         try:
1427-            f = open(self.statefile, "rb")
1428+            f = open(self.statefname, "rb")
1429             state = pickle.load(f)
1430             f.close()
1431         except EnvironmentError:
1432hunk ./src/allmydata/storage/crawler.py 230
1433         else:
1434             last_complete_prefix = self.prefixes[lcpi]
1435         self.state["last-complete-prefix"] = last_complete_prefix
1436-        tmpfile = self.statefile + ".tmp"
1437+        tmpfile = self.statefname + ".tmp"
1438         f = open(tmpfile, "wb")
1439         pickle.dump(self.state, f)
1440         f.close()
1441hunk ./src/allmydata/storage/crawler.py 433
1442         pass
1443 
1444 
1445-class BucketCountingCrawler(ShareCrawler):
1446+class FSBucketCountingCrawler(FSShareCrawler):
1447     """I keep track of how many buckets are being managed by this server.
1448     This is equivalent to the number of distributed files and directories for
1449     which I am providing storage. The actual number of files+directories in
1450hunk ./src/allmydata/storage/crawler.py 446
1451 
1452     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1453 
1454-    def __init__(self, statefile, num_sample_prefixes=1):
1455-        ShareCrawler.__init__(self, statefile)
1456+    def __init__(self, statefname, num_sample_prefixes=1):
1457+        FSShareCrawler.__init__(self, statefname)
1458         self.num_sample_prefixes = num_sample_prefixes
1459 
1460     def add_initial_state(self):
1461hunk ./src/allmydata/storage/immutable.py 14
1462 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1463      DataTooLargeError
1464 
1465-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1466-# and share data. The share data is accessed by RIBucketWriter.write and
1467-# RIBucketReader.read . The lease information is not accessible through these
1468-# interfaces.
1469-
1470-# The share file has the following layout:
1471-#  0x00: share file version number, four bytes, current version is 1
1472-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1473-#  0x08: number of leases, four bytes big-endian
1474-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1475-#  A+0x0c = B: first lease. Lease format is:
1476-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1477-#   B+0x04: renew secret, 32 bytes (SHA256)
1478-#   B+0x24: cancel secret, 32 bytes (SHA256)
1479-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1480-#   B+0x48: next lease, or end of record
1481-
1482-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1483-# but it is still filled in by storage servers in case the storage server
1484-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1485-# share file is moved from one storage server to another. The value stored in
1486-# this field is truncated, so if the actual share data length is >= 2**32,
1487-# then the value stored in this field will be the actual share data length
1488-# modulo 2**32.
1489-
1490-class ShareFile:
1491-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1492-    sharetype = "immutable"
1493-
1494-    def __init__(self, filename, max_size=None, create=False):
1495-        """ If max_size is not None then I won't allow more than
1496-        max_size to be written to me. If create=True then max_size
1497-        must not be None. """
1498-        precondition((max_size is not None) or (not create), max_size, create)
1499-        self.home = filename
1500-        self._max_size = max_size
1501-        if create:
1502-            # touch the file, so later callers will see that we're working on
1503-            # it. Also construct the metadata.
1504-            assert not os.path.exists(self.home)
1505-            fileutil.make_dirs(os.path.dirname(self.home))
1506-            f = open(self.home, 'wb')
1507-            # The second field -- the four-byte share data length -- is no
1508-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1509-            # there in case someone downgrades a storage server from >=
1510-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1511-            # server to another, etc. We do saturation -- a share data length
1512-            # larger than 2**32-1 (what can fit into the field) is marked as
1513-            # the largest length that can fit into the field. That way, even
1514-            # if this does happen, the old < v1.3.0 server will still allow
1515-            # clients to read the first part of the share.
1516-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1517-            f.close()
1518-            self._lease_offset = max_size + 0x0c
1519-            self._num_leases = 0
1520-        else:
1521-            f = open(self.home, 'rb')
1522-            filesize = os.path.getsize(self.home)
1523-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1524-            f.close()
1525-            if version != 1:
1526-                msg = "sharefile %s had version %d but we wanted 1" % \
1527-                      (filename, version)
1528-                raise UnknownImmutableContainerVersionError(msg)
1529-            self._num_leases = num_leases
1530-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1531-        self._data_offset = 0xc
1532-
1533-    def unlink(self):
1534-        os.unlink(self.home)
1535-
1536-    def read_share_data(self, offset, length):
1537-        precondition(offset >= 0)
1538-        # Reads beyond the end of the data are truncated. Reads that start
1539-        # beyond the end of the data return an empty string.
1540-        seekpos = self._data_offset+offset
1541-        fsize = os.path.getsize(self.home)
1542-        actuallength = max(0, min(length, fsize-seekpos))
1543-        if actuallength == 0:
1544-            return ""
1545-        f = open(self.home, 'rb')
1546-        f.seek(seekpos)
1547-        return f.read(actuallength)
1548-
1549-    def write_share_data(self, offset, data):
1550-        length = len(data)
1551-        precondition(offset >= 0, offset)
1552-        if self._max_size is not None and offset+length > self._max_size:
1553-            raise DataTooLargeError(self._max_size, offset, length)
1554-        f = open(self.home, 'rb+')
1555-        real_offset = self._data_offset+offset
1556-        f.seek(real_offset)
1557-        assert f.tell() == real_offset
1558-        f.write(data)
1559-        f.close()
1560-
1561-    def _write_lease_record(self, f, lease_number, lease_info):
1562-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1563-        f.seek(offset)
1564-        assert f.tell() == offset
1565-        f.write(lease_info.to_immutable_data())
1566-
1567-    def _read_num_leases(self, f):
1568-        f.seek(0x08)
1569-        (num_leases,) = struct.unpack(">L", f.read(4))
1570-        return num_leases
1571-
1572-    def _write_num_leases(self, f, num_leases):
1573-        f.seek(0x08)
1574-        f.write(struct.pack(">L", num_leases))
1575-
1576-    def _truncate_leases(self, f, num_leases):
1577-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1578-
1579-    def get_leases(self):
1580-        """Yields a LeaseInfo instance for all leases."""
1581-        f = open(self.home, 'rb')
1582-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1583-        f.seek(self._lease_offset)
1584-        for i in range(num_leases):
1585-            data = f.read(self.LEASE_SIZE)
1586-            if data:
1587-                yield LeaseInfo().from_immutable_data(data)
1588-
1589-    def add_lease(self, lease_info):
1590-        f = open(self.home, 'rb+')
1591-        num_leases = self._read_num_leases(f)
1592-        self._write_lease_record(f, num_leases, lease_info)
1593-        self._write_num_leases(f, num_leases+1)
1594-        f.close()
1595-
1596-    def renew_lease(self, renew_secret, new_expire_time):
1597-        for i,lease in enumerate(self.get_leases()):
1598-            if constant_time_compare(lease.renew_secret, renew_secret):
1599-                # yup. See if we need to update the owner time.
1600-                if new_expire_time > lease.expiration_time:
1601-                    # yes
1602-                    lease.expiration_time = new_expire_time
1603-                    f = open(self.home, 'rb+')
1604-                    self._write_lease_record(f, i, lease)
1605-                    f.close()
1606-                return
1607-        raise IndexError("unable to renew non-existent lease")
1608-
1609-    def add_or_renew_lease(self, lease_info):
1610-        try:
1611-            self.renew_lease(lease_info.renew_secret,
1612-                             lease_info.expiration_time)
1613-        except IndexError:
1614-            self.add_lease(lease_info)
1615-
1616-
1617-    def cancel_lease(self, cancel_secret):
1618-        """Remove a lease with the given cancel_secret. If the last lease is
1619-        cancelled, the file will be removed. Return the number of bytes that
1620-        were freed (by truncating the list of leases, and possibly by
1621-        deleting the file. Raise IndexError if there was no lease with the
1622-        given cancel_secret.
1623-        """
1624-
1625-        leases = list(self.get_leases())
1626-        num_leases_removed = 0
1627-        for i,lease in enumerate(leases):
1628-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1629-                leases[i] = None
1630-                num_leases_removed += 1
1631-        if not num_leases_removed:
1632-            raise IndexError("unable to find matching lease to cancel")
1633-        if num_leases_removed:
1634-            # pack and write out the remaining leases. We write these out in
1635-            # the same order as they were added, so that if we crash while
1636-            # doing this, we won't lose any non-cancelled leases.
1637-            leases = [l for l in leases if l] # remove the cancelled leases
1638-            f = open(self.home, 'rb+')
1639-            for i,lease in enumerate(leases):
1640-                self._write_lease_record(f, i, lease)
1641-            self._write_num_leases(f, len(leases))
1642-            self._truncate_leases(f, len(leases))
1643-            f.close()
1644-        space_freed = self.LEASE_SIZE * num_leases_removed
1645-        if not len(leases):
1646-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1647-            self.unlink()
1648-        return space_freed
1649-class NullBucketWriter(Referenceable):
1650-    implements(RIBucketWriter)
1651-
1652-    def remote_write(self, offset, data):
1653-        return
1654-
1655 class BucketWriter(Referenceable):
1656     implements(RIBucketWriter)
1657 
1658hunk ./src/allmydata/storage/immutable.py 17
1659-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1660+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1661         self.ss = ss
1662hunk ./src/allmydata/storage/immutable.py 19
1663-        self.incominghome = incominghome
1664-        self.finalhome = finalhome
1665         self._max_size = max_size # don't allow the client to write more than this
1666         self._canary = canary
1667         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1668hunk ./src/allmydata/storage/immutable.py 24
1669         self.closed = False
1670         self.throw_out_all_data = False
1671-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1672+        self._sharefile = immutableshare
1673         # also, add our lease to the file now, so that other ones can be
1674         # added by simultaneous uploaders
1675         self._sharefile.add_lease(lease_info)
1676hunk ./src/allmydata/storage/server.py 16
1677 from allmydata.storage.lease import LeaseInfo
1678 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1679      create_mutable_sharefile
1680-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1681-from allmydata.storage.crawler import BucketCountingCrawler
1682-from allmydata.storage.expirer import LeaseCheckingCrawler
1683 
1684 from zope.interface import implements
1685 
1686hunk ./src/allmydata/storage/server.py 19
1687-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1688-# be started and stopped.
1689-class Backend(service.MultiService):
1690-    implements(IStatsProducer)
1691-    def __init__(self):
1692-        service.MultiService.__init__(self)
1693-
1694-    def get_bucket_shares(self):
1695-        """XXX"""
1696-        raise NotImplementedError
1697-
1698-    def get_share(self):
1699-        """XXX"""
1700-        raise NotImplementedError
1701-
1702-    def make_bucket_writer(self):
1703-        """XXX"""
1704-        raise NotImplementedError
1705-
1706-class NullBackend(Backend):
1707-    def __init__(self):
1708-        Backend.__init__(self)
1709-
1710-    def get_available_space(self):
1711-        return None
1712-
1713-    def get_bucket_shares(self, storage_index):
1714-        return set()
1715-
1716-    def get_share(self, storage_index, sharenum):
1717-        return None
1718-
1719-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1720-        return NullBucketWriter()
1721-
1722-class FSBackend(Backend):
1723-    def __init__(self, storedir, readonly=False, reserved_space=0):
1724-        Backend.__init__(self)
1725-
1726-        self._setup_storage(storedir, readonly, reserved_space)
1727-        self._setup_corruption_advisory()
1728-        self._setup_bucket_counter()
1729-        self._setup_lease_checkerf()
1730-
1731-    def _setup_storage(self, storedir, readonly, reserved_space):
1732-        self.storedir = storedir
1733-        self.readonly = readonly
1734-        self.reserved_space = int(reserved_space)
1735-        if self.reserved_space:
1736-            if self.get_available_space() is None:
1737-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1738-                        umid="0wZ27w", level=log.UNUSUAL)
1739-
1740-        self.sharedir = os.path.join(self.storedir, "shares")
1741-        fileutil.make_dirs(self.sharedir)
1742-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1743-        self._clean_incomplete()
1744-
1745-    def _clean_incomplete(self):
1746-        fileutil.rm_dir(self.incomingdir)
1747-        fileutil.make_dirs(self.incomingdir)
1748-
1749-    def _setup_corruption_advisory(self):
1750-        # we don't actually create the corruption-advisory dir until necessary
1751-        self.corruption_advisory_dir = os.path.join(self.storedir,
1752-                                                    "corruption-advisories")
1753-
1754-    def _setup_bucket_counter(self):
1755-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1756-        self.bucket_counter = BucketCountingCrawler(statefile)
1757-        self.bucket_counter.setServiceParent(self)
1758-
1759-    def _setup_lease_checkerf(self):
1760-        statefile = os.path.join(self.storedir, "lease_checker.state")
1761-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1762-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1763-                                   expiration_enabled, expiration_mode,
1764-                                   expiration_override_lease_duration,
1765-                                   expiration_cutoff_date,
1766-                                   expiration_sharetypes)
1767-        self.lease_checker.setServiceParent(self)
1768-
1769-    def get_available_space(self):
1770-        if self.readonly:
1771-            return 0
1772-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1773-
1774-    def get_bucket_shares(self, storage_index):
1775-        """Return a list of (shnum, pathname) tuples for files that hold
1776-        shares for this storage_index. In each tuple, 'shnum' will always be
1777-        the integer form of the last component of 'pathname'."""
1778-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1779-        try:
1780-            for f in os.listdir(storagedir):
1781-                if NUM_RE.match(f):
1782-                    filename = os.path.join(storagedir, f)
1783-                    yield (int(f), filename)
1784-        except OSError:
1785-            # Commonly caused by there being no buckets at all.
1786-            pass
1787-
1788 # storage/
1789 # storage/shares/incoming
1790 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1791hunk ./src/allmydata/storage/server.py 32
1792 # $SHARENUM matches this regex:
1793 NUM_RE=re.compile("^[0-9]+$")
1794 
1795-
1796-
1797 class StorageServer(service.MultiService, Referenceable):
1798     implements(RIStorageServer, IStatsProducer)
1799     name = 'storage'
1800hunk ./src/allmydata/storage/server.py 35
1801-    LeaseCheckerClass = LeaseCheckingCrawler
1802 
1803     def __init__(self, nodeid, backend, reserved_space=0,
1804                  readonly_storage=False,
1805hunk ./src/allmydata/storage/server.py 38
1806-                 stats_provider=None,
1807-                 expiration_enabled=False,
1808-                 expiration_mode="age",
1809-                 expiration_override_lease_duration=None,
1810-                 expiration_cutoff_date=None,
1811-                 expiration_sharetypes=("mutable", "immutable")):
1812+                 stats_provider=None ):
1813         service.MultiService.__init__(self)
1814         assert isinstance(nodeid, str)
1815         assert len(nodeid) == 20
1816hunk ./src/allmydata/storage/server.py 217
1817         # they asked about: this will save them a lot of work. Add or update
1818         # leases for all of them: if they want us to hold shares for this
1819         # file, they'll want us to hold leases for this file.
1820-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1821-            alreadygot.add(shnum)
1822-            sf = ShareFile(fn)
1823-            sf.add_or_renew_lease(lease_info)
1824-
1825-        for shnum in sharenums:
1826-            share = self.backend.get_share(storage_index, shnum)
1827+        for share in self.backend.get_shares(storage_index):
1828+            alreadygot.add(share.shnum)
1829+            share.add_or_renew_lease(lease_info)
1830 
1831hunk ./src/allmydata/storage/server.py 221
1832-            if not share:
1833-                if (not limited) or (remaining_space >= max_space_per_bucket):
1834-                    # ok! we need to create the new share file.
1835-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1836-                                      max_space_per_bucket, lease_info, canary)
1837-                    bucketwriters[shnum] = bw
1838-                    self._active_writers[bw] = 1
1839-                    if limited:
1840-                        remaining_space -= max_space_per_bucket
1841-                else:
1842-                    # bummer! not enough space to accept this bucket
1843-                    pass
1844+        for shnum in (sharenums - alreadygot):
1845+            if (not limited) or (remaining_space >= max_space_per_bucket):
1846+                # XXX should the following line occur in the storage server constructor instead? We need to create the new share file here.
1847+                self.backend.set_storage_server(self)
1848+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1849+                                                     max_space_per_bucket, lease_info, canary)
1850+                bucketwriters[shnum] = bw
1851+                self._active_writers[bw] = 1
1852+                if limited:
1853+                    remaining_space -= max_space_per_bucket
1854 
1855hunk ./src/allmydata/storage/server.py 232
1856-            elif share.is_complete():
1857-                # great! we already have it. easy.
1858-                pass
1859-            elif not share.is_complete():
1860-                # Note that we don't create BucketWriters for shnums that
1861-                # have a partial share (in incoming/), so if a second upload
1862-                # occurs while the first is still in progress, the second
1863-                # uploader will use different storage servers.
1864-                pass
1865+        # XXX document later how already-complete shares and partial shares in incoming/ are handled here.
1866 
1867         self.add_latency("allocate", time.time() - start)
1868         return alreadygot, bucketwriters
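The rewritten loop above reduces allocation to set arithmetic: every share the backend already holds is reported back, and writers are created only for the requested share numbers that are missing. In isolation (names are illustrative):

def plan_allocation(requested_sharenums, existing_sharenums):
    # Shares already held are reported to the client; only the missing
    # requested share numbers get a new BucketWriter.
    alreadygot = set(existing_sharenums)
    to_create = set(requested_sharenums) - alreadygot
    return alreadygot, to_create

# e.g. requesting shares 0, 1 and 2 when share 1 already exists:
# plan_allocation([0, 1, 2], [1]) == (set([1]), set([0, 2]))
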
1869hunk ./src/allmydata/storage/server.py 238
1870 
1871     def _iter_share_files(self, storage_index):
1872-        for shnum, filename in self._get_bucket_shares(storage_index):
1873+        for shnum, filename in self._get_shares(storage_index):
1874             f = open(filename, 'rb')
1875             header = f.read(32)
1876             f.close()
1877hunk ./src/allmydata/storage/server.py 318
1878         si_s = si_b2a(storage_index)
1879         log.msg("storage: get_buckets %s" % si_s)
1880         bucketreaders = {} # k: sharenum, v: BucketReader
1881-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1882+        for shnum, filename in self.backend.get_shares(storage_index):
1883             bucketreaders[shnum] = BucketReader(self, filename,
1884                                                 storage_index, shnum)
1885         self.add_latency("get", time.time() - start)
1886hunk ./src/allmydata/storage/server.py 334
1887         # since all shares get the same lease data, we just grab the leases
1888         # from the first share
1889         try:
1890-            shnum, filename = self._get_bucket_shares(storage_index).next()
1891+            shnum, filename = self._get_shares(storage_index).next()
1892             sf = ShareFile(filename)
1893             return sf.get_leases()
1894         except StopIteration:
1895hunk ./src/allmydata/storage/shares.py 1
1896-#! /usr/bin/python
1897-
1898-from allmydata.storage.mutable import MutableShareFile
1899-from allmydata.storage.immutable import ShareFile
1900-
1901-def get_share_file(filename):
1902-    f = open(filename, "rb")
1903-    prefix = f.read(32)
1904-    f.close()
1905-    if prefix == MutableShareFile.MAGIC:
1906-        return MutableShareFile(filename)
1907-    # otherwise assume it's immutable
1908-    return ShareFile(filename)
1909-
1910rmfile ./src/allmydata/storage/shares.py
1911hunk ./src/allmydata/test/common_util.py 20
1912 
1913 def flip_one_bit(s, offset=0, size=None):
1914     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1915-    than offset+size. """
1916+    than offset+size. Return the new string. """
1917     if size is None:
1918         size=len(s)-offset
1919     i = randrange(offset, offset+size)
1920hunk ./src/allmydata/test/test_backends.py 7
1921 
1922 from allmydata.test.common_util import ReallyEqualMixin
1923 
1924-import mock
1925+import mock, os
1926 
1927 # This is the code that we're going to be testing.
1928hunk ./src/allmydata/test/test_backends.py 10
1929-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1930+from allmydata.storage.server import StorageServer
1931+
1932+from allmydata.storage.backends.das.core import DASCore
1933+from allmydata.storage.backends.null.core import NullCore
1934+
1935 
1936 # The following share file contents was generated with
1937 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1938hunk ./src/allmydata/test/test_backends.py 22
1939 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1940 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1941 
1942-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1943+tempdir = 'teststoredir'
1944+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1945+sharefname = os.path.join(sharedirname, '0')
1946 
1947 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1948     @mock.patch('time.time')
1949hunk ./src/allmydata/test/test_backends.py 58
1950         filesystem in only the prescribed ways. """
1951 
1952         def call_open(fname, mode):
1953-            if fname == 'testdir/bucket_counter.state':
1954-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1955-            elif fname == 'testdir/lease_checker.state':
1956-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1957-            elif fname == 'testdir/lease_checker.history':
1958+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1959+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1960+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1961+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1962+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1963                 return StringIO()
1964             else:
1965                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1966hunk ./src/allmydata/test/test_backends.py 124
1967     @mock.patch('__builtin__.open')
1968     def setUp(self, mockopen):
1969         def call_open(fname, mode):
1970-            if fname == 'testdir/bucket_counter.state':
1971-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1972-            elif fname == 'testdir/lease_checker.state':
1973-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1974-            elif fname == 'testdir/lease_checker.history':
1975+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1976+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1977+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1978+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1979+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1980                 return StringIO()
1981         mockopen.side_effect = call_open
1982hunk ./src/allmydata/test/test_backends.py 131
1983-
1984-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1985+        expiration_policy = {'enabled' : False,
1986+                             'mode' : 'age',
1987+                             'override_lease_duration' : None,
1988+                             'cutoff_date' : None,
1989+                             'sharetypes' : None}
1990+        testbackend = DASCore(tempdir, expiration_policy)
1991+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1992 
1993     @mock.patch('time.time')
1994     @mock.patch('os.mkdir')
1995hunk ./src/allmydata/test/test_backends.py 148
1996         """ Write a new share. """
1997 
1998         def call_listdir(dirname):
1999-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
2000-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
2001+            self.failUnlessReallyEqual(dirname, sharedirname)
2002+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2003 
2004         mocklistdir.side_effect = call_listdir
2005 
2006hunk ./src/allmydata/test/test_backends.py 178
2007 
2008         sharefile = MockFile()
2009         def call_open(fname, mode):
2010-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
2011+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
2012             return sharefile
2013 
2014         mockopen.side_effect = call_open
2015hunk ./src/allmydata/test/test_backends.py 200
2016         StorageServer object. """
2017 
2018         def call_listdir(dirname):
2019-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
2020+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2021             return ['0']
2022 
2023         mocklistdir.side_effect = call_listdir
2024}
2025[checkpoint patch
2026wilcoxjg@gmail.com**20110626165715
2027 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
2028] {
2029hunk ./src/allmydata/storage/backends/das/core.py 21
2030 from allmydata.storage.lease import LeaseInfo
2031 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2032      create_mutable_sharefile
2033-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2034+from allmydata.storage.immutable import BucketWriter, BucketReader
2035 from allmydata.storage.crawler import FSBucketCountingCrawler
2036 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2037 
2038hunk ./src/allmydata/storage/backends/das/core.py 27
2039 from zope.interface import implements
2040 
2041+# $SHARENUM matches this regex:
2042+NUM_RE=re.compile("^[0-9]+$")
2043+
2044 class DASCore(Backend):
2045     implements(IStorageBackend)
2046     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2047hunk ./src/allmydata/storage/backends/das/core.py 80
2048         return fileutil.get_available_space(self.storedir, self.reserved_space)
2049 
2050     def get_shares(self, storage_index):
2051-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2052+        """Yield the ImmutableShare objects that correspond to the passed storage_index."""
2053         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2054         try:
2055             for f in os.listdir(finalstoragedir):
2056hunk ./src/allmydata/storage/backends/das/core.py 86
2057                 if NUM_RE.match(f):
2058                     filename = os.path.join(finalstoragedir, f)
2059-                    yield FSBShare(filename, int(f))
2060+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2061         except OSError:
2062             # Commonly caused by there being no buckets at all.
2063             pass
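get_shares() above relies on NUM_RE to distinguish share files from anything else that might sit in a storage-index directory; the filenames in the sketch below are made up purely to show the filter's behaviour:

import re

NUM_RE = re.compile("^[0-9]+$")   # same pattern as defined above

# only purely numeric names are treated as share files
assert NUM_RE.match("0") and NUM_RE.match("15")
assert NUM_RE.match("1.tmp") is None and NUM_RE.match("README") is None
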
2064hunk ./src/allmydata/storage/backends/das/core.py 95
2065         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2066         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2067         return bw
2068+
2069+    def set_storage_server(self, ss):
2070+        self.ss = ss
2071         
2072 
2073 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2074hunk ./src/allmydata/storage/server.py 29
2075 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2076 # base-32 chars).
2077 
2078-# $SHARENUM matches this regex:
2079-NUM_RE=re.compile("^[0-9]+$")
2080 
2081 class StorageServer(service.MultiService, Referenceable):
2082     implements(RIStorageServer, IStatsProducer)
2083}
2084[checkpoint4
2085wilcoxjg@gmail.com**20110628202202
2086 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2087] {
2088hunk ./src/allmydata/storage/backends/das/core.py 96
2089         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2090         return bw
2091 
2092+    def make_bucket_reader(self, share):
2093+        return BucketReader(self.ss, share)
2094+
2095     def set_storage_server(self, ss):
2096         self.ss = ss
2097         
2098hunk ./src/allmydata/storage/backends/das/core.py 138
2099         must not be None. """
2100         precondition((max_size is not None) or (not create), max_size, create)
2101         self.shnum = shnum
2102+        self.storage_index = storageindex
2103         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2104         self._max_size = max_size
2105         if create:
2106hunk ./src/allmydata/storage/backends/das/core.py 173
2107             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2108         self._data_offset = 0xc
2109 
2110+    def get_shnum(self):
2111+        return self.shnum
2112+
2113     def unlink(self):
2114         os.unlink(self.fname)
2115 
2116hunk ./src/allmydata/storage/backends/null/core.py 2
2117 from allmydata.storage.backends.base import Backend
2118+from allmydata.storage.immutable import BucketWriter, BucketReader
2119 
2120 class NullCore(Backend):
2121     def __init__(self):
2122hunk ./src/allmydata/storage/backends/null/core.py 17
2123     def get_share(self, storage_index, sharenum):
2124         return None
2125 
2126-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2127-        return NullBucketWriter()
2128+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2129+       
2130+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2131+
2132+    def set_storage_server(self, ss):
2133+        self.ss = ss
2134+
2135+class ImmutableShare:
2136+    sharetype = "immutable"
2137+
2138+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2139+        """ If max_size is not None then I won't allow more than
2140+        max_size to be written to me. If create=True then max_size
2141+        must not be None. """
2142+        precondition((max_size is not None) or (not create), max_size, create)
2143+        self.shnum = shnum
2144+        self.storage_index = storageindex
2145+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2146+        self._max_size = max_size
2147+        if create:
2148+            # touch the file, so later callers will see that we're working on
2149+            # it. Also construct the metadata.
2150+            assert not os.path.exists(self.fname)
2151+            fileutil.make_dirs(os.path.dirname(self.fname))
2152+            f = open(self.fname, 'wb')
2153+            # The second field -- the four-byte share data length -- is no
2154+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2155+            # there in case someone downgrades a storage server from >=
2156+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2157+            # server to another, etc. We do saturation -- a share data length
2158+            # larger than 2**32-1 (what can fit into the field) is marked as
2159+            # the largest length that can fit into the field. That way, even
2160+            # if this does happen, the old < v1.3.0 server will still allow
2161+            # clients to read the first part of the share.
2162+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2163+            f.close()
2164+            self._lease_offset = max_size + 0x0c
2165+            self._num_leases = 0
2166+        else:
2167+            f = open(self.fname, 'rb')
2168+            filesize = os.path.getsize(self.fname)
2169+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2170+            f.close()
2171+            if version != 1:
2172+                msg = "sharefile %s had version %d but we wanted 1" % \
2173+                      (self.fname, version)
2174+                raise UnknownImmutableContainerVersionError(msg)
2175+            self._num_leases = num_leases
2176+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2177+        self._data_offset = 0xc
2178+
2179+    def get_shnum(self):
2180+        return self.shnum
2181+
2182+    def unlink(self):
2183+        os.unlink(self.fname)
2184+
2185+    def read_share_data(self, offset, length):
2186+        precondition(offset >= 0)
2187+        # Reads beyond the end of the data are truncated. Reads that start
2188+        # beyond the end of the data return an empty string.
2189+        seekpos = self._data_offset+offset
2190+        fsize = os.path.getsize(self.fname)
2191+        actuallength = max(0, min(length, fsize-seekpos))
2192+        if actuallength == 0:
2193+            return ""
2194+        f = open(self.fname, 'rb')
2195+        f.seek(seekpos)
2196+        return f.read(actuallength)
2197+
2198+    def write_share_data(self, offset, data):
2199+        length = len(data)
2200+        precondition(offset >= 0, offset)
2201+        if self._max_size is not None and offset+length > self._max_size:
2202+            raise DataTooLargeError(self._max_size, offset, length)
2203+        f = open(self.fname, 'rb+')
2204+        real_offset = self._data_offset+offset
2205+        f.seek(real_offset)
2206+        assert f.tell() == real_offset
2207+        f.write(data)
2208+        f.close()
2209+
2210+    def _write_lease_record(self, f, lease_number, lease_info):
2211+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2212+        f.seek(offset)
2213+        assert f.tell() == offset
2214+        f.write(lease_info.to_immutable_data())
2215+
2216+    def _read_num_leases(self, f):
2217+        f.seek(0x08)
2218+        (num_leases,) = struct.unpack(">L", f.read(4))
2219+        return num_leases
2220+
2221+    def _write_num_leases(self, f, num_leases):
2222+        f.seek(0x08)
2223+        f.write(struct.pack(">L", num_leases))
2224+
2225+    def _truncate_leases(self, f, num_leases):
2226+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2227+
2228+    def get_leases(self):
2229+        """Yields a LeaseInfo instance for each lease."""
2230+        f = open(self.fname, 'rb')
2231+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2232+        f.seek(self._lease_offset)
2233+        for i in range(num_leases):
2234+            data = f.read(self.LEASE_SIZE)
2235+            if data:
2236+                yield LeaseInfo().from_immutable_data(data)
2237+
2238+    def add_lease(self, lease_info):
2239+        f = open(self.fname, 'rb+')
2240+        num_leases = self._read_num_leases(f)
2241+        self._write_lease_record(f, num_leases, lease_info)
2242+        self._write_num_leases(f, num_leases+1)
2243+        f.close()
2244+
2245+    def renew_lease(self, renew_secret, new_expire_time):
2246+        for i,lease in enumerate(self.get_leases()):
2247+            if constant_time_compare(lease.renew_secret, renew_secret):
2248+                # yup. See if we need to update the owner time.
2249+                if new_expire_time > lease.expiration_time:
2250+                    # yes
2251+                    lease.expiration_time = new_expire_time
2252+                    f = open(self.fname, 'rb+')
2253+                    self._write_lease_record(f, i, lease)
2254+                    f.close()
2255+                return
2256+        raise IndexError("unable to renew non-existent lease")
2257+
2258+    def add_or_renew_lease(self, lease_info):
2259+        try:
2260+            self.renew_lease(lease_info.renew_secret,
2261+                             lease_info.expiration_time)
2262+        except IndexError:
2263+            self.add_lease(lease_info)
2264+
2265+
2266+    def cancel_lease(self, cancel_secret):
2267+        """Remove a lease with the given cancel_secret. If the last lease is
2268+        cancelled, the file will be removed. Return the number of bytes that
2269+        were freed (by truncating the list of leases, and possibly by
2270+        deleting the file). Raise IndexError if there was no lease with the
2271+        given cancel_secret.
2272+        """
2273+
2274+        leases = list(self.get_leases())
2275+        num_leases_removed = 0
2276+        for i,lease in enumerate(leases):
2277+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2278+                leases[i] = None
2279+                num_leases_removed += 1
2280+        if not num_leases_removed:
2281+            raise IndexError("unable to find matching lease to cancel")
2282+        if num_leases_removed:
2283+            # pack and write out the remaining leases. We write these out in
2284+            # the same order as they were added, so that if we crash while
2285+            # doing this, we won't lose any non-cancelled leases.
2286+            leases = [l for l in leases if l] # remove the cancelled leases
2287+            f = open(self.fname, 'rb+')
2288+            for i,lease in enumerate(leases):
2289+                self._write_lease_record(f, i, lease)
2290+            self._write_num_leases(f, len(leases))
2291+            self._truncate_leases(f, len(leases))
2292+            f.close()
2293+        space_freed = self.LEASE_SIZE * num_leases_removed
2294+        if not len(leases):
2295+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2296+            self.unlink()
2297+        return space_freed
2298hunk ./src/allmydata/storage/immutable.py 114
2299 class BucketReader(Referenceable):
2300     implements(RIBucketReader)
2301 
2302-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2303+    def __init__(self, ss, share):
2304         self.ss = ss
2305hunk ./src/allmydata/storage/immutable.py 116
2306-        self._share_file = ShareFile(sharefname)
2307-        self.storage_index = storage_index
2308-        self.shnum = shnum
2309+        self._share_file = share
2310+        self.storage_index = share.storage_index
2311+        self.shnum = share.shnum
2312 
2313     def __repr__(self):
2314         return "<%s %s %s>" % (self.__class__.__name__,
2315hunk ./src/allmydata/storage/server.py 316
2316         si_s = si_b2a(storage_index)
2317         log.msg("storage: get_buckets %s" % si_s)
2318         bucketreaders = {} # k: sharenum, v: BucketReader
2319-        for shnum, filename in self.backend.get_shares(storage_index):
2320-            bucketreaders[shnum] = BucketReader(self, filename,
2321-                                                storage_index, shnum)
2322+        self.backend.set_storage_server(self)
2323+        for share in self.backend.get_shares(storage_index):
2324+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2325         self.add_latency("get", time.time() - start)
2326         return bucketreaders
2327 
2328hunk ./src/allmydata/test/test_backends.py 25
2329 tempdir = 'teststoredir'
2330 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2331 sharefname = os.path.join(sharedirname, '0')
2332+expiration_policy = {'enabled' : False,
2333+                     'mode' : 'age',
2334+                     'override_lease_duration' : None,
2335+                     'cutoff_date' : None,
2336+                     'sharetypes' : None}
2337 
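This dict is the shape DASCore and FSLeaseCheckingCrawler now take in place of the old keyword arguments. A standalone mirror of the validation the expirer's __init__ applies to it (the helper name is invented):

def check_expiration_policy(policy):
    # 'mode' selects which of the remaining keys must be present and typed.
    mode = policy['mode']
    if mode == "age":
        assert isinstance(policy['override_lease_duration'], (int, type(None)))
    elif mode == "cutoff-date":
        assert isinstance(policy['cutoff_date'], int)  # seconds-since-epoch
    else:
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
    return policy
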
2338 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2339     @mock.patch('time.time')
2340hunk ./src/allmydata/test/test_backends.py 43
2341         tries to read or write to the file system. """
2342 
2343         # Now begin the test.
2344-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2345+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2346 
2347         self.failIf(mockisdir.called)
2348         self.failIf(mocklistdir.called)
2349hunk ./src/allmydata/test/test_backends.py 74
2350         mockopen.side_effect = call_open
2351 
2352         # Now begin the test.
2353-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2354+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2355 
2356         self.failIf(mockisdir.called)
2357         self.failIf(mocklistdir.called)
2358hunk ./src/allmydata/test/test_backends.py 86
2359 
2360 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2361     def setUp(self):
2362-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2363+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2364 
2365     @mock.patch('os.mkdir')
2366     @mock.patch('__builtin__.open')
2367hunk ./src/allmydata/test/test_backends.py 136
2368             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2369                 return StringIO()
2370         mockopen.side_effect = call_open
2371-        expiration_policy = {'enabled' : False,
2372-                             'mode' : 'age',
2373-                             'override_lease_duration' : None,
2374-                             'cutoff_date' : None,
2375-                             'sharetypes' : None}
2376         testbackend = DASCore(tempdir, expiration_policy)
2377         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2378 
2379}
2380[checkpoint5
2381wilcoxjg@gmail.com**20110705034626
2382 Ignore-this: 255780bd58299b0aa33c027e9d008262
2383] {
2384addfile ./src/allmydata/storage/backends/base.py
2385hunk ./src/allmydata/storage/backends/base.py 1
2386+from twisted.application import service
2387+
2388+class Backend(service.MultiService):
2389+    def __init__(self):
2390+        service.MultiService.__init__(self)
2391hunk ./src/allmydata/storage/backends/null/core.py 19
2392 
2393     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2394         
2395+        immutableshare = ImmutableShare()
2396         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2397 
2398     def set_storage_server(self, ss):
2399hunk ./src/allmydata/storage/backends/null/core.py 28
2400 class ImmutableShare:
2401     sharetype = "immutable"
2402 
2403-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2404+    def __init__(self):
2405         """ If max_size is not None then I won't allow more than
2406         max_size to be written to me. If create=True then max_size
2407         must not be None. """
2408hunk ./src/allmydata/storage/backends/null/core.py 32
2409-        precondition((max_size is not None) or (not create), max_size, create)
2410-        self.shnum = shnum
2411-        self.storage_index = storageindex
2412-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2413-        self._max_size = max_size
2414-        if create:
2415-            # touch the file, so later callers will see that we're working on
2416-            # it. Also construct the metadata.
2417-            assert not os.path.exists(self.fname)
2418-            fileutil.make_dirs(os.path.dirname(self.fname))
2419-            f = open(self.fname, 'wb')
2420-            # The second field -- the four-byte share data length -- is no
2421-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2422-            # there in case someone downgrades a storage server from >=
2423-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2424-            # server to another, etc. We do saturation -- a share data length
2425-            # larger than 2**32-1 (what can fit into the field) is marked as
2426-            # the largest length that can fit into the field. That way, even
2427-            # if this does happen, the old < v1.3.0 server will still allow
2428-            # clients to read the first part of the share.
2429-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2430-            f.close()
2431-            self._lease_offset = max_size + 0x0c
2432-            self._num_leases = 0
2433-        else:
2434-            f = open(self.fname, 'rb')
2435-            filesize = os.path.getsize(self.fname)
2436-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2437-            f.close()
2438-            if version != 1:
2439-                msg = "sharefile %s had version %d but we wanted 1" % \
2440-                      (self.fname, version)
2441-                raise UnknownImmutableContainerVersionError(msg)
2442-            self._num_leases = num_leases
2443-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2444-        self._data_offset = 0xc
2445+        pass
2446 
2447     def get_shnum(self):
2448         return self.shnum
2449hunk ./src/allmydata/storage/backends/null/core.py 54
2450         return f.read(actuallength)
2451 
2452     def write_share_data(self, offset, data):
2453-        length = len(data)
2454-        precondition(offset >= 0, offset)
2455-        if self._max_size is not None and offset+length > self._max_size:
2456-            raise DataTooLargeError(self._max_size, offset, length)
2457-        f = open(self.fname, 'rb+')
2458-        real_offset = self._data_offset+offset
2459-        f.seek(real_offset)
2460-        assert f.tell() == real_offset
2461-        f.write(data)
2462-        f.close()
2463+        pass
2464 
2465     def _write_lease_record(self, f, lease_number, lease_info):
2466         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2467hunk ./src/allmydata/storage/backends/null/core.py 84
2468             if data:
2469                 yield LeaseInfo().from_immutable_data(data)
2470 
2471-    def add_lease(self, lease_info):
2472-        f = open(self.fname, 'rb+')
2473-        num_leases = self._read_num_leases(f)
2474-        self._write_lease_record(f, num_leases, lease_info)
2475-        self._write_num_leases(f, num_leases+1)
2476-        f.close()
2477+    def add_lease(self, lease):
2478+        pass
2479 
2480     def renew_lease(self, renew_secret, new_expire_time):
2481         for i,lease in enumerate(self.get_leases()):
2482hunk ./src/allmydata/test/test_backends.py 32
2483                      'sharetypes' : None}
2484 
2485 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2486-    @mock.patch('time.time')
2487-    @mock.patch('os.mkdir')
2488-    @mock.patch('__builtin__.open')
2489-    @mock.patch('os.listdir')
2490-    @mock.patch('os.path.isdir')
2491-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2492-        """ This tests whether a server instance can be constructed
2493-        with a null backend. The server instance fails the test if it
2494-        tries to read or write to the file system. """
2495-
2496-        # Now begin the test.
2497-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2498-
2499-        self.failIf(mockisdir.called)
2500-        self.failIf(mocklistdir.called)
2501-        self.failIf(mockopen.called)
2502-        self.failIf(mockmkdir.called)
2503-
2504-        # You passed!
2505-
2506     @mock.patch('time.time')
2507     @mock.patch('os.mkdir')
2508     @mock.patch('__builtin__.open')
2509hunk ./src/allmydata/test/test_backends.py 53
2510                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2511         mockopen.side_effect = call_open
2512 
2513-        # Now begin the test.
2514-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2515-
2516-        self.failIf(mockisdir.called)
2517-        self.failIf(mocklistdir.called)
2518-        self.failIf(mockopen.called)
2519-        self.failIf(mockmkdir.called)
2520-        self.failIf(mocktime.called)
2521-
2522-        # You passed!
2523-
2524-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2525-    def setUp(self):
2526-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2527-
2528-    @mock.patch('os.mkdir')
2529-    @mock.patch('__builtin__.open')
2530-    @mock.patch('os.listdir')
2531-    @mock.patch('os.path.isdir')
2532-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2533-        """ Write a new share. """
2534-
2535-        # Now begin the test.
2536-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2537-        bs[0].remote_write(0, 'a')
2538-        self.failIf(mockisdir.called)
2539-        self.failIf(mocklistdir.called)
2540-        self.failIf(mockopen.called)
2541-        self.failIf(mockmkdir.called)
2542+        def call_isdir(fname):
2543+            if fname == os.path.join(tempdir,'shares'):
2544+                return True
2545+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2546+                return True
2547+            else:
2548+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2549+        mockisdir.side_effect = call_isdir
2550 
2551hunk ./src/allmydata/test/test_backends.py 62
2552-    @mock.patch('os.path.exists')
2553-    @mock.patch('os.path.getsize')
2554-    @mock.patch('__builtin__.open')
2555-    @mock.patch('os.listdir')
2556-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2557-        """ This tests whether the code correctly finds and reads
2558-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2559-        servers. There is a similar test in test_download, but that one
2560-        is from the perspective of the client and exercises a deeper
2561-        stack of code. This one is for exercising just the
2562-        StorageServer object. """
2563+        def call_mkdir(fname, mode):
2564+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2565+            self.failUnlessEqual(0777, mode)
2566+            if fname == tempdir:
2567+                return None
2568+            elif fname == os.path.join(tempdir,'shares'):
2569+                return None
2570+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2571+                return None
2572+            else:
2573+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2574+        mockmkdir.side_effect = call_mkdir
2575 
2576         # Now begin the test.
2577hunk ./src/allmydata/test/test_backends.py 76
2578-        bs = self.s.remote_get_buckets('teststorage_index')
2579+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2580 
2581hunk ./src/allmydata/test/test_backends.py 78
2582-        self.failUnlessEqual(len(bs), 0)
2583-        self.failIf(mocklistdir.called)
2584-        self.failIf(mockopen.called)
2585-        self.failIf(mockgetsize.called)
2586-        self.failIf(mockexists.called)
2587+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2588 
2589 
2590 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2591hunk ./src/allmydata/test/test_backends.py 193
2592         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2593 
2594 
2595+
2596+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2597+    @mock.patch('time.time')
2598+    @mock.patch('os.mkdir')
2599+    @mock.patch('__builtin__.open')
2600+    @mock.patch('os.listdir')
2601+    @mock.patch('os.path.isdir')
2602+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2603+        """ This tests whether a file system backend instance can be
2604+        constructed. To pass the test, it has to use the
2605+        filesystem in only the prescribed ways. """
2606+
2607+        def call_open(fname, mode):
2608+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2609+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2610+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2611+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2612+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2613+                return StringIO()
2614+            else:
2615+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2616+        mockopen.side_effect = call_open
2617+
2618+        def call_isdir(fname):
2619+            if fname == os.path.join(tempdir,'shares'):
2620+                return True
2621+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2622+                return True
2623+            else:
2624+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2625+        mockisdir.side_effect = call_isdir
2626+
2627+        def call_mkdir(fname, mode):
2628+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2629+            self.failUnlessEqual(0777, mode)
2630+            if fname == tempdir:
2631+                return None
2632+            elif fname == os.path.join(tempdir,'shares'):
2633+                return None
2634+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2635+                return None
2636+            else:
2637+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2638+        mockmkdir.side_effect = call_mkdir
2639+
2640+        # Now begin the test.
2641+        DASCore('teststoredir', expiration_policy)
2642+
2643+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2644}
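checkpoint5's construction tests enforce "use the filesystem only in the prescribed ways" by giving each patched os call a side_effect that fails on any unexpected path. A condensed sketch of that whitelisting pattern, assuming the py2 standalone mock package used by this branch; the class name and paths are illustrative:

    import os
    import mock
    import unittest

    class ExampleWhitelistTest(unittest.TestCase):
        @mock.patch('os.path.isdir')
        def test_isdir_is_whitelisted(self, mockisdir):
            allowed = (os.path.join('teststoredir', 'shares'),
                       os.path.join('teststoredir', 'shares', 'incoming'))
            def call_isdir(fname):
                if fname in allowed:
                    return True
                self.fail("unexpected isdir of '%s'" % (fname,))
            mockisdir.side_effect = call_isdir
            # ... construct the backend / server under test here ...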
2645[checkpoint 6
2646wilcoxjg@gmail.com**20110706190824
2647 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2648] {
2649hunk ./src/allmydata/interfaces.py 100
2650                          renew_secret=LeaseRenewSecret,
2651                          cancel_secret=LeaseCancelSecret,
2652                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2653-                         allocated_size=Offset, canary=Referenceable):
2654+                         allocated_size=Offset,
2655+                         canary=Referenceable):
2656         """
2657hunk ./src/allmydata/interfaces.py 103
2658-        @param storage_index: the index of the bucket to be created or
2659+        @param storage_index: the index of the shares to be created or
2660                               increfed.
2661hunk ./src/allmydata/interfaces.py 105
2662-        @param sharenums: these are the share numbers (probably between 0 and
2663-                          99) that the sender is proposing to store on this
2664-                          server.
2665-        @param renew_secret: This is the secret used to protect bucket refresh
2666+        @param renew_secret: This is the secret used to protect shares refresh
2667                              This secret is generated by the client and
2668                              stored for later comparison by the server. Each
2669                              server is given a different secret.
2670hunk ./src/allmydata/interfaces.py 109
2671-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2672-        @param canary: If the canary is lost before close(), the bucket is
2673+        @param cancel_secret: Like renew_secret, but protects shares decref.
2674+        @param sharenums: these are the share numbers (probably between 0 and
2675+                          99) that the sender is proposing to store on this
2676+                          server.
2677+        @param allocated_size: XXX The size of the shares the client wishes to store.
2678+        @param canary: If the canary is lost before close(), the shares are
2679                        deleted.
2680hunk ./src/allmydata/interfaces.py 116
2681+
2682         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2683                  already have and allocated is what we hereby agree to accept.
2684                  New leases are added for shares in both lists.
2685hunk ./src/allmydata/interfaces.py 128
2686                   renew_secret=LeaseRenewSecret,
2687                   cancel_secret=LeaseCancelSecret):
2688         """
2689-        Add a new lease on the given bucket. If the renew_secret matches an
2690+        Add a new lease on the given shares. If the renew_secret matches an
2691         existing lease, that lease will be renewed instead. If there is no
2692         bucket for the given storage_index, return silently. (note that in
2693         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2694hunk ./src/allmydata/storage/server.py 17
2695 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2696      create_mutable_sharefile
2697 
2698-from zope.interface import implements
2699-
2700 # storage/
2701 # storage/shares/incoming
2702 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2703hunk ./src/allmydata/test/test_backends.py 6
2704 from StringIO import StringIO
2705 
2706 from allmydata.test.common_util import ReallyEqualMixin
2707+from allmydata.util.assertutil import _assert
2708 
2709 import mock, os
2710 
2711hunk ./src/allmydata/test/test_backends.py 92
2712                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2713             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2714                 return StringIO()
2715+            else:
2716+                _assert(False, "The tester code doesn't recognize this case.") 
2717+
2718         mockopen.side_effect = call_open
2719         testbackend = DASCore(tempdir, expiration_policy)
2720         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2721hunk ./src/allmydata/test/test_backends.py 109
2722 
2723         def call_listdir(dirname):
2724             self.failUnlessReallyEqual(dirname, sharedirname)
2725-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2726+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2727 
2728         mocklistdir.side_effect = call_listdir
2729 
2730hunk ./src/allmydata/test/test_backends.py 113
2731+        def call_isdir(dirname):
2732+            self.failUnlessReallyEqual(dirname, sharedirname)
2733+            return True
2734+
2735+        mockisdir.side_effect = call_isdir
2736+
2737+        def call_mkdir(dirname, permissions):
2738+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2739+                self.Fail
2740+            else:
2741+                return True
2742+
2743+        mockmkdir.side_effect = call_mkdir
2744+
2745         class MockFile:
2746             def __init__(self):
2747                 self.buffer = ''
2748hunk ./src/allmydata/test/test_backends.py 156
2749             return sharefile
2750 
2751         mockopen.side_effect = call_open
2752+
2753         # Now begin the test.
2754         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2755         bs[0].remote_write(0, 'a')
2756hunk ./src/allmydata/test/test_backends.py 161
2757         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2758+       
2759+        # Now test the allocated_size method.
2760+        spaceint = self.s.allocated_size()
2761 
2762     @mock.patch('os.path.exists')
2763     @mock.patch('os.path.getsize')
2764}
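The reworked interfaces.py docstring in checkpoint 6 describes the allocate_buckets contract that the tests in this bundle drive. A hedged sketch of one call, mirroring the test code; 'server' and 'canary' are placeholder names for a storage server reference and a Referenceable:

    alreadygot, bucketwriters = server.remote_allocate_buckets(
        'teststorage_index',  # storage_index
        'x' * 32,             # renew_secret
        'y' * 32,             # cancel_secret
        set((0,)),            # sharenums the client proposes to store
        1,                    # allocated_size of each share, in bytes
        canary)               # if the canary is lost before close(), shares are dropped
    # alreadygot: share numbers the server already holds (their leases are renewed);
    # bucketwriters: dict mapping each newly accepted shnum to a BucketWriter.
    for shnum, bw in bucketwriters.items():
        bw.remote_write(0, 'a')
        bw.remote_close()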
2765[checkpoint 7
2766wilcoxjg@gmail.com**20110706200820
2767 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2768] hunk ./src/allmydata/test/test_backends.py 164
2769         
2770         # Now test the allocated_size method.
2771         spaceint = self.s.allocated_size()
2772+        self.failUnlessReallyEqual(spaceint, 1)
2773 
2774     @mock.patch('os.path.exists')
2775     @mock.patch('os.path.getsize')
2776[checkpoint8
2777wilcoxjg@gmail.com**20110706223126
2778 Ignore-this: 97336180883cb798b16f15411179f827
2779   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2780] hunk ./src/allmydata/test/test_backends.py 32
2781                      'cutoff_date' : None,
2782                      'sharetypes' : None}
2783 
2784+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2785+    def setUp(self):
2786+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2787+
2788+    @mock.patch('os.mkdir')
2789+    @mock.patch('__builtin__.open')
2790+    @mock.patch('os.listdir')
2791+    @mock.patch('os.path.isdir')
2792+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2793+        """ Write a new share. """
2794+
2795+        # Now begin the test.
2796+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2797+        bs[0].remote_write(0, 'a')
2798+        self.failIf(mockisdir.called)
2799+        self.failIf(mocklistdir.called)
2800+        self.failIf(mockopen.called)
2801+        self.failIf(mockmkdir.called)
2802+
2803 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2804     @mock.patch('time.time')
2805     @mock.patch('os.mkdir')
2806[checkpoint 9
2807wilcoxjg@gmail.com**20110707042942
2808 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2809] {
2810hunk ./src/allmydata/storage/backends/das/core.py 88
2811                     filename = os.path.join(finalstoragedir, f)
2812                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2813         except OSError:
2814-            # Commonly caused by there being no buckets at all.
2815+            # Commonly caused by there being no shares at all.
2816             pass
2817         
2818     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2819hunk ./src/allmydata/storage/backends/das/core.py 141
2820         self.storage_index = storageindex
2821         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2822         self._max_size = max_size
2823+        self.incomingdir = os.path.join(sharedir, 'incoming')
2824+        si_dir = storage_index_to_dir(storageindex)
2825+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2826+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2827         if create:
2828             # touch the file, so later callers will see that we're working on
2829             # it. Also construct the metadata.
2830hunk ./src/allmydata/storage/backends/das/core.py 177
2831             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2832         self._data_offset = 0xc
2833 
2834+    def close(self):
2835+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2836+        fileutil.rename(self.incominghome, self.finalhome)
2837+        try:
2838+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2839+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2840+            # these directories lying around forever, but the delete might
2841+            # fail if we're working on another share for the same storage
2842+            # index (like ab/abcde/5). The alternative approach would be to
2843+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2844+            # ShareWriter), each of which is responsible for a single
2845+            # directory on disk, and have them use reference counting of
2846+            # their children to know when they should do the rmdir. This
2847+            # approach is simpler, but relies on os.rmdir refusing to delete
2848+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2849+            os.rmdir(os.path.dirname(self.incominghome))
2850+            # we also delete the grandparent (prefix) directory, .../ab ,
2851+            # again to avoid leaving directories lying around. This might
2852+            # fail if there is another bucket open that shares a prefix (like
2853+            # ab/abfff).
2854+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2855+            # we leave the great-grandparent (incoming/) directory in place.
2856+        except EnvironmentError:
2857+            # ignore the "can't rmdir because the directory is not empty"
2858+            # exceptions, those are normal consequences of the
2859+            # above-mentioned conditions.
2860+            pass
2861+        pass
2862+       
2863+    def stat(self):
2864+        return os.stat(self.finalhome)[stat.ST_SIZE]
2865+
2866     def get_shnum(self):
2867         return self.shnum
2868 
2869hunk ./src/allmydata/storage/immutable.py 7
2870 
2871 from zope.interface import implements
2872 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2873-from allmydata.util import base32, fileutil, log
2874+from allmydata.util import base32, log
2875 from allmydata.util.assertutil import precondition
2876 from allmydata.util.hashutil import constant_time_compare
2877 from allmydata.storage.lease import LeaseInfo
2878hunk ./src/allmydata/storage/immutable.py 44
2879     def remote_close(self):
2880         precondition(not self.closed)
2881         start = time.time()
2882-
2883-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2884-        fileutil.rename(self.incominghome, self.finalhome)
2885-        try:
2886-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2887-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2888-            # these directories lying around forever, but the delete might
2889-            # fail if we're working on another share for the same storage
2890-            # index (like ab/abcde/5). The alternative approach would be to
2891-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2892-            # ShareWriter), each of which is responsible for a single
2893-            # directory on disk, and have them use reference counting of
2894-            # their children to know when they should do the rmdir. This
2895-            # approach is simpler, but relies on os.rmdir refusing to delete
2896-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2897-            os.rmdir(os.path.dirname(self.incominghome))
2898-            # we also delete the grandparent (prefix) directory, .../ab ,
2899-            # again to avoid leaving directories lying around. This might
2900-            # fail if there is another bucket open that shares a prefix (like
2901-            # ab/abfff).
2902-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2903-            # we leave the great-grandparent (incoming/) directory in place.
2904-        except EnvironmentError:
2905-            # ignore the "can't rmdir because the directory is not empty"
2906-            # exceptions, those are normal consequences of the
2907-            # above-mentioned conditions.
2908-            pass
2909+        self._sharefile.close()
2910         self._sharefile = None
2911         self.closed = True
2912         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2913hunk ./src/allmydata/storage/immutable.py 49
2914 
2915-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2916+        filelen = self._sharefile.stat()
2917         self.ss.bucket_writer_closed(self, filelen)
2918         self.ss.add_latency("close", time.time() - start)
2919         self.ss.count("close")
2920hunk ./src/allmydata/storage/server.py 45
2921         self._active_writers = weakref.WeakKeyDictionary()
2922         self.backend = backend
2923         self.backend.setServiceParent(self)
2924+        self.backend.set_storage_server(self)
2925         log.msg("StorageServer created", facility="tahoe.storage")
2926 
2927         self.latencies = {"allocate": [], # immutable
2928hunk ./src/allmydata/storage/server.py 220
2929 
2930         for shnum in (sharenums - alreadygot):
2931             if (not limited) or (remaining_space >= max_space_per_bucket):
2932-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2933-                self.backend.set_storage_server(self)
2934                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2935                                                      max_space_per_bucket, lease_info, canary)
2936                 bucketwriters[shnum] = bw
2937hunk ./src/allmydata/test/test_backends.py 117
2938         mockopen.side_effect = call_open
2939         testbackend = DASCore(tempdir, expiration_policy)
2940         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2941-
2942+   
2943+    @mock.patch('allmydata.util.fileutil.get_available_space')
2944     @mock.patch('time.time')
2945     @mock.patch('os.mkdir')
2946     @mock.patch('__builtin__.open')
2947hunk ./src/allmydata/test/test_backends.py 124
2948     @mock.patch('os.listdir')
2949     @mock.patch('os.path.isdir')
2950-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2951+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2952+                             mockget_available_space):
2953         """ Write a new share. """
2954 
2955         def call_listdir(dirname):
2956hunk ./src/allmydata/test/test_backends.py 148
2957 
2958         mockmkdir.side_effect = call_mkdir
2959 
2960+        def call_get_available_space(storedir, reserved_space):
2961+            self.failUnlessReallyEqual(storedir, tempdir)
2962+            return 1
2963+
2964+        mockget_available_space.side_effect = call_get_available_space
2965+
2966         class MockFile:
2967             def __init__(self):
2968                 self.buffer = ''
2969hunk ./src/allmydata/test/test_backends.py 188
2970         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2971         bs[0].remote_write(0, 'a')
2972         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2973-       
2974+
2975+        # What happens when there's not enough space for the client's request?
2976+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2977+
2978         # Now test the allocated_size method.
2979         spaceint = self.s.allocated_size()
2980         self.failUnlessReallyEqual(spaceint, 1)
2981}
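checkpoint 9 moves the incoming-to-final promotion out of BucketWriter.remote_close() and into the backend's ImmutableShare.close(); the idiom itself is unchanged. A standalone sketch of that idiom, with made-up paths:

    import os
    from allmydata.util import fileutil

    def promote_to_final(incominghome, finalhome):
        fileutil.make_dirs(os.path.dirname(finalhome))
        fileutil.rename(incominghome, finalhome)
        try:
            # os.rmdir refuses to remove a non-empty directory, so these two
            # calls only prune the .../prefix/si levels once the last share
            # for that storage index has moved out of incoming/.
            os.rmdir(os.path.dirname(incominghome))
            os.rmdir(os.path.dirname(os.path.dirname(incominghome)))
        except EnvironmentError:
            pass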
2982[checkpoint10
2983wilcoxjg@gmail.com**20110707172049
2984 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2985] {
2986hunk ./src/allmydata/test/test_backends.py 20
2987 # The following share file contents was generated with
2988 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2989 # with share data == 'a'.
2990-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2991+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2992+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2993+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2994 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2995 
2996hunk ./src/allmydata/test/test_backends.py 25
2997+testnodeid = 'testnodeidxxxxxxxxxx'
2998 tempdir = 'teststoredir'
2999 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3000 sharefname = os.path.join(sharedirname, '0')
3001hunk ./src/allmydata/test/test_backends.py 37
3002 
3003 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
3004     def setUp(self):
3005-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
3006+        self.s = StorageServer(testnodeid, backend=NullCore())
3007 
3008     @mock.patch('os.mkdir')
3009     @mock.patch('__builtin__.open')
3010hunk ./src/allmydata/test/test_backends.py 99
3011         mockmkdir.side_effect = call_mkdir
3012 
3013         # Now begin the test.
3014-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
3015+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
3016 
3017         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
3018 
3019hunk ./src/allmydata/test/test_backends.py 119
3020 
3021         mockopen.side_effect = call_open
3022         testbackend = DASCore(tempdir, expiration_policy)
3023-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
3024-   
3025+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3026+       
3027+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3028     @mock.patch('allmydata.util.fileutil.get_available_space')
3029     @mock.patch('time.time')
3030     @mock.patch('os.mkdir')
3031hunk ./src/allmydata/test/test_backends.py 129
3032     @mock.patch('os.listdir')
3033     @mock.patch('os.path.isdir')
3034     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3035-                             mockget_available_space):
3036+                             mockget_available_space, mockget_shares):
3037         """ Write a new share. """
3038 
3039         def call_listdir(dirname):
3040hunk ./src/allmydata/test/test_backends.py 139
3041         mocklistdir.side_effect = call_listdir
3042 
3043         def call_isdir(dirname):
3044+            #XXX Should there be any other tests here?
3045             self.failUnlessReallyEqual(dirname, sharedirname)
3046             return True
3047 
3048hunk ./src/allmydata/test/test_backends.py 159
3049 
3050         mockget_available_space.side_effect = call_get_available_space
3051 
3052+        mocktime.return_value = 0
3053+        class MockShare:
3054+            def __init__(self):
3055+                self.shnum = 1
3056+               
3057+            def add_or_renew_lease(elf, lease_info):
3058+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3059+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3060+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3061+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3062+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3063+               
3064+
3065+        share = MockShare()
3066+        def call_get_shares(storageindex):
3067+            return [share]
3068+
3069+        mockget_shares.side_effect = call_get_shares
3070+
3071         class MockFile:
3072             def __init__(self):
3073                 self.buffer = ''
3074hunk ./src/allmydata/test/test_backends.py 199
3075             def tell(self):
3076                 return self.pos
3077 
3078-        mocktime.return_value = 0
3079 
3080         sharefile = MockFile()
3081         def call_open(fname, mode):
3082}
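checkpoint10 spells out the canned share contents used by the tests. The 12-byte prefix of share_file_data is the v1 immutable-share header; unpacking it shows version 1, a legacy share-data-length field of 1 (the single byte 'a'), and one lease (the renew/cancel secrets appended in share_data):

    import struct

    header = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01'
    version, data_length, num_leases = struct.unpack(">LLL", header)
    assert (version, data_length, num_leases) == (1, 1, 1)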
3083[jacp 11
3084wilcoxjg@gmail.com**20110708213919
3085 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3086] {
3087hunk ./src/allmydata/storage/backends/das/core.py 144
3088         self.incomingdir = os.path.join(sharedir, 'incoming')
3089         si_dir = storage_index_to_dir(storageindex)
3090         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3091+        #XXX  self.fname and self.finalhome need to be resolved/merged.
3092         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3093         if create:
3094             # touch the file, so later callers will see that we're working on
3095hunk ./src/allmydata/storage/backends/das/core.py 208
3096         pass
3097         
3098     def stat(self):
3099-        return os.stat(self.finalhome)[stat.ST_SIZE]
3100+        return os.stat(self.finalhome).st_size
3101 
3102     def get_shnum(self):
3103         return self.shnum
3104hunk ./src/allmydata/storage/immutable.py 44
3105     def remote_close(self):
3106         precondition(not self.closed)
3107         start = time.time()
3108+
3109         self._sharefile.close()
3110hunk ./src/allmydata/storage/immutable.py 46
3111+        filelen = self._sharefile.stat()
3112         self._sharefile = None
3113hunk ./src/allmydata/storage/immutable.py 48
3114+
3115         self.closed = True
3116         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3117 
3118hunk ./src/allmydata/storage/immutable.py 52
3119-        filelen = self._sharefile.stat()
3120         self.ss.bucket_writer_closed(self, filelen)
3121         self.ss.add_latency("close", time.time() - start)
3122         self.ss.count("close")
3123hunk ./src/allmydata/storage/server.py 220
3124 
3125         for shnum in (sharenums - alreadygot):
3126             if (not limited) or (remaining_space >= max_space_per_bucket):
3127-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3128-                                                     max_space_per_bucket, lease_info, canary)
3129+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3130                 bucketwriters[shnum] = bw
3131                 self._active_writers[bw] = 1
3132                 if limited:
3133hunk ./src/allmydata/test/test_backends.py 20
3134 # The following share file contents was generated with
3135 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3136 # with share data == 'a'.
3137-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3138-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3139+renew_secret  = 'x'*32
3140+cancel_secret = 'y'*32
3141 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3142 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3143 
3144hunk ./src/allmydata/test/test_backends.py 27
3145 testnodeid = 'testnodeidxxxxxxxxxx'
3146 tempdir = 'teststoredir'
3147-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3148-sharefname = os.path.join(sharedirname, '0')
3149+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3150+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3151+shareincomingname = os.path.join(sharedirincomingname, '0')
3152+sharefname = os.path.join(sharedirfinalname, '0')
3153+
3154 expiration_policy = {'enabled' : False,
3155                      'mode' : 'age',
3156                      'override_lease_duration' : None,
3157hunk ./src/allmydata/test/test_backends.py 123
3158         mockopen.side_effect = call_open
3159         testbackend = DASCore(tempdir, expiration_policy)
3160         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3161-       
3162+
3163+    @mock.patch('allmydata.util.fileutil.rename')
3164+    @mock.patch('allmydata.util.fileutil.make_dirs')
3165+    @mock.patch('os.path.exists')
3166+    @mock.patch('os.stat')
3167     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3168     @mock.patch('allmydata.util.fileutil.get_available_space')
3169     @mock.patch('time.time')
3170hunk ./src/allmydata/test/test_backends.py 136
3171     @mock.patch('os.listdir')
3172     @mock.patch('os.path.isdir')
3173     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3174-                             mockget_available_space, mockget_shares):
3175+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3176+                             mockmake_dirs, mockrename):
3177         """ Write a new share. """
3178 
3179         def call_listdir(dirname):
3180hunk ./src/allmydata/test/test_backends.py 141
3181-            self.failUnlessReallyEqual(dirname, sharedirname)
3182+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3183             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3184 
3185         mocklistdir.side_effect = call_listdir
3186hunk ./src/allmydata/test/test_backends.py 148
3187 
3188         def call_isdir(dirname):
3189             #XXX Should there be any other tests here?
3190-            self.failUnlessReallyEqual(dirname, sharedirname)
3191+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3192             return True
3193 
3194         mockisdir.side_effect = call_isdir
3195hunk ./src/allmydata/test/test_backends.py 154
3196 
3197         def call_mkdir(dirname, permissions):
3198-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3199+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3200                 self.Fail
3201             else:
3202                 return True
3203hunk ./src/allmydata/test/test_backends.py 208
3204                 return self.pos
3205 
3206 
3207-        sharefile = MockFile()
3208+        fobj = MockFile()
3209         def call_open(fname, mode):
3210             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3211hunk ./src/allmydata/test/test_backends.py 211
3212-            return sharefile
3213+            return fobj
3214 
3215         mockopen.side_effect = call_open
3216 
3217hunk ./src/allmydata/test/test_backends.py 215
3218+        def call_make_dirs(dname):
3219+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3220+           
3221+        mockmake_dirs.side_effect = call_make_dirs
3222+
3223+        def call_rename(src, dst):
3224+           self.failUnlessReallyEqual(src, shareincomingname)
3225+           self.failUnlessReallyEqual(dst, sharefname)
3226+           
3227+        mockrename.side_effect = call_rename
3228+
3229+        def call_exists(fname):
3230+            self.failUnlessReallyEqual(fname, sharefname)
3231+
3232+        mockexists.side_effect = call_exists
3233+
3234         # Now begin the test.
3235         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3236         bs[0].remote_write(0, 'a')
3237hunk ./src/allmydata/test/test_backends.py 234
3238-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3239+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3240+        spaceint = self.s.allocated_size()
3241+        self.failUnlessReallyEqual(spaceint, 1)
3242+
3243+        bs[0].remote_close()
3244 
3245         # What happens when there's not enough space for the client's request?
3246hunk ./src/allmydata/test/test_backends.py 241
3247-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3248+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3249 
3250         # Now test the allocated_size method.
3251hunk ./src/allmydata/test/test_backends.py 244
3252-        spaceint = self.s.allocated_size()
3253-        self.failUnlessReallyEqual(spaceint, 1)
3254+        #self.failIf(mockexists.called, mockexists.call_args_list)
3255+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3256+        #self.failIf(mockrename.called, mockrename.call_args_list)
3257+        #self.failIf(mockstat.called, mockstat.call_args_list)
3258 
3259     @mock.patch('os.path.exists')
3260     @mock.patch('os.path.getsize')
3261}
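jacp 11 introduces separate incoming and final homes for a share. For the storage index used throughout these tests (base32 'orsxg5dtorxxeylhmvpws3temv4a') and share number 0, the constants defined above resolve to:

    import os

    tempdir = 'teststoredir'
    si_dir = os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a')
    shareincomingname = os.path.join(tempdir, 'shares', 'incoming', si_dir, '0')
    sharefname = os.path.join(tempdir, 'shares', si_dir, '0')
    # close() is expected to rename shareincomingname to sharefname.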
3262[checkpoint12 testing correct behavior with regard to incoming and final
3263wilcoxjg@gmail.com**20110710191915
3264 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3265] {
3266hunk ./src/allmydata/storage/backends/das/core.py 74
3267         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3268         self.lease_checker.setServiceParent(self)
3269 
3270+    def get_incoming(self, storageindex):
3271+        return set((1,))
3272+
3273     def get_available_space(self):
3274         if self.readonly:
3275             return 0
3276hunk ./src/allmydata/storage/server.py 77
3277         """Return a dict, indexed by category, that contains a dict of
3278         latency numbers for each category. If there are sufficient samples
3279         for unambiguous interpretation, each dict will contain the
3280-        following keys: mean, 01_0_percentile, 10_0_percentile,
3281+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3282         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3283         99_0_percentile, 99_9_percentile.  If there are insufficient
3284         samples for a given percentile to be interpreted unambiguously
3285hunk ./src/allmydata/storage/server.py 120
3286 
3287     def get_stats(self):
3288         # remember: RIStatsProvider requires that our return dict
3289-        # contains numeric values.
3290+        # contains numeric or None values.
3291         stats = { 'storage_server.allocated': self.allocated_size(), }
3292         stats['storage_server.reserved_space'] = self.reserved_space
3293         for category,ld in self.get_latencies().items():
3294hunk ./src/allmydata/storage/server.py 185
3295         start = time.time()
3296         self.count("allocate")
3297         alreadygot = set()
3298+        incoming = set()
3299         bucketwriters = {} # k: shnum, v: BucketWriter
3300 
3301         si_s = si_b2a(storage_index)
3302hunk ./src/allmydata/storage/server.py 219
3303             alreadygot.add(share.shnum)
3304             share.add_or_renew_lease(lease_info)
3305 
3306-        for shnum in (sharenums - alreadygot):
3307+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3308+        incoming = self.backend.get_incoming(storageindex)
3309+
3310+        for shnum in ((sharenums - alreadygot) - incoming):
3311             if (not limited) or (remaining_space >= max_space_per_bucket):
3312                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3313                 bucketwriters[shnum] = bw
3314hunk ./src/allmydata/storage/server.py 229
3315                 self._active_writers[bw] = 1
3316                 if limited:
3317                     remaining_space -= max_space_per_bucket
3318-
3319-        #XXX We SHOULD DOCUMENT LATER.
3320+            else:
3321+                # Bummer not enough space to accept this share.
3322+                pass
3323 
3324         self.add_latency("allocate", time.time() - start)
3325         return alreadygot, bucketwriters
3326hunk ./src/allmydata/storage/server.py 323
3327         self.add_latency("get", time.time() - start)
3328         return bucketreaders
3329 
3330-    def get_leases(self, storage_index):
3331+    def remote_get_incoming(self, storageindex):
3332+        incoming_share_set = self.backend.get_incoming(storageindex)
3333+        return incoming_share_set
3334+
3335+    def get_leases(self, storageindex):
3336         """Provide an iterator that yields all of the leases attached to this
3337         bucket. Each lease is returned as a LeaseInfo instance.
3338 
3339hunk ./src/allmydata/storage/server.py 337
3340         # since all shares get the same lease data, we just grab the leases
3341         # from the first share
3342         try:
3343-            shnum, filename = self._get_shares(storage_index).next()
3344+            shnum, filename = self._get_shares(storageindex).next()
3345             sf = ShareFile(filename)
3346             return sf.get_leases()
3347         except StopIteration:
3348hunk ./src/allmydata/test/test_backends.py 182
3349 
3350         share = MockShare()
3351         def call_get_shares(storageindex):
3352-            return [share]
3353+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3354+            return []#share]
3355 
3356         mockget_shares.side_effect = call_get_shares
3357 
3358hunk ./src/allmydata/test/test_backends.py 222
3359         mockmake_dirs.side_effect = call_make_dirs
3360 
3361         def call_rename(src, dst):
3362-           self.failUnlessReallyEqual(src, shareincomingname)
3363-           self.failUnlessReallyEqual(dst, sharefname)
3364+            self.failUnlessReallyEqual(src, shareincomingname)
3365+            self.failUnlessReallyEqual(dst, sharefname)
3366             
3367         mockrename.side_effect = call_rename
3368 
3369hunk ./src/allmydata/test/test_backends.py 233
3370         mockexists.side_effect = call_exists
3371 
3372         # Now begin the test.
3373+
3374+        # XXX (0) ???  Fail unless something is not properly set-up?
3375         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3376hunk ./src/allmydata/test/test_backends.py 236
3377+
3378+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3379+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3380+
3381+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3382+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3383+        # with the same si, until BucketWriter.remote_close() has been called.
3384+        # self.failIf(bsa)
3385+
3386+        # XXX (3) Inspect final and fail unless there's nothing there.
3387         bs[0].remote_write(0, 'a')
3388hunk ./src/allmydata/test/test_backends.py 247
3389+        # XXX (4a) Inspect final and fail unless share 0 is there.
3390+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3391         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3392         spaceint = self.s.allocated_size()
3393         self.failUnlessReallyEqual(spaceint, 1)
3394hunk ./src/allmydata/test/test_backends.py 253
3395 
3396+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3397         bs[0].remote_close()
3398 
3399         # What happens when there's not enough space for the client's request?
3400hunk ./src/allmydata/test/test_backends.py 260
3401         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3402 
3403         # Now test the allocated_size method.
3404-        #self.failIf(mockexists.called, mockexists.call_args_list)
3405+        # self.failIf(mockexists.called, mockexists.call_args_list)
3406         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3407         #self.failIf(mockrename.called, mockrename.call_args_list)
3408         #self.failIf(mockstat.called, mockstat.call_args_list)
3409}
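checkpoint12's allocation change is pure set arithmetic: a share gets a new BucketWriter only if the client proposed it, the server does not already hold it, and it is not already arriving in incoming/. A tiny worked example with illustrative share numbers:

    sharenums = set([0, 1, 2, 3])   # proposed by the client
    alreadygot = set([1])           # already in final storage
    incoming = set([2])             # currently arriving in incoming/
    to_write = (sharenums - alreadygot) - incoming
    assert to_write == set([0, 3])  # only these get BucketWriters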
3410[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3411wilcoxjg@gmail.com**20110710195139
3412 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3413] {
3414hunk ./src/allmydata/storage/server.py 220
3415             share.add_or_renew_lease(lease_info)
3416 
3417         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3418-        incoming = self.backend.get_incoming(storageindex)
3419+        incoming = self.backend.get_incoming(storage_index)
3420 
3421         for shnum in ((sharenums - alreadygot) - incoming):
3422             if (not limited) or (remaining_space >= max_space_per_bucket):
3423hunk ./src/allmydata/storage/server.py 323
3424         self.add_latency("get", time.time() - start)
3425         return bucketreaders
3426 
3427-    def remote_get_incoming(self, storageindex):
3428-        incoming_share_set = self.backend.get_incoming(storageindex)
3429+    def remote_get_incoming(self, storage_index):
3430+        incoming_share_set = self.backend.get_incoming(storage_index)
3431         return incoming_share_set
3432 
3433hunk ./src/allmydata/storage/server.py 327
3434-    def get_leases(self, storageindex):
3435+    def get_leases(self, storage_index):
3436         """Provide an iterator that yields all of the leases attached to this
3437         bucket. Each lease is returned as a LeaseInfo instance.
3438 
3439hunk ./src/allmydata/storage/server.py 337
3440         # since all shares get the same lease data, we just grab the leases
3441         # from the first share
3442         try:
3443-            shnum, filename = self._get_shares(storageindex).next()
3444+            shnum, filename = self._get_shares(storage_index).next()
3445             sf = ShareFile(filename)
3446             return sf.get_leases()
3447         except StopIteration:
3448replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3449}
3450[adding comments to clarify what I'm about to do.
3451wilcoxjg@gmail.com**20110710220623
3452 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3453] {
3454hunk ./src/allmydata/storage/backends/das/core.py 8
3455 
3456 import os, re, weakref, struct, time
3457 
3458-from foolscap.api import Referenceable
3459+#from foolscap.api import Referenceable
3460 from twisted.application import service
3461 
3462 from zope.interface import implements
3463hunk ./src/allmydata/storage/backends/das/core.py 12
3464-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3465+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3466 from allmydata.util import fileutil, idlib, log, time_format
3467 import allmydata # for __full_version__
3468 
3469hunk ./src/allmydata/storage/server.py 219
3470             alreadygot.add(share.shnum)
3471             share.add_or_renew_lease(lease_info)
3472 
3473-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3474+        # fill incoming with all shares that are currently incoming; use a set
3475+        # operation since there's no need to operate on individual pieces
3476         incoming = self.backend.get_incoming(storageindex)
3477 
3478         for shnum in ((sharenums - alreadygot) - incoming):
3479hunk ./src/allmydata/test/test_backends.py 245
3480         # with the same si, until BucketWriter.remote_close() has been called.
3481         # self.failIf(bsa)
3482 
3483-        # XXX (3) Inspect final and fail unless there's nothing there.
3484         bs[0].remote_write(0, 'a')
3485hunk ./src/allmydata/test/test_backends.py 246
3486-        # XXX (4a) Inspect final and fail unless share 0 is there.
3487-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3488         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3489         spaceint = self.s.allocated_size()
3490         self.failUnlessReallyEqual(spaceint, 1)
3491hunk ./src/allmydata/test/test_backends.py 250
3492 
3493-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3494+        # XXX (3) Inspect final and fail unless there's nothing there.
3495         bs[0].remote_close()
3496hunk ./src/allmydata/test/test_backends.py 252
3497+        # XXX (4a) Inspect final and fail unless share 0 is there.
3498+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3499 
3500         # What happens when there's not enough space for the client's request?
3501         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3502}
3503[branching back, no longer attempting to mock inside TestServerFSBackend
3504wilcoxjg@gmail.com**20110711190849
3505 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3506] {
3507hunk ./src/allmydata/storage/backends/das/core.py 75
3508         self.lease_checker.setServiceParent(self)
3509 
3510     def get_incoming(self, storageindex):
3511-        return set((1,))
3512-
3513-    def get_available_space(self):
3514-        if self.readonly:
3515-            return 0
3516-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3517+        """Return the set of incoming shnums."""
3518+        return set(os.listdir(self.incomingdir))
3519 
3520     def get_shares(self, storage_index):
3521         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3522hunk ./src/allmydata/storage/backends/das/core.py 90
3523             # Commonly caused by there being no shares at all.
3524             pass
3525         
3526+    def get_available_space(self):
3527+        if self.readonly:
3528+            return 0
3529+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3530+
3531     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3532         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3533         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3534hunk ./src/allmydata/test/test_backends.py 27
3535 
3536 testnodeid = 'testnodeidxxxxxxxxxx'
3537 tempdir = 'teststoredir'
3538-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3539-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3540+basedir = os.path.join(tempdir, 'shares')
3541+baseincdir = os.path.join(basedir, 'incoming')
3542+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3543+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3544 shareincomingname = os.path.join(sharedirincomingname, '0')
3545 sharefname = os.path.join(sharedirfinalname, '0')
3546 
3547hunk ./src/allmydata/test/test_backends.py 142
3548                              mockmake_dirs, mockrename):
3549         """ Write a new share. """
3550 
3551-        def call_listdir(dirname):
3552-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3553-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3554-
3555-        mocklistdir.side_effect = call_listdir
3556-
3557-        def call_isdir(dirname):
3558-            #XXX Should there be any other tests here?
3559-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3560-            return True
3561-
3562-        mockisdir.side_effect = call_isdir
3563-
3564-        def call_mkdir(dirname, permissions):
3565-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3566-                self.Fail
3567-            else:
3568-                return True
3569-
3570-        mockmkdir.side_effect = call_mkdir
3571-
3572-        def call_get_available_space(storedir, reserved_space):
3573-            self.failUnlessReallyEqual(storedir, tempdir)
3574-            return 1
3575-
3576-        mockget_available_space.side_effect = call_get_available_space
3577-
3578-        mocktime.return_value = 0
3579         class MockShare:
3580             def __init__(self):
3581                 self.shnum = 1
3582hunk ./src/allmydata/test/test_backends.py 152
3583                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3584                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3585                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3586-               
3587 
3588         share = MockShare()
3589hunk ./src/allmydata/test/test_backends.py 154
3590-        def call_get_shares(storageindex):
3591-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3592-            return []#share]
3593-
3594-        mockget_shares.side_effect = call_get_shares
3595 
3596         class MockFile:
3597             def __init__(self):
3598hunk ./src/allmydata/test/test_backends.py 176
3599             def tell(self):
3600                 return self.pos
3601 
3602-
3603         fobj = MockFile()
3604hunk ./src/allmydata/test/test_backends.py 177
3605+
3606+        directories = {}
3607+        def call_listdir(dirname):
3608+            if dirname not in directories:
3609+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3610+            else:
3611+                return directories[dirname].get_contents()
3612+
3613+        mocklistdir.side_effect = call_listdir
3614+
3615+        class MockDir:
3616+            def __init__(self, dirname):
3617+                self.name = dirname
3618+                self.contents = []
3619+   
3620+            def get_contents(self):
3621+                return self.contents
3622+
3623+        def call_isdir(dirname):
3624+            #XXX Should there be any other tests here?
3625+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3626+            return True
3627+
3628+        mockisdir.side_effect = call_isdir
3629+
3630+        def call_mkdir(dirname, permissions):
3631+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3632+                self.Fail
3633+            if dirname in directories:
3634+                raise OSError(17, "File exists: '%s'" % dirname)
3635+                self.Fail
3636+            elif dirname not in directories:
3637+                directories[dirname] = MockDir(dirname)
3638+                return True
3639+
3640+        mockmkdir.side_effect = call_mkdir
3641+
3642+        def call_get_available_space(storedir, reserved_space):
3643+            self.failUnlessReallyEqual(storedir, tempdir)
3644+            return 1
3645+
3646+        mockget_available_space.side_effect = call_get_available_space
3647+
3648+        mocktime.return_value = 0
3649+        def call_get_shares(storageindex):
3650+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3651+            return []#share]
3652+
3653+        mockget_shares.side_effect = call_get_shares
3654+
3655         def call_open(fname, mode):
3656             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3657             return fobj
3658}
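The scaffolding above replaces the real filesystem with an in-memory one: os.listdir and os.mkdir are patched with side_effect callables that consult a shared directories dict. A minimal, self-contained sketch of the same technique (the FakeFS name and the example path are illustrative, not taken from the patch):

    import os
    import mock

    class FakeFS(object):
        """In-memory stand-in for the directories the code under test touches."""
        def __init__(self):
            self.dirs = {}                    # dirname -> list of entry names
        def mkdir(self, dirname, mode=0777):
            if dirname in self.dirs:
                raise OSError(17, "File exists: '%s'" % dirname)
            self.dirs[dirname] = []
        def listdir(self, dirname):
            if dirname not in self.dirs:
                raise OSError(2, "No such file or directory: '%s'" % dirname)
            return self.dirs[dirname]

    fs = FakeFS()
    with mock.patch('os.mkdir', side_effect=fs.mkdir), \
         mock.patch('os.listdir', side_effect=fs.listdir):
        os.mkdir('teststoredir')
        assert os.listdir('teststoredir') == []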
3659[checkpoint12 TestServerFSBackend no longer mocks filesystem
3660wilcoxjg@gmail.com**20110711193357
3661 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3662] {
3663hunk ./src/allmydata/storage/backends/das/core.py 23
3664      create_mutable_sharefile
3665 from allmydata.storage.immutable import BucketWriter, BucketReader
3666 from allmydata.storage.crawler import FSBucketCountingCrawler
3667+from allmydata.util.hashutil import constant_time_compare
3668 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3669 
3670 from zope.interface import implements
3671hunk ./src/allmydata/storage/backends/das/core.py 28
3672 
3673+# storage/
3674+# storage/shares/incoming
3675+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3676+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3677+# storage/shares/$START/$STORAGEINDEX
3678+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3679+
3680+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3681+# base-32 chars).
3682 # $SHARENUM matches this regex:
3683 NUM_RE=re.compile("^[0-9]+$")
3684 
3685hunk ./src/allmydata/test/test_backends.py 126
3686         testbackend = DASCore(tempdir, expiration_policy)
3687         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3688 
3689-    @mock.patch('allmydata.util.fileutil.rename')
3690-    @mock.patch('allmydata.util.fileutil.make_dirs')
3691-    @mock.patch('os.path.exists')
3692-    @mock.patch('os.stat')
3693-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3694-    @mock.patch('allmydata.util.fileutil.get_available_space')
3695     @mock.patch('time.time')
3696hunk ./src/allmydata/test/test_backends.py 127
3697-    @mock.patch('os.mkdir')
3698-    @mock.patch('__builtin__.open')
3699-    @mock.patch('os.listdir')
3700-    @mock.patch('os.path.isdir')
3701-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3702-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3703-                             mockmake_dirs, mockrename):
3704+    def test_write_share(self, mocktime):
3705         """ Write a new share. """
3706 
3707         class MockShare:
3708hunk ./src/allmydata/test/test_backends.py 143
3709 
3710         share = MockShare()
3711 
3712-        class MockFile:
3713-            def __init__(self):
3714-                self.buffer = ''
3715-                self.pos = 0
3716-            def write(self, instring):
3717-                begin = self.pos
3718-                padlen = begin - len(self.buffer)
3719-                if padlen > 0:
3720-                    self.buffer += '\x00' * padlen
3721-                end = self.pos + len(instring)
3722-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3723-                self.pos = end
3724-            def close(self):
3725-                pass
3726-            def seek(self, pos):
3727-                self.pos = pos
3728-            def read(self, numberbytes):
3729-                return self.buffer[self.pos:self.pos+numberbytes]
3730-            def tell(self):
3731-                return self.pos
3732-
3733-        fobj = MockFile()
3734-
3735-        directories = {}
3736-        def call_listdir(dirname):
3737-            if dirname not in directories:
3738-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3739-            else:
3740-                return directories[dirname].get_contents()
3741-
3742-        mocklistdir.side_effect = call_listdir
3743-
3744-        class MockDir:
3745-            def __init__(self, dirname):
3746-                self.name = dirname
3747-                self.contents = []
3748-   
3749-            def get_contents(self):
3750-                return self.contents
3751-
3752-        def call_isdir(dirname):
3753-            #XXX Should there be any other tests here?
3754-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3755-            return True
3756-
3757-        mockisdir.side_effect = call_isdir
3758-
3759-        def call_mkdir(dirname, permissions):
3760-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3761-                self.Fail
3762-            if dirname in directories:
3763-                raise OSError(17, "File exists: '%s'" % dirname)
3764-                self.Fail
3765-            elif dirname not in directories:
3766-                directories[dirname] = MockDir(dirname)
3767-                return True
3768-
3769-        mockmkdir.side_effect = call_mkdir
3770-
3771-        def call_get_available_space(storedir, reserved_space):
3772-            self.failUnlessReallyEqual(storedir, tempdir)
3773-            return 1
3774-
3775-        mockget_available_space.side_effect = call_get_available_space
3776-
3777-        mocktime.return_value = 0
3778-        def call_get_shares(storageindex):
3779-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3780-            return []#share]
3781-
3782-        mockget_shares.side_effect = call_get_shares
3783-
3784-        def call_open(fname, mode):
3785-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3786-            return fobj
3787-
3788-        mockopen.side_effect = call_open
3789-
3790-        def call_make_dirs(dname):
3791-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3792-           
3793-        mockmake_dirs.side_effect = call_make_dirs
3794-
3795-        def call_rename(src, dst):
3796-            self.failUnlessReallyEqual(src, shareincomingname)
3797-            self.failUnlessReallyEqual(dst, sharefname)
3798-           
3799-        mockrename.side_effect = call_rename
3800-
3801-        def call_exists(fname):
3802-            self.failUnlessReallyEqual(fname, sharefname)
3803-
3804-        mockexists.side_effect = call_exists
3805-
3806         # Now begin the test.
3807 
3808         # XXX (0) ???  Fail unless something is not properly set-up?
3809}
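For reference, the layout comments moved into core.py above map each storage index to a two-character base-32 prefix directory ($START, the first 10 bits) plus the full base-32 storage index. A small sketch of that mapping, assuming Tahoe's base-32 is lowercase RFC 3548 without '=' padding, which is what yields the or/orsxg5dtorxxeylhmvpws3temv4a paths used in test_backends.py:

    import base64, os

    def si_to_dir(storageindex):
        sia = base64.b32encode(storageindex).lower().rstrip('=')
        return os.path.join(sia[:2], sia)   # 2 base-32 chars == first 10 bits

    assert si_to_dir('teststorage_index') == os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a')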
3810[JACP
3811wilcoxjg@gmail.com**20110711194407
3812 Ignore-this: b54745de777c4bb58d68d708f010bbb
3813] {
3814hunk ./src/allmydata/storage/backends/das/core.py 86
3815 
3816     def get_incoming(self, storageindex):
3817         """Return the set of incoming shnums."""
3818-        return set(os.listdir(self.incomingdir))
3819+        try:
3820+            incominglist = os.listdir(self.incomingdir)
3821+            print "incominglist: ", incominglist
3822+            return set(incominglist)
3823+        except OSError:
3824+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3825+            pass
3826 
3827     def get_shares(self, storage_index):
3828         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3829hunk ./src/allmydata/storage/server.py 17
3830 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3831      create_mutable_sharefile
3832 
3833-# storage/
3834-# storage/shares/incoming
3835-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3836-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3837-# storage/shares/$START/$STORAGEINDEX
3838-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3839-
3840-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3841-# base-32 chars).
3842-
3843-
3844 class StorageServer(service.MultiService, Referenceable):
3845     implements(RIStorageServer, IStatsProducer)
3846     name = 'storage'
3847}
3848[testing get incoming
3849wilcoxjg@gmail.com**20110711210224
3850 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3851] {
3852hunk ./src/allmydata/storage/backends/das/core.py 87
3853     def get_incoming(self, storageindex):
3854         """Return the set of incoming shnums."""
3855         try:
3856-            incominglist = os.listdir(self.incomingdir)
3857+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3858+            incominglist = os.listdir(incomingsharesdir)
3859             print "incominglist: ", incominglist
3860             return set(incominglist)
3861         except OSError:
3862hunk ./src/allmydata/storage/backends/das/core.py 92
3863-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3864-            pass
3865-
3866+            # XXX I'd like to make this more specific. If there are no shares at all.
3867+            return set()
3868+           
3869     def get_shares(self, storage_index):
3870         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3871         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3872hunk ./src/allmydata/test/test_backends.py 149
3873         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3874 
3875         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3876+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3877         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3878 
3879hunk ./src/allmydata/test/test_backends.py 152
3880-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3881         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3882         # with the same si, until BucketWriter.remote_close() has been called.
3883         # self.failIf(bsa)
3884}
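The get_incoming() changes above converge on listing the per-storage-index incoming directory and reporting the share numbers found there, with a missing directory meaning "no incoming shares" (the following patches also convert the directory entries to ints). A condensed sketch of that behaviour, with incomingdir and si_dir as illustrative parameter names:

    import os

    def get_incoming(incomingdir, si_dir):
        """si_dir is the $START/$STORAGEINDEX relative path for one storage index."""
        try:
            entries = os.listdir(os.path.join(incomingdir, si_dir))
        except OSError:
            return set()            # no incoming directory yet: nothing in flight
        return set(int(name) for name in entries if name.isdigit())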
3885[ImmutableShareFile does not know its StorageIndex
3886wilcoxjg@gmail.com**20110711211424
3887 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3888] {
3889hunk ./src/allmydata/storage/backends/das/core.py 112
3890             return 0
3891         return fileutil.get_available_space(self.storedir, self.reserved_space)
3892 
3893-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3894-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3895+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3896+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3897+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3898+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3899         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3900         return bw
3901 
3902hunk ./src/allmydata/storage/backends/das/core.py 155
3903     LEASE_SIZE = struct.calcsize(">L32s32sL")
3904     sharetype = "immutable"
3905 
3906-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3907+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3908         """ If max_size is not None then I won't allow more than
3909         max_size to be written to me. If create=True then max_size
3910         must not be None. """
3911}
3912[get_incoming correctly reports the 0 share after it has arrived
3913wilcoxjg@gmail.com**20110712025157
3914 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3915] {
3916hunk ./src/allmydata/storage/backends/das/core.py 1
3917+import os, re, weakref, struct, time, stat
3918+
3919 from allmydata.interfaces import IStorageBackend
3920 from allmydata.storage.backends.base import Backend
3921 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3922hunk ./src/allmydata/storage/backends/das/core.py 8
3923 from allmydata.util.assertutil import precondition
3924 
3925-import os, re, weakref, struct, time
3926-
3927 #from foolscap.api import Referenceable
3928 from twisted.application import service
3929 
3930hunk ./src/allmydata/storage/backends/das/core.py 89
3931         try:
3932             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3933             incominglist = os.listdir(incomingsharesdir)
3934-            print "incominglist: ", incominglist
3935-            return set(incominglist)
3936+            incomingshnums = [int(x) for x in incominglist]
3937+            return set(incomingshnums)
3938         except OSError:
3939             # XXX I'd like to make this more specific. If there are no shares at all.
3940             return set()
3941hunk ./src/allmydata/storage/backends/das/core.py 113
3942         return fileutil.get_available_space(self.storedir, self.reserved_space)
3943 
3944     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3945-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3946-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3947-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3948+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3949+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3950+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3951         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3952         return bw
3953 
3954hunk ./src/allmydata/storage/backends/das/core.py 160
3955         max_size to be written to me. If create=True then max_size
3956         must not be None. """
3957         precondition((max_size is not None) or (not create), max_size, create)
3958-        self.shnum = shnum
3959-        self.storage_index = storageindex
3960-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3961         self._max_size = max_size
3962hunk ./src/allmydata/storage/backends/das/core.py 161
3963-        self.incomingdir = os.path.join(sharedir, 'incoming')
3964-        si_dir = storage_index_to_dir(storageindex)
3965-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3966-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3967-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3968+        self.incominghome = incominghome
3969+        self.finalhome = finalhome
3970         if create:
3971             # touch the file, so later callers will see that we're working on
3972             # it. Also construct the metadata.
3973hunk ./src/allmydata/storage/backends/das/core.py 166
3974-            assert not os.path.exists(self.fname)
3975-            fileutil.make_dirs(os.path.dirname(self.fname))
3976-            f = open(self.fname, 'wb')
3977+            assert not os.path.exists(self.finalhome)
3978+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3979+            f = open(self.incominghome, 'wb')
3980             # The second field -- the four-byte share data length -- is no
3981             # longer used as of Tahoe v1.3.0, but we continue to write it in
3982             # there in case someone downgrades a storage server from >=
3983hunk ./src/allmydata/storage/backends/das/core.py 183
3984             self._lease_offset = max_size + 0x0c
3985             self._num_leases = 0
3986         else:
3987-            f = open(self.fname, 'rb')
3988-            filesize = os.path.getsize(self.fname)
3989+            f = open(self.finalhome, 'rb')
3990+            filesize = os.path.getsize(self.finalhome)
3991             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3992             f.close()
3993             if version != 1:
3994hunk ./src/allmydata/storage/backends/das/core.py 189
3995                 msg = "sharefile %s had version %d but we wanted 1" % \
3996-                      (self.fname, version)
3997+                      (self.finalhome, version)
3998                 raise UnknownImmutableContainerVersionError(msg)
3999             self._num_leases = num_leases
4000             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
4001hunk ./src/allmydata/storage/backends/das/core.py 225
4002         pass
4003         
4004     def stat(self):
4005-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
4006+        return os.stat(self.finalhome)[stat.ST_SIZE]
4007+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
4008 
4009     def get_shnum(self):
4010         return self.shnum
4011hunk ./src/allmydata/storage/backends/das/core.py 232
4012 
4013     def unlink(self):
4014-        os.unlink(self.fname)
4015+        os.unlink(self.finalhome)
4016 
4017     def read_share_data(self, offset, length):
4018         precondition(offset >= 0)
4019hunk ./src/allmydata/storage/backends/das/core.py 239
4020         # Reads beyond the end of the data are truncated. Reads that start
4021         # beyond the end of the data return an empty string.
4022         seekpos = self._data_offset+offset
4023-        fsize = os.path.getsize(self.fname)
4024+        fsize = os.path.getsize(self.finalhome)
4025         actuallength = max(0, min(length, fsize-seekpos))
4026         if actuallength == 0:
4027             return ""
4028hunk ./src/allmydata/storage/backends/das/core.py 243
4029-        f = open(self.fname, 'rb')
4030+        f = open(self.finalhome, 'rb')
4031         f.seek(seekpos)
4032         return f.read(actuallength)
4033 
4034hunk ./src/allmydata/storage/backends/das/core.py 252
4035         precondition(offset >= 0, offset)
4036         if self._max_size is not None and offset+length > self._max_size:
4037             raise DataTooLargeError(self._max_size, offset, length)
4038-        f = open(self.fname, 'rb+')
4039+        f = open(self.incominghome, 'rb+')
4040         real_offset = self._data_offset+offset
4041         f.seek(real_offset)
4042         assert f.tell() == real_offset
4043hunk ./src/allmydata/storage/backends/das/core.py 279
4044 
4045     def get_leases(self):
4046         """Yields a LeaseInfo instance for all leases."""
4047-        f = open(self.fname, 'rb')
4048+        f = open(self.finalhome, 'rb')
4049         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4050         f.seek(self._lease_offset)
4051         for i in range(num_leases):
4052hunk ./src/allmydata/storage/backends/das/core.py 288
4053                 yield LeaseInfo().from_immutable_data(data)
4054 
4055     def add_lease(self, lease_info):
4056-        f = open(self.fname, 'rb+')
4057+        f = open(self.incominghome, 'rb+')
4058         num_leases = self._read_num_leases(f)
4059         self._write_lease_record(f, num_leases, lease_info)
4060         self._write_num_leases(f, num_leases+1)
4061hunk ./src/allmydata/storage/backends/das/core.py 301
4062                 if new_expire_time > lease.expiration_time:
4063                     # yes
4064                     lease.expiration_time = new_expire_time
4065-                    f = open(self.fname, 'rb+')
4066+                    f = open(self.finalhome, 'rb+')
4067                     self._write_lease_record(f, i, lease)
4068                     f.close()
4069                 return
4070hunk ./src/allmydata/storage/backends/das/core.py 336
4071             # the same order as they were added, so that if we crash while
4072             # doing this, we won't lose any non-cancelled leases.
4073             leases = [l for l in leases if l] # remove the cancelled leases
4074-            f = open(self.fname, 'rb+')
4075+            f = open(self.finalhome, 'rb+')
4076             for i,lease in enumerate(leases):
4077                 self._write_lease_record(f, i, lease)
4078             self._write_num_leases(f, len(leases))
4079hunk ./src/allmydata/storage/backends/das/core.py 344
4080             f.close()
4081         space_freed = self.LEASE_SIZE * num_leases_removed
4082         if not len(leases):
4083-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4084+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4085             self.unlink()
4086         return space_freed
4087hunk ./src/allmydata/test/test_backends.py 129
4088     @mock.patch('time.time')
4089     def test_write_share(self, mocktime):
4090         """ Write a new share. """
4091-
4092-        class MockShare:
4093-            def __init__(self):
4094-                self.shnum = 1
4095-               
4096-            def add_or_renew_lease(elf, lease_info):
4097-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4098-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4099-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4100-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4101-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4102-
4103-        share = MockShare()
4104-
4105         # Now begin the test.
4106 
4107         # XXX (0) ???  Fail unless something is not properly set-up?
4108hunk ./src/allmydata/test/test_backends.py 143
4109         # self.failIf(bsa)
4110 
4111         bs[0].remote_write(0, 'a')
4112-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4113+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4114         spaceint = self.s.allocated_size()
4115         self.failUnlessReallyEqual(spaceint, 1)
4116 
4117hunk ./src/allmydata/test/test_backends.py 161
4118         #self.failIf(mockrename.called, mockrename.call_args_list)
4119         #self.failIf(mockstat.called, mockstat.call_args_list)
4120 
4121+    def test_handle_incoming(self):
4122+        incomingset = self.s.backend.get_incoming('teststorage_index')
4123+        self.failUnlessReallyEqual(incomingset, set())
4124+
4125+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4126+       
4127+        incomingset = self.s.backend.get_incoming('teststorage_index')
4128+        self.failUnlessReallyEqual(incomingset, set((0,)))
4129+
4130+        bs[0].remote_close()
4131+        self.failUnlessReallyEqual(incomingset, set())
4132+
4133     @mock.patch('os.path.exists')
4134     @mock.patch('os.path.getsize')
4135     @mock.patch('__builtin__.open')
4136hunk ./src/allmydata/test/test_backends.py 223
4137         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4138 
4139 
4140-
4141 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4142     @mock.patch('time.time')
4143     @mock.patch('os.mkdir')
4144hunk ./src/allmydata/test/test_backends.py 271
4145         DASCore('teststoredir', expiration_policy)
4146 
4147         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4148+
4149}
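The ImmutableShare code rewritten in this patch reads and writes the v1 immutable share container: a 12-byte ">LLL" header (version, legacy data-length field, number of leases), share data starting at offset 0x0c, and ">L32s32sL" lease records appended after the data region. A sketch of that layout, assuming the lease fields are owner number, renew secret, cancel secret and expiration time as in storage/lease.py:

    import struct

    HEADER = ">LLL"                        # version, data length (unused since v1.3.0), num_leases
    LEASE  = ">L32s32sL"                   # owner_num, renew_secret, cancel_secret, expiration_time
    DATA_OFFSET = struct.calcsize(HEADER)  # == 0x0c
    LEASE_SIZE  = struct.calcsize(LEASE)   # == 72

    def pack_header(num_leases, data_length=0):
        return struct.pack(HEADER, 1, data_length, num_leases)

    def pack_lease(owner_num, renew_secret, cancel_secret, expiration_time):
        return struct.pack(LEASE, owner_num, renew_secret, cancel_secret, expiration_time)

    assert DATA_OFFSET == 0x0c and LEASE_SIZE == 72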
4150[jacp14
4151wilcoxjg@gmail.com**20110712061211
4152 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4153] {
4154hunk ./src/allmydata/storage/backends/das/core.py 95
4155             # XXX I'd like to make this more specific. If there are no shares at all.
4156             return set()
4157             
4158-    def get_shares(self, storage_index):
4159+    def get_shares(self, storageindex):
4160         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
4161hunk ./src/allmydata/storage/backends/das/core.py 97
4162-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4163+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4164         try:
4165             for f in os.listdir(finalstoragedir):
4166                 if NUM_RE.match(f):
4167hunk ./src/allmydata/storage/backends/das/core.py 102
4168                     filename = os.path.join(finalstoragedir, f)
4169-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4170+                    yield ImmutableShare(filename, storageindex, f)
4171         except OSError:
4172             # Commonly caused by there being no shares at all.
4173             pass
4174hunk ./src/allmydata/storage/backends/das/core.py 115
4175     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4176         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4177         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4178-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4179+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4180         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4181         return bw
4182 
4183hunk ./src/allmydata/storage/backends/das/core.py 155
4184     LEASE_SIZE = struct.calcsize(">L32s32sL")
4185     sharetype = "immutable"
4186 
4187-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4188+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4189         """ If max_size is not None then I won't allow more than
4190         max_size to be written to me. If create=True then max_size
4191         must not be None. """
4192hunk ./src/allmydata/storage/backends/das/core.py 160
4193         precondition((max_size is not None) or (not create), max_size, create)
4194+        self.storageindex = storageindex
4195         self._max_size = max_size
4196         self.incominghome = incominghome
4197         self.finalhome = finalhome
4198hunk ./src/allmydata/storage/backends/das/core.py 164
4199+        self.shnum = shnum
4200         if create:
4201             # touch the file, so later callers will see that we're working on
4202             # it. Also construct the metadata.
4203hunk ./src/allmydata/storage/backends/das/core.py 212
4204             # their children to know when they should do the rmdir. This
4205             # approach is simpler, but relies on os.rmdir refusing to delete
4206             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4207+            #print "os.path.dirname(self.incominghome): "
4208+            #print os.path.dirname(self.incominghome)
4209             os.rmdir(os.path.dirname(self.incominghome))
4210             # we also delete the grandparent (prefix) directory, .../ab ,
4211             # again to avoid leaving directories lying around. This might
4212hunk ./src/allmydata/storage/immutable.py 93
4213     def __init__(self, ss, share):
4214         self.ss = ss
4215         self._share_file = share
4216-        self.storage_index = share.storage_index
4217+        self.storageindex = share.storageindex
4218         self.shnum = share.shnum
4219 
4220     def __repr__(self):
4221hunk ./src/allmydata/storage/immutable.py 98
4222         return "<%s %s %s>" % (self.__class__.__name__,
4223-                               base32.b2a_l(self.storage_index[:8], 60),
4224+                               base32.b2a_l(self.storageindex[:8], 60),
4225                                self.shnum)
4226 
4227     def remote_read(self, offset, length):
4228hunk ./src/allmydata/storage/immutable.py 110
4229 
4230     def remote_advise_corrupt_share(self, reason):
4231         return self.ss.remote_advise_corrupt_share("immutable",
4232-                                                   self.storage_index,
4233+                                                   self.storageindex,
4234                                                    self.shnum,
4235                                                    reason)
4236hunk ./src/allmydata/test/test_backends.py 20
4237 # The following share file contents was generated with
4238 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4239 # with share data == 'a'.
4240-renew_secret  = 'x'*32
4241-cancel_secret = 'y'*32
4242-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4243-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4244+shareversionnumber = '\x00\x00\x00\x01'
4245+sharedatalength = '\x00\x00\x00\x01'
4246+numberofleases = '\x00\x00\x00\x01'
4247+shareinputdata = 'a'
4248+ownernumber = '\x00\x00\x00\x00'
4249+renewsecret  = 'x'*32
4250+cancelsecret = 'y'*32
4251+expirationtime = '\x00(\xde\x80'
4252+nextlease = ''
4253+containerdata = shareversionnumber + sharedatalength + numberofleases
4254+client_data = shareinputdata + ownernumber + renewsecret + \
4255+    cancelsecret + expirationtime + nextlease
4256+share_data = containerdata + client_data
4257+
4258 
4259 testnodeid = 'testnodeidxxxxxxxxxx'
4260 tempdir = 'teststoredir'
4261hunk ./src/allmydata/test/test_backends.py 52
4262 
4263 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4264     def setUp(self):
4265-        self.s = StorageServer(testnodeid, backend=NullCore())
4266+        self.ss = StorageServer(testnodeid, backend=NullCore())
4267 
4268     @mock.patch('os.mkdir')
4269     @mock.patch('__builtin__.open')
4270hunk ./src/allmydata/test/test_backends.py 62
4271         """ Write a new share. """
4272 
4273         # Now begin the test.
4274-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4275+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4276         bs[0].remote_write(0, 'a')
4277         self.failIf(mockisdir.called)
4278         self.failIf(mocklistdir.called)
4279hunk ./src/allmydata/test/test_backends.py 133
4280                 _assert(False, "The tester code doesn't recognize this case.") 
4281 
4282         mockopen.side_effect = call_open
4283-        testbackend = DASCore(tempdir, expiration_policy)
4284-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4285+        self.backend = DASCore(tempdir, expiration_policy)
4286+        self.ss = StorageServer(testnodeid, self.backend)
4287+        self.ssinf = StorageServer(testnodeid, self.backend)
4288 
4289     @mock.patch('time.time')
4290     def test_write_share(self, mocktime):
4291hunk ./src/allmydata/test/test_backends.py 142
4292         """ Write a new share. """
4293         # Now begin the test.
4294 
4295-        # XXX (0) ???  Fail unless something is not properly set-up?
4296-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4297+        mocktime.return_value = 0
4298+        # Inspect incoming and fail unless it's empty.
4299+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4300+        self.failUnlessReallyEqual(incomingset, set())
4301+       
4302+        # Among other things, populate incoming with the sharenum: 0.
4303+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4304 
4305hunk ./src/allmydata/test/test_backends.py 150
4306-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4307-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4308-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4309+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4310+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4311+       
4312+        # Attempt to create a second share writer with the same share.
4313+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4314 
4315hunk ./src/allmydata/test/test_backends.py 156
4316-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4317+        # Show that no sharewriter results from a remote_allocate_buckets
4318         # with the same si, until BucketWriter.remote_close() has been called.
4319hunk ./src/allmydata/test/test_backends.py 158
4320-        # self.failIf(bsa)
4321+        self.failIf(bsa)
4322 
4323hunk ./src/allmydata/test/test_backends.py 160
4324+        # Write 'a' to shnum 0. Only tested together with close and read.
4325         bs[0].remote_write(0, 'a')
4326hunk ./src/allmydata/test/test_backends.py 162
4327-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4328-        spaceint = self.s.allocated_size()
4329+
4330+        # Test allocated size.
4331+        spaceint = self.ss.allocated_size()
4332         self.failUnlessReallyEqual(spaceint, 1)
4333 
4334         # XXX (3) Inspect final and fail unless there's nothing there.
4335hunk ./src/allmydata/test/test_backends.py 168
4336+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4337         bs[0].remote_close()
4338         # XXX (4a) Inspect final and fail unless share 0 is there.
4339hunk ./src/allmydata/test/test_backends.py 171
4340+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4341+        #contents = sharesinfinal[0].read_share_data(0,999)
4342+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4343         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4344 
4345         # What happens when there's not enough space for the client's request?
4346hunk ./src/allmydata/test/test_backends.py 177
4347-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4348+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4349 
4350         # Now test the allocated_size method.
4351         # self.failIf(mockexists.called, mockexists.call_args_list)
4352hunk ./src/allmydata/test/test_backends.py 185
4353         #self.failIf(mockrename.called, mockrename.call_args_list)
4354         #self.failIf(mockstat.called, mockstat.call_args_list)
4355 
4356-    def test_handle_incoming(self):
4357-        incomingset = self.s.backend.get_incoming('teststorage_index')
4358-        self.failUnlessReallyEqual(incomingset, set())
4359-
4360-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4361-       
4362-        incomingset = self.s.backend.get_incoming('teststorage_index')
4363-        self.failUnlessReallyEqual(incomingset, set((0,)))
4364-
4365-        bs[0].remote_close()
4366-        self.failUnlessReallyEqual(incomingset, set())
4367-
4368     @mock.patch('os.path.exists')
4369     @mock.patch('os.path.getsize')
4370     @mock.patch('__builtin__.open')
4371hunk ./src/allmydata/test/test_backends.py 208
4372             self.failUnless('r' in mode, mode)
4373             self.failUnless('b' in mode, mode)
4374 
4375-            return StringIO(share_file_data)
4376+            return StringIO(share_data)
4377         mockopen.side_effect = call_open
4378 
4379hunk ./src/allmydata/test/test_backends.py 211
4380-        datalen = len(share_file_data)
4381+        datalen = len(share_data)
4382         def call_getsize(fname):
4383             self.failUnlessReallyEqual(fname, sharefname)
4384             return datalen
4385hunk ./src/allmydata/test/test_backends.py 223
4386         mockexists.side_effect = call_exists
4387 
4388         # Now begin the test.
4389-        bs = self.s.remote_get_buckets('teststorage_index')
4390+        bs = self.ss.remote_get_buckets('teststorage_index')
4391 
4392         self.failUnlessEqual(len(bs), 1)
4393hunk ./src/allmydata/test/test_backends.py 226
4394-        b = bs[0]
4395+        b = bs['0']
4396         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4397hunk ./src/allmydata/test/test_backends.py 228
4398-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4399+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4400         # If you try to read past the end you get as much data as is there.
4401hunk ./src/allmydata/test/test_backends.py 230
4402-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4403+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4404         # If you start reading past the end of the file you get the empty string.
4405         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4406 
4407}
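A quick consistency check of the test constants introduced above: containerdata is the 12-byte header, and client_data ('a' plus one lease record) is 73 bytes, which is why the test reads exactly read_share_data(0, 73). The expiration time '\x00(\xde\x80' is 2678400 == 31*24*60*60, i.e. mocktime() == 0 plus a 31-day lease. A sketch rebuilding the same bytes with struct:

    import struct

    containerdata = struct.pack(">LLL", 1, 1, 1)
    client_data = 'a' + struct.pack(">L", 0) + 'x'*32 + 'y'*32 + struct.pack(">L", 31*24*60*60)
    assert len(containerdata) == 12
    assert len(client_data) == 1 + 72 == 73
    assert struct.pack(">L", 31*24*60*60) == '\x00(\xde\x80'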
4408[jacp14 or so
4409wilcoxjg@gmail.com**20110713060346
4410 Ignore-this: 7026810f60879d65b525d450e43ff87a
4411] {
4412hunk ./src/allmydata/storage/backends/das/core.py 102
4413             for f in os.listdir(finalstoragedir):
4414                 if NUM_RE.match(f):
4415                     filename = os.path.join(finalstoragedir, f)
4416-                    yield ImmutableShare(filename, storageindex, f)
4417+                    yield ImmutableShare(filename, storageindex, int(f))
4418         except OSError:
4419             # Commonly caused by there being no shares at all.
4420             pass
4421hunk ./src/allmydata/storage/backends/null/core.py 25
4422     def set_storage_server(self, ss):
4423         self.ss = ss
4424 
4425+    def get_incoming(self, storageindex):
4426+        return set()
4427+
4428 class ImmutableShare:
4429     sharetype = "immutable"
4430 
4431hunk ./src/allmydata/storage/immutable.py 19
4432 
4433     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4434         self.ss = ss
4435-        self._max_size = max_size # don't allow the client to write more than this
4436+        self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
4437+
4438         self._canary = canary
4439         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4440         self.closed = False
4441hunk ./src/allmydata/test/test_backends.py 135
4442         mockopen.side_effect = call_open
4443         self.backend = DASCore(tempdir, expiration_policy)
4444         self.ss = StorageServer(testnodeid, self.backend)
4445-        self.ssinf = StorageServer(testnodeid, self.backend)
4446+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4447+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4448 
4449     @mock.patch('time.time')
4450     def test_write_share(self, mocktime):
4451hunk ./src/allmydata/test/test_backends.py 161
4452         # with the same si, until BucketWriter.remote_close() has been called.
4453         self.failIf(bsa)
4454 
4455-        # Write 'a' to shnum 0. Only tested together with close and read.
4456-        bs[0].remote_write(0, 'a')
4457-
4458         # Test allocated size.
4459         spaceint = self.ss.allocated_size()
4460         self.failUnlessReallyEqual(spaceint, 1)
4461hunk ./src/allmydata/test/test_backends.py 165
4462 
4463-        # XXX (3) Inspect final and fail unless there's nothing there.
4464+        # Write 'a' to shnum 0. Only tested together with close and read.
4465+        bs[0].remote_write(0, 'a')
4466+       
4467+        # Preclose: Inspect final, failUnless nothing there.
4468         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4469         bs[0].remote_close()
4470hunk ./src/allmydata/test/test_backends.py 171
4471-        # XXX (4a) Inspect final and fail unless share 0 is there.
4472-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4473-        #contents = sharesinfinal[0].read_share_data(0,999)
4474-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4475-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4476 
4477hunk ./src/allmydata/test/test_backends.py 172
4478-        # What happens when there's not enough space for the client's request?
4479-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4480+        # Postclose: (Omnibus) failUnless written data is in final.
4481+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4482+        contents = sharesinfinal[0].read_share_data(0,73)
4483+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4484 
4485hunk ./src/allmydata/test/test_backends.py 177
4486-        # Now test the allocated_size method.
4487-        # self.failIf(mockexists.called, mockexists.call_args_list)
4488-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4489-        #self.failIf(mockrename.called, mockrename.call_args_list)
4490-        #self.failIf(mockstat.called, mockstat.call_args_list)
4491+        # Cover interior of for share in get_shares loop.
4492+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4493+       
4494+    @mock.patch('time.time')
4495+    @mock.patch('allmydata.util.fileutil.get_available_space')
4496+    def test_out_of_space(self, mockget_available_space, mocktime):
4497+        mocktime.return_value = 0
4498+       
4499+        def call_get_available_space(dir, reserve):
4500+            return 0
4501+
4502+        mockget_available_space.side_effect = call_get_available_space
4503+       
4504+       
4505+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4506 
4507     @mock.patch('os.path.exists')
4508     @mock.patch('os.path.getsize')
4509hunk ./src/allmydata/test/test_backends.py 234
4510         bs = self.ss.remote_get_buckets('teststorage_index')
4511 
4512         self.failUnlessEqual(len(bs), 1)
4513-        b = bs['0']
4514+        b = bs[0]
4515         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4516         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4517         # If you try to read past the end you get as much data as is there.
4518}
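test_out_of_space (added above) works by patching the free-space probe so the backend believes the disk is full. A minimal sketch of that technique, assuming allmydata.util.fileutil.get_available_space(dir, reserved_space) is importable as core.py uses it; DummyBackend is illustrative, the real test uses DASCore with reserved_space=1:

    import mock
    from allmydata.util import fileutil

    class DummyBackend(object):
        # Illustrative stand-in for DASCore's free-space check.
        def __init__(self, storedir, reserved_space):
            self.storedir = storedir
            self.reserved_space = reserved_space
        def get_available_space(self):
            return fileutil.get_available_space(self.storedir, self.reserved_space)

    with mock.patch('allmydata.util.fileutil.get_available_space', return_value=0):
        assert DummyBackend('teststoredir', 1).get_available_space() == 0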
4519[temporary work-in-progress patch to be unrecorded
4520zooko@zooko.com**20110714003008
4521 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4522 tidy up a few tests, work done in pair-programming with Zancas
4523] {
4524hunk ./src/allmydata/storage/backends/das/core.py 65
4525         self._clean_incomplete()
4526 
4527     def _clean_incomplete(self):
4528-        fileutil.rm_dir(self.incomingdir)
4529+        fileutil.rmtree(self.incomingdir)
4530         fileutil.make_dirs(self.incomingdir)
4531 
4532     def _setup_corruption_advisory(self):
4533hunk ./src/allmydata/storage/immutable.py 1
4534-import os, stat, struct, time
4535+import os, time
4536 
4537 from foolscap.api import Referenceable
4538 
4539hunk ./src/allmydata/storage/server.py 1
4540-import os, re, weakref, struct, time
4541+import os, weakref, struct, time
4542 
4543 from foolscap.api import Referenceable
4544 from twisted.application import service
4545hunk ./src/allmydata/storage/server.py 7
4546 
4547 from zope.interface import implements
4548-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4549+from allmydata.interfaces import RIStorageServer, IStatsProducer
4550 from allmydata.util import fileutil, idlib, log, time_format
4551 import allmydata # for __full_version__
4552 
4553hunk ./src/allmydata/storage/server.py 313
4554         self.add_latency("get", time.time() - start)
4555         return bucketreaders
4556 
4557-    def remote_get_incoming(self, storageindex):
4558-        incoming_share_set = self.backend.get_incoming(storageindex)
4559-        return incoming_share_set
4560-
4561     def get_leases(self, storageindex):
4562         """Provide an iterator that yields all of the leases attached to this
4563         bucket. Each lease is returned as a LeaseInfo instance.
4564hunk ./src/allmydata/test/test_backends.py 3
4565 from twisted.trial import unittest
4566 
4567+from twisted.python.filepath import FilePath
4568+
4569 from StringIO import StringIO
4570 
4571 from allmydata.test.common_util import ReallyEqualMixin
4572hunk ./src/allmydata/test/test_backends.py 38
4573 
4574 
4575 testnodeid = 'testnodeidxxxxxxxxxx'
4576-tempdir = 'teststoredir'
4577-basedir = os.path.join(tempdir, 'shares')
4578+storedir = 'teststoredir'
4579+storedirfp = FilePath(storedir)
4580+basedir = os.path.join(storedir, 'shares')
4581 baseincdir = os.path.join(basedir, 'incoming')
4582 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4583 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4584hunk ./src/allmydata/test/test_backends.py 53
4585                      'cutoff_date' : None,
4586                      'sharetypes' : None}
4587 
4588-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4589+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4590+    """ NullBackend is just for testing and executable documentation, so
4591+    this test is actually a test of StorageServer in which we're using
4592+    NullBackend as helper code for the test, rather than a test of
4593+    NullBackend. """
4594     def setUp(self):
4595         self.ss = StorageServer(testnodeid, backend=NullCore())
4596 
4597hunk ./src/allmydata/test/test_backends.py 62
4598     @mock.patch('os.mkdir')
4599+
4600     @mock.patch('__builtin__.open')
4601     @mock.patch('os.listdir')
4602     @mock.patch('os.path.isdir')
4603hunk ./src/allmydata/test/test_backends.py 69
4604     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4605         """ Write a new share. """
4606 
4607-        # Now begin the test.
4608         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4609         bs[0].remote_write(0, 'a')
4610         self.failIf(mockisdir.called)
4611hunk ./src/allmydata/test/test_backends.py 83
4612     @mock.patch('os.listdir')
4613     @mock.patch('os.path.isdir')
4614     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4615-        """ This tests whether a server instance can be constructed
4616-        with a filesystem backend. To pass the test, it has to use the
4617-        filesystem in only the prescribed ways. """
4618+        """ This tests whether a server instance can be constructed with a
4619+        filesystem backend. To pass the test, it mustn't use the filesystem
4620+        outside of its configured storedir. """
4621 
4622         def call_open(fname, mode):
4623hunk ./src/allmydata/test/test_backends.py 88
4624-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4625-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4626-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4627-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4628-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4629+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4630+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4631+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4632+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4633+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4634                 return StringIO()
4635             else:
4636hunk ./src/allmydata/test/test_backends.py 95
4637-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4638+                fnamefp = FilePath(fname)
4639+                self.failUnless(storedirfp in fnamefp.parents(),
4640+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4641         mockopen.side_effect = call_open
4642 
4643         def call_isdir(fname):
4644hunk ./src/allmydata/test/test_backends.py 101
4645-            if fname == os.path.join(tempdir,'shares'):
4646+            if fname == os.path.join(storedir, 'shares'):
4647                 return True
4648hunk ./src/allmydata/test/test_backends.py 103
4649-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4650+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4651                 return True
4652             else:
4653                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4654hunk ./src/allmydata/test/test_backends.py 109
4655         mockisdir.side_effect = call_isdir
4656 
4657+        mocklistdir.return_value = []
4658+
4659         def call_mkdir(fname, mode):
4660hunk ./src/allmydata/test/test_backends.py 112
4661-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4662             self.failUnlessEqual(0777, mode)
4663hunk ./src/allmydata/test/test_backends.py 113
4664-            if fname == tempdir:
4665-                return None
4666-            elif fname == os.path.join(tempdir,'shares'):
4667-                return None
4668-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4669-                return None
4670-            else:
4671-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4672+            self.failUnlessIn(fname,
4673+                              [storedir,
4674+                               os.path.join(storedir, 'shares'),
4675+                               os.path.join(storedir, 'shares', 'incoming')],
4676+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4677         mockmkdir.side_effect = call_mkdir
4678 
4679         # Now begin the test.
4680hunk ./src/allmydata/test/test_backends.py 121
4681-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4682+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4683 
4684         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4685 
4686hunk ./src/allmydata/test/test_backends.py 126
4687 
4688-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4689+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4690+    """ This tests both the StorageServer and the FS backend. """
4691     @mock.patch('__builtin__.open')
4692     def setUp(self, mockopen):
4693         def call_open(fname, mode):
4694hunk ./src/allmydata/test/test_backends.py 131
4695-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4696-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4697-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4698-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4699-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4700+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4701+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4702+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4703+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4704+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4705                 return StringIO()
4706             else:
4707                 _assert(False, "The tester code doesn't recognize this case.") 
4708hunk ./src/allmydata/test/test_backends.py 141
4709 
4710         mockopen.side_effect = call_open
4711-        self.backend = DASCore(tempdir, expiration_policy)
4712+        self.backend = DASCore(storedir, expiration_policy)
4713         self.ss = StorageServer(testnodeid, self.backend)
4714hunk ./src/allmydata/test/test_backends.py 143
4715-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4716+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4717         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4718 
4719     @mock.patch('time.time')
4720hunk ./src/allmydata/test/test_backends.py 147
4721-    def test_write_share(self, mocktime):
4722-        """ Write a new share. """
4723-        # Now begin the test.
4724+    def test_write_and_read_share(self, mocktime):
4725+        """
4726+        Write a new share, read it, and test the server's (and FS backend's)
4727+        handling of simultaneous and successive attempts to write the same
4728+        share.
4729+        """
4730 
4731         mocktime.return_value = 0
4732         # Inspect incoming and fail unless it's empty.
4733hunk ./src/allmydata/test/test_backends.py 159
4734         incomingset = self.ss.backend.get_incoming('teststorage_index')
4735         self.failUnlessReallyEqual(incomingset, set())
4736         
4737-        # Among other things, populate incoming with the sharenum: 0.
4738+        # Populate incoming with the sharenum: 0.
4739         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4740 
4741         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4742hunk ./src/allmydata/test/test_backends.py 163
4743-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4744+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4745         
4746hunk ./src/allmydata/test/test_backends.py 165
4747-        # Attempt to create a second share writer with the same share.
4748+        # Attempt to create a second share writer with the same sharenum.
4749         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4750 
4751         # Show that no sharewriter results from a remote_allocate_buckets
4752hunk ./src/allmydata/test/test_backends.py 169
4753-        # with the same si, until BucketWriter.remote_close() has been called.
4754+        # with the same si and sharenum, until BucketWriter.remote_close()
4755+        # has been called.
4756         self.failIf(bsa)
4757 
4758         # Test allocated size.
4759hunk ./src/allmydata/test/test_backends.py 187
4760         # Postclose: (Omnibus) failUnless written data is in final.
4761         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4762         contents = sharesinfinal[0].read_share_data(0,73)
4763-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4764+        self.failUnlessReallyEqual(contents, client_data)
4765 
4766hunk ./src/allmydata/test/test_backends.py 189
4767-        # Cover interior of for share in get_shares loop.
4768-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4769+        # Exercise the case that the share we're asking to allocate is
4770+        # already (completely) uploaded.
4771+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4772         
4773     @mock.patch('time.time')
4774     @mock.patch('allmydata.util.fileutil.get_available_space')
4775hunk ./src/allmydata/test/test_backends.py 210
4776     @mock.patch('os.path.getsize')
4777     @mock.patch('__builtin__.open')
4778     @mock.patch('os.listdir')
4779-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4780+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4781         """ This tests whether the code correctly finds and reads
4782         shares written out by old (Tahoe-LAFS <= v1.8.2)
4783         servers. There is a similar test in test_download, but that one
4784hunk ./src/allmydata/test/test_backends.py 219
4785         StorageServer object. """
4786 
4787         def call_listdir(dirname):
4788-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4789+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4790             return ['0']
4791 
4792         mocklistdir.side_effect = call_listdir
4793hunk ./src/allmydata/test/test_backends.py 226
4794 
4795         def call_open(fname, mode):
4796             self.failUnlessReallyEqual(fname, sharefname)
4797-            self.failUnless('r' in mode, mode)
4798+            self.failUnlessEqual(mode[0], 'r', mode)
4799             self.failUnless('b' in mode, mode)
4800 
4801             return StringIO(share_data)
4802hunk ./src/allmydata/test/test_backends.py 268
4803         filesystem in only the prescribed ways. """
4804 
4805         def call_open(fname, mode):
4806-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4807-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4808-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4809-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4810-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4811+            if fname == os.path.join(storedir,'bucket_counter.state'):
4812+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4813+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4814+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4815+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4816                 return StringIO()
4817             else:
4818                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4819hunk ./src/allmydata/test/test_backends.py 279
4820         mockopen.side_effect = call_open
4821 
4822         def call_isdir(fname):
4823-            if fname == os.path.join(tempdir,'shares'):
4824+            if fname == os.path.join(storedir,'shares'):
4825                 return True
4826hunk ./src/allmydata/test/test_backends.py 281
4827-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4828+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4829                 return True
4830             else:
4831                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4832hunk ./src/allmydata/test/test_backends.py 290
4833         def call_mkdir(fname, mode):
4834             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4835             self.failUnlessEqual(0777, mode)
4836-            if fname == tempdir:
4837+            if fname == storedir:
4838                 return None
4839hunk ./src/allmydata/test/test_backends.py 292
4840-            elif fname == os.path.join(tempdir,'shares'):
4841+            elif fname == os.path.join(storedir,'shares'):
4842                 return None
4843hunk ./src/allmydata/test/test_backends.py 294
4844-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4845+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4846                 return None
4847             else:
4848                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4849hunk ./src/allmydata/util/fileutil.py 5
4850 Futz with files like a pro.
4851 """
4852 
4853-import sys, exceptions, os, stat, tempfile, time, binascii
4854+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4855 
4856 from twisted.python import log
4857 
4858hunk ./src/allmydata/util/fileutil.py 186
4859             raise tx
4860         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4861 
4862-def rm_dir(dirname):
4863+def rmtree(dirname):
4864     """
4865     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4866     already gone, do nothing and return without raising an exception.  If this
4867hunk ./src/allmydata/util/fileutil.py 205
4868             else:
4869                 remove(fullname)
4870         os.rmdir(dirname)
4871-    except Exception, le:
4872-        # Ignore "No such file or directory"
4873-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4874+    except EnvironmentError, le:
4875+        # Ignore "No such file or directory", collect any other exception.
4876+        if le.args[0] != errno.ENOENT:
4877             excs.append(le)
4878hunk ./src/allmydata/util/fileutil.py 209
4879+    except Exception, le:
4880+        excs.append(le)
4881 
4882     # Okay, now we've recursively removed everything, ignoring any "No
4883     # such file or directory" errors, and collecting any other errors.
4884hunk ./src/allmydata/util/fileutil.py 222
4885             raise OSError, "Failed to remove dir for unknown reason."
4886         raise OSError, excs
4887 
4888+def rm_dir(dirname):
4889+    # Renamed to be like shutil.rmtree and unlike rmdir.
4890+    return rmtree(dirname)
4891 
4892 def remove_if_possible(f):
4893     try:
4894}
4895[work in progress intended to be unrecorded and never committed to trunk
4896zooko@zooko.com**20110714212139
4897 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4898 switch from os.path.join to filepath
4899 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4900 
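 Roughly, the idiom we are converting to looks like this (just a sketch for the
 curious, assuming only the stock twisted.python.filepath API; none of these
 names are from the patch itself):
 
     import os
     from twisted.python.filepath import FilePath
 
     storedir = 'teststoredir'
 
     # os.path style: paths are plain strings.
     incomingdir = os.path.join(storedir, 'shares', 'incoming')
 
     # filepath style: paths are FilePath objects built up with child().
     storefp = FilePath(storedir)
     incomingfp = storefp.child('shares').child('incoming')
 
     # Both spellings name the same location on disk.
     assert FilePath(incomingdir) == incomingfp
 
     # FilePath also gives us the containment check that the "stay in your
     # subtree" tester code wants.
     assert storefp == incomingfp or storefp in incomingfp.parents()
 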
4901] {
4902hunk ./src/allmydata/test/test_backends.py 3
4903 from twisted.trial import unittest
4904 
4905-from twisted.path.filepath import FilePath
4906+from twisted.python.filepath import FilePath
4907 
4908 from StringIO import StringIO
4909 
4910hunk ./src/allmydata/test/test_backends.py 10
4911 from allmydata.test.common_util import ReallyEqualMixin
4912 from allmydata.util.assertutil import _assert
4913 
4914-import mock, os
4915+import mock
4916 
4917 # This is the code that we're going to be testing.
4918 from allmydata.storage.server import StorageServer
4919hunk ./src/allmydata/test/test_backends.py 25
4920 shareversionnumber = '\x00\x00\x00\x01'
4921 sharedatalength = '\x00\x00\x00\x01'
4922 numberofleases = '\x00\x00\x00\x01'
4923+
4924 shareinputdata = 'a'
4925 ownernumber = '\x00\x00\x00\x00'
4926 renewsecret  = 'x'*32
4927hunk ./src/allmydata/test/test_backends.py 39
4928 
4929 
4930 testnodeid = 'testnodeidxxxxxxxxxx'
4931-storedir = 'teststoredir'
4932-storedirfp = FilePath(storedir)
4933-basedir = os.path.join(storedir, 'shares')
4934-baseincdir = os.path.join(basedir, 'incoming')
4935-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4936-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4937-shareincomingname = os.path.join(sharedirincomingname, '0')
4938-sharefname = os.path.join(sharedirfinalname, '0')
4939+
4940+class TestFilesMixin(unittest.TestCase):
4941+    def setUp(self):
4942+        self.storedir = FilePath('teststoredir')
4943+        self.basedir = self.storedir.child('shares')
4944+        self.baseincdir = self.basedir.child('incoming')
4945+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4946+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4947+        self.shareincomingname = self.sharedirincomingname.child('0')
4948+        self.sharefname = self.sharedirfinalname.child('0')
4949+
4950+    def call_open(self, fname, mode):
4951+        fnamefp = FilePath(fname)
4952+        if fnamefp == self.storedir.child('bucket_counter.state'):
4953+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4954+        elif fnamefp == self.storedir.child('lease_checker.state'):
4955+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4956+        elif fnamefp == self.storedir.child('lease_checker.history'):
4957+            return StringIO()
4958+        else:
4959+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4960+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4961+
4962+    def call_isdir(self, fname):
4963+        fnamefp = FilePath(fname)
4964+        if fnamefp == self.storedir.child('shares'):
4965+            return True
4966+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4967+            return True
4968+        else:
4969+            self.failUnless(self.storedir in fnamefp.parents(),
4970+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4971+
4972+    def call_mkdir(self, fname, mode):
4973+        self.failUnlessEqual(0777, mode)
4974+        fnamefp = FilePath(fname)
4975+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4976+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4977+
4978+
4979+    @mock.patch('os.mkdir')
4980+    @mock.patch('__builtin__.open')
4981+    @mock.patch('os.listdir')
4982+    @mock.patch('os.path.isdir')
4983+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4984+        mocklistdir.return_value = []
4985+        mockmkdir.side_effect = self.call_mkdir
4986+        mockisdir.side_effect = self.call_isdir
4987+        mockopen.side_effect = self.call_open
4988+        mocklistdir.return_value = []
4989+       
4990+        test_func()
4991+       
4992+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4993 
4994 expiration_policy = {'enabled' : False,
4995                      'mode' : 'age',
4996hunk ./src/allmydata/test/test_backends.py 123
4997         self.failIf(mockopen.called)
4998         self.failIf(mockmkdir.called)
4999 
5000-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
5001-    @mock.patch('time.time')
5002-    @mock.patch('os.mkdir')
5003-    @mock.patch('__builtin__.open')
5004-    @mock.patch('os.listdir')
5005-    @mock.patch('os.path.isdir')
5006-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5007+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5008+    def test_create_server_fs_backend(self):
5009         """ This tests whether a server instance can be constructed with a
5010         filesystem backend. To pass the test, it mustn't use the filesystem
5011         outside of its configured storedir. """
5012hunk ./src/allmydata/test/test_backends.py 129
5013 
5014-        def call_open(fname, mode):
5015-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5016-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5017-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5018-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5019-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5020-                return StringIO()
5021-            else:
5022-                fnamefp = FilePath(fname)
5023-                self.failUnless(storedirfp in fnamefp.parents(),
5024-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5025-        mockopen.side_effect = call_open
5026+        def _f():
5027+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5028 
5029hunk ./src/allmydata/test/test_backends.py 132
5030-        def call_isdir(fname):
5031-            if fname == os.path.join(storedir, 'shares'):
5032-                return True
5033-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5034-                return True
5035-            else:
5036-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5037-        mockisdir.side_effect = call_isdir
5038-
5039-        mocklistdir.return_value = []
5040-
5041-        def call_mkdir(fname, mode):
5042-            self.failUnlessEqual(0777, mode)
5043-            self.failUnlessIn(fname,
5044-                              [storedir,
5045-                               os.path.join(storedir, 'shares'),
5046-                               os.path.join(storedir, 'shares', 'incoming')],
5047-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5048-        mockmkdir.side_effect = call_mkdir
5049-
5050-        # Now begin the test.
5051-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5052-
5053-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5054+        self._help_test_stay_in_your_subtree(_f)
5055 
5056 
5057 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5058}
5059[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5060zooko@zooko.com**20110715191500
5061 Ignore-this: af33336789041800761e80510ea2f583
5062 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete converstion to filepath. The latter has still a lot of work to go.
5063] {
5064hunk ./src/allmydata/storage/backends/das/core.py 59
5065                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5066                         umid="0wZ27w", level=log.UNUSUAL)
5067 
5068-        self.sharedir = os.path.join(self.storedir, "shares")
5069-        fileutil.make_dirs(self.sharedir)
5070-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5071+        self.sharedir = self.storedir.child("shares")
5072+        fileutil.fp_make_dirs(self.sharedir)
5073+        self.incomingdir = self.sharedir.child('incoming')
5074         self._clean_incomplete()
5075 
5076     def _clean_incomplete(self):
5077hunk ./src/allmydata/storage/backends/das/core.py 65
5078-        fileutil.rmtree(self.incomingdir)
5079-        fileutil.make_dirs(self.incomingdir)
5080+        fileutil.fp_remove(self.incomingdir)
5081+        fileutil.fp_make_dirs(self.incomingdir)
5082 
5083     def _setup_corruption_advisory(self):
5084         # we don't actually create the corruption-advisory dir until necessary
5085hunk ./src/allmydata/storage/backends/das/core.py 70
5086-        self.corruption_advisory_dir = os.path.join(self.storedir,
5087-                                                    "corruption-advisories")
5088+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5089 
5090     def _setup_bucket_counter(self):
5091hunk ./src/allmydata/storage/backends/das/core.py 73
5092-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5093+        statefname = self.storedir.child("bucket_counter.state")
5094         self.bucket_counter = FSBucketCountingCrawler(statefname)
5095         self.bucket_counter.setServiceParent(self)
5096 
5097hunk ./src/allmydata/storage/backends/das/core.py 78
5098     def _setup_lease_checkerf(self, expiration_policy):
5099-        statefile = os.path.join(self.storedir, "lease_checker.state")
5100-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5101+        statefile = self.storedir.child("lease_checker.state")
5102+        historyfile = self.storedir.child("lease_checker.history")
5103         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5104         self.lease_checker.setServiceParent(self)
5105 
5106hunk ./src/allmydata/storage/backends/das/core.py 83
5107-    def get_incoming(self, storageindex):
5108+    def get_incoming_shnums(self, storageindex):
5109         """Return the set of incoming shnums."""
5110         try:
5111hunk ./src/allmydata/storage/backends/das/core.py 86
5112-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5113-            incominglist = os.listdir(incomingsharesdir)
5114-            incomingshnums = [int(x) for x in incominglist]
5115-            return set(incomingshnums)
5116-        except OSError:
5117-            # XXX I'd like to make this more specific. If there are no shares at all.
5118-            return set()
5119+           
5120+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5121+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5122+            return frozenset(incomingshnums)
5123+        except UnlistableError:
5124+            # There is no shares directory at all.
5125+            return frozenset()
5126             
5127     def get_shares(self, storageindex):
5128         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5129hunk ./src/allmydata/storage/backends/das/core.py 96
5130-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5131+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5132         try:
5133hunk ./src/allmydata/storage/backends/das/core.py 98
5134-            for f in os.listdir(finalstoragedir):
5135-                if NUM_RE.match(f):
5136-                    filename = os.path.join(finalstoragedir, f)
5137-                    yield ImmutableShare(filename, storageindex, int(f))
5138-        except OSError:
5139-            # Commonly caused by there being no shares at all.
5140+            for f in finalstoragedir.listdir():
5141+                if NUM_RE.match(f.basename):
5142+                    yield ImmutableShare(f, storageindex, int(f))
5143+        except UnlistableError:
5144+            # There is no shares directory at all.
5145             pass
5146         
5147     def get_available_space(self):
5148hunk ./src/allmydata/storage/backends/das/core.py 149
5149 # then the value stored in this field will be the actual share data length
5150 # modulo 2**32.
5151 
5152-class ImmutableShare:
5153+class ImmutableShare(object):
5154     LEASE_SIZE = struct.calcsize(">L32s32sL")
5155     sharetype = "immutable"
5156 
5157hunk ./src/allmydata/storage/backends/das/core.py 166
5158         if create:
5159             # touch the file, so later callers will see that we're working on
5160             # it. Also construct the metadata.
5161-            assert not os.path.exists(self.finalhome)
5162-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5163+            assert not finalhome.exists()
5164+            fileutil.fp_make_dirs(self.incominghome.parent())
5165             f = open(self.incominghome, 'wb')
5166             # The second field -- the four-byte share data length -- is no
5167             # longer used as of Tahoe v1.3.0, but we continue to write it in
5168hunk ./src/allmydata/storage/backends/das/core.py 316
5169         except IndexError:
5170             self.add_lease(lease_info)
5171 
5172-
5173     def cancel_lease(self, cancel_secret):
5174         """Remove a lease with the given cancel_secret. If the last lease is
5175         cancelled, the file will be removed. Return the number of bytes that
5176hunk ./src/allmydata/storage/common.py 19
5177 def si_a2b(ascii_storageindex):
5178     return base32.a2b(ascii_storageindex)
5179 
5180-def storage_index_to_dir(storageindex):
5181+def storage_index_to_dir(startfp, storageindex):
5182     sia = si_b2a(storageindex)
5183     return os.path.join(sia[:2], sia)
5184hunk ./src/allmydata/storage/server.py 210
5185 
5186         # fill incoming with all shares that are incoming use a set operation
5187         # since there's no need to operate on individual pieces
5188-        incoming = self.backend.get_incoming(storageindex)
5189+        incoming = self.backend.get_incoming_shnums(storageindex)
5190 
5191         for shnum in ((sharenums - alreadygot) - incoming):
5192             if (not limited) or (remaining_space >= max_space_per_bucket):
5193hunk ./src/allmydata/test/test_backends.py 5
5194 
5195 from twisted.python.filepath import FilePath
5196 
5197+from allmydata.util.log import msg
+from allmydata.util.assertutil import precondition
5198+
5199 from StringIO import StringIO
5200 
5201 from allmydata.test.common_util import ReallyEqualMixin
5202hunk ./src/allmydata/test/test_backends.py 42
5203 
5204 testnodeid = 'testnodeidxxxxxxxxxx'
5205 
5206-class TestFilesMixin(unittest.TestCase):
5207-    def setUp(self):
5208-        self.storedir = FilePath('teststoredir')
5209-        self.basedir = self.storedir.child('shares')
5210-        self.baseincdir = self.basedir.child('incoming')
5211-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5212-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5213-        self.shareincomingname = self.sharedirincomingname.child('0')
5214-        self.sharefname = self.sharedirfinalname.child('0')
5215+class MockStat:
5216+    def __init__(self):
5217+        self.st_mode = None
5218 
5219hunk ./src/allmydata/test/test_backends.py 46
5220+class MockFiles(unittest.TestCase):
5221+    """ I simulate a filesystem that the code under test can use. I flag the
5222+    code under test if it reads or writes outside of its prescribed
5223+    subtree. I simulate just the parts of the filesystem that the current
5224+    implementation of DAS backend needs. """
5225     def call_open(self, fname, mode):
5226         fnamefp = FilePath(fname)
5227hunk ./src/allmydata/test/test_backends.py 53
5228+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5229+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5230+
5231         if fnamefp == self.storedir.child('bucket_counter.state'):
5232             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5233         elif fnamefp == self.storedir.child('lease_checker.state'):
5234hunk ./src/allmydata/test/test_backends.py 61
5235             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5236         elif fnamefp == self.storedir.child('lease_checker.history'):
5237+            # This is separated out from the else clause below just because
5238+            # we know this particular file is going to be used by the
5239+            # current implementation of DAS backend, and we might want to
5240+            # use this information in this test in the future...
5241             return StringIO()
5242         else:
5243hunk ./src/allmydata/test/test_backends.py 67
5244-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5245-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5246+            # Anything else you open inside your subtree appears to be an
5247+            # empty file.
5248+            return StringIO()
5249 
5250     def call_isdir(self, fname):
5251         fnamefp = FilePath(fname)
5252hunk ./src/allmydata/test/test_backends.py 73
5253-        if fnamefp == self.storedir.child('shares'):
5254+        return fnamefp.isdir()
5255+
5256+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5257+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5258+
5259+        # The first two cases are separate from the else clause below just
5260+        # because we know that the current implementation of the DAS backend
5261+        # inspects these two directories and we might want to make use of
5262+        # that information in the tests in the future...
5263+        if fnamefp == self.storedir.child('shares'):
5264             return True
5265hunk ./src/allmydata/test/test_backends.py 84
5266-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5267+        elif fnamefp == self.storedir.child('shares').child('incoming'):
5268             return True
5269         else:
5270hunk ./src/allmydata/test/test_backends.py 87
5271-            self.failUnless(self.storedir in fnamefp.parents(),
5272-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5273+            # Anything else you open inside your subtree appears to be a
5274+            # directory.
5275+            return True
5276 
5277     def call_mkdir(self, fname, mode):
5278hunk ./src/allmydata/test/test_backends.py 92
5279-        self.failUnlessEqual(0777, mode)
5280         fnamefp = FilePath(fname)
5281         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5282                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5283hunk ./src/allmydata/test/test_backends.py 95
5284+        self.failUnlessEqual(0777, mode)
5285 
5286hunk ./src/allmydata/test/test_backends.py 97
5287+    def call_listdir(self, fname):
5288+        fnamefp = FilePath(fname)
5289+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5290+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5291 
5292hunk ./src/allmydata/test/test_backends.py 102
5293-    @mock.patch('os.mkdir')
5294-    @mock.patch('__builtin__.open')
5295-    @mock.patch('os.listdir')
5296-    @mock.patch('os.path.isdir')
5297-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5298-        mocklistdir.return_value = []
5299+    def call_stat(self, fname):
5300+        fnamefp = FilePath(fname)
5301+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5302+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5303+
5304+        msg("%s.call_stat(%s)" % (self, fname,))
5305+        mstat = MockStat()
5306+        mstat.st_mode = 16893 # a directory
5307+        return mstat
5308+
5309+    def setUp(self):
5310+        msg( "%s.setUp()" % (self,))
5311+        self.storedir = FilePath('teststoredir')
5312+        self.basedir = self.storedir.child('shares')
5313+        self.baseincdir = self.basedir.child('incoming')
5314+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5315+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5316+        self.shareincomingname = self.sharedirincomingname.child('0')
5317+        self.sharefname = self.sharedirfinalname.child('0')
5318+
5319+        self.mocklistdirp = mock.patch('os.listdir')
5320+        mocklistdir = self.mocklistdirp.__enter__()
5321+        mocklistdir.side_effect = self.call_listdir
5322+
5323+        self.mockmkdirp = mock.patch('os.mkdir')
5324+        mockmkdir = self.mockmkdirp.__enter__()
5325         mockmkdir.side_effect = self.call_mkdir
5326hunk ./src/allmydata/test/test_backends.py 129
5327+
5328+        self.mockisdirp = mock.patch('os.path.isdir')
5329+        mockisdir = self.mockisdirp.__enter__()
5330         mockisdir.side_effect = self.call_isdir
5331hunk ./src/allmydata/test/test_backends.py 133
5332+
5333+        self.mockopenp = mock.patch('__builtin__.open')
5334+        mockopen = self.mockopenp.__enter__()
5335         mockopen.side_effect = self.call_open
5336hunk ./src/allmydata/test/test_backends.py 137
5337-        mocklistdir.return_value = []
5338-       
5339-        test_func()
5340-       
5341-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5342+
5343+        self.mockstatp = mock.patch('os.stat')
5344+        mockstat = self.mockstatp.__enter__()
5345+        mockstat.side_effect = self.call_stat
5346+
5347+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5348+        mockfpstat = self.mockfpstatp.__enter__()
5349+        mockfpstat.side_effect = self.call_stat
5350+
5351+    def tearDown(self):
5352+        msg( "%s.tearDown()" % (self,))
5353+        self.mockfpstatp.__exit__()
5354+        self.mockstatp.__exit__()
5355+        self.mockopenp.__exit__()
5356+        self.mockisdirp.__exit__()
5357+        self.mockmkdirp.__exit__()
5358+        self.mocklistdirp.__exit__()
5359 
5360 expiration_policy = {'enabled' : False,
5361                      'mode' : 'age',
5362hunk ./src/allmydata/test/test_backends.py 184
5363         self.failIf(mockopen.called)
5364         self.failIf(mockmkdir.called)
5365 
5366-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5367+class TestServerConstruction(MockFiles, ReallyEqualMixin):
5368     def test_create_server_fs_backend(self):
5369         """ This tests whether a server instance can be constructed with a
5370         filesystem backend. To pass the test, it mustn't use the filesystem
5371hunk ./src/allmydata/test/test_backends.py 190
5372         outside of its configured storedir. """
5373 
5374-        def _f():
5375-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5376+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5377 
5378hunk ./src/allmydata/test/test_backends.py 192
5379-        self._help_test_stay_in_your_subtree(_f)
5380-
5381-
5382-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5383-    """ This tests both the StorageServer xyz """
5384-    @mock.patch('__builtin__.open')
5385-    def setUp(self, mockopen):
5386-        def call_open(fname, mode):
5387-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5388-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5389-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5390-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5391-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5392-                return StringIO()
5393-            else:
5394-                _assert(False, "The tester code doesn't recognize this case.") 
5395-
5396-        mockopen.side_effect = call_open
5397-        self.backend = DASCore(storedir, expiration_policy)
5398-        self.ss = StorageServer(testnodeid, self.backend)
5399-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
5400-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5401+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
5402+    """ This tests both the StorageServer and the DAS backend together. """
5403+    def setUp(self):
5404+        MockFiles.setUp(self)
5405+        try:
5406+            self.backend = DASCore(self.storedir, expiration_policy)
5407+            self.ss = StorageServer(testnodeid, self.backend)
5408+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
5409+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5410+        except:
5411+            MockFiles.tearDown(self)
5412+            raise
5413 
5414     @mock.patch('time.time')
5415     def test_write_and_read_share(self, mocktime):
5416hunk ./src/allmydata/util/fileutil.py 8
5417 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5418 
5419 from twisted.python import log
5420+from twisted.python.filepath import UnlistableError
5421 
5422 from pycryptopp.cipher.aes import AES
5423 
5424hunk ./src/allmydata/util/fileutil.py 187
5425             raise tx
5426         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5427 
5428+def fp_make_dirs(dirfp):
5429+    """
5430+    An idempotent version of FilePath.makedirs().  If the dir already
5431+    exists, do nothing and return without raising an exception.  If this
5432+    call creates the dir, return without raising an exception.  If there is
5433+    an error that prevents creation or if the directory gets deleted after
5434+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5435+    exists, raise an exception.
5436+    """
5437+    log.msg( "xxx 0 %s" % (dirfp,))
5438+    tx = None
5439+    try:
5440+        dirfp.makedirs()
5441+    except OSError, x:
5442+        tx = x
5443+
5444+    if not dirfp.isdir():
5445+        if tx:
5446+            raise tx
5447+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5448+
5449 def rmtree(dirname):
5450     """
5451     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5452hunk ./src/allmydata/util/fileutil.py 244
5453             raise OSError, "Failed to remove dir for unknown reason."
5454         raise OSError, excs
5455 
5456+def fp_remove(dirfp):
5457+    try:
5458+        dirfp.remove()
5459+    except UnlistableError, e:
5460+        if e.originalException.errno != errno.ENOENT:
5461+            raise
5462+
5463 def rm_dir(dirname):
5464     # Renamed to be like shutil.rmtree and unlike rmdir.
5465     return rmtree(dirname)
5466}
5467[another temporary patch for sharing work-in-progress
5468zooko@zooko.com**20110720055918
5469 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
5470 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
5471 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
5472 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
5473 
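 One example of the simplification: the crawler/expirer state files shrink to
 FilePath's content API. A sketch of that idiom (not code from this patch; the
 file names here are made up):
 
     import pickle, tempfile
     from twisted.python.filepath import FilePath
 
     storedir = FilePath(tempfile.mkdtemp())
     statefp = storedir.child('lease_checker.state')
 
     # Write: serialize and hand the whole string to setContent(), which
     # writes a sibling temp file and renames it into place, so the explicit
     # move_into_place() dance can go away.
     statefp.setContent(pickle.dumps({'version': 1}))
 
     # Read: getContent() returns the file's contents as a string, so the
     # matching deserializer is pickle.loads(), not pickle.load().
     state = pickle.loads(statefp.getContent())
     assert state['version'] == 1
 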
5474] {
5475hunk ./src/allmydata/storage/backends/das/core.py 5
5476 
5477 from allmydata.interfaces import IStorageBackend
5478 from allmydata.storage.backends.base import Backend
5479-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5480+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5481 from allmydata.util.assertutil import precondition
5482 
5483 #from foolscap.api import Referenceable
5484hunk ./src/allmydata/storage/backends/das/core.py 10
5485 from twisted.application import service
5486+from twisted.python.filepath import UnlistableError
5487 
5488 from zope.interface import implements
5489 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
5490hunk ./src/allmydata/storage/backends/das/core.py 17
5491 from allmydata.util import fileutil, idlib, log, time_format
5492 import allmydata # for __full_version__
5493 
5494-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5495-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5496+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5497+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5498 from allmydata.storage.lease import LeaseInfo
5499 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5500      create_mutable_sharefile
5501hunk ./src/allmydata/storage/backends/das/core.py 41
5502 # $SHARENUM matches this regex:
5503 NUM_RE=re.compile("^[0-9]+$")
5504 
5505+def is_num(fp):
5506+    return NUM_RE.match(fp.basename())
5507+
5508 class DASCore(Backend):
5509     implements(IStorageBackend)
5510     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5511hunk ./src/allmydata/storage/backends/das/core.py 58
5512         self.storedir = storedir
5513         self.readonly = readonly
5514         self.reserved_space = int(reserved_space)
5515-        if self.reserved_space:
5516-            if self.get_available_space() is None:
5517-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5518-                        umid="0wZ27w", level=log.UNUSUAL)
5519-
5520         self.sharedir = self.storedir.child("shares")
5521         fileutil.fp_make_dirs(self.sharedir)
5522         self.incomingdir = self.sharedir.child('incoming')
5523hunk ./src/allmydata/storage/backends/das/core.py 62
5524         self._clean_incomplete()
5525+        if self.reserved_space and (self.get_available_space() is None):
5526+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5527+                    umid="0wZ27w", level=log.UNUSUAL)
5528+
5529 
5530     def _clean_incomplete(self):
5531         fileutil.fp_remove(self.incomingdir)
5532hunk ./src/allmydata/storage/backends/das/core.py 87
5533         self.lease_checker.setServiceParent(self)
5534 
5535     def get_incoming_shnums(self, storageindex):
5536-        """Return the set of incoming shnums."""
5537+        """ Return a frozenset of the shnum (as ints) of incoming shares. """
5538+        incomingdir = si_dir(self.incomingdir, storageindex)
5539         try:
5540hunk ./src/allmydata/storage/backends/das/core.py 90
5541-           
5542-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5543-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5544-            return frozenset(incomingshnums)
5545+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
5546+            shnums = [ int(fp.basename) for fp in childfps ]
5547+            return frozenset(shnums)
5548         except UnlistableError:
5549             # There is no shares directory at all.
5550             return frozenset()
5551hunk ./src/allmydata/storage/backends/das/core.py 98
5552             
5553     def get_shares(self, storageindex):
5554-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5555+        """ Generate ImmutableShare objects for shares we have for this
5556+        storageindex. ("Shares we have" means completed ones, excluding
5557+        incoming ones.)"""
5558         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5559         try:
5560hunk ./src/allmydata/storage/backends/das/core.py 103
5561-            for f in finalstoragedir.listdir():
5562-                if NUM_RE.match(f.basename):
5563-                    yield ImmutableShare(f, storageindex, int(f))
5564+            for fp in finalstoragedir.children():
5565+                if is_num(fp):
5566+                    yield ImmutableShare(fp, storageindex)
5567         except UnlistableError:
5568             # There is no shares directory at all.
5569             pass
5570hunk ./src/allmydata/storage/backends/das/core.py 116
5571         return fileutil.get_available_space(self.storedir, self.reserved_space)
5572 
5573     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
5574-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
5575-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
5576+        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
5577+        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
5578         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
5579         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
5580         return bw
5581hunk ./src/allmydata/storage/backends/das/expirer.py 50
5582     slow_start = 360 # wait 6 minutes after startup
5583     minimum_cycle_time = 12*60*60 # not more than twice per day
5584 
5585-    def __init__(self, statefile, historyfile, expiration_policy):
5586-        self.historyfile = historyfile
5587+    def __init__(self, statefile, historyfp, expiration_policy):
5588+        self.historyfp = historyfp
5589         self.expiration_enabled = expiration_policy['enabled']
5590         self.mode = expiration_policy['mode']
5591         self.override_lease_duration = None
5592hunk ./src/allmydata/storage/backends/das/expirer.py 80
5593             self.state["cycle-to-date"].setdefault(k, so_far[k])
5594 
5595         # initialize history
5596-        if not os.path.exists(self.historyfile):
5597+        if not self.historyfp.exists():
5598             history = {} # cyclenum -> dict
5599hunk ./src/allmydata/storage/backends/das/expirer.py 82
5600-            f = open(self.historyfile, "wb")
5601-            pickle.dump(history, f)
5602-            f.close()
5603+            self.historyfp.setContent(pickle.dumps(history))
5604 
5605     def create_empty_cycle_dict(self):
5606         recovered = self.create_empty_recovered_dict()
5607hunk ./src/allmydata/storage/backends/das/expirer.py 305
5608         # copy() needs to become a deepcopy
5609         h["space-recovered"] = s["space-recovered"].copy()
5610 
5611-        history = pickle.load(open(self.historyfile, "rb"))
5612+        history = pickle.loads(self.historyfp.getContent())
5613         history[cycle] = h
5614         while len(history) > 10:
5615             oldcycles = sorted(history.keys())
5616hunk ./src/allmydata/storage/backends/das/expirer.py 310
5617             del history[oldcycles[0]]
5618-        f = open(self.historyfile, "wb")
5619-        pickle.dump(history, f)
5620-        f.close()
5621+        self.historyfp.setContent(pickle.dumps(history))
5622 
5623     def get_state(self):
5624         """In addition to the crawler state described in
5625hunk ./src/allmydata/storage/backends/das/expirer.py 379
5626         progress = self.get_progress()
5627 
5628         state = ShareCrawler.get_state(self) # does a shallow copy
5629-        history = pickle.load(open(self.historyfile, "rb"))
5630+        history = pickle.loads(self.historyfp.getContent())
5631         state["history"] = history
5632 
5633         if not progress["cycle-in-progress"]:
5634hunk ./src/allmydata/storage/common.py 19
5635 def si_a2b(ascii_storageindex):
5636     return base32.a2b(ascii_storageindex)
5637 
5638-def storage_index_to_dir(startfp, storageindex):
5639+def si_dir(startfp, storageindex):
5640     sia = si_b2a(storageindex)
5641hunk ./src/allmydata/storage/common.py 21
5642-    return os.path.join(sia[:2], sia)
5643+    return startfp.child(sia[:2]).child(sia)
5644hunk ./src/allmydata/storage/crawler.py 68
5645     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
5646     minimum_cycle_time = 300 # don't run a cycle faster than this
5647 
5648-    def __init__(self, statefname, allowed_cpu_percentage=None):
5649+    def __init__(self, statefp, allowed_cpu_percentage=None):
5650         service.MultiService.__init__(self)
5651         if allowed_cpu_percentage is not None:
5652             self.allowed_cpu_percentage = allowed_cpu_percentage
5653hunk ./src/allmydata/storage/crawler.py 72
5654-        self.statefname = statefname
5655+        self.statefp = statefp
5656         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
5657                          for i in range(2**10)]
5658         self.prefixes.sort()
5659hunk ./src/allmydata/storage/crawler.py 192
5660         #                            of the last bucket to be processed, or
5661         #                            None if we are sleeping between cycles
5662         try:
5663-            f = open(self.statefname, "rb")
5664-            state = pickle.load(f)
5665-            f.close()
5666+            state = pickle.loads(self.statefp.getContent())
5667         except EnvironmentError:
5668             state = {"version": 1,
5669                      "last-cycle-finished": None,
5670hunk ./src/allmydata/storage/crawler.py 228
5671         else:
5672             last_complete_prefix = self.prefixes[lcpi]
5673         self.state["last-complete-prefix"] = last_complete_prefix
5674-        tmpfile = self.statefname + ".tmp"
5675-        f = open(tmpfile, "wb")
5676-        pickle.dump(self.state, f)
5677-        f.close()
5678-        fileutil.move_into_place(tmpfile, self.statefname)
5679+        self.statefp.setContent(pickle.dumps(self.state))
5680 
5681     def startService(self):
5682         # arrange things to look like we were just sleeping, so
5683hunk ./src/allmydata/storage/crawler.py 440
5684 
5685     minimum_cycle_time = 60*60 # we don't need this more than once an hour
5686 
5687-    def __init__(self, statefname, num_sample_prefixes=1):
5688-        FSShareCrawler.__init__(self, statefname)
5689+    def __init__(self, statefp, num_sample_prefixes=1):
5690+        FSShareCrawler.__init__(self, statefp)
5691         self.num_sample_prefixes = num_sample_prefixes
5692 
5693     def add_initial_state(self):
5694hunk ./src/allmydata/storage/server.py 11
5695 from allmydata.util import fileutil, idlib, log, time_format
5696 import allmydata # for __full_version__
5697 
5698-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5699-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5700+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5701+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5702 from allmydata.storage.lease import LeaseInfo
5703 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5704      create_mutable_sharefile
5705hunk ./src/allmydata/storage/server.py 173
5706         # to a particular owner.
5707         start = time.time()
5708         self.count("allocate")
5709-        alreadygot = set()
5710         incoming = set()
5711         bucketwriters = {} # k: shnum, v: BucketWriter
5712 
5713hunk ./src/allmydata/storage/server.py 199
5714             remaining_space -= self.allocated_size()
5715         # self.readonly_storage causes remaining_space <= 0
5716 
5717-        # fill alreadygot with all shares that we have, not just the ones
5718+        # Fill alreadygot with all shares that we have, not just the ones
5719         # they asked about: this will save them a lot of work. Add or update
5720         # leases for all of them: if they want us to hold shares for this
5721hunk ./src/allmydata/storage/server.py 202
5722-        # file, they'll want us to hold leases for this file.
5723+        # file, they'll want us to hold leases for all the shares of it.
5724+        alreadygot = set()
5725         for share in self.backend.get_shares(storageindex):
5726hunk ./src/allmydata/storage/server.py 205
5727-            alreadygot.add(share.shnum)
5728             share.add_or_renew_lease(lease_info)
5729hunk ./src/allmydata/storage/server.py 206
5730+            alreadygot.add(share.shnum)
5731 
5732hunk ./src/allmydata/storage/server.py 208
5733-        # fill incoming with all shares that are incoming use a set operation
5734-        # since there's no need to operate on individual pieces
5735+        # all share numbers that are incoming
5736         incoming = self.backend.get_incoming_shnums(storageindex)
5737 
5738         for shnum in ((sharenums - alreadygot) - incoming):
5739hunk ./src/allmydata/storage/server.py 282
5740             total_space_freed += sf.cancel_lease(cancel_secret)
5741 
5742         if found_buckets:
5743-            storagedir = os.path.join(self.sharedir,
5744-                                      storage_index_to_dir(storageindex))
5745-            if not os.listdir(storagedir):
5746-                os.rmdir(storagedir)
5747+            storagedir = si_dir(self.sharedir, storageindex)
5748+            fp_rmdir_if_empty(storagedir)
5749 
5750         if self.stats_provider:
5751             self.stats_provider.count('storage_server.bytes_freed',
5752hunk ./src/allmydata/test/test_backends.py 52
5753     subtree. I simulate just the parts of the filesystem that the current
5754     implementation of DAS backend needs. """
5755     def call_open(self, fname, mode):
5756+        assert isinstance(fname, basestring), fname
5757         fnamefp = FilePath(fname)
5758         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5759                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5760hunk ./src/allmydata/test/test_backends.py 104
5761                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5762 
5763     def call_stat(self, fname):
5764+        assert isinstance(fname, basestring), fname
5765         fnamefp = FilePath(fname)
5766         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5767                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5768hunk ./src/allmydata/test/test_backends.py 217
5769 
5770         mocktime.return_value = 0
5771         # Inspect incoming and fail unless it's empty.
5772-        incomingset = self.ss.backend.get_incoming('teststorage_index')
5773-        self.failUnlessReallyEqual(incomingset, set())
5774+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
5775+        self.failUnlessReallyEqual(incomingset, frozenset())
5776         
5777         # Populate incoming with the sharenum: 0.
5778hunk ./src/allmydata/test/test_backends.py 221
5779-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5780+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5781 
5782         # Inspect incoming and fail unless the sharenum: 0 is listed there.
5783hunk ./src/allmydata/test/test_backends.py 224
5784-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
5785+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
5786         
5787         # Attempt to create a second share writer with the same sharenum.
5788hunk ./src/allmydata/test/test_backends.py 227
5789-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5790+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5791 
5792         # Show that no sharewriter results from a remote_allocate_buckets
5793         # with the same si and sharenum, until BucketWriter.remote_close()
5794hunk ./src/allmydata/test/test_backends.py 280
5795         StorageServer object. """
5796 
5797         def call_listdir(dirname):
5798+            precondition(isinstance(dirname, basestring), dirname)
5799             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
5800             return ['0']
5801 
5802hunk ./src/allmydata/test/test_backends.py 287
5803         mocklistdir.side_effect = call_listdir
5804 
5805         def call_open(fname, mode):
5806+            precondition(isinstance(fname, basestring), fname)
5807             self.failUnlessReallyEqual(fname, sharefname)
5808             self.failUnlessEqual(mode[0], 'r', mode)
5809             self.failUnless('b' in mode, mode)
5810hunk ./src/allmydata/test/test_backends.py 297
5811 
5812         datalen = len(share_data)
5813         def call_getsize(fname):
5814+            precondition(isinstance(fname, basestring), fname)
5815             self.failUnlessReallyEqual(fname, sharefname)
5816             return datalen
5817         mockgetsize.side_effect = call_getsize
5818hunk ./src/allmydata/test/test_backends.py 303
5819 
5820         def call_exists(fname):
5821+            precondition(isinstance(fname, basestring), fname)
5822             self.failUnlessReallyEqual(fname, sharefname)
5823             return True
5824         mockexists.side_effect = call_exists
5825hunk ./src/allmydata/test/test_backends.py 321
5826         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
5827 
5828 
5829-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
5830-    @mock.patch('time.time')
5831-    @mock.patch('os.mkdir')
5832-    @mock.patch('__builtin__.open')
5833-    @mock.patch('os.listdir')
5834-    @mock.patch('os.path.isdir')
5835-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5836+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
5837+    def test_create_fs_backend(self):
5838         """ This tests whether a file system backend instance can be
5839         constructed. To pass the test, it has to use the
5840         filesystem in only the prescribed ways. """
5841hunk ./src/allmydata/test/test_backends.py 327
5842 
5843-        def call_open(fname, mode):
5844-            if fname == os.path.join(storedir,'bucket_counter.state'):
5845-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5846-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5847-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5848-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5849-                return StringIO()
5850-            else:
5851-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5852-        mockopen.side_effect = call_open
5853-
5854-        def call_isdir(fname):
5855-            if fname == os.path.join(storedir,'shares'):
5856-                return True
5857-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5858-                return True
5859-            else:
5860-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5861-        mockisdir.side_effect = call_isdir
5862-
5863-        def call_mkdir(fname, mode):
5864-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
5865-            self.failUnlessEqual(0777, mode)
5866-            if fname == storedir:
5867-                return None
5868-            elif fname == os.path.join(storedir,'shares'):
5869-                return None
5870-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5871-                return None
5872-            else:
5873-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
5874-        mockmkdir.side_effect = call_mkdir
5875-
5876         # Now begin the test.
5877hunk ./src/allmydata/test/test_backends.py 328
5878-        DASCore('teststoredir', expiration_policy)
5879-
5880-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5881-
5882+        DASCore(self.storedir, expiration_policy)
5883hunk ./src/allmydata/util/fileutil.py 7
5884 
5885 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5886 
5887+from allmydata.util.assertutil import precondition
5888+
5889 from twisted.python import log
5890hunk ./src/allmydata/util/fileutil.py 10
5891-from twisted.python.filepath import UnlistableError
5892+from twisted.python.filepath import FilePath, UnlistableError
5893 
5894 from pycryptopp.cipher.aes import AES
5895 
5896hunk ./src/allmydata/util/fileutil.py 210
5897             raise tx
5898         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5899 
5900+def fp_rmdir_if_empty(dirfp):
5901+    """ Remove the directory if it is empty. """
5902+    try:
5903+        os.rmdir(dirfp.path)
5904+    except OSError, e:
5905+        if e.errno != errno.ENOTEMPTY:
5906+            raise
5907+    else:
5908+        dirfp.changed()
5909+
5910 def rmtree(dirname):
5911     """
5912     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5913hunk ./src/allmydata/util/fileutil.py 257
5914         raise OSError, excs
5915 
5916 def fp_remove(dirfp):
5917+    """
5918+    An idempotent version of shutil.rmtree().  If the dir is already gone,
5919+    do nothing and return without raising an exception.  If this call
5920+    removes the dir, return without raising an exception.  If there is an
5921+    error that prevents removal or if the directory gets created again by
5922+    someone else after this deletes it and before this checks that it is
5923+    gone, raise an exception.
5924+    """
5925     try:
5926         dirfp.remove()
5927     except UnlistableError, e:
5928hunk ./src/allmydata/util/fileutil.py 270
5929         if e.originalException.errno != errno.ENOENT:
5930             raise
5931+    except OSError, e:
5932+        if e.errno != errno.ENOENT:
5933+            raise
5934 
5935 def rm_dir(dirname):
5936     # Renamed to be like shutil.rmtree and unlike rmdir.
5937hunk ./src/allmydata/util/fileutil.py 387
5938         import traceback
5939         traceback.print_exc()
5940 
5941-def get_disk_stats(whichdir, reserved_space=0):
5942+def get_disk_stats(whichdirfp, reserved_space=0):
5943     """Return disk statistics for the storage disk, in the form of a dict
5944     with the following fields.
5945       total:            total bytes on disk
5946hunk ./src/allmydata/util/fileutil.py 408
5947     you can pass how many bytes you would like to leave unused on this
5948     filesystem as reserved_space.
5949     """
5950+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5951 
5952     if have_GetDiskFreeSpaceExW:
5953         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5954hunk ./src/allmydata/util/fileutil.py 419
5955         n_free_for_nonroot = c_ulonglong(0)
5956         n_total            = c_ulonglong(0)
5957         n_free_for_root    = c_ulonglong(0)
5958-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5959+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5960                                                byref(n_total),
5961                                                byref(n_free_for_root))
5962         if retval == 0:
5963hunk ./src/allmydata/util/fileutil.py 424
5964             raise OSError("Windows error %d attempting to get disk statistics for %r"
5965-                          % (GetLastError(), whichdir))
5966+                          % (GetLastError(), whichdirfp.path))
5967         free_for_nonroot = n_free_for_nonroot.value
5968         total            = n_total.value
5969         free_for_root    = n_free_for_root.value
5970hunk ./src/allmydata/util/fileutil.py 433
5971         # <http://docs.python.org/library/os.html#os.statvfs>
5972         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5973         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5974-        s = os.statvfs(whichdir)
5975+        s = os.statvfs(whichdirfp.path)
5976 
5977         # on my mac laptop:
5978         #  statvfs(2) is a wrapper around statfs(2).
5979hunk ./src/allmydata/util/fileutil.py 460
5980              'avail': avail,
5981            }
5982 
5983-def get_available_space(whichdir, reserved_space):
5984+def get_available_space(whichdirfp, reserved_space):
5985     """Returns available space for share storage in bytes, or None if no
5986     API to get this information is available.
5987 
5988hunk ./src/allmydata/util/fileutil.py 472
5989     you can pass how many bytes you would like to leave unused on this
5990     filesystem as reserved_space.
5991     """
5992+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5993     try:
5994hunk ./src/allmydata/util/fileutil.py 474
5995-        return get_disk_stats(whichdir, reserved_space)['avail']
5996+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5997     except AttributeError:
5998         return None
5999hunk ./src/allmydata/util/fileutil.py 477
6000-    except EnvironmentError:
6001-        log.msg("OS call to get disk statistics failed")
6002-        return 0
6003}
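The fileutil.py hunks above convert get_disk_stats() and get_available_space() to take a twisted FilePath (whichdirfp) instead of a path string, guarded by an isinstance precondition. A minimal sketch of the new calling convention; the storage root path and reserved_space value are made up for illustration:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    storedir = FilePath('/var/tahoe/storage')   # hypothetical storage root
    # Callers must now pass a FilePath; a plain string trips the new
    # isinstance(whichdirfp, FilePath) precondition.
    stats = fileutil.get_disk_stats(storedir, reserved_space=2**20)
    avail = fileutil.get_available_space(storedir, reserved_space=2**20)
    # stats is a dict with 'total', 'avail', etc.; avail is stats['avail'] or None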
6004[jacp16 or so
6005wilcoxjg@gmail.com**20110722070036
6006 Ignore-this: 7548785cad146056eede9a16b93b569f
6007] {
6008merger 0.0 (
6009hunk ./src/allmydata/_auto_deps.py 21
6010-    "Twisted >= 2.4.0",
6011+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6012+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6013+    # support asynchronous close.
6014+    "Twisted >= 10.1.0",
6015hunk ./src/allmydata/_auto_deps.py 21
6016-    "Twisted >= 2.4.0",
6017+    "Twisted >= 11.0",
6018)
6019hunk ./src/allmydata/storage/backends/das/core.py 2
6020 import os, re, weakref, struct, time, stat
6021+from twisted.application import service
6022+from twisted.python.filepath import UnlistableError
6023+from twisted.python.filepath import FilePath
6024+from zope.interface import implements
6025 
6026hunk ./src/allmydata/storage/backends/das/core.py 7
6027+import allmydata # for __full_version__
6028 from allmydata.interfaces import IStorageBackend
6029 from allmydata.storage.backends.base import Backend
6030hunk ./src/allmydata/storage/backends/das/core.py 10
6031-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6032+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6033 from allmydata.util.assertutil import precondition
6034hunk ./src/allmydata/storage/backends/das/core.py 12
6035-
6036-#from foolscap.api import Referenceable
6037-from twisted.application import service
6038-from twisted.python.filepath import UnlistableError
6039-
6040-from zope.interface import implements
6041 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6042 from allmydata.util import fileutil, idlib, log, time_format
6043hunk ./src/allmydata/storage/backends/das/core.py 14
6044-import allmydata # for __full_version__
6045-
6046-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6047-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6048 from allmydata.storage.lease import LeaseInfo
6049 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6050      create_mutable_sharefile
6051hunk ./src/allmydata/storage/backends/das/core.py 21
6052 from allmydata.storage.crawler import FSBucketCountingCrawler
6053 from allmydata.util.hashutil import constant_time_compare
6054 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6055-
6056-from zope.interface import implements
6057+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6058 
6059 # storage/
6060 # storage/shares/incoming
6061hunk ./src/allmydata/storage/backends/das/core.py 49
6062         self._setup_lease_checkerf(expiration_policy)
6063 
6064     def _setup_storage(self, storedir, readonly, reserved_space):
6065+        precondition(isinstance(storedir, FilePath)) 
6066         self.storedir = storedir
6067         self.readonly = readonly
6068         self.reserved_space = int(reserved_space)
6069hunk ./src/allmydata/storage/backends/das/core.py 83
6070 
6071     def get_incoming_shnums(self, storageindex):
6072         """ Return a frozenset of the shnum (as ints) of incoming shares. """
6073-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
6074+        incomingdir = si_si2dir(self.incomingdir, storageindex)
6075         try:
6076             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
6077             shnums = [ int(fp.basename) for fp in childfps ]
6078hunk ./src/allmydata/storage/backends/das/core.py 96
6079         """ Generate ImmutableShare objects for shares we have for this
6080         storageindex. ("Shares we have" means completed ones, excluding
6081         incoming ones.)"""
6082-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
6083+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
6084         try:
6085             for fp in finalstoragedir.children():
6086                 if is_num(fp):
6087hunk ./src/allmydata/storage/backends/das/core.py 111
6088         return fileutil.get_available_space(self.storedir, self.reserved_space)
6089 
6090     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6091-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
6092-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6093+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
6094+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6095         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
6096         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
6097         return bw
6098hunk ./src/allmydata/storage/backends/null/core.py 18
6099         return None
6100 
6101     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6102-       
6103-        immutableshare = ImmutableShare()
6104+        immutableshare = ImmutableShare()
6105         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
6106 
6107     def set_storage_server(self, ss):
6108hunk ./src/allmydata/storage/backends/null/core.py 24
6109         self.ss = ss
6110 
6111-    def get_incoming(self, storageindex):
6112-        return set()
6113+    def get_incoming_shnums(self, storageindex):
6114+        return frozenset()
6115 
6116 class ImmutableShare:
6117     sharetype = "immutable"
6118hunk ./src/allmydata/storage/common.py 19
6119 def si_a2b(ascii_storageindex):
6120     return base32.a2b(ascii_storageindex)
6121 
6122-def si_dir(startfp, storageindex):
6123+def si_si2dir(startfp, storageindex):
6124     sia = si_b2a(storageindex)
6125     return startfp.child(sia[:2]).child(sia)
6126hunk ./src/allmydata/storage/immutable.py 20
6127     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
6128         self.ss = ss
6129         self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
6130-
6131         self._canary = canary
6132         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
6133         self.closed = False
6134hunk ./src/allmydata/storage/lease.py 17
6135 
6136     def get_expiration_time(self):
6137         return self.expiration_time
6138+
6139     def get_grant_renew_time_time(self):
6140         # hack, based upon fixed 31day expiration period
6141         return self.expiration_time - 31*24*60*60
6142hunk ./src/allmydata/storage/lease.py 21
6143+
6144     def get_age(self):
6145         return time.time() - self.get_grant_renew_time_time()
6146 
6147hunk ./src/allmydata/storage/lease.py 32
6148          self.expiration_time) = struct.unpack(">L32s32sL", data)
6149         self.nodeid = None
6150         return self
6151+
6152     def to_immutable_data(self):
6153         return struct.pack(">L32s32sL",
6154                            self.owner_num,
6155hunk ./src/allmydata/storage/lease.py 45
6156                            int(self.expiration_time),
6157                            self.renew_secret, self.cancel_secret,
6158                            self.nodeid)
6159+
6160     def from_mutable_data(self, data):
6161         (self.owner_num,
6162          self.expiration_time,
6163hunk ./src/allmydata/storage/server.py 11
6164 from allmydata.util import fileutil, idlib, log, time_format
6165 import allmydata # for __full_version__
6166 
6167-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6168-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6169+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6170+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6171 from allmydata.storage.lease import LeaseInfo
6172 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6173      create_mutable_sharefile
6174hunk ./src/allmydata/storage/server.py 88
6175             else:
6176                 stats["mean"] = None
6177 
6178-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
6179-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
6180-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
6181+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
6182+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
6183+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
6184                              (0.999, "99_9_percentile", 1000)]
6185 
6186             for percentile, percentilestring, minnumtoobserve in orderstatlist:
6187hunk ./src/allmydata/storage/server.py 231
6188             header = f.read(32)
6189             f.close()
6190             if header[:32] == MutableShareFile.MAGIC:
6191+                # XXX  Can I exploit this code?
6192                 sf = MutableShareFile(filename, self)
6193                 # note: if the share has been migrated, the renew_lease()
6194                 # call will throw an exception, with information to help the
6195hunk ./src/allmydata/storage/server.py 237
6196                 # client update the lease.
6197             elif header[:4] == struct.pack(">L", 1):
6198+                # Check if version number is "1".
6199+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
6200                 sf = ShareFile(filename)
6201             else:
6202                 continue # non-sharefile
6203hunk ./src/allmydata/storage/server.py 285
6204             total_space_freed += sf.cancel_lease(cancel_secret)
6205 
6206         if found_buckets:
6207-            storagedir = si_dir(self.sharedir, storageindex)
6208+            # XXX  Yikes looks like code that shouldn't be in the server!
6209+            storagedir = si_si2dir(self.sharedir, storageindex)
6210             fp_rmdir_if_empty(storagedir)
6211 
6212         if self.stats_provider:
6213hunk ./src/allmydata/storage/server.py 301
6214             self.stats_provider.count('storage_server.bytes_added', consumed_size)
6215         del self._active_writers[bw]
6216 
6217-
6218     def remote_get_buckets(self, storageindex):
6219         start = time.time()
6220         self.count("get")
6221hunk ./src/allmydata/storage/server.py 329
6222         except StopIteration:
6223             return iter([])
6224 
6225+    #  XXX  As far as Zancas' grockery has gotten.
6226     def remote_slot_testv_and_readv_and_writev(self, storageindex,
6227                                                secrets,
6228                                                test_and_write_vectors,
6229hunk ./src/allmydata/storage/server.py 338
6230         self.count("writev")
6231         si_s = si_b2a(storageindex)
6232         log.msg("storage: slot_writev %s" % si_s)
6233-        si_dir = storage_index_to_dir(storageindex)
6234+       
6235         (write_enabler, renew_secret, cancel_secret) = secrets
6236         # shares exist if there is a file for them
6237hunk ./src/allmydata/storage/server.py 341
6238-        bucketdir = os.path.join(self.sharedir, si_dir)
6239+        bucketdir = si_si2dir(self.sharedir, storageindex)
6240         shares = {}
6241         if os.path.isdir(bucketdir):
6242             for sharenum_s in os.listdir(bucketdir):
6243hunk ./src/allmydata/storage/server.py 430
6244         si_s = si_b2a(storageindex)
6245         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
6246                      facility="tahoe.storage", level=log.OPERATIONAL)
6247-        si_dir = storage_index_to_dir(storageindex)
6248         # shares exist if there is a file for them
6249hunk ./src/allmydata/storage/server.py 431
6250-        bucketdir = os.path.join(self.sharedir, si_dir)
6251+        bucketdir = si_si2dir(self.sharedir, storageindex)
6252         if not os.path.isdir(bucketdir):
6253             self.add_latency("readv", time.time() - start)
6254             return {}
6255hunk ./src/allmydata/test/test_backends.py 2
6256 from twisted.trial import unittest
6257-
6258 from twisted.python.filepath import FilePath
6259hunk ./src/allmydata/test/test_backends.py 3
6260-
6261 from allmydata.util.log import msg
6262hunk ./src/allmydata/test/test_backends.py 4
6263-
6264 from StringIO import StringIO
6265hunk ./src/allmydata/test/test_backends.py 5
6266-
6267 from allmydata.test.common_util import ReallyEqualMixin
6268 from allmydata.util.assertutil import _assert
6269hunk ./src/allmydata/test/test_backends.py 7
6270-
6271 import mock
6272 
6273 # This is the code that we're going to be testing.
6274hunk ./src/allmydata/test/test_backends.py 11
6275 from allmydata.storage.server import StorageServer
6276-
6277 from allmydata.storage.backends.das.core import DASCore
6278 from allmydata.storage.backends.null.core import NullCore
6279 
6280hunk ./src/allmydata/test/test_backends.py 14
6281-
6282-# The following share file contents was generated with
6283+# The following share file content was generated with
6284 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
6285hunk ./src/allmydata/test/test_backends.py 16
6286-# with share data == 'a'.
6287+# with share data == 'a'. The total size of this input
6288+# is 85 bytes.
6289 shareversionnumber = '\x00\x00\x00\x01'
6290 sharedatalength = '\x00\x00\x00\x01'
6291 numberofleases = '\x00\x00\x00\x01'
6292hunk ./src/allmydata/test/test_backends.py 21
6293-
6294 shareinputdata = 'a'
6295 ownernumber = '\x00\x00\x00\x00'
6296 renewsecret  = 'x'*32
6297hunk ./src/allmydata/test/test_backends.py 31
6298 client_data = shareinputdata + ownernumber + renewsecret + \
6299     cancelsecret + expirationtime + nextlease
6300 share_data = containerdata + client_data
6301-
6302-
6303 testnodeid = 'testnodeidxxxxxxxxxx'
6304 
6305 class MockStat:
6306hunk ./src/allmydata/test/test_backends.py 105
6307         mstat.st_mode = 16893 # a directory
6308         return mstat
6309 
6310+    def call_get_available_space(self, storedir, reservedspace):
6311+        # The input vector has an input size of 85.
6312+        return 85 - reservedspace
6313+
6314+    def call_exists(self):
6315+        # I'm only called in the ImmutableShareFile constructor.
6316+        return False
6317+
6318     def setUp(self):
6319         msg( "%s.setUp()" % (self,))
6320         self.storedir = FilePath('teststoredir')
6321hunk ./src/allmydata/test/test_backends.py 147
6322         mockfpstat = self.mockfpstatp.__enter__()
6323         mockfpstat.side_effect = self.call_stat
6324 
6325+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6326+        mockget_available_space = self.mockget_available_space.__enter__()
6327+        mockget_available_space.side_effect = self.call_get_available_space
6328+
6329+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6330+        mockfpexists = self.mockfpexists.__enter__()
6331+        mockfpexists.side_effect = self.call_exists
6332+
6333     def tearDown(self):
6334         msg( "%s.tearDown()" % (self,))
6335hunk ./src/allmydata/test/test_backends.py 157
6336+        self.mockfpexists.__exit__()
6337+        self.mockget_available_space.__exit__()
6338         self.mockfpstatp.__exit__()
6339         self.mockstatp.__exit__()
6340         self.mockopenp.__exit__()
6341hunk ./src/allmydata/test/test_backends.py 166
6342         self.mockmkdirp.__exit__()
6343         self.mocklistdirp.__exit__()
6344 
6345+
6346 expiration_policy = {'enabled' : False,
6347                      'mode' : 'age',
6348                      'override_lease_duration' : None,
6349hunk ./src/allmydata/test/test_backends.py 182
6350         self.ss = StorageServer(testnodeid, backend=NullCore())
6351 
6352     @mock.patch('os.mkdir')
6353-
6354     @mock.patch('__builtin__.open')
6355     @mock.patch('os.listdir')
6356     @mock.patch('os.path.isdir')
6357hunk ./src/allmydata/test/test_backends.py 201
6358         filesystem backend. To pass the test, it mustn't use the filesystem
6359         outside of its configured storedir. """
6360 
6361-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
6362+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
6363 
6364 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
6365     """ This tests both the StorageServer and the DAS backend together. """
6366hunk ./src/allmydata/test/test_backends.py 205
6367+   
6368     def setUp(self):
6369         MockFiles.setUp(self)
6370         try:
6371hunk ./src/allmydata/test/test_backends.py 211
6372             self.backend = DASCore(self.storedir, expiration_policy)
6373             self.ss = StorageServer(testnodeid, self.backend)
6374-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6375-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
6376+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6377+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
6378         except:
6379             MockFiles.tearDown(self)
6380             raise
6381hunk ./src/allmydata/test/test_backends.py 233
6382         # Populate incoming with the sharenum: 0.
6383         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6384 
6385-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
6386-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6387+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
6388+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6389         
6390         # Attempt to create a second share writer with the same sharenum.
6391         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6392hunk ./src/allmydata/test/test_backends.py 257
6393 
6394         # Postclose: (Omnibus) failUnless written data is in final.
6395         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
6396-        contents = sharesinfinal[0].read_share_data(0,73)
6397+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
6398+        contents = sharesinfinal[0].read_share_data(0, 73)
6399         self.failUnlessReallyEqual(contents, client_data)
6400 
6401         # Exercise the case that the share we're asking to allocate is
6402hunk ./src/allmydata/test/test_backends.py 276
6403         mockget_available_space.side_effect = call_get_available_space
6404         
6405         
6406-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6407+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6408 
6409     @mock.patch('os.path.exists')
6410     @mock.patch('os.path.getsize')
6411}
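jacp16 renames si_dir() to si_si2dir() and switches it to FilePath children, and renames the backend's get_incoming() to get_incoming_shnums(), which now returns a frozenset of share numbers. A rough sketch of both, assuming the renamed functions shown in the hunks above; the 'storage' root is illustrative, and the printed path matches the directory the tests expect for 'teststorage_index':

    from twisted.python.filepath import FilePath
    from allmydata.storage.common import si_si2dir
    from allmydata.storage.backends.null.core import NullCore

    sharedir = FilePath('storage').child('shares')          # hypothetical root
    sidir = si_si2dir(sharedir, 'teststorage_index')
    # two-level layout: first two base32 characters, then the full base32 SI
    print sidir.path    # storage/shares/or/orsxg5dtorxxeylhmvpws3temv4a

    # get_incoming() is now get_incoming_shnums() and returns a frozenset of ints
    print NullCore().get_incoming_shnums('teststorage_index')   # frozenset([])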
6412[jacp17
6413wilcoxjg@gmail.com**20110722203244
6414 Ignore-this: e79a5924fb2eb786ee4e9737a8228f87
6415] {
6416hunk ./src/allmydata/storage/backends/das/core.py 14
6417 from allmydata.util.assertutil import precondition
6418 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6419 from allmydata.util import fileutil, idlib, log, time_format
6420+from allmydata.util.fileutil import fp_make_dirs
6421 from allmydata.storage.lease import LeaseInfo
6422 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6423      create_mutable_sharefile
6424hunk ./src/allmydata/storage/backends/das/core.py 19
6425 from allmydata.storage.immutable import BucketWriter, BucketReader
6426-from allmydata.storage.crawler import FSBucketCountingCrawler
6427+from allmydata.storage.crawler import BucketCountingCrawler
6428 from allmydata.util.hashutil import constant_time_compare
6429hunk ./src/allmydata/storage/backends/das/core.py 21
6430-from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6431+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
6432 _pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6433 
6434 # storage/
6435hunk ./src/allmydata/storage/backends/das/core.py 43
6436     implements(IStorageBackend)
6437     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
6438         Backend.__init__(self)
6439-
6440         self._setup_storage(storedir, readonly, reserved_space)
6441         self._setup_corruption_advisory()
6442         self._setup_bucket_counter()
6443hunk ./src/allmydata/storage/backends/das/core.py 72
6444 
6445     def _setup_bucket_counter(self):
6446         statefname = self.storedir.child("bucket_counter.state")
6447-        self.bucket_counter = FSBucketCountingCrawler(statefname)
6448+        self.bucket_counter = BucketCountingCrawler(statefname)
6449         self.bucket_counter.setServiceParent(self)
6450 
6451     def _setup_lease_checkerf(self, expiration_policy):
6452hunk ./src/allmydata/storage/backends/das/core.py 78
6453         statefile = self.storedir.child("lease_checker.state")
6454         historyfile = self.storedir.child("lease_checker.history")
6455-        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6456+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6457         self.lease_checker.setServiceParent(self)
6458 
6459     def get_incoming_shnums(self, storageindex):
6460hunk ./src/allmydata/storage/backends/das/core.py 168
6461             # it. Also construct the metadata.
6462             assert not finalhome.exists()
6463             fp_make_dirs(self.incominghome)
6464-            f = open(self.incominghome, 'wb')
6465+            f = self.incominghome.child(str(self.shnum))
6466             # The second field -- the four-byte share data length -- is no
6467             # longer used as of Tahoe v1.3.0, but we continue to write it in
6468             # there in case someone downgrades a storage server from >=
6469hunk ./src/allmydata/storage/backends/das/core.py 178
6470             # the largest length that can fit into the field. That way, even
6471             # if this does happen, the old < v1.3.0 server will still allow
6472             # clients to read the first part of the share.
6473-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6474-            f.close()
6475+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6476+            #f.close()
6477             self._lease_offset = max_size + 0x0c
6478             self._num_leases = 0
6479         else:
6480hunk ./src/allmydata/storage/backends/das/core.py 261
6481         f.write(data)
6482         f.close()
6483 
6484-    def _write_lease_record(self, f, lease_number, lease_info):
6485+    def _write_lease_record(self, lease_number, lease_info):
6486         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6487         f.seek(offset)
6488         assert f.tell() == offset
6489hunk ./src/allmydata/storage/backends/das/core.py 290
6490                 yield LeaseInfo().from_immutable_data(data)
6491 
6492     def add_lease(self, lease_info):
6493-        f = open(self.incominghome, 'rb+')
6494+        self.incominghome, 'rb+')
6495         num_leases = self._read_num_leases(f)
6496         self._write_lease_record(f, num_leases, lease_info)
6497         self._write_num_leases(f, num_leases+1)
6498hunk ./src/allmydata/storage/backends/das/expirer.py 1
6499-import time, os, pickle, struct
6500-from allmydata.storage.crawler import FSShareCrawler
6501+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
6502+from allmydata.storage.crawler import ShareCrawler
6503 from allmydata.storage.common import UnknownMutableContainerVersionError, \
6504      UnknownImmutableContainerVersionError
6505 from twisted.python import log as twlog
6506hunk ./src/allmydata/storage/backends/das/expirer.py 7
6507 
6508-class FSLeaseCheckingCrawler(FSShareCrawler):
6509+class LeaseCheckingCrawler(ShareCrawler):
6510     """I examine the leases on all shares, determining which are still valid
6511     and which have expired. I can remove the expired leases (if so
6512     configured), and the share will be deleted when the last lease is
6513hunk ./src/allmydata/storage/backends/das/expirer.py 66
6514         else:
6515             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
6516         self.sharetypes_to_expire = expiration_policy['sharetypes']
6517-        FSShareCrawler.__init__(self, statefile)
6518+        ShareCrawler.__init__(self, statefile)
6519 
6520     def add_initial_state(self):
6521         # we fill ["cycle-to-date"] here (even though they will be reset in
6522hunk ./src/allmydata/storage/crawler.py 1
6523-
6524 import os, time, struct
6525 import cPickle as pickle
6526 from twisted.internet import reactor
6527hunk ./src/allmydata/storage/crawler.py 11
6528 class TimeSliceExceeded(Exception):
6529     pass
6530 
6531-class FSShareCrawler(service.MultiService):
6532-    """A subcless of ShareCrawler is attached to a StorageServer, and
6533+class ShareCrawler(service.MultiService):
6534+    """A subclass of ShareCrawler is attached to a StorageServer, and
6535     periodically walks all of its shares, processing each one in some
6536     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
6537     since large servers can easily have a terabyte of shares, in several
6538hunk ./src/allmydata/storage/crawler.py 426
6539         pass
6540 
6541 
6542-class FSBucketCountingCrawler(FSShareCrawler):
6543+class BucketCountingCrawler(ShareCrawler):
6544     """I keep track of how many buckets are being managed by this server.
6545     This is equivalent to the number of distributed files and directories for
6546     which I am providing storage. The actual number of files+directories in
6547hunk ./src/allmydata/storage/crawler.py 440
6548     minimum_cycle_time = 60*60 # we don't need this more than once an hour
6549 
6550     def __init__(self, statefp, num_sample_prefixes=1):
6551-        FSShareCrawler.__init__(self, statefp)
6552+        ShareCrawler.__init__(self, statefp)
6553         self.num_sample_prefixes = num_sample_prefixes
6554 
6555     def add_initial_state(self):
6556hunk ./src/allmydata/test/test_backends.py 113
6557         # I'm only called in the ImmutableShareFile constructor.
6558         return False
6559 
6560+    def call_setContent(self, inputstring):
6561+        # XXX Good enough for expirer, not sure about elsewhere...
6562+        return True
6563+
6564     def setUp(self):
6565         msg( "%s.setUp()" % (self,))
6566         self.storedir = FilePath('teststoredir')
6567hunk ./src/allmydata/test/test_backends.py 159
6568         mockfpexists = self.mockfpexists.__enter__()
6569         mockfpexists.side_effect = self.call_exists
6570 
6571+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6572+        mocksetContent = self.mocksetContent.__enter__()
6573+        mocksetContent.side_effect = self.call_setContent
6574+
6575     def tearDown(self):
6576         msg( "%s.tearDown()" % (self,))
6577hunk ./src/allmydata/test/test_backends.py 165
6578+        self.mocksetContent.__exit__()
6579         self.mockfpexists.__exit__()
6580         self.mockget_available_space.__exit__()
6581         self.mockfpstatp.__exit__()
6582}
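jacp17 also grows the MockFiles fixture: fileutil.get_available_space, FilePath.exists and FilePath.setContent are patched by hand with mock.patch, entered in setUp() and exited in tearDown(). A stripped-down sketch of that enter/exit pattern, with a hypothetical recording fake standing in for the fixture's call_setContent:

    import mock

    class MockSetContentFixture(object):
        def setUp(self):
            # same pattern as MockFiles.setUp(): build the patcher, enter it by
            # hand, and route calls to our own fake via side_effect
            self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
            mocksetContent = self.mocksetContent.__enter__()
            mocksetContent.side_effect = self.call_setContent

        def call_setContent(self, inputstring):
            self.lastwritten = inputstring    # hypothetical recording fake

        def tearDown(self):
            # hand-entered patchers must be hand-exited, or the patch leaks
            # into other tests
            self.mocksetContent.__exit__()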
6583[jacp18
6584wilcoxjg@gmail.com**20110723031915
6585 Ignore-this: 21e7f22ac20e3f8af22ea2e9b755d6a5
6586] {
6587hunk ./src/allmydata/_auto_deps.py 21
6588     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6589     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6590 
6591-    "Twisted >= 2.4.0",
6592+v v v v v v v
6593+    "Twisted >= 11.0",
6594+*************
6595+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6596+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6597+    # support asynchronous close.
6598+    "Twisted >= 10.1.0",
6599+^ ^ ^ ^ ^ ^ ^
6600 
6601     # foolscap < 0.5.1 had a performance bug which spent
6602     # O(N**2) CPU for transferring large mutable files
6603hunk ./src/allmydata/storage/backends/das/core.py 168
6604             # it. Also construct the metadata.
6605             assert not finalhome.exists()
6606             fp_make_dirs(self.incominghome)
6607-            f = self.incominghome.child(str(self.shnum))
6608+            f = self.incominghome
6609             # The second field -- the four-byte share data length -- is no
6610             # longer used as of Tahoe v1.3.0, but we continue to write it in
6611             # there in case someone downgrades a storage server from >=
6612hunk ./src/allmydata/storage/backends/das/core.py 178
6613             # the largest length that can fit into the field. That way, even
6614             # if this does happen, the old < v1.3.0 server will still allow
6615             # clients to read the first part of the share.
6616-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6617-            #f.close()
6618+            print 'f: ',f
6619+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6620             self._lease_offset = max_size + 0x0c
6621             self._num_leases = 0
6622         else:
6623hunk ./src/allmydata/storage/backends/das/core.py 263
6624 
6625     def _write_lease_record(self, lease_number, lease_info):
6626         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6627-        f.seek(offset)
6628-        assert f.tell() == offset
6629-        f.write(lease_info.to_immutable_data())
6630+        fh = f.open()
6631+        try:
6632+            fh.seek(offset)
6633+            assert fh.tell() == offset
6634+            fh.write(lease_info.to_immutable_data())
6635+        finally:
6636+            fh.close()
6637 
6638     def _read_num_leases(self, f):
6639hunk ./src/allmydata/storage/backends/das/core.py 272
6640-        f.seek(0x08)
6641-        (num_leases,) = struct.unpack(">L", f.read(4))
6642+        fh = f.open()
6643+        try:
6644+            fh.seek(0x08)
6645+            ro = fh.read(4)
6646+            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6647+            (num_leases,) = struct.unpack(">L", ro)
6648+        finally:
6649+            fh.close()
6650         return num_leases
6651 
6652     def _write_num_leases(self, f, num_leases):
6653hunk ./src/allmydata/storage/backends/das/core.py 283
6654-        f.seek(0x08)
6655-        f.write(struct.pack(">L", num_leases))
6656+        fh = f.open()
6657+        try:
6658+            fh.seek(0x08)
6659+            fh.write(struct.pack(">L", num_leases))
6660+        finally:
6661+            fh.close()
6662 
6663     def _truncate_leases(self, f, num_leases):
6664         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
6665hunk ./src/allmydata/storage/backends/das/core.py 304
6666                 yield LeaseInfo().from_immutable_data(data)
6667 
6668     def add_lease(self, lease_info):
6669-        self.incominghome, 'rb+')
6670-        num_leases = self._read_num_leases(f)
6671+        f = self.incominghome
6672+        num_leases = self._read_num_leases(self.incominghome)
6673         self._write_lease_record(f, num_leases, lease_info)
6674         self._write_num_leases(f, num_leases+1)
6675hunk ./src/allmydata/storage/backends/das/core.py 308
6676-        f.close()
6677-
6678+       
6679     def renew_lease(self, renew_secret, new_expire_time):
6680         for i,lease in enumerate(self.get_leases()):
6681             if constant_time_compare(lease.renew_secret, renew_secret):
6682hunk ./src/allmydata/test/test_backends.py 33
6683 share_data = containerdata + client_data
6684 testnodeid = 'testnodeidxxxxxxxxxx'
6685 
6686+
6687 class MockStat:
6688     def __init__(self):
6689         self.st_mode = None
6690hunk ./src/allmydata/test/test_backends.py 43
6691     code under test if it reads or writes outside of its prescribed
6692     subtree. I simulate just the parts of the filesystem that the current
6693     implementation of DAS backend needs. """
6694+
6695+    def setUp(self):
6696+        msg( "%s.setUp()" % (self,))
6697+        self.storedir = FilePath('teststoredir')
6698+        self.basedir = self.storedir.child('shares')
6699+        self.baseincdir = self.basedir.child('incoming')
6700+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6701+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6702+        self.shareincomingname = self.sharedirincomingname.child('0')
6703+        self.sharefilename = self.sharedirfinalname.child('0')
6704+        self.sharefilecontents = StringIO(share_data)
6705+
6706+        self.mocklistdirp = mock.patch('os.listdir')
6707+        mocklistdir = self.mocklistdirp.__enter__()
6708+        mocklistdir.side_effect = self.call_listdir
6709+
6710+        self.mockmkdirp = mock.patch('os.mkdir')
6711+        mockmkdir = self.mockmkdirp.__enter__()
6712+        mockmkdir.side_effect = self.call_mkdir
6713+
6714+        self.mockisdirp = mock.patch('os.path.isdir')
6715+        mockisdir = self.mockisdirp.__enter__()
6716+        mockisdir.side_effect = self.call_isdir
6717+
6718+        self.mockopenp = mock.patch('__builtin__.open')
6719+        mockopen = self.mockopenp.__enter__()
6720+        mockopen.side_effect = self.call_open
6721+
6722+        self.mockstatp = mock.patch('os.stat')
6723+        mockstat = self.mockstatp.__enter__()
6724+        mockstat.side_effect = self.call_stat
6725+
6726+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6727+        mockfpstat = self.mockfpstatp.__enter__()
6728+        mockfpstat.side_effect = self.call_stat
6729+
6730+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6731+        mockget_available_space = self.mockget_available_space.__enter__()
6732+        mockget_available_space.side_effect = self.call_get_available_space
6733+
6734+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6735+        mockfpexists = self.mockfpexists.__enter__()
6736+        mockfpexists.side_effect = self.call_exists
6737+
6738+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6739+        mocksetContent = self.mocksetContent.__enter__()
6740+        mocksetContent.side_effect = self.call_setContent
6741+
6742     def call_open(self, fname, mode):
6743         assert isinstance(fname, basestring), fname
6744         fnamefp = FilePath(fname)
6745hunk ./src/allmydata/test/test_backends.py 107
6746             # current implementation of DAS backend, and we might want to
6747             # use this information in this test in the future...
6748             return StringIO()
6749+        elif fnamefp == self.shareincomingname:
6750+            print "repr(fnamefp): ", repr(fnamefp)
6751         else:
6752             # Anything else you open inside your subtree appears to be an
6753             # empty file.
6754hunk ./src/allmydata/test/test_backends.py 168
6755         # XXX Good enough for expirer, not sure about elsewhere...
6756         return True
6757 
6758-    def setUp(self):
6759-        msg( "%s.setUp()" % (self,))
6760-        self.storedir = FilePath('teststoredir')
6761-        self.basedir = self.storedir.child('shares')
6762-        self.baseincdir = self.basedir.child('incoming')
6763-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6764-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6765-        self.shareincomingname = self.sharedirincomingname.child('0')
6766-        self.sharefname = self.sharedirfinalname.child('0')
6767-
6768-        self.mocklistdirp = mock.patch('os.listdir')
6769-        mocklistdir = self.mocklistdirp.__enter__()
6770-        mocklistdir.side_effect = self.call_listdir
6771-
6772-        self.mockmkdirp = mock.patch('os.mkdir')
6773-        mockmkdir = self.mockmkdirp.__enter__()
6774-        mockmkdir.side_effect = self.call_mkdir
6775-
6776-        self.mockisdirp = mock.patch('os.path.isdir')
6777-        mockisdir = self.mockisdirp.__enter__()
6778-        mockisdir.side_effect = self.call_isdir
6779-
6780-        self.mockopenp = mock.patch('__builtin__.open')
6781-        mockopen = self.mockopenp.__enter__()
6782-        mockopen.side_effect = self.call_open
6783-
6784-        self.mockstatp = mock.patch('os.stat')
6785-        mockstat = self.mockstatp.__enter__()
6786-        mockstat.side_effect = self.call_stat
6787-
6788-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6789-        mockfpstat = self.mockfpstatp.__enter__()
6790-        mockfpstat.side_effect = self.call_stat
6791-
6792-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6793-        mockget_available_space = self.mockget_available_space.__enter__()
6794-        mockget_available_space.side_effect = self.call_get_available_space
6795-
6796-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6797-        mockfpexists = self.mockfpexists.__enter__()
6798-        mockfpexists.side_effect = self.call_exists
6799-
6800-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6801-        mocksetContent = self.mocksetContent.__enter__()
6802-        mocksetContent.side_effect = self.call_setContent
6803 
6804     def tearDown(self):
6805         msg( "%s.tearDown()" % (self,))
6806hunk ./src/allmydata/test/test_backends.py 239
6807         handling of simultaneous and successive attempts to write the same
6808         share.
6809         """
6810-
6811         mocktime.return_value = 0
6812         # Inspect incoming and fail unless it's empty.
6813         incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
6814}
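jacp18 reworks the lease bookkeeping so that each helper opens a file handle from the FilePath, seeks, and reads or writes with struct. A minimal sketch of the read side, assuming sharefp is a FilePath pointing at a share container laid out as in core.py:

    import struct

    def read_num_leases(sharefp):
        # mirrors ImmutableShare._read_num_leases(): the 4-byte big-endian
        # lease count sits at offset 0x08 of the share container
        fh = sharefp.open()
        try:
            fh.seek(0x08)
            (num_leases,) = struct.unpack(">L", fh.read(4))
        finally:
            fh.close()
        return num_leases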
6815[jacp19orso
6816wilcoxjg@gmail.com**20110724034230
6817 Ignore-this: f001093c467225c289489636a61935fe
6818] {
6819hunk ./src/allmydata/_auto_deps.py 21
6820     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6821     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6822 
6823-v v v v v v v
6824-    "Twisted >= 11.0",
6825-*************
6826+
6827     # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6828     # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6829     # support asynchronous close.
6830hunk ./src/allmydata/_auto_deps.py 26
6831     "Twisted >= 10.1.0",
6832-^ ^ ^ ^ ^ ^ ^
6833+
6834 
6835     # foolscap < 0.5.1 had a performance bug which spent
6836     # O(N**2) CPU for transferring large mutable files
6837hunk ./src/allmydata/storage/backends/das/core.py 153
6838     LEASE_SIZE = struct.calcsize(">L32s32sL")
6839     sharetype = "immutable"
6840 
6841-    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
6842+    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
6843         """ If max_size is not None then I won't allow more than
6844         max_size to be written to me. If create=True then max_size
6845         must not be None. """
6846hunk ./src/allmydata/storage/backends/das/core.py 167
6847             # touch the file, so later callers will see that we're working on
6848             # it. Also construct the metadata.
6849             assert not finalhome.exists()
6850-            fp_make_dirs(self.incominghome)
6851-            f = self.incominghome
6852+            fp_make_dirs(self.incominghome.parent())
6853             # The second field -- the four-byte share data length -- is no
6854             # longer used as of Tahoe v1.3.0, but we continue to write it in
6855             # there in case someone downgrades a storage server from >=
6856hunk ./src/allmydata/storage/backends/das/core.py 177
6857             # the largest length that can fit into the field. That way, even
6858             # if this does happen, the old < v1.3.0 server will still allow
6859             # clients to read the first part of the share.
6860-            print 'f: ',f
6861-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6862+            self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6863             self._lease_offset = max_size + 0x0c
6864             self._num_leases = 0
6865         else:
6866hunk ./src/allmydata/storage/backends/das/core.py 182
6867             f = open(self.finalhome, 'rb')
6868-            filesize = os.path.getsize(self.finalhome)
6869             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
6870             f.close()
6871hunk ./src/allmydata/storage/backends/das/core.py 184
6872+            filesize = self.finalhome.getsize()
6873             if version != 1:
6874                 msg = "sharefile %s had version %d but we wanted 1" % \
6875                       (self.finalhome, version)
6876hunk ./src/allmydata/storage/backends/das/core.py 259
6877         f.write(data)
6878         f.close()
6879 
6880-    def _write_lease_record(self, lease_number, lease_info):
6881+    def _write_lease_record(self, f, lease_number, lease_info):
6882         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6883         fh = f.open()
6884hunk ./src/allmydata/storage/backends/das/core.py 262
6885+        print fh
6886         try:
6887             fh.seek(offset)
6888             assert fh.tell() == offset
6889hunk ./src/allmydata/storage/backends/das/core.py 271
6890             fh.close()
6891 
6892     def _read_num_leases(self, f):
6893-        fh = f.open()
6894+        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
6895         try:
6896             fh.seek(0x08)
6897             ro = fh.read(4)
6898hunk ./src/allmydata/storage/backends/das/core.py 275
6899-            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6900             (num_leases,) = struct.unpack(">L", ro)
6901         finally:
6902             fh.close()
6903hunk ./src/allmydata/storage/backends/das/core.py 302
6904                 yield LeaseInfo().from_immutable_data(data)
6905 
6906     def add_lease(self, lease_info):
6907-        f = self.incominghome
6908         num_leases = self._read_num_leases(self.incominghome)
6909hunk ./src/allmydata/storage/backends/das/core.py 303
6910-        self._write_lease_record(f, num_leases, lease_info)
6911-        self._write_num_leases(f, num_leases+1)
6912+        self._write_lease_record(self.incominghome, num_leases, lease_info)
6913+        self._write_num_leases(self.incominghome, num_leases+1)
6914         
6915     def renew_lease(self, renew_secret, new_expire_time):
6916         for i,lease in enumerate(self.get_leases()):
6917hunk ./src/allmydata/test/test_backends.py 52
6918         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6919         self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6920         self.shareincomingname = self.sharedirincomingname.child('0')
6921-        self.sharefilename = self.sharedirfinalname.child('0')
6922-        self.sharefilecontents = StringIO(share_data)
6923+        self.sharefinalname = self.sharedirfinalname.child('0')
6924 
6925hunk ./src/allmydata/test/test_backends.py 54
6926-        self.mocklistdirp = mock.patch('os.listdir')
6927-        mocklistdir = self.mocklistdirp.__enter__()
6928-        mocklistdir.side_effect = self.call_listdir
6929+        # Make patcher, patch, and make effects for fs using functions.
6930+        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
6931+        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
6932+        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
6933 
6934hunk ./src/allmydata/test/test_backends.py 59
6935-        self.mockmkdirp = mock.patch('os.mkdir')
6936-        mockmkdir = self.mockmkdirp.__enter__()
6937-        mockmkdir.side_effect = self.call_mkdir
6938+        #self.mockmkdirp = mock.patch('os.mkdir')
6939+        #mockmkdir = self.mockmkdirp.__enter__()
6940+        #mockmkdir.side_effect = self.call_mkdir
6941 
6942hunk ./src/allmydata/test/test_backends.py 63
6943-        self.mockisdirp = mock.patch('os.path.isdir')
6944+        self.mockisdirp = mock.patch('FilePath.isdir')
6945         mockisdir = self.mockisdirp.__enter__()
6946         mockisdir.side_effect = self.call_isdir
6947 
6948hunk ./src/allmydata/test/test_backends.py 67
6949-        self.mockopenp = mock.patch('__builtin__.open')
6950+        self.mockopenp = mock.patch('FilePath.open')
6951         mockopen = self.mockopenp.__enter__()
6952         mockopen.side_effect = self.call_open
6953 
6954hunk ./src/allmydata/test/test_backends.py 71
6955-        self.mockstatp = mock.patch('os.stat')
6956+        self.mockstatp = mock.patch('filepath.stat')
6957         mockstat = self.mockstatp.__enter__()
6958         mockstat.side_effect = self.call_stat
6959 
6960hunk ./src/allmydata/test/test_backends.py 91
6961         mocksetContent = self.mocksetContent.__enter__()
6962         mocksetContent.side_effect = self.call_setContent
6963 
6964+    #  The behavior of mocked filesystem using functions
6965     def call_open(self, fname, mode):
6966         assert isinstance(fname, basestring), fname
6967         fnamefp = FilePath(fname)
6968hunk ./src/allmydata/test/test_backends.py 109
6969             # use this information in this test in the future...
6970             return StringIO()
6971         elif fnamefp == self.shareincomingname:
6972-            print "repr(fnamefp): ", repr(fnamefp)
6973+            self.incomingsharefilecontents.closed = False
6974+            return self.incomingsharefilecontents
6975         else:
6976             # Anything else you open inside your subtree appears to be an
6977             # empty file.
6978hunk ./src/allmydata/test/test_backends.py 152
6979         fnamefp = FilePath(fname)
6980         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
6981                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
6982-
6983         msg("%s.call_stat(%s)" % (self, fname,))
6984         mstat = MockStat()
6985         mstat.st_mode = 16893 # a directory
6986hunk ./src/allmydata/test/test_backends.py 166
6987         return False
6988 
6989     def call_setContent(self, inputstring):
6990-        # XXX Good enough for expirer, not sure about elsewhere...
6991-        return True
6992-
6993+        self.incomingsharefilecontents = StringIO(inputstring)
6994 
6995     def tearDown(self):
6996         msg( "%s.tearDown()" % (self,))
6997}
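jacp19orso has ImmutableShare write its container header with FilePath.setContent and read sizes with FilePath.getsize(). A small worked example of the 0x0c-byte header that code packs and unpacks; max_size=1 matches the one-byte shares the tests allocate:

    import struct

    # the immutable share container begins with a 12-byte (0x0c) header:
    #   >LLL = version (1), data length capped at 2**32-1, number of leases
    max_size = 1                                   # illustrative: one-byte test shares
    header = struct.pack(">LLL", 1, min(2**32-1, max_size), 0)
    assert len(header) == 0x0c

    (version, datalen, num_leases) = struct.unpack(">LLL", header)
    data_offset = 0x0c                             # share data starts here
    lease_offset = max_size + 0x0c                 # lease records start here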
6998[jacp19
6999wilcoxjg@gmail.com**20110727080553
7000 Ignore-this: 851b1ebdeeee712abfbda557af142726
7001] {
7002hunk ./src/allmydata/storage/backends/das/core.py 1
7003-import os, re, weakref, struct, time, stat
7004+import re, weakref, struct, time, stat
7005 from twisted.application import service
7006 from twisted.python.filepath import UnlistableError
7007hunk ./src/allmydata/storage/backends/das/core.py 4
7008+from twisted.python import filepath
7009 from twisted.python.filepath import FilePath
7010 from zope.interface import implements
7011 
7012hunk ./src/allmydata/storage/backends/das/core.py 50
7013         self._setup_lease_checkerf(expiration_policy)
7014 
7015     def _setup_storage(self, storedir, readonly, reserved_space):
7016-        precondition(isinstance(storedir, FilePath)) 
7017+        precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7018         self.storedir = storedir
7019         self.readonly = readonly
7020         self.reserved_space = int(reserved_space)
7021hunk ./src/allmydata/storage/backends/das/core.py 195
7022         self._data_offset = 0xc
7023 
7024     def close(self):
7025-        fileutil.make_dirs(os.path.dirname(self.finalhome))
7026-        fileutil.rename(self.incominghome, self.finalhome)
7027+        fileutil.fp_make_dirs(self.finalhome.parent())
7028+        self.incominghome.moveTo(self.finalhome)
7029         try:
7030             # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
7031             # We try to delete the parent (.../ab/abcde) to avoid leaving
7032hunk ./src/allmydata/storage/backends/das/core.py 209
7033             # their children to know when they should do the rmdir. This
7034             # approach is simpler, but relies on os.rmdir refusing to delete
7035             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
7036-            #print "os.path.dirname(self.incominghome): "
7037-            #print os.path.dirname(self.incominghome)
7038-            os.rmdir(os.path.dirname(self.incominghome))
7039+            fileutil.fp_rmdir_if_empty(self.incominghome.parent())
7040             # we also delete the grandparent (prefix) directory, .../ab ,
7041             # again to avoid leaving directories lying around. This might
7042             # fail if there is another bucket open that shares a prefix (like
7043hunk ./src/allmydata/storage/backends/das/core.py 214
7044             # ab/abfff).
7045-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
7046+            fileutil.fp_rmdir_if_empty(self.incominghome.parent().parent())
7047             # we leave the great-grandparent (incoming/) directory in place.
7048         except EnvironmentError:
7049             # ignore the "can't rmdir because the directory is not empty"
7050hunk ./src/allmydata/storage/backends/das/core.py 224
7051         pass
7052         
7053     def stat(self):
7054-        return os.stat(self.finalhome)[stat.ST_SIZE]
7055-        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
7056+        return filepath.stat(self.finalhome)[stat.ST_SIZE]
7057 
7058     def get_shnum(self):
7059         return self.shnum
7060hunk ./src/allmydata/storage/backends/das/core.py 230
7061 
7062     def unlink(self):
7063-        os.unlink(self.finalhome)
7064+        self.finalhome.remove()
7065 
7066     def read_share_data(self, offset, length):
7067         precondition(offset >= 0)
7068hunk ./src/allmydata/storage/backends/das/core.py 237
7069         # Reads beyond the end of the data are truncated. Reads that start
7070         # beyond the end of the data return an empty string.
7071         seekpos = self._data_offset+offset
7072-        fsize = os.path.getsize(self.finalhome)
7073+        fsize = self.finalhome.getsize()
7074         actuallength = max(0, min(length, fsize-seekpos))
7075         if actuallength == 0:
7076             return ""
7077hunk ./src/allmydata/storage/backends/das/core.py 241
7078-        f = open(self.finalhome, 'rb')
7079-        f.seek(seekpos)
7080-        return f.read(actuallength)
7081+        try:
7082+            fh = open(self.finalhome, 'rb')
7083+            fh.seek(seekpos)
7084+            sharedata = fh.read(actuallength)
7085+        finally:
7086+            fh.close()
7087+        return sharedata
7088 
7089     def write_share_data(self, offset, data):
7090         length = len(data)
7091hunk ./src/allmydata/storage/backends/das/core.py 264
7092     def _write_lease_record(self, f, lease_number, lease_info):
7093         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7094         fh = f.open()
7095-        print fh
7096         try:
7097             fh.seek(offset)
7098             assert fh.tell() == offset
7099hunk ./src/allmydata/storage/backends/das/core.py 269
7100             fh.write(lease_info.to_immutable_data())
7101         finally:
7102+            print dir(fh)
7103             fh.close()
7104 
7105     def _read_num_leases(self, f):
7106hunk ./src/allmydata/storage/backends/das/core.py 273
7107-        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
7108+        fh = f.open() #XXX  Should be mocking FilePath.open()
7109         try:
7110             fh.seek(0x08)
7111             ro = fh.read(4)
7112hunk ./src/allmydata/storage/backends/das/core.py 280
7113             (num_leases,) = struct.unpack(">L", ro)
7114         finally:
7115             fh.close()
7116+            print "end of _read_num_leases"
7117         return num_leases
7118 
7119     def _write_num_leases(self, f, num_leases):
7120hunk ./src/allmydata/storage/crawler.py 6
7121 from twisted.internet import reactor
7122 from twisted.application import service
7123 from allmydata.storage.common import si_b2a
7124-from allmydata.util import fileutil
7125 
7126 class TimeSliceExceeded(Exception):
7127     pass
7128hunk ./src/allmydata/storage/crawler.py 478
7129             old_cycle,buckets = self.state["storage-index-samples"][prefix]
7130             if old_cycle != cycle:
7131                 del self.state["storage-index-samples"][prefix]
7132-
7133hunk ./src/allmydata/test/test_backends.py 1
7134+import os
7135 from twisted.trial import unittest
7136 from twisted.python.filepath import FilePath
7137 from allmydata.util.log import msg
7138hunk ./src/allmydata/test/test_backends.py 9
7139 from allmydata.test.common_util import ReallyEqualMixin
7140 from allmydata.util.assertutil import _assert
7141 import mock
7142+from mock import Mock
7143 
7144 # This is the code that we're going to be testing.
7145 from allmydata.storage.server import StorageServer
7146hunk ./src/allmydata/test/test_backends.py 40
7147     def __init__(self):
7148         self.st_mode = None
7149 
7150+class MockFilePath:
7151+    def __init__(self, PathString):
7152+        self.PathName = PathString
7153+    def child(self, ChildString):
7154+        return MockFilePath(os.path.join(self.PathName, ChildString))
7155+    def parent(self):
7156+        return MockFilePath(os.path.dirname(self.PathName))
7157+    def makedirs(self):
7158+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7159+        pass
7160+    def isdir(self):
7161+        return True
7162+    def remove(self):
7163+        pass
7164+    def children(self):
7165+        return []
7166+    def exists(self):
7167+        return False
7168+    def setContent(self, ContentString):
7169+        self.File = MockFile(ContentString)
7170+    def open(self):
7171+        return self.File.open()
7172+
7173+class MockFile:
7174+    def __init__(self, ContentString):
7175+        self.Contents = ContentString
7176+    def open(self):
7177+        return self
7178+    def close(self):
7179+        pass
7180+    def seek(self, position):
7181+        pass
7182+    def read(self, amount):
7183+        pass
7184+
7185+
7186+class MockBCC:
7187+    def setServiceParent(self, Parent):
7188+        pass
7189+
7190+class MockLCC:
7191+    def setServiceParent(self, Parent):
7192+        pass
7193+
7194 class MockFiles(unittest.TestCase):
7195     """ I simulate a filesystem that the code under test can use. I flag the
7196     code under test if it reads or writes outside of its prescribed
7197hunk ./src/allmydata/test/test_backends.py 91
7198     implementation of DAS backend needs. """
7199 
7200     def setUp(self):
7201+        # Make patcher, patch, and make effects for fs using functions.
7202         msg( "%s.setUp()" % (self,))
7203hunk ./src/allmydata/test/test_backends.py 93
7204-        self.storedir = FilePath('teststoredir')
7205+        self.storedir = MockFilePath('teststoredir')
7206         self.basedir = self.storedir.child('shares')
7207         self.baseincdir = self.basedir.child('incoming')
7208         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
7209hunk ./src/allmydata/test/test_backends.py 101
7210         self.shareincomingname = self.sharedirincomingname.child('0')
7211         self.sharefinalname = self.sharedirfinalname.child('0')
7212 
7213-        # Make patcher, patch, and make effects for fs using functions.
7214-        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
7215-        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
7216-        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
7217-
7218-        #self.mockmkdirp = mock.patch('os.mkdir')
7219-        #mockmkdir = self.mockmkdirp.__enter__()
7220-        #mockmkdir.side_effect = self.call_mkdir
7221-
7222-        self.mockisdirp = mock.patch('FilePath.isdir')
7223-        mockisdir = self.mockisdirp.__enter__()
7224-        mockisdir.side_effect = self.call_isdir
7225+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
7226+        FakePath = self.FilePathFake.__enter__()
7227 
7228hunk ./src/allmydata/test/test_backends.py 104
7229-        self.mockopenp = mock.patch('FilePath.open')
7230-        mockopen = self.mockopenp.__enter__()
7231-        mockopen.side_effect = self.call_open
7232+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
7233+        FakeBCC = self.BCountingCrawler.__enter__()
7234+        FakeBCC.side_effect = self.call_FakeBCC
7235 
7236hunk ./src/allmydata/test/test_backends.py 108
7237-        self.mockstatp = mock.patch('filepath.stat')
7238-        mockstat = self.mockstatp.__enter__()
7239-        mockstat.side_effect = self.call_stat
7240+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
7241+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
7242+        FakeLCC.side_effect = self.call_FakeLCC
7243 
7244hunk ./src/allmydata/test/test_backends.py 112
7245-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
7246-        mockfpstat = self.mockfpstatp.__enter__()
7247-        mockfpstat.side_effect = self.call_stat
7248+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7249+        GetSpace = self.get_available_space.__enter__()
7250+        GetSpace.side_effect = self.call_get_available_space
7251 
7252hunk ./src/allmydata/test/test_backends.py 116
7253-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7254-        mockget_available_space = self.mockget_available_space.__enter__()
7255-        mockget_available_space.side_effect = self.call_get_available_space
7256+    def call_FakeBCC(self, StateFile):
7257+        return MockBCC()
7258 
7259hunk ./src/allmydata/test/test_backends.py 119
7260-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
7261-        mockfpexists = self.mockfpexists.__enter__()
7262-        mockfpexists.side_effect = self.call_exists
7263-
7264-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
7265-        mocksetContent = self.mocksetContent.__enter__()
7266-        mocksetContent.side_effect = self.call_setContent
7267-
7268-    #  The behavior of mocked filesystem using functions
7269-    def call_open(self, fname, mode):
7270-        assert isinstance(fname, basestring), fname
7271-        fnamefp = FilePath(fname)
7272-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7273-                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
7274-
7275-        if fnamefp == self.storedir.child('bucket_counter.state'):
7276-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
7277-        elif fnamefp == self.storedir.child('lease_checker.state'):
7278-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
7279-        elif fnamefp == self.storedir.child('lease_checker.history'):
7280-            # This is separated out from the else clause below just because
7281-            # we know this particular file is going to be used by the
7282-            # current implementation of DAS backend, and we might want to
7283-            # use this information in this test in the future...
7284-            return StringIO()
7285-        elif fnamefp == self.shareincomingname:
7286-            self.incomingsharefilecontents.closed = False
7287-            return self.incomingsharefilecontents
7288-        else:
7289-            # Anything else you open inside your subtree appears to be an
7290-            # empty file.
7291-            return StringIO()
7292-
7293-    def call_isdir(self, fname):
7294-        fnamefp = FilePath(fname)
7295-        return fnamefp.isdir()
7296-
7297-        self.failUnless(self.storedir == self or self.storedir in self.parents(),
7298-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
7299-
7300-        # The first two cases are separate from the else clause below just
7301-        # because we know that the current implementation of the DAS backend
7302-        # inspects these two directories and we might want to make use of
7303-        # that information in the tests in the future...
7304-        if self == self.storedir.child('shares'):
7305-            return True
7306-        elif self == self.storedir.child('shares').child('incoming'):
7307-            return True
7308-        else:
7309-            # Anything else you open inside your subtree appears to be a
7310-            # directory.
7311-            return True
7312-
7313-    def call_mkdir(self, fname, mode):
7314-        fnamefp = FilePath(fname)
7315-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7316-                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
7317-        self.failUnlessEqual(0777, mode)
7318+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
7319+        return MockLCC()
7320 
7321     def call_listdir(self, fname):
7322         fnamefp = FilePath(fname)
7323hunk ./src/allmydata/test/test_backends.py 150
7324 
7325     def tearDown(self):
7326         msg( "%s.tearDown()" % (self,))
7327-        self.mocksetContent.__exit__()
7328-        self.mockfpexists.__exit__()
7329-        self.mockget_available_space.__exit__()
7330-        self.mockfpstatp.__exit__()
7331-        self.mockstatp.__exit__()
7332-        self.mockopenp.__exit__()
7333-        self.mockisdirp.__exit__()
7334-        self.mockmkdirp.__exit__()
7335-        self.mocklistdirp.__exit__()
7336-
7337+        FakePath = self.FilePathFake.__exit__()       
7338+        FakeBCC = self.BCountingCrawler.__exit__()
7339 
7340 expiration_policy = {'enabled' : False,
7341                      'mode' : 'age',
7342hunk ./src/allmydata/test/test_backends.py 222
7343         # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7344         
7345         # Attempt to create a second share writer with the same sharenum.
7346-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7347+        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7348 
7349         # Show that no sharewriter results from a remote_allocate_buckets
7350         # with the same si and sharenum, until BucketWriter.remote_close()
7351hunk ./src/allmydata/test/test_backends.py 227
7352         # has been called.
7353-        self.failIf(bsa)
7354+        # self.failIf(bsa)
7355 
7356         # Test allocated size.
7357hunk ./src/allmydata/test/test_backends.py 230
7358-        spaceint = self.ss.allocated_size()
7359-        self.failUnlessReallyEqual(spaceint, 1)
7360+        # spaceint = self.ss.allocated_size()
7361+        # self.failUnlessReallyEqual(spaceint, 1)
7362 
7363         # Write 'a' to shnum 0. Only tested together with close and read.
7364hunk ./src/allmydata/test/test_backends.py 234
7365-        bs[0].remote_write(0, 'a')
7366+        # bs[0].remote_write(0, 'a')
7367         
7368         # Preclose: Inspect final, failUnless nothing there.
7369hunk ./src/allmydata/test/test_backends.py 237
7370-        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7371-        bs[0].remote_close()
7372+        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7373+        # bs[0].remote_close()
7374 
7375         # Postclose: (Omnibus) failUnless written data is in final.
7376hunk ./src/allmydata/test/test_backends.py 241
7377-        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7378-        self.failUnlessReallyEqual(len(sharesinfinal), 1)
7379-        contents = sharesinfinal[0].read_share_data(0, 73)
7380-        self.failUnlessReallyEqual(contents, client_data)
7381+        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7382+        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
7383+        # contents = sharesinfinal[0].read_share_data(0, 73)
7384+        # self.failUnlessReallyEqual(contents, client_data)
7385 
7386         # Exercise the case that the share we're asking to allocate is
7387         # already (completely) uploaded.
7388hunk ./src/allmydata/test/test_backends.py 248
7389-        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7390+        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7391         
7392     @mock.patch('time.time')
7393     @mock.patch('allmydata.util.fileutil.get_available_space')
7394}
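
The bulk of jacp19 is a mechanical conversion: os/os.path calls on plain path strings become methods on twisted.python.filepath.FilePath objects, and the makedirs/rmdir bookkeeping in close() moves to the fp_make_dirs and fp_rmdir_if_empty helpers in allmydata.util.fileutil. A rough, runnable cheat-sheet of the correspondences (the paths and sizes are made up for illustration; every FilePath method shown is standard Twisted API):

import tempfile
from twisted.python.filepath import FilePath

base = FilePath(tempfile.mkdtemp())
incoming = base.child("incoming").child("ab").child("abcde")
incoming.makedirs()                      # ~ fileutil.make_dirs(os.path.dirname(...))
share = incoming.child("4")
share.setContent("x" * 10)               # write the whole file in one call
final = base.child("ab").child("abcde")
final.makedirs()
share.moveTo(final.child("4"))           # ~ fileutil.rename(incominghome, finalhome)
assert final.child("4").getsize() == 10  # ~ os.path.getsize(finalhome)
assert final.child("4").parent().basename() == "abcde"  # ~ os.path.dirname/basename
final.child("4").remove()                # ~ os.unlink(finalhome)

fp_make_dirs and fp_rmdir_if_empty, which the patch calls in close(), are the fileutil-side counterparts of makedirs() and of the "rmdir unless the directory is non-empty" idiom that the comments in that hunk describe.
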
7395[jacp20
7396wilcoxjg@gmail.com**20110728072514
7397 Ignore-this: 6a03289023c3c79b8d09e2711183ea82
7398] {
7399hunk ./src/allmydata/storage/backends/das/core.py 52
7400     def _setup_storage(self, storedir, readonly, reserved_space):
7401         precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7402         self.storedir = storedir
7403+        print "self.storedir: ", self.storedir
7404         self.readonly = readonly
7405         self.reserved_space = int(reserved_space)
7406         self.sharedir = self.storedir.child("shares")
7407hunk ./src/allmydata/storage/backends/das/core.py 85
7408 
7409     def get_incoming_shnums(self, storageindex):
7410         """ Return a frozenset of the shnum (as ints) of incoming shares. """
7411-        incomingdir = si_si2dir(self.incomingdir, storageindex)
7412+        print "self.incomingdir.children(): ", self.incomingdir.children()
7413+        print "self.incomingdir.pathname: ", self.incomingdir.pathname
7414+        incomingthissi = si_si2dir(self.incomingdir, storageindex)
7415+        print "incomingthissi.children(): ", incomingthissi.children()
7416         try:
7417hunk ./src/allmydata/storage/backends/das/core.py 90
7418-            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
7419+            childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
7420             shnums = [ int(fp.basename) for fp in childfps ]
7421             return frozenset(shnums)
7422         except UnlistableError:
7423hunk ./src/allmydata/storage/backends/das/core.py 117
7424 
7425     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
7426         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
7427-        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
7428+        incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
7429         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
7430         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
7431         return bw
7432hunk ./src/allmydata/storage/backends/das/core.py 183
7433             # if this does happen, the old < v1.3.0 server will still allow
7434             # clients to read the first part of the share.
7435             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
7436+            print "We got here right?"
7437             self._lease_offset = max_size + 0x0c
7438             self._num_leases = 0
7439         else:
7440hunk ./src/allmydata/storage/backends/das/core.py 274
7441             assert fh.tell() == offset
7442             fh.write(lease_info.to_immutable_data())
7443         finally:
7444-            print dir(fh)
7445             fh.close()
7446 
7447     def _read_num_leases(self, f):
7448hunk ./src/allmydata/storage/backends/das/core.py 284
7449             (num_leases,) = struct.unpack(">L", ro)
7450         finally:
7451             fh.close()
7452-            print "end of _read_num_leases"
7453         return num_leases
7454 
7455     def _write_num_leases(self, f, num_leases):
7456hunk ./src/allmydata/storage/common.py 21
7457 
7458 def si_si2dir(startfp, storageindex):
7459     sia = si_b2a(storageindex)
7460-    return startfp.child(sia[:2]).child(sia)
7461+    print "I got here right?  sia =", sia
7462+    print "What the fuck is startfp? ", startfp
7463+    print "What the fuck is startfp.pathname? ", startfp.pathname
7464+    newfp = startfp.child(sia[:2])
7465+    print "Did I get here?"
7466+    return newfp.child(sia)
7467hunk ./src/allmydata/test/test_backends.py 5
7468 from twisted.trial import unittest
7469 from twisted.python.filepath import FilePath
7470 from allmydata.util.log import msg
7471-from StringIO import StringIO
7472+from tempfile import TemporaryFile
7473 from allmydata.test.common_util import ReallyEqualMixin
7474 from allmydata.util.assertutil import _assert
7475 import mock
7476hunk ./src/allmydata/test/test_backends.py 34
7477     cancelsecret + expirationtime + nextlease
7478 share_data = containerdata + client_data
7479 testnodeid = 'testnodeidxxxxxxxxxx'
7480+fakefilepaths = {}
7481 
7482 
7483 class MockStat:
7484hunk ./src/allmydata/test/test_backends.py 41
7485     def __init__(self):
7486         self.st_mode = None
7487 
7488+
7489 class MockFilePath:
7490hunk ./src/allmydata/test/test_backends.py 43
7491-    def __init__(self, PathString):
7492-        self.PathName = PathString
7493-    def child(self, ChildString):
7494-        return MockFilePath(os.path.join(self.PathName, ChildString))
7495+    def __init__(self, pathstring):
7496+        self.pathname = pathstring
7497+        self.spawn = {}
7498+        self.antecedent = os.path.dirname(self.pathname)
7499+    def child(self, childstring):
7500+        arg2child = os.path.join(self.pathname, childstring)
7501+        print "arg2child: ", arg2child
7502+        if fakefilepaths.has_key(arg2child):
7503+            child = fakefilepaths[arg2child]
7504+            print "Should have gotten here."
7505+        else:
7506+            child = MockFilePath(arg2child)
7507+        return child
7508     def parent(self):
7509hunk ./src/allmydata/test/test_backends.py 57
7510-        return MockFilePath(os.path.dirname(self.PathName))
7511+        if fakefilepaths.has_key(self.antecedent):
7512+            parent = fakefilepaths[self.antecedent]
7513+        else:
7514+            parent = MockFilePath(self.antecedent)
7515+        return parent
7516+    def children(self):
7517+        childrenfromffs = frozenset(fakefilepaths.values())
7518+        return list(childrenfromffs | frozenset(self.spawn.values())) 
7519     def makedirs(self):
7520         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7521         pass
7522hunk ./src/allmydata/test/test_backends.py 72
7523         return True
7524     def remove(self):
7525         pass
7526-    def children(self):
7527-        return []
7528     def exists(self):
7529         return False
7530hunk ./src/allmydata/test/test_backends.py 74
7531-    def setContent(self, ContentString):
7532-        self.File = MockFile(ContentString)
7533     def open(self):
7534         return self.File.open()
7535hunk ./src/allmydata/test/test_backends.py 76
7536+    def setparents(self):
7537+        antecedents = []
7538+        def f(fps, antecedents):
7539+            newfps = os.path.split(fps)[0]
7540+            if newfps:
7541+                antecedents.append(newfps)
7542+                f(newfps, antecedents)
7543+        f(self.pathname, antecedents)
7544+        for fps in antecedents:
7545+            if not fakefilepaths.has_key(fps):
7546+                fakefilepaths[fps] = MockFilePath(fps)
7547+    def setContent(self, contentstring):
7548+        print "I am self.pathname: ", self.pathname
7549+        fakefilepaths[self.pathname] = self
7550+        self.File = MockFile(contentstring)
7551+        self.setparents()
7552+    def create(self):
7553+        fakefilepaths[self.pathname] = self
7554+        self.setparents()
7555+           
7556 
7557 class MockFile:
7558hunk ./src/allmydata/test/test_backends.py 98
7559-    def __init__(self, ContentString):
7560-        self.Contents = ContentString
7561+    def __init__(self, contentstring):
7562+        self.buffer = contentstring
7563+        self.pos = 0
7564     def open(self):
7565         return self
7566hunk ./src/allmydata/test/test_backends.py 103
7567+    def write(self, instring):
7568+        begin = self.pos
7569+        padlen = begin - len(self.buffer)
7570+        if padlen > 0:
7571+            self.buffer += '\x00' * padlen
7572+            end = self.pos + len(instring)
7573+            self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7574+            self.pos = end
7575     def close(self):
7576         pass
7577hunk ./src/allmydata/test/test_backends.py 113
7578-    def seek(self, position):
7579-        pass
7580-    def read(self, amount):
7581-        pass
7582+    def seek(self, pos):
7583+        self.pos = pos
7584+    def read(self, numberbytes):
7585+        return self.buffer[self.pos:self.pos+numberbytes]
7586+    def tell(self):
7587+        return self.pos
7588 
7589 
7590 class MockBCC:
7591hunk ./src/allmydata/test/test_backends.py 125
7592     def setServiceParent(self, Parent):
7593         pass
7594 
7595+
7596 class MockLCC:
7597     def setServiceParent(self, Parent):
7598         pass
7599hunk ./src/allmydata/test/test_backends.py 130
7600 
7601+
7602 class MockFiles(unittest.TestCase):
7603     """ I simulate a filesystem that the code under test can use. I flag the
7604     code under test if it reads or writes outside of its prescribed
7605hunk ./src/allmydata/test/test_backends.py 193
7606         return False
7607 
7608     def call_setContent(self, inputstring):
7609-        self.incomingsharefilecontents = StringIO(inputstring)
7610+        self.incomingsharefilecontents = TemporaryFile(inputstring)
7611 
7612     def tearDown(self):
7613         msg( "%s.tearDown()" % (self,))
7614hunk ./src/allmydata/test/test_backends.py 206
7615                      'cutoff_date' : None,
7616                      'sharetypes' : None}
7617 
7618+
7619 class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
7620     """ NullBackend is just for testing and executable documentation, so
7621     this test is actually a test of StorageServer in which we're using
7622hunk ./src/allmydata/test/test_backends.py 229
7623         self.failIf(mockopen.called)
7624         self.failIf(mockmkdir.called)
7625 
7626+
7627 class TestServerConstruction(MockFiles, ReallyEqualMixin):
7628     def test_create_server_fs_backend(self):
7629         """ This tests whether a server instance can be constructed with a
7630hunk ./src/allmydata/test/test_backends.py 238
7631 
7632         StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
7633 
7634+
7635 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
7636     """ This tests both the StorageServer and the DAS backend together. """
7637     
7638hunk ./src/allmydata/test/test_backends.py 262
7639         """
7640         mocktime.return_value = 0
7641         # Inspect incoming and fail unless it's empty.
7642-        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7643-        self.failUnlessReallyEqual(incomingset, frozenset())
7644+        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7645+        # self.failUnlessReallyEqual(incomingset, frozenset())
7646         
7647         # Populate incoming with the sharenum: 0.
7648         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7649hunk ./src/allmydata/test/test_backends.py 269
7650 
7651         # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
7652-        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7653+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7654         
7655         # Attempt to create a second share writer with the same sharenum.
7656         # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7657hunk ./src/allmydata/test/test_backends.py 274
7658 
7659+        # print bsa
7660         # Show that no sharewriter results from a remote_allocate_buckets
7661         # with the same si and sharenum, until BucketWriter.remote_close()
7662         # has been called.
7663hunk ./src/allmydata/test/test_backends.py 339
7664             self.failUnlessEqual(mode[0], 'r', mode)
7665             self.failUnless('b' in mode, mode)
7666 
7667-            return StringIO(share_data)
7668+            return TemporaryFile(share_data)
7669         mockopen.side_effect = call_open
7670 
7671         datalen = len(share_data)
7672}
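
The MockFileObject that jacp20 introduces is essentially a seekable string buffer: seek/tell track a position, write pads with NUL bytes when positioned past the end of the buffer, and read returns a slice. (In jacp20 the body of write is accidentally indented under the `if padlen > 0:` test, so writes that need no padding are dropped; the following patch fixes that indentation.) A standalone sketch of the same idea, with hypothetical names, and with read advancing the position the way a real file does:

class BufferFile(object):
    """A tiny in-memory file: a string buffer plus a cursor."""
    def __init__(self, content=""):
        self.buffer = content
        self.pos = 0
    def seek(self, pos):
        self.pos = pos
    def tell(self):
        return self.pos
    def write(self, data):
        pad = self.pos - len(self.buffer)
        if pad > 0:
            self.buffer += "\x00" * pad            # sparse write: fill the gap
        end = self.pos + len(data)
        self.buffer = self.buffer[:self.pos] + data + self.buffer[end:]
        self.pos = end
    def read(self, n):
        data = self.buffer[self.pos:self.pos + n]
        self.pos += len(data)
        return data

f = BufferFile()
f.seek(3)
f.write("abc")
assert f.buffer == "\x00\x00\x00abc"
f.seek(0)
assert f.read(6) == "\x00\x00\x00abc"
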
7673[Completed FilePath based test_write_and_read_share
7674wilcoxjg@gmail.com**20110729043830
7675 Ignore-this: 2c32adb041f0344394927cd3ce8f3b36
7676] {
7677hunk ./src/allmydata/storage/backends/das/core.py 38
7678 NUM_RE=re.compile("^[0-9]+$")
7679 
7680 def is_num(fp):
7681-    return NUM_RE.match(fp.basename)
7682+    return NUM_RE.match(fp.basename())
7683 
7684 class DASCore(Backend):
7685     implements(IStorageBackend)
7686hunk ./src/allmydata/storage/backends/das/core.py 52
7687     def _setup_storage(self, storedir, readonly, reserved_space):
7688         precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7689         self.storedir = storedir
7690-        print "self.storedir: ", self.storedir
7691         self.readonly = readonly
7692         self.reserved_space = int(reserved_space)
7693         self.sharedir = self.storedir.child("shares")
7694hunk ./src/allmydata/storage/backends/das/core.py 84
7695 
7696     def get_incoming_shnums(self, storageindex):
7697         """ Return a frozenset of the shnum (as ints) of incoming shares. """
7698-        print "self.incomingdir.children(): ", self.incomingdir.children()
7699-        print "self.incomingdir.pathname: ", self.incomingdir.pathname
7700         incomingthissi = si_si2dir(self.incomingdir, storageindex)
7701hunk ./src/allmydata/storage/backends/das/core.py 85
7702-        print "incomingthissi.children(): ", incomingthissi.children()
7703         try:
7704             childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
7705hunk ./src/allmydata/storage/backends/das/core.py 87
7706-            shnums = [ int(fp.basename) for fp in childfps ]
7707+            shnums = [ int(fp.basename()) for fp in childfps ]
7708             return frozenset(shnums)
7709         except UnlistableError:
7710             # There is no shares directory at all.
7711hunk ./src/allmydata/storage/backends/das/core.py 101
7712         try:
7713             for fp in finalstoragedir.children():
7714                 if is_num(fp):
7715-                    yield ImmutableShare(fp, storageindex)
7716+                    finalhome = finalstoragedir.child(str(fp.basename()))
7717+                    yield ImmutableShare(storageindex, fp, finalhome)
7718         except UnlistableError:
7719             # There is no shares directory at all.
7720             pass
7721hunk ./src/allmydata/storage/backends/das/core.py 115
7722     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
7723         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
7724         incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
7725-        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
7726+        immsh = ImmutableShare(storageindex, shnum, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
7727         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
7728         return bw
7729 
7730hunk ./src/allmydata/storage/backends/das/core.py 155
7731     LEASE_SIZE = struct.calcsize(">L32s32sL")
7732     sharetype = "immutable"
7733 
7734-    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
7735+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
7736         """ If max_size is not None then I won't allow more than
7737         max_size to be written to me. If create=True then max_size
7738         must not be None. """
7739hunk ./src/allmydata/storage/backends/das/core.py 180
7740             # if this does happen, the old < v1.3.0 server will still allow
7741             # clients to read the first part of the share.
7742             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
7743-            print "We got here right?"
7744             self._lease_offset = max_size + 0x0c
7745             self._num_leases = 0
7746         else:
7747hunk ./src/allmydata/storage/backends/das/core.py 183
7748-            f = open(self.finalhome, 'rb')
7749-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
7750-            f.close()
7751+            fh = self.finalhome.open(mode='rb')
7752+            try:
7753+                (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
7754+            finally:
7755+                fh.close()
7756             filesize = self.finalhome.getsize()
7757             if version != 1:
7758                 msg = "sharefile %s had version %d but we wanted 1" % \
7759hunk ./src/allmydata/storage/backends/das/core.py 227
7760         pass
7761         
7762     def stat(self):
7763-        return filepath.stat(self.finalhome)[stat.ST_SIZE]
7764+        return filepath.stat(self.finalhome.path)[stat.ST_SIZE]
7765 
7766     def get_shnum(self):
7767         return self.shnum
7768hunk ./src/allmydata/storage/backends/das/core.py 244
7769         actuallength = max(0, min(length, fsize-seekpos))
7770         if actuallength == 0:
7771             return ""
7772+        fh = self.finalhome.open(mode='rb')
7773         try:
7774hunk ./src/allmydata/storage/backends/das/core.py 246
7775-            fh = open(self.finalhome, 'rb')
7776             fh.seek(seekpos)
7777             sharedata = fh.read(actuallength)
7778         finally:
7779hunk ./src/allmydata/storage/backends/das/core.py 257
7780         precondition(offset >= 0, offset)
7781         if self._max_size is not None and offset+length > self._max_size:
7782             raise DataTooLargeError(self._max_size, offset, length)
7783-        f = open(self.incominghome, 'rb+')
7784-        real_offset = self._data_offset+offset
7785-        f.seek(real_offset)
7786-        assert f.tell() == real_offset
7787-        f.write(data)
7788-        f.close()
7789+        fh = self.incominghome.open(mode='rb+')
7790+        try:
7791+            real_offset = self._data_offset+offset
7792+            fh.seek(real_offset)
7793+            assert fh.tell() == real_offset
7794+            fh.write(data)
7795+        finally:
7796+            fh.close()
7797 
7798     def _write_lease_record(self, f, lease_number, lease_info):
7799         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7800hunk ./src/allmydata/storage/backends/das/core.py 299
7801 
7802     def get_leases(self):
7803         """Yields a LeaseInfo instance for all leases."""
7804-        f = open(self.finalhome, 'rb')
7805-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
7806-        f.seek(self._lease_offset)
7807+        fh = self.finalhome.open(mode='rb')
7808+        (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
7809+        fh.seek(self._lease_offset)
7810         for i in range(num_leases):
7811hunk ./src/allmydata/storage/backends/das/core.py 303
7812-            data = f.read(self.LEASE_SIZE)
7813+            data = fh.read(self.LEASE_SIZE)
7814             if data:
7815                 yield LeaseInfo().from_immutable_data(data)
7816 
7817hunk ./src/allmydata/storage/common.py 21
7818 
7819 def si_si2dir(startfp, storageindex):
7820     sia = si_b2a(storageindex)
7821-    print "I got here right?  sia =", sia
7822-    print "What the fuck is startfp? ", startfp
7823-    print "What the fuck is startfp.pathname? ", startfp.pathname
7824     newfp = startfp.child(sia[:2])
7825hunk ./src/allmydata/storage/common.py 22
7826-    print "Did I get here?"
7827     return newfp.child(sia)
7828hunk ./src/allmydata/test/test_backends.py 1
7829-import os
7830+import os, stat
7831 from twisted.trial import unittest
7832 from twisted.python.filepath import FilePath
7833 from allmydata.util.log import msg
7834hunk ./src/allmydata/test/test_backends.py 44
7835 
7836 class MockFilePath:
7837     def __init__(self, pathstring):
7838-        self.pathname = pathstring
7839+        self.path = pathstring
7840         self.spawn = {}
7841hunk ./src/allmydata/test/test_backends.py 46
7842-        self.antecedent = os.path.dirname(self.pathname)
7843+        self.antecedent = os.path.dirname(self.path)
7844     def child(self, childstring):
7845hunk ./src/allmydata/test/test_backends.py 48
7846-        arg2child = os.path.join(self.pathname, childstring)
7847-        print "arg2child: ", arg2child
7848+        arg2child = os.path.join(self.path, childstring)
7849         if fakefilepaths.has_key(arg2child):
7850             child = fakefilepaths[arg2child]
7851hunk ./src/allmydata/test/test_backends.py 51
7852-            print "Should have gotten here."
7853         else:
7854             child = MockFilePath(arg2child)
7855         return child
7856hunk ./src/allmydata/test/test_backends.py 61
7857             parent = MockFilePath(self.antecedent)
7858         return parent
7859     def children(self):
7860-        childrenfromffs = frozenset(fakefilepaths.values())
7861+        childrenfromffs = [ffp for ffp in fakefilepaths.values() if ffp.path.startswith(self.path)]
7862+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
7863+        childrenfromffs = frozenset(childrenfromffs)
7864         return list(childrenfromffs | frozenset(self.spawn.values())) 
7865     def makedirs(self):
7866         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7867hunk ./src/allmydata/test/test_backends.py 74
7868         pass
7869     def exists(self):
7870         return False
7871-    def open(self):
7872-        return self.File.open()
7873+    def open(self, mode='r'):
7874+        return self.fileobject.open(mode)
7875     def setparents(self):
7876         antecedents = []
7877         def f(fps, antecedents):
7878hunk ./src/allmydata/test/test_backends.py 83
7879             if newfps:
7880                 antecedents.append(newfps)
7881                 f(newfps, antecedents)
7882-        f(self.pathname, antecedents)
7883+        f(self.path, antecedents)
7884         for fps in antecedents:
7885             if not fakefilepaths.has_key(fps):
7886                 fakefilepaths[fps] = MockFilePath(fps)
7887hunk ./src/allmydata/test/test_backends.py 88
7888     def setContent(self, contentstring):
7889-        print "I am self.pathname: ", self.pathname
7890-        fakefilepaths[self.pathname] = self
7891-        self.File = MockFile(contentstring)
7892+        fakefilepaths[self.path] = self
7893+        self.fileobject = MockFileObject(contentstring)
7894         self.setparents()
7895     def create(self):
7896hunk ./src/allmydata/test/test_backends.py 92
7897-        fakefilepaths[self.pathname] = self
7898+        fakefilepaths[self.path] = self
7899         self.setparents()
7900hunk ./src/allmydata/test/test_backends.py 94
7901-           
7902+    def basename(self):
7903+        return os.path.split(self.path)[1]
7904+    def moveTo(self, newffp):
7905+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from filepath.moveTo.
7906+        if fakefilepaths.has_key(newffp.path):
7907+            raise OSError
7908+        else:
7909+            fakefilepaths[newffp.path] = self
7910+            self.path = newffp.path
7911+    def getsize(self):
7912+        return self.fileobject.getsize()
7913 
7914hunk ./src/allmydata/test/test_backends.py 106
7915-class MockFile:
7916+class MockFileObject:
7917     def __init__(self, contentstring):
7918         self.buffer = contentstring
7919         self.pos = 0
7920hunk ./src/allmydata/test/test_backends.py 110
7921-    def open(self):
7922+    def open(self, mode='r'):
7923         return self
7924     def write(self, instring):
7925         begin = self.pos
7926hunk ./src/allmydata/test/test_backends.py 117
7927         padlen = begin - len(self.buffer)
7928         if padlen > 0:
7929             self.buffer += '\x00' * padlen
7930-            end = self.pos + len(instring)
7931-            self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7932-            self.pos = end
7933+        end = self.pos + len(instring)
7934+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7935+        self.pos = end
7936     def close(self):
7937hunk ./src/allmydata/test/test_backends.py 121
7938-        pass
7939+        self.pos = 0
7940     def seek(self, pos):
7941         self.pos = pos
7942     def read(self, numberbytes):
7943hunk ./src/allmydata/test/test_backends.py 128
7944         return self.buffer[self.pos:self.pos+numberbytes]
7945     def tell(self):
7946         return self.pos
7947-
7948+    def size(self):
7949+        # XXX This method (a) does not exist on a real file object, and (b) is a rough stand-in for the piece of filepath.stat that the code under test uses.
7950+        # XXX Eventually this should become a getsize method; that change needs discussion first.
7951+        return {stat.ST_SIZE:len(self.buffer)}
7952+    def getsize(self):
7953+        return len(self.buffer)
7954 
7955 class MockBCC:
7956     def setServiceParent(self, Parent):
7957hunk ./src/allmydata/test/test_backends.py 177
7958         GetSpace = self.get_available_space.__enter__()
7959         GetSpace.side_effect = self.call_get_available_space
7960 
7961+        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
7962+        getsize = self.statforsize.__enter__()
7963+        getsize.side_effect = self.call_statforsize
7964+
7965+    def call_statforsize(self, fakefpname):
7966+        return fakefilepaths[fakefpname].fileobject.size()
7967+
7968     def call_FakeBCC(self, StateFile):
7969         return MockBCC()
7970 
7971hunk ./src/allmydata/test/test_backends.py 220
7972         msg( "%s.tearDown()" % (self,))
7973         FakePath = self.FilePathFake.__exit__()       
7974         FakeBCC = self.BCountingCrawler.__exit__()
7975+        getsize = self.statforsize.__exit__()
7976 
7977 expiration_policy = {'enabled' : False,
7978                      'mode' : 'age',
7979hunk ./src/allmydata/test/test_backends.py 284
7980         """
7981         mocktime.return_value = 0
7982         # Inspect incoming and fail unless it's empty.
7983-        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7984-        # self.failUnlessReallyEqual(incomingset, frozenset())
7985+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7986+        self.failUnlessReallyEqual(incomingset, frozenset())
7987         
7988         # Populate incoming with the sharenum: 0.
7989         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7990hunk ./src/allmydata/test/test_backends.py 294
7991         self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7992         
7993         # Attempt to create a second share writer with the same sharenum.
7994-        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7995+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7996 
7997hunk ./src/allmydata/test/test_backends.py 296
7998-        # print bsa
7999         # Show that no sharewriter results from a remote_allocate_buckets
8000         # with the same si and sharenum, until BucketWriter.remote_close()
8001         # has been called.
8002hunk ./src/allmydata/test/test_backends.py 299
8003-        # self.failIf(bsa)
8004+        self.failIf(bsa)
8005 
8006         # Test allocated size.
8007hunk ./src/allmydata/test/test_backends.py 302
8008-        # spaceint = self.ss.allocated_size()
8009-        # self.failUnlessReallyEqual(spaceint, 1)
8010+        spaceint = self.ss.allocated_size()
8011+        self.failUnlessReallyEqual(spaceint, 1)
8012 
8013         # Write 'a' to shnum 0. Only tested together with close and read.
8014hunk ./src/allmydata/test/test_backends.py 306
8015-        # bs[0].remote_write(0, 'a')
8016+        bs[0].remote_write(0, 'a')
8017         
8018         # Preclose: Inspect final, failUnless nothing there.
8019hunk ./src/allmydata/test/test_backends.py 309
8020-        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
8021-        # bs[0].remote_close()
8022+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
8023+        bs[0].remote_close()
8024 
8025         # Postclose: (Omnibus) failUnless written data is in final.
8026hunk ./src/allmydata/test/test_backends.py 313
8027-        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
8028-        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
8029-        # contents = sharesinfinal[0].read_share_data(0, 73)
8030-        # self.failUnlessReallyEqual(contents, client_data)
8031+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
8032+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
8033+        contents = sharesinfinal[0].read_share_data(0, 73)
8034+        self.failUnlessReallyEqual(contents, client_data)
8035 
8036         # Exercise the case that the share we're asking to allocate is
8037         # already (completely) uploaded.
8038hunk ./src/allmydata/test/test_backends.py 320
8039-        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8040+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8041         
8042     @mock.patch('time.time')
8043     @mock.patch('allmydata.util.fileutil.get_available_space')
8044}
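
By this patch the tests have settled on a different mocking strategy: rather than intercepting os, os.path, and __builtin__.open globally, they patch the FilePath name inside the module under test (allmydata.storage.backends.das.core) with a fake class, so only the code being exercised sees the fake filesystem. A minimal sketch of that "patch it where it is looked up" technique follows; it assumes this branch's allmydata source tree (where backends/das/core.py exists) is importable, and FakeFilePath is a stand-in, not the test's MockFilePath.

import mock

class FakeFilePath(object):
    def __init__(self, path):
        self.path = path
    def child(self, name):
        return FakeFilePath(self.path + "/" + name)

# Replace the FilePath name that das/core.py imported, for the duration of a test.
patcher = mock.patch('allmydata.storage.backends.das.core.FilePath',
                     new=FakeFilePath)
patcher.__enter__()      # these tests use __enter__/__exit__; start()/stop() is equivalent
try:
    from allmydata.storage.backends.das import core
    assert core.FilePath is FakeFilePath
finally:
    patcher.__exit__()   # the real FilePath is restored here
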
8045[TestServerAndFSBackend.test_read_old_share passes
8046wilcoxjg@gmail.com**20110729235356
8047 Ignore-this: 574636c959ea58d4609bea2428ff51d3
8048] {
8049hunk ./src/allmydata/storage/backends/das/core.py 37
8050 # $SHARENUM matches this regex:
8051 NUM_RE=re.compile("^[0-9]+$")
8052 
8053-def is_num(fp):
8054-    return NUM_RE.match(fp.basename())
8055-
8056 class DASCore(Backend):
8057     implements(IStorageBackend)
8058     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
8059hunk ./src/allmydata/storage/backends/das/core.py 97
8060         finalstoragedir = si_si2dir(self.sharedir, storageindex)
8061         try:
8062             for fp in finalstoragedir.children():
8063-                if is_num(fp):
8064-                    finalhome = finalstoragedir.child(str(fp.basename()))
8065-                    yield ImmutableShare(storageindex, fp, finalhome)
8066+                fpshnumstr = fp.basename()
8067+                if NUM_RE.match(fpshnumstr):
8068+                    finalhome = finalstoragedir.child(fpshnumstr)
8069+                    yield ImmutableShare(storageindex, fpshnumstr, finalhome)
8070         except UnlistableError:
8071             # There is no shares directory at all.
8072             pass
8073hunk ./src/allmydata/test/test_backends.py 15
8074 from allmydata.storage.server import StorageServer
8075 from allmydata.storage.backends.das.core import DASCore
8076 from allmydata.storage.backends.null.core import NullCore
8077+from allmydata.storage.common import si_si2dir
8078 
8079 # The following share file content was generated with
8080 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
8081hunk ./src/allmydata/test/test_backends.py 155
8082     def setUp(self):
8083         # Make patcher, patch, and make effects for fs using functions.
8084         msg( "%s.setUp()" % (self,))
8085+        fakefilepaths = {}
8086         self.storedir = MockFilePath('teststoredir')
8087         self.basedir = self.storedir.child('shares')
8088         self.baseincdir = self.basedir.child('incoming')
8089hunk ./src/allmydata/test/test_backends.py 223
8090         FakePath = self.FilePathFake.__exit__()       
8091         FakeBCC = self.BCountingCrawler.__exit__()
8092         getsize = self.statforsize.__exit__()
8093+        fakefilepaths = {}
8094 
8095 expiration_policy = {'enabled' : False,
8096                      'mode' : 'age',
8097hunk ./src/allmydata/test/test_backends.py 334
8098             return 0
8099 
8100         mockget_available_space.side_effect = call_get_available_space
8101-       
8102-       
8103         alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8104 
8105hunk ./src/allmydata/test/test_backends.py 336
8106-    @mock.patch('os.path.exists')
8107-    @mock.patch('os.path.getsize')
8108-    @mock.patch('__builtin__.open')
8109-    @mock.patch('os.listdir')
8110-    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
8111+    def test_read_old_share(self):
8112         """ This tests whether the code correctly finds and reads
8113         shares written out by old (Tahoe-LAFS <= v1.8.2)
8114         servers. There is a similar test in test_download, but that one
8115hunk ./src/allmydata/test/test_backends.py 344
8116         stack of code. This one is for exercising just the
8117         StorageServer object. """
8118 
8119-        def call_listdir(dirname):
8120-            precondition(isinstance(dirname, basestring), dirname)
8121-            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
8122-            return ['0']
8123-
8124-        mocklistdir.side_effect = call_listdir
8125-
8126-        def call_open(fname, mode):
8127-            precondition(isinstance(fname, basestring), fname)
8128-            self.failUnlessReallyEqual(fname, sharefname)
8129-            self.failUnlessEqual(mode[0], 'r', mode)
8130-            self.failUnless('b' in mode, mode)
8131-
8132-            return TemporaryFile(share_data)
8133-        mockopen.side_effect = call_open
8134-
8135         datalen = len(share_data)
8136hunk ./src/allmydata/test/test_backends.py 345
8137-        def call_getsize(fname):
8138-            precondition(isinstance(fname, basestring), fname)
8139-            self.failUnlessReallyEqual(fname, sharefname)
8140-            return datalen
8141-        mockgetsize.side_effect = call_getsize
8142-
8143-        def call_exists(fname):
8144-            precondition(isinstance(fname, basestring), fname)
8145-            self.failUnlessReallyEqual(fname, sharefname)
8146-            return True
8147-        mockexists.side_effect = call_exists
8148+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(3))
8149+        finalhome.setContent(share_data)
8150 
8151         # Now begin the test.
8152         bs = self.ss.remote_get_buckets('teststorage_index')
8153hunk ./src/allmydata/test/test_backends.py 352
8154 
8155         self.failUnlessEqual(len(bs), 1)
8156-        b = bs[0]
8157+        b = bs['3']
8158         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
8159         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
8160         # If you try to read past the end you get the as much data as is there.
8161}
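
test_read_old_share now seeds the fake filesystem directly through si_si2dir, which encodes the on-disk share layout: shares/<first two characters of base32(storage index)>/<base32(storage index)>/<share number>. A self-contained illustration of that mapping; base64.b32encode is used here only as a stand-in for allmydata's si_b2a/base32 helper, which happens to agree with it on this input.

import base64

def si_to_dir_parts(storageindex):
    # ~ si_b2a: lowercase base32 without padding
    sia = base64.b32encode(storageindex).rstrip("=").lower()
    return sia[:2], sia

prefix, full = si_to_dir_parts("teststorage_index")
assert prefix == "or"
assert full == "orsxg5dtorxxeylhmvpws3temv4a"
# so share 3 for this storage index ends up at shares/or/orsxg5dtorxxeylhmvpws3temv4a/3
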
8162[TestServerAndFSBackend passes en total!
8163wilcoxjg@gmail.com**20110730010025
8164 Ignore-this: fdc92e08674af1da5708c30557ac5860
8165] {
8166hunk ./src/allmydata/storage/backends/das/core.py 83
8167         """ Return a frozenset of the shnum (as ints) of incoming shares. """
8168         incomingthissi = si_si2dir(self.incomingdir, storageindex)
8169         try:
8170-            childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
8171+            childfps = [ fp for fp in incomingthissi.children() if NUM_RE.match(fp.basename()) ]
8172             shnums = [ int(fp.basename()) for fp in childfps ]
8173             return frozenset(shnums)
8174         except UnlistableError:
8175hunk ./src/allmydata/test/test_backends.py 35
8176     cancelsecret + expirationtime + nextlease
8177 share_data = containerdata + client_data
8178 testnodeid = 'testnodeidxxxxxxxxxx'
8179-fakefilepaths = {}
8180 
8181 
8182hunk ./src/allmydata/test/test_backends.py 37
8183+class MockFiles(unittest.TestCase):
8184+    """ I simulate a filesystem that the code under test can use. I flag the
8185+    code under test if it reads or writes outside of its prescribed
8186+    subtree. I simulate just the parts of the filesystem that the current
8187+    implementation of DAS backend needs. """
8188+
8189+    def setUp(self):
8190+        # Make patcher, patch, and make effects for fs using functions.
8191+        msg( "%s.setUp()" % (self,))
8192+        self.fakefilepaths = {}
8193+        self.storedir = MockFilePath('teststoredir', self.fakefilepaths)
8194+        self.basedir = self.storedir.child('shares')
8195+        self.baseincdir = self.basedir.child('incoming')
8196+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8197+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8198+        self.shareincomingname = self.sharedirincomingname.child('0')
8199+        self.sharefinalname = self.sharedirfinalname.child('0')
8200+
8201+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
8202+        FakePath = self.FilePathFake.__enter__()
8203+
8204+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
8205+        FakeBCC = self.BCountingCrawler.__enter__()
8206+        FakeBCC.side_effect = self.call_FakeBCC
8207+
8208+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
8209+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
8210+        FakeLCC.side_effect = self.call_FakeLCC
8211+
8212+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
8213+        GetSpace = self.get_available_space.__enter__()
8214+        GetSpace.side_effect = self.call_get_available_space
8215+
8216+        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
8217+        getsize = self.statforsize.__enter__()
8218+        getsize.side_effect = self.call_statforsize
8219+
8220+    def call_statforsize(self, fakefpname):
8221+        return self.fakefilepaths[fakefpname].fileobject.size()
8222+
8223+    def call_FakeBCC(self, StateFile):
8224+        return MockBCC()
8225+
8226+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
8227+        return MockLCC()
8228+
8229+    def call_listdir(self, fname):
8230+        fnamefp = FilePath(fname)
8231+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8232+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8233+
8234+    def call_stat(self, fname):
8235+        assert isinstance(fname, basestring), fname
8236+        fnamefp = FilePath(fname)
8237+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8238+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8239+        msg("%s.call_stat(%s)" % (self, fname,))
8240+        mstat = MockStat()
8241+        mstat.st_mode = 16893 # a directory
8242+        return mstat
8243+
8244+    def call_get_available_space(self, storedir, reservedspace):
8245+        # The input vector has an input size of 85.
8246+        return 85 - reservedspace
8247+
8248+    def call_exists(self):
8249+        # I'm only called in the ImmutableShareFile constructor.
8250+        return False
8251+
8252+    def call_setContent(self, inputstring):
8253+        self.incomingsharefilecontents = TemporaryFile(inputstring)
8254+
8255+    def tearDown(self):
8256+        msg( "%s.tearDown()" % (self,))
8257+        FakePath = self.FilePathFake.__exit__()       
8258+        FakeBCC = self.BCountingCrawler.__exit__()
8259+        getsize = self.statforsize.__exit__()
8260+        self.fakefilepaths = {}
8261+
8262 class MockStat:
8263     def __init__(self):
8264         self.st_mode = None
8265hunk ./src/allmydata/test/test_backends.py 122
8266 
8267 
8268 class MockFilePath:
8269-    def __init__(self, pathstring):
8270+    def __init__(self, pathstring, ffpathsenvironment):
8271+        self.fakefilepaths = ffpathsenvironment
8272         self.path = pathstring
8273         self.spawn = {}
8274         self.antecedent = os.path.dirname(self.path)
8275hunk ./src/allmydata/test/test_backends.py 129
8276     def child(self, childstring):
8277         arg2child = os.path.join(self.path, childstring)
8278-        if fakefilepaths.has_key(arg2child):
8279-            child = fakefilepaths[arg2child]
8280+        if self.fakefilepaths.has_key(arg2child):
8281+            child = self.fakefilepaths[arg2child]
8282         else:
8283hunk ./src/allmydata/test/test_backends.py 132
8284-            child = MockFilePath(arg2child)
8285+            child = MockFilePath(arg2child, self.fakefilepaths)
8286         return child
8287     def parent(self):
8288hunk ./src/allmydata/test/test_backends.py 135
8289-        if fakefilepaths.has_key(self.antecedent):
8290-            parent = fakefilepaths[self.antecedent]
8291+        if self.fakefilepaths.has_key(self.antecedent):
8292+            parent = self.fakefilepaths[self.antecedent]
8293         else:
8294hunk ./src/allmydata/test/test_backends.py 138
8295-            parent = MockFilePath(self.antecedent)
8296+            parent = MockFilePath(self.antecedent, self.fakefilepaths)
8297         return parent
8298     def children(self):
8299hunk ./src/allmydata/test/test_backends.py 141
8300-        childrenfromffs = [ffp for ffp in fakefilepaths.values() if ffp.path.startswith(self.path)]
8301+        childrenfromffs = [ffp for ffp in self.fakefilepaths.values() if ffp.path.startswith(self.path)]
8302         childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
8303         childrenfromffs = frozenset(childrenfromffs)
8304         return list(childrenfromffs | frozenset(self.spawn.values())) 
8305hunk ./src/allmydata/test/test_backends.py 165
8306                 f(newfps, antecedents)
8307         f(self.path, antecedents)
8308         for fps in antecedents:
8309-            if not fakefilepaths.has_key(fps):
8310-                fakefilepaths[fps] = MockFilePath(fps)
8311+            if not self.fakefilepaths.has_key(fps):
8312+                self.fakefilepaths[fps] = MockFilePath(fps, self.fakefilepaths)
8313     def setContent(self, contentstring):
8314hunk ./src/allmydata/test/test_backends.py 168
8315-        fakefilepaths[self.path] = self
8316+        self.fakefilepaths[self.path] = self
8317         self.fileobject = MockFileObject(contentstring)
8318         self.setparents()
8319     def create(self):
8320hunk ./src/allmydata/test/test_backends.py 172
8321-        fakefilepaths[self.path] = self
8322+        self.fakefilepaths[self.path] = self
8323         self.setparents()
8324     def basename(self):
8325         return os.path.split(self.path)[1]
8326hunk ./src/allmydata/test/test_backends.py 178
8327     def moveTo(self, newffp):
8328         #  XXX Makes no distinction between file and directory arguments, this is a deviation from filepath.moveTo
8329-        if fakefilepaths.has_key(newffp.path):
8330+        if self.fakefilepaths.has_key(newffp.path):
8331             raise OSError
8332         else:
8333hunk ./src/allmydata/test/test_backends.py 181
8334-            fakefilepaths[newffp.path] = self
8335+            self.fakefilepaths[newffp.path] = self
8336             self.path = newffp.path
8337     def getsize(self):
8338         return self.fileobject.getsize()
8339hunk ./src/allmydata/test/test_backends.py 225
8340         pass
8341 
8342 
8343-class MockFiles(unittest.TestCase):
8344-    """ I simulate a filesystem that the code under test can use. I flag the
8345-    code under test if it reads or writes outside of its prescribed
8346-    subtree. I simulate just the parts of the filesystem that the current
8347-    implementation of DAS backend needs. """
8348-
8349-    def setUp(self):
8350-        # Make patcher, patch, and make effects for fs using functions.
8351-        msg( "%s.setUp()" % (self,))
8352-        fakefilepaths = {}
8353-        self.storedir = MockFilePath('teststoredir')
8354-        self.basedir = self.storedir.child('shares')
8355-        self.baseincdir = self.basedir.child('incoming')
8356-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8357-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8358-        self.shareincomingname = self.sharedirincomingname.child('0')
8359-        self.sharefinalname = self.sharedirfinalname.child('0')
8360-
8361-        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
8362-        FakePath = self.FilePathFake.__enter__()
8363-
8364-        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
8365-        FakeBCC = self.BCountingCrawler.__enter__()
8366-        FakeBCC.side_effect = self.call_FakeBCC
8367-
8368-        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
8369-        FakeLCC = self.LeaseCheckingCrawler.__enter__()
8370-        FakeLCC.side_effect = self.call_FakeLCC
8371-
8372-        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
8373-        GetSpace = self.get_available_space.__enter__()
8374-        GetSpace.side_effect = self.call_get_available_space
8375-
8376-        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
8377-        getsize = self.statforsize.__enter__()
8378-        getsize.side_effect = self.call_statforsize
8379-
8380-    def call_statforsize(self, fakefpname):
8381-        return fakefilepaths[fakefpname].fileobject.size()
8382-
8383-    def call_FakeBCC(self, StateFile):
8384-        return MockBCC()
8385-
8386-    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
8387-        return MockLCC()
8388 
8389hunk ./src/allmydata/test/test_backends.py 226
8390-    def call_listdir(self, fname):
8391-        fnamefp = FilePath(fname)
8392-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8393-                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8394-
8395-    def call_stat(self, fname):
8396-        assert isinstance(fname, basestring), fname
8397-        fnamefp = FilePath(fname)
8398-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8399-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8400-        msg("%s.call_stat(%s)" % (self, fname,))
8401-        mstat = MockStat()
8402-        mstat.st_mode = 16893 # a directory
8403-        return mstat
8404-
8405-    def call_get_available_space(self, storedir, reservedspace):
8406-        # The input vector has an input size of 85.
8407-        return 85 - reservedspace
8408-
8409-    def call_exists(self):
8410-        # I'm only called in the ImmutableShareFile constructor.
8411-        return False
8412-
8413-    def call_setContent(self, inputstring):
8414-        self.incomingsharefilecontents = TemporaryFile(inputstring)
8415-
8416-    def tearDown(self):
8417-        msg( "%s.tearDown()" % (self,))
8418-        FakePath = self.FilePathFake.__exit__()       
8419-        FakeBCC = self.BCountingCrawler.__exit__()
8420-        getsize = self.statforsize.__exit__()
8421-        fakefilepaths = {}
8422 
8423 expiration_policy = {'enabled' : False,
8424                      'mode' : 'age',
8425}
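The MockFiles.setUp/tearDown added above drives each mock.patch patcher by calling __enter__ and __exit__ by hand; a minimal sketch of the same pattern using the patcher's equivalent start()/stop() methods (the patch target is the one used above, while the class and attribute names here are made up):

import mock
from twisted.trial import unittest

class PatcherPatternExample(unittest.TestCase):
    def setUp(self):
        # Start the patcher explicitly; the replacement stays in effect
        # until stop() (or __exit__) is called in tearDown.
        self.space_patcher = mock.patch('allmydata.util.fileutil.get_available_space')
        mocked_get_space = self.space_patcher.start()
        # Mirror call_get_available_space above: pretend 85 bytes exist in total.
        mocked_get_space.side_effect = lambda storedir, reserved: 85 - reserved

    def tearDown(self):
        # Undo the patch so later tests see the real function again.
        self.space_patcher.stop()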
8426[current test_backend tests pass
8427wilcoxjg@gmail.com**20110730034159
8428 Ignore-this: 4bcf2566404f7b38c464512b82e8b722
8429] {
8430hunk ./src/allmydata/test/test_backends.py 7
8431 from allmydata.util.log import msg
8432 from tempfile import TemporaryFile
8433 from allmydata.test.common_util import ReallyEqualMixin
8434-from allmydata.util.assertutil import _assert
8435 import mock
8436hunk ./src/allmydata/test/test_backends.py 8
8437-from mock import Mock
8438-
8439 # This is the code that we're going to be testing.
8440 from allmydata.storage.server import StorageServer
8441 from allmydata.storage.backends.das.core import DASCore
8442hunk ./src/allmydata/test/test_backends.py 13
8443 from allmydata.storage.backends.null.core import NullCore
8444 from allmydata.storage.common import si_si2dir
8445-
8446 # The following share file content was generated with
8447 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
8448 # with share data == 'a'. The total size of this input
8449hunk ./src/allmydata/test/test_backends.py 31
8450     cancelsecret + expirationtime + nextlease
8451 share_data = containerdata + client_data
8452 testnodeid = 'testnodeidxxxxxxxxxx'
8453+expiration_policy = {'enabled' : False,
8454+                     'mode' : 'age',
8455+                     'override_lease_duration' : None,
8456+                     'cutoff_date' : None,
8457+                     'sharetypes' : None}
8458 
8459 
8460 class MockFiles(unittest.TestCase):
8461hunk ./src/allmydata/test/test_backends.py 75
8462         getsize = self.statforsize.__enter__()
8463         getsize.side_effect = self.call_statforsize
8464 
8465-    def call_statforsize(self, fakefpname):
8466-        return self.fakefilepaths[fakefpname].fileobject.size()
8467-
8468     def call_FakeBCC(self, StateFile):
8469         return MockBCC()
8470 
8471hunk ./src/allmydata/test/test_backends.py 81
8472     def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
8473         return MockLCC()
8474 
8475-    def call_listdir(self, fname):
8476-        fnamefp = FilePath(fname)
8477-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8478-                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8479-
8480-    def call_stat(self, fname):
8481-        assert isinstance(fname, basestring), fname
8482-        fnamefp = FilePath(fname)
8483-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8484-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8485-        msg("%s.call_stat(%s)" % (self, fname,))
8486-        mstat = MockStat()
8487-        mstat.st_mode = 16893 # a directory
8488-        return mstat
8489-
8490     def call_get_available_space(self, storedir, reservedspace):
8491         # The input vector has an input size of 85.
8492         return 85 - reservedspace
8493hunk ./src/allmydata/test/test_backends.py 85
8494 
8495-    def call_exists(self):
8496-        # I'm only called in the ImmutableShareFile constructor.
8497-        return False
8498-
8499-    def call_setContent(self, inputstring):
8500-        self.incomingsharefilecontents = TemporaryFile(inputstring)
8501+    def call_statforsize(self, fakefpname):
8502+        return self.fakefilepaths[fakefpname].fileobject.size()
8503 
8504     def tearDown(self):
8505         msg( "%s.tearDown()" % (self,))
8506hunk ./src/allmydata/test/test_backends.py 91
8507         FakePath = self.FilePathFake.__exit__()       
8508-        FakeBCC = self.BCountingCrawler.__exit__()
8509-        getsize = self.statforsize.__exit__()
8510         self.fakefilepaths = {}
8511 
8512hunk ./src/allmydata/test/test_backends.py 93
8513-class MockStat:
8514-    def __init__(self):
8515-        self.st_mode = None
8516-
8517 
8518 class MockFilePath:
8519     def __init__(self, pathstring, ffpathsenvironment):
8520hunk ./src/allmydata/test/test_backends.py 128
8521     def exists(self):
8522         return False
8523     def open(self, mode='r'):
8524+        # XXX Makes no use of mode.
8525         return self.fileobject.open(mode)
8526hunk ./src/allmydata/test/test_backends.py 130
8527-    def setparents(self):
8528+    def parents(self):
8529         antecedents = []
8530         def f(fps, antecedents):
8531             newfps = os.path.split(fps)[0]
8532hunk ./src/allmydata/test/test_backends.py 138
8533                 antecedents.append(newfps)
8534                 f(newfps, antecedents)
8535         f(self.path, antecedents)
8536-        for fps in antecedents:
8537+        return antecedents
8538+    def setparents(self):
8539+        for fps in self.parents():
8540             if not self.fakefilepaths.has_key(fps):
8541                 self.fakefilepaths[fps] = MockFilePath(fps, self.fakefilepaths)
8542     def setContent(self, contentstring):
8543hunk ./src/allmydata/test/test_backends.py 187
8544     def size(self):
8545         # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat!
8546         # XXX Finally we shall hopefully use a getsize method soon, must consult first though.
8547+        # Hmmm...  perhaps we need to sometimes stat the address when there's not a mockfileobject present?
8548         return {stat.ST_SIZE:len(self.buffer)}
8549     def getsize(self):
8550         return len(self.buffer)
8551hunk ./src/allmydata/test/test_backends.py 202
8552         pass
8553 
8554 
8555-
8556-
8557-expiration_policy = {'enabled' : False,
8558-                     'mode' : 'age',
8559-                     'override_lease_duration' : None,
8560-                     'cutoff_date' : None,
8561-                     'sharetypes' : None}
8562-
8563-
8564 class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
8565     """ NullBackend is just for testing and executable documentation, so
8566     this test is actually a test of StorageServer in which we're using
8567hunk ./src/allmydata/test/test_backends.py 314
8568         stack of code. This one is for exercising just the
8569         StorageServer object. """
8570 
8571+        # Construct a file with the appropriate contents in the mockfilesystem.
8572         datalen = len(share_data)
8573         finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(3))
8574         finalhome.setContent(share_data)
8575hunk ./src/allmydata/test/test_backends.py 330
8576         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
8577         # If you start reading past the end of the file you get the empty string.
8578         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
8579-
8580-
8581-class TestBackendConstruction(MockFiles, ReallyEqualMixin):
8582-    def test_create_fs_backend(self):
8583-        """ This tests whether a file system backend instance can be
8584-        constructed. To pass the test, it has to use the
8585-        filesystem in only the prescribed ways. """
8586-
8587-        # Now begin the test.
8588-        DASCore(self.storedir, expiration_policy)
8589}
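The parents()/setparents() split introduced above walks a path upward with os.path.split; a standalone sketch of that ancestor computation (an illustrative rewrite of the same idea, not the exact code recorded in the patch):

import os

def ancestor_paths(path):
    # Repeatedly strip the last component and collect each prefix until
    # os.path.split stops producing anything new.
    antecedents = []
    head = os.path.split(path)[0]
    while head and head != path:
        antecedents.append(head)
        path, head = head, os.path.split(head)[0]
    return antecedents

# ancestor_paths('teststoredir/shares/incoming/or') ==
#     ['teststoredir/shares/incoming', 'teststoredir/shares', 'teststoredir']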
8590[jacp21Zancas20110801.darcs.patch
8591wilcoxjg@gmail.com**20110801094603
8592 Ignore-this: a2cd3779f91b64e707002cafd2b05cb8
8593] {
8594hunk ./src/allmydata/storage/backends/das/core.py 84
8595         incomingthissi = si_si2dir(self.incomingdir, storageindex)
8596         try:
8597             childfps = [ fp for fp in incomingthissi.children() if NUM_RE.match(fp.basename()) ]
8598-            shnums = [ int(fp.basename()) for fp in childfps ]
8599+            shnums = [ int(fp.basename()) for fp in childfps if fp.exists()]
8600             return frozenset(shnums)
8601         except UnlistableError:
8602             # There is no shares directory at all.
8603hunk ./src/allmydata/test/test_backends.py 3
8604 import os, stat
8605 from twisted.trial import unittest
8606-from twisted.python.filepath import FilePath
8607 from allmydata.util.log import msg
8608hunk ./src/allmydata/test/test_backends.py 4
8609-from tempfile import TemporaryFile
8610 from allmydata.test.common_util import ReallyEqualMixin
8611 import mock
8612 # This is the code that we're going to be testing.
8613hunk ./src/allmydata/test/test_backends.py 36
8614                      'sharetypes' : None}
8615 
8616 
8617-class MockFiles(unittest.TestCase):
8618-    """ I simulate a filesystem that the code under test can use. I flag the
8619-    code under test if it reads or writes outside of its prescribed
8620-    subtree. I simulate just the parts of the filesystem that the current
8621-    implementation of DAS backend needs. """
8622-
8623+class MockFileSystem(unittest.TestCase):
8624+    """ I simulate a filesystem that the code under test can use. I simulate
8625+    just the parts of the filesystem that the current implementation of DAS
8626+    backend needs. """
8627     def setUp(self):
8628         # Make patcher, patch, and make effects for fs using functions.
8629         msg( "%s.setUp()" % (self,))
8630hunk ./src/allmydata/test/test_backends.py 43
8631-        self.fakefilepaths = {}
8632-        self.storedir = MockFilePath('teststoredir', self.fakefilepaths)
8633+        self.mockedfilepaths = {}
8634+        #keys are pathnames, values are MockFilePath objects. This is necessary because
8635+        #MockFilePath behavior sometimes depends on the filesystem. Where it does,
8636+        #self.mockedfilepaths has the relevant info.
8637+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
8638         self.basedir = self.storedir.child('shares')
8639         self.baseincdir = self.basedir.child('incoming')
8640         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8641hunk ./src/allmydata/test/test_backends.py 85
8642         return 85 - reservedspace
8643 
8644     def call_statforsize(self, fakefpname):
8645-        return self.fakefilepaths[fakefpname].fileobject.size()
8646+        return self.mockedfilepaths[fakefpname].fileobject.size()
8647 
8648     def tearDown(self):
8649         msg( "%s.tearDown()" % (self,))
8650hunk ./src/allmydata/test/test_backends.py 90
8651         FakePath = self.FilePathFake.__exit__()       
8652-        self.fakefilepaths = {}
8653+        self.mockedfilepaths = {}
8654 
8655 
8656 class MockFilePath:
8657hunk ./src/allmydata/test/test_backends.py 95
8658     def __init__(self, pathstring, ffpathsenvironment):
8659-        self.fakefilepaths = ffpathsenvironment
8660+        #  I can't just make the values MockFileObjects because they may be directories.
8661+        self.mockedfilepaths = ffpathsenvironment
8662         self.path = pathstring
8663hunk ./src/allmydata/test/test_backends.py 98
8664+        if not self.mockedfilepaths.has_key(self.path):
8665+            #  The first MockFilePath object is special
8666+            self.mockedfilepaths[self.path] = self
8667+            self.fileobject = None
8668+        else:
8669+            self.fileobject = self.mockedfilepaths[self.path].fileobject
8670         self.spawn = {}
8671         self.antecedent = os.path.dirname(self.path)
8672hunk ./src/allmydata/test/test_backends.py 106
8673+
8674+       
8675+    def setContent(self, contentstring):
8676+        # This method rewrites the data in the file that corresponds to its path
8677+        # name whether it preexisted or not.
8678+        self.fileobject = MockFileObject(contentstring)
8679+        self.mockedfilepaths[self.path].fileobject = self.fileobject
8680+        self.setparents()
8681+       
8682+    def create(self):
8683+        # This method chokes if there's a pre-existing file!
8684+        if self.mockedfilepaths[self.path].fileobject:
8685+            raise OSError
8686+        else:
8687+            self.fileobject = MockFileObject()
8688+            self.mockedfilepaths[self.path].fileobject = self.fileobject
8689+            self.setparents()       
8690+       
8691     def child(self, childstring):
8692         arg2child = os.path.join(self.path, childstring)
8693hunk ./src/allmydata/test/test_backends.py 126
8694-        if self.fakefilepaths.has_key(arg2child):
8695-            child = self.fakefilepaths[arg2child]
8696-        else:
8697-            child = MockFilePath(arg2child, self.fakefilepaths)
8698+        child = MockFilePath(arg2child, self.mockedfilepaths)
8699         return child
8700hunk ./src/allmydata/test/test_backends.py 128
8701-    def parent(self):
8702-        if self.fakefilepaths.has_key(self.antecedent):
8703-            parent = self.fakefilepaths[self.antecedent]
8704-        else:
8705-            parent = MockFilePath(self.antecedent, self.fakefilepaths)
8706-        return parent
8707+
8708     def children(self):
8709hunk ./src/allmydata/test/test_backends.py 130
8710-        childrenfromffs = [ffp for ffp in self.fakefilepaths.values() if ffp.path.startswith(self.path)]
8711+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
8712         childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
8713         childrenfromffs = frozenset(childrenfromffs)
8714         return list(childrenfromffs | frozenset(self.spawn.values())) 
8715hunk ./src/allmydata/test/test_backends.py 134
8716+
8717     def makedirs(self):
8718         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
8719         pass
8720hunk ./src/allmydata/test/test_backends.py 138
8721+
8722     def isdir(self):
8723         return True
8724hunk ./src/allmydata/test/test_backends.py 141
8725+
8726     def remove(self):
8727         pass
8728hunk ./src/allmydata/test/test_backends.py 144
8729+
8730     def exists(self):
8731         return False
8732hunk ./src/allmydata/test/test_backends.py 147
8733+
8734     def open(self, mode='r'):
8735         # XXX Makes no use of mode.
8736hunk ./src/allmydata/test/test_backends.py 150
8737+        if not self.mockedfilepaths[self.path].fileobject:
8738+            # If there's no fileobject there already then make one and put it there.
8739+            self.fileobject = MockFileObject()
8740+            self.mockedfilepaths[self.path].fileobject = self.fileobject
8741+        else:
8742+            # Otherwise get a ref to it.
8743+            self.fileobject = self.mockedfilepaths[self.path].fileobject
8744         return self.fileobject.open(mode)
8745hunk ./src/allmydata/test/test_backends.py 158
8746+
8747+    def parent(self):
8748+        if self.mockedfilepaths.has_key(self.antecedent):
8749+            parent = self.mockedfilepaths[self.antecedent]
8750+        else:
8751+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
8752+        return parent
8753+
8754     def parents(self):
8755         antecedents = []
8756         def f(fps, antecedents):
8757hunk ./src/allmydata/test/test_backends.py 175
8758                 f(newfps, antecedents)
8759         f(self.path, antecedents)
8760         return antecedents
8761+
8762     def setparents(self):
8763         for fps in self.parents():
8764hunk ./src/allmydata/test/test_backends.py 178
8765-            if not self.fakefilepaths.has_key(fps):
8766-                self.fakefilepaths[fps] = MockFilePath(fps, self.fakefilepaths)
8767-    def setContent(self, contentstring):
8768-        self.fakefilepaths[self.path] = self
8769-        self.fileobject = MockFileObject(contentstring)
8770-        self.setparents()
8771-    def create(self):
8772-        self.fakefilepaths[self.path] = self
8773-        self.setparents()
8774+            if not self.mockedfilepaths.has_key(fps):
8775+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths)
8776+
8777     def basename(self):
8778         return os.path.split(self.path)[1]
8779hunk ./src/allmydata/test/test_backends.py 183
8780+
8781     def moveTo(self, newffp):
8782         #  XXX Makes no distinction between file and directory arguments, this is a deviation from filepath.moveTo
8783hunk ./src/allmydata/test/test_backends.py 186
8784-        if self.fakefilepaths.has_key(newffp.path):
8785+        if self.mockedfilepaths.has_key(newffp.path):
8786             raise OSError
8787         else:
8788hunk ./src/allmydata/test/test_backends.py 189
8789-            self.fakefilepaths[newffp.path] = self
8790+            self.mockedfilepaths[newffp.path] = self
8791             self.path = newffp.path
8792hunk ./src/allmydata/test/test_backends.py 191
8793+
8794     def getsize(self):
8795         return self.fileobject.getsize()
8796 
8797hunk ./src/allmydata/test/test_backends.py 196
8798 class MockFileObject:
8799-    def __init__(self, contentstring):
8800+    def __init__(self, contentstring=''):
8801         self.buffer = contentstring
8802         self.pos = 0
8803     def open(self, mode='r'):
8804hunk ./src/allmydata/test/test_backends.py 258
8805         self.failIf(mockmkdir.called)
8806 
8807 
8808-class TestServerConstruction(MockFiles, ReallyEqualMixin):
8809+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
8810     def test_create_server_fs_backend(self):
8811         """ This tests whether a server instance can be constructed with a
8812         filesystem backend. To pass the test, it mustn't use the filesystem
8813hunk ./src/allmydata/test/test_backends.py 263
8814         outside of its configured storedir. """
8815-
8816         StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
8817 
8818 
8819hunk ./src/allmydata/test/test_backends.py 266
8820-class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
8821-    """ This tests both the StorageServer and the DAS backend together. """
8822-   
8823+class TestServerAndFSBackend(MockFileSystem, ReallyEqualMixin):
8824+    """ This tests both the StorageServer and the DAS backend together. """   
8825     def setUp(self):
8826hunk ./src/allmydata/test/test_backends.py 269
8827-        MockFiles.setUp(self)
8828+        MockFileSystem.setUp(self)
8829         try:
8830             self.backend = DASCore(self.storedir, expiration_policy)
8831             self.ss = StorageServer(testnodeid, self.backend)
8832hunk ./src/allmydata/test/test_backends.py 273
8833+
8834             self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
8835             self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
8836         except:
8837hunk ./src/allmydata/test/test_backends.py 277
8838-            MockFiles.tearDown(self)
8839+            MockFileSystem.tearDown(self)
8840             raise
8841 
8842hunk ./src/allmydata/test/test_backends.py 280
8843+    @mock.patch('time.time')
8844+    @mock.patch('allmydata.util.fileutil.get_available_space')
8845+    def test_out_of_space(self, mockget_available_space, mocktime):
8846+        mocktime.return_value = 0
8847+       
8848+        def call_get_available_space(dir, reserve):
8849+            return 0
8850+
8851+        mockget_available_space.side_effect = call_get_available_space
8852+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8853+        self.failUnlessReallyEqual(bsc, {})
8854+
8855     @mock.patch('time.time')
8856     def test_write_and_read_share(self, mocktime):
8857         """
8858hunk ./src/allmydata/test/test_backends.py 299
8859         handling of simultaneous and successive attempts to write the same
8860         share.
8861         """
8862+        #initialset = frozenset(self.mockedfilepaths.keys())
8863+        #print initialset
8864         mocktime.return_value = 0
8865         # Inspect incoming and fail unless it's empty.
8866         incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
8867hunk ./src/allmydata/test/test_backends.py 304
8868+        #afterincoming = frozenset(self.mockedfilepaths.keys()) - initialset
8869+        #print "afterincoming: ", afterincoming
8870+
8871+        #print "incomingset: ", incomingset
8872+
8873         self.failUnlessReallyEqual(incomingset, frozenset())
8874         
8875hunk ./src/allmydata/test/test_backends.py 311
8876+
8877         # Populate incoming with the sharenum: 0.
8878         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
8879 
8880hunk ./src/allmydata/test/test_backends.py 315
8881+        #afterfirstallocatebuckets = frozenset(self.mockedfilepaths.keys()) - afterincoming
8882+        #print "afterfirstallocatebuckets: ", afterfirstallocatebuckets
8883+
8884         # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
8885         self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
8886hunk ./src/allmydata/test/test_backends.py 320
8887-       
8888+
8889+
8890+
8891         # Attempt to create a second share writer with the same sharenum.
8892         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
8893 
8894hunk ./src/allmydata/test/test_backends.py 352
8895         # already (completely) uploaded.
8896         self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8897         
8898-    @mock.patch('time.time')
8899-    @mock.patch('allmydata.util.fileutil.get_available_space')
8900-    def test_out_of_space(self, mockget_available_space, mocktime):
8901-        mocktime.return_value = 0
8902-       
8903-        def call_get_available_space(dir, reserve):
8904-            return 0
8905-
8906-        mockget_available_space.side_effect = call_get_available_space
8907-        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8908 
8909     def test_read_old_share(self):
8910         """ This tests whether the code correctly finds and reads
8911hunk ./src/allmydata/test/test_backends.py 360
8912         is from the perspective of the client and exercises a deeper
8913         stack of code. This one is for exercising just the
8914         StorageServer object. """
8915-
8916         # Construct a file with the appropriate contents in the mockfilesystem.
8917         datalen = len(share_data)
8918hunk ./src/allmydata/test/test_backends.py 362
8919-        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(3))
8920+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
8921         finalhome.setContent(share_data)
8922 
8923         # Now begin the test.
8924hunk ./src/allmydata/test/test_backends.py 369
8925         bs = self.ss.remote_get_buckets('teststorage_index')
8926 
8927         self.failUnlessEqual(len(bs), 1)
8928-        b = bs['3']
8929+        b = bs['0']
8930         # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
8931         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
8932         # If you try to read past the end you get as much data as is there.
8933}
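The jacp21 changes above make every MockFilePath consult one shared dictionary (self.mockedfilepaths) so that two MockFilePath objects built independently for the same path string see the same fake file; a much-reduced sketch of that registry pattern (the class and variable names below are hypothetical, not the patch's):

class FakeFile(object):
    def __init__(self, content=''):
        self.content = content

class FakePath(object):
    def __init__(self, path, registry):
        # 'registry' maps path strings to the FakePath that first claimed
        # them, so every FakePath built for the same string shares one file.
        self.path = path
        self.registry = registry
        existing = registry.get(path)
        self.fileobject = existing.fileobject if existing is not None else None
        registry.setdefault(path, self)

    def setContent(self, data):
        # Overwrite (or create) the file backing this path name.
        self.fileobject = FakeFile(data)
        self.registry[self.path].fileobject = self.fileobject

    def create(self):
        # Refuse to clobber an existing file, as the patch's create() does.
        if self.registry[self.path].fileobject is not None:
            raise OSError(self.path)
        self.setContent('')

registry = {}
FakePath('store/shares/or/0', registry).setContent('a' * 36)
assert FakePath('store/shares/or/0', registry).fileobject.content == 'a' * 36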
8934
8935Context:
8936
8937[cli: make 'tahoe cp' overwrite mutable files in-place
8938Kevan Carstensen <kevan@isnotajoke.com>**20110729202039
8939 Ignore-this: b2ad21a19439722f05c49bfd35b01855
8940]
8941[SFTP: write an error message to standard error for unrecognized shell commands. Change the existing message for shell sessions to be written to standard error, and refactor some duplicated code. Also change the lines of the error messages to end in CRLF, and take into account Kevan's review comments. fixes #1442, #1446
8942david-sarah@jacaranda.org**20110729233102
8943 Ignore-this: d2f2bb4664f25007d1602bf7333e2cdd
8944]
8945[src/allmydata/scripts/cli.py: fix pyflakes warning.
8946david-sarah@jacaranda.org**20110728021402
8947 Ignore-this: 94050140ddb99865295973f49927c509
8948]
8949[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
8950david-sarah@jacaranda.org**20110724225440
8951 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
8952]
8953[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
8954david-sarah@jacaranda.org**20110629185356
8955 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
8956]
8957[docs/man/tahoe.1: add man page. fixes #1420
8958david-sarah@jacaranda.org**20110724171728
8959 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
8960]
8961[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
8962david-sarah@jacaranda.org**20110721234941
8963 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
8964]
8965[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
8966david-sarah@jacaranda.org**20110722000320
8967 Ignore-this: 55cd558b791526113db3f83c00ec328a
8968]
8969[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
8970david-sarah@jacaranda.org**20110721233658
8971 Ignore-this: 81b41745477163c9b39c0b59db91cc62
8972]
8973[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
8974david-sarah@jacaranda.org**20110722035402
8975 Ignore-this: 5d03f544c4154f088e26c7107494bf39
8976]
8977[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
8978david-sarah@jacaranda.org**20110722024907
8979 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
8980]
8981[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
8982david-sarah@jacaranda.org**20110718005949
8983 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
8984]
8985[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
8986david-sarah@jacaranda.org**20110717194315
8987 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
8988]
8989[README.txt: say that quickstart.rst is in the docs directory.
8990david-sarah@jacaranda.org**20110717192400
8991 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
8992]
8993[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
8994zooko@zooko.com**20110717114226
8995 Ignore-this: df222120d41447ce4102616921626c82
8996 fixes #1383
8997]
8998[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
8999david-sarah@jacaranda.org**20110716181813
9000 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
9001]
9002[docs: add missing link in NEWS.rst
9003zooko@zooko.com**20110712153307
9004 Ignore-this: be7b7eb81c03700b739daa1027d72b35
9005]
9006[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
9007zooko@zooko.com**20110712153229
9008 Ignore-this: 723c4f9e2211027c79d711715d972c5
9009 Also remove a couple of vestigial references to figleaf, which is long gone.
9010 fixes #1409 (remove contrib/fuse)
9011]
9012[add Protovis.js-based download-status timeline visualization
9013Brian Warner <warner@lothar.com>**20110629222606
9014 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
9015 
9016 provide status overlap info on the webapi t=json output, add decode/decrypt
9017 rate tooltips, add zoomin/zoomout buttons
9018]
9019[add more download-status data, fix tests
9020Brian Warner <warner@lothar.com>**20110629222555
9021 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
9022]
9023[prepare for viz: improve DownloadStatus events
9024Brian Warner <warner@lothar.com>**20110629222542
9025 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
9026 
9027 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
9028]
9029[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
9030zooko@zooko.com**20110629185711
9031 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
9032]
9033[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
9034david-sarah@jacaranda.org**20110130235809
9035 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
9036]
9037[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
9038david-sarah@jacaranda.org**20110626054124
9039 Ignore-this: abb864427a1b91bd10d5132b4589fd90
9040]
9041[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
9042david-sarah@jacaranda.org**20110623205528
9043 Ignore-this: c63e23146c39195de52fb17c7c49b2da
9044]
9045[Rename test_package_initialization.py to (much shorter) test_import.py .
9046Brian Warner <warner@lothar.com>**20110611190234
9047 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
9048 
9049 The former name was making my 'ls' listings hard to read, by forcing them
9050 down to just two columns.
9051]
9052[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
9053zooko@zooko.com**20110611163741
9054 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
9055 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
9056 fixes #1412
9057]
9058[wui: right-align the size column in the WUI
9059zooko@zooko.com**20110611153758
9060 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
9061 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
9062 fixes #1412
9063]
9064[docs: three minor fixes
9065zooko@zooko.com**20110610121656
9066 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
9067 CREDITS for arc for stats tweak
9068 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
9069 English usage tweak
9070]
9071[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
9072david-sarah@jacaranda.org**20110609223719
9073 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
9074]
9075[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
9076wilcoxjg@gmail.com**20110527120135
9077 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
9078 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
9079 NEWS.rst, stats.py: documentation of change to get_latencies
9080 stats.rst: now documents percentile modification in get_latencies
9081 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
9082 fixes #1392
9083]
9084[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
9085david-sarah@jacaranda.org**20110517011214
9086 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
9087]
9088[docs: convert NEWS to NEWS.rst and change all references to it.
9089david-sarah@jacaranda.org**20110517010255
9090 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
9091]
9092[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
9093david-sarah@jacaranda.org**20110512140559
9094 Ignore-this: 784548fc5367fac5450df1c46890876d
9095]
9096[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
9097david-sarah@jacaranda.org**20110130164923
9098 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
9099]
9100[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
9101zooko@zooko.com**20110128142006
9102 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
9103 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
9104]
9105[M-x whitespace-cleanup
9106zooko@zooko.com**20110510193653
9107 Ignore-this: dea02f831298c0f65ad096960e7df5c7
9108]
9109[docs: fix typo in running.rst, thanks to arch_o_median
9110zooko@zooko.com**20110510193633
9111 Ignore-this: ca06de166a46abbc61140513918e79e8
9112]
9113[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
9114david-sarah@jacaranda.org**20110204204902
9115 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
9116]
9117[relnotes.txt: forseeable -> foreseeable. refs #1342
9118david-sarah@jacaranda.org**20110204204116
9119 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
9120]
9121[replace remaining .html docs with .rst docs
9122zooko@zooko.com**20110510191650
9123 Ignore-this: d557d960a986d4ac8216d1677d236399
9124 Remove install.html (long since deprecated).
9125 Also replace some obsolete references to install.html with references to quickstart.rst.
9126 Fix some broken internal references within docs/historical/historical_known_issues.txt.
9127 Thanks to Ravi Pinjala and Patrick McDonald.
9128 refs #1227
9129]
9130[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
9131zooko@zooko.com**20110428055232
9132 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
9133]
9134[munin tahoe_files plugin: fix incorrect file count
9135francois@ctrlaltdel.ch**20110428055312
9136 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
9137 fixes #1391
9138]
9139[corrected "k must never be smaller than N" to "k must never be greater than N"
9140secorp@allmydata.org**20110425010308
9141 Ignore-this: 233129505d6c70860087f22541805eac
9142]
9143[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
9144david-sarah@jacaranda.org**20110411190738
9145 Ignore-this: 7847d26bc117c328c679f08a7baee519
9146]
9147[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
9148david-sarah@jacaranda.org**20110410155844
9149 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
9150]
9151[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
9152david-sarah@jacaranda.org**20110410155705
9153 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
9154]
9155[remove unused variable detected by pyflakes
9156zooko@zooko.com**20110407172231
9157 Ignore-this: 7344652d5e0720af822070d91f03daf9
9158]
9159[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
9160david-sarah@jacaranda.org**20110401202750
9161 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
9162]
9163[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
9164Brian Warner <warner@lothar.com>**20110325232511
9165 Ignore-this: d5307faa6900f143193bfbe14e0f01a
9166]
9167[control.py: remove all uses of s.get_serverid()
9168warner@lothar.com**20110227011203
9169 Ignore-this: f80a787953bd7fa3d40e828bde00e855
9170]
9171[web: remove some uses of s.get_serverid(), not all
9172warner@lothar.com**20110227011159
9173 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
9174]
9175[immutable/downloader/fetcher.py: remove all get_serverid() calls
9176warner@lothar.com**20110227011156
9177 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
9178]
9179[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
9180warner@lothar.com**20110227011153
9181 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
9182 
9183 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
9184 _shares_from_server dict was being popped incorrectly (using shnum as the
9185 index instead of serverid). I'm still thinking through the consequences of
9186 this bug. It was probably benign and really hard to detect. I think it would
9187 cause us to incorrectly believe that we're pulling too many shares from a
9188 server, and thus prefer a different server rather than asking for a second
9189 share from the first server. The diversity code is intended to spread out the
9190 number of shares simultaneously being requested from each server, but with
9191 this bug, it might be spreading out the total number of shares requested at
9192 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
9193 segment, so the effect doesn't last very long).
9194]
9195[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
9196warner@lothar.com**20110227011150
9197 Ignore-this: d8d56dd8e7b280792b40105e13664554
9198 
9199 test_download.py: create+check MyShare instances better, make sure they share
9200 Server objects, now that finder.py cares
9201]
9202[immutable/downloader/finder.py: reduce use of get_serverid(), one left
9203warner@lothar.com**20110227011146
9204 Ignore-this: 5785be173b491ae8a78faf5142892020
9205]
9206[immutable/offloaded.py: reduce use of get_serverid() a bit more
9207warner@lothar.com**20110227011142
9208 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
9209]
9210[immutable/upload.py: reduce use of get_serverid()
9211warner@lothar.com**20110227011138
9212 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
9213]
9214[immutable/checker.py: remove some uses of s.get_serverid(), not all
9215warner@lothar.com**20110227011134
9216 Ignore-this: e480a37efa9e94e8016d826c492f626e
9217]
9218[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
9219warner@lothar.com**20110227011132
9220 Ignore-this: 6078279ddf42b179996a4b53bee8c421
9221 MockIServer stubs
9222]
9223[upload.py: rearrange _make_trackers a bit, no behavior changes
9224warner@lothar.com**20110227011128
9225 Ignore-this: 296d4819e2af452b107177aef6ebb40f
9226]
9227[happinessutil.py: finally rename merge_peers to merge_servers
9228warner@lothar.com**20110227011124
9229 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
9230]
9231[test_upload.py: factor out FakeServerTracker
9232warner@lothar.com**20110227011120
9233 Ignore-this: 6c182cba90e908221099472cc159325b
9234]
9235[test_upload.py: server-vs-tracker cleanup
9236warner@lothar.com**20110227011115
9237 Ignore-this: 2915133be1a3ba456e8603885437e03
9238]
9239[happinessutil.py: server-vs-tracker cleanup
9240warner@lothar.com**20110227011111
9241 Ignore-this: b856c84033562d7d718cae7cb01085a9
9242]
9243[upload.py: more tracker-vs-server cleanup
9244warner@lothar.com**20110227011107
9245 Ignore-this: bb75ed2afef55e47c085b35def2de315
9246]
9247[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
9248warner@lothar.com**20110227011103
9249 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
9250]
9251[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
9252warner@lothar.com**20110227011100
9253 Ignore-this: 7ea858755cbe5896ac212a925840fe68
9254 
9255 No behavioral changes, just updating variable/method names and log messages.
9256 The effects outside these three files should be minimal: some exception
9257 messages changed (to say "server" instead of "peer"), and some internal class
9258 names were changed. A few things still use "peer" to minimize external
9259 changes, like UploadResults.timings["peer_selection"] and
9260 happinessutil.merge_peers, which can be changed later.
9261]
9262[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
9263warner@lothar.com**20110227011056
9264 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
9265]
9266[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
9267warner@lothar.com**20110227011051
9268 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
9269]
9270[test: increase timeout on a network test because Francois's ARM machine hit that timeout
9271zooko@zooko.com**20110317165909
9272 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
9273 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
9274]
9275[docs/configuration.rst: add a "Frontend Configuration" section
9276Brian Warner <warner@lothar.com>**20110222014323
9277 Ignore-this: 657018aa501fe4f0efef9851628444ca
9278 
9279 this points to docs/frontends/*.rst, which were previously underlinked
9280]
9281[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
9282"Brian Warner <warner@lothar.com>"**20110221061544
9283 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
9284]
9285[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
9286david-sarah@jacaranda.org**20110221015817
9287 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
9288]
9289[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
9290david-sarah@jacaranda.org**20110221020125
9291 Ignore-this: b0744ed58f161bf188e037bad077fc48
9292]
9293[Refactor StorageFarmBroker handling of servers
9294Brian Warner <warner@lothar.com>**20110221015804
9295 Ignore-this: 842144ed92f5717699b8f580eab32a51
9296 
9297 Pass around IServer instance instead of (peerid, rref) tuple. Replace
9298 "descriptor" with "server". Other replacements:
9299 
9300  get_all_servers -> get_connected_servers/get_known_servers
9301  get_servers_for_index -> get_servers_for_psi (now returns IServers)
9302 
9303 This change still needs to be pushed further down: lots of code is now
9304 getting the IServer and then distributing (peerid, rref) internally.
9305 Instead, it ought to distribute the IServer internally and delay
9306 extracting a serverid or rref until the last moment.
9307 
9308 no_network.py was updated to retain parallelism.
9309]
9310[TAG allmydata-tahoe-1.8.2
9311warner@lothar.com**20110131020101]
9312Patch bundle hash:
931393a6c95829618edc98cbee0ade65749c8f665a4b