Ticket #999: test_backendpasses_Zancas20110729.darcs.patch

File test_backendpasses_Zancas20110729.darcs.patch, 390.5 KB (added by Zancas, at 2011-07-30T03:41:42Z)

5 test_backend tests pass

Line 
1Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
2  * storage: new mocking tests of storage server read and write
3  There are already tests of read and functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
4
5Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
6  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
7  sloppy not for production
8
9Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
10  * a temp patch used as a snapshot
11
12Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
13  * snapshot of progress on backend implementation (not suitable for trunk)
14
15Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
16  * checkpoint patch
17
18Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
19  * checkpoint4
20
21Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
22  * checkpoint5
23
24Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
25  * checkpoint 6
26
27Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
28  * checkpoint 7
29
30Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
31  * checkpoint8
32    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
33
34Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
35  * checkpoint 9
36
37Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
38  * checkpoint10
39
40Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
41  * jacp 11
42
43Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
44  * checkpoint12 testing correct behavior with regard to incoming and final
45
46Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
47  * fix inconsistent naming of storage_index vs storageindex in storage/server.py
48
49Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
50  * adding comments to clarify what I'm about to do.
51
52Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
53  * branching back, no longer attempting to mock inside TestServerFSBackend
54
55Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
56  * checkpoint12 TestServerFSBackend no longer mocks filesystem
57
58Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
59  * JACP
60
61Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
62  * testing get incoming
63
64Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
65  * ImmutableShareFile does not know its StorageIndex
66
67Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
68  * get_incoming correctly reports the 0 share after it has arrived
69
70Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
71  * jacp14
72
73Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
74  * jacp14 or so
75
76Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
77  * temporary work-in-progress patch to be unrecorded
78  tidy up a few tests, work done in pair-programming with Zancas
79
80Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
81  * work in progress intended to be unrecorded and never committed to trunk
82  switch from os.path.join to filepath
83  incomplete refactoring of common "stay in your subtree" tester code into a superclass
84 
85
86Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
87  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
88  In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete converstion to filepath. The latter has still a lot of work to go.
89
90Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
91  * another temporary patch for sharing work-in-progress
92  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
93  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
94  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
95 
96
97Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
98  * jacp16 or so
99
100Fri Jul 22 14:32:44 MDT 2011  wilcoxjg@gmail.com
101  * jacp17
102
103Fri Jul 22 21:19:15 MDT 2011  wilcoxjg@gmail.com
104  * jacp18
105
106Sat Jul 23 21:42:30 MDT 2011  wilcoxjg@gmail.com
107  * jacp19orso
108
109Wed Jul 27 02:05:53 MDT 2011  wilcoxjg@gmail.com
110  * jacp19
111
112Thu Jul 28 01:25:14 MDT 2011  wilcoxjg@gmail.com
113  * jacp20
114
115Thu Jul 28 22:38:30 MDT 2011  wilcoxjg@gmail.com
116  * Completed FilePath based test_write_and_read_share
117
118Fri Jul 29 17:53:56 MDT 2011  wilcoxjg@gmail.com
119  * TestServerAndFSBackend.test_read_old_share passes
120
121Fri Jul 29 19:00:25 MDT 2011  wilcoxjg@gmail.com
122  * TestServerAndFSBackend passes en total!
123
124Fri Jul 29 21:41:59 MDT 2011  wilcoxjg@gmail.com
125  * current test_backend tests pass
126
127New patches:
128
129[storage: new mocking tests of storage server read and write
130wilcoxjg@gmail.com**20110325203514
131 Ignore-this: df65c3c4f061dd1516f88662023fdb41
132 There are already tests of read and functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
133] {
134addfile ./src/allmydata/test/test_server.py
135hunk ./src/allmydata/test/test_server.py 1
136+from twisted.trial import unittest
137+
138+from StringIO import StringIO
139+
140+from allmydata.test.common_util import ReallyEqualMixin
141+
142+import mock
143+
144+# This is the code that we're going to be testing.
145+from allmydata.storage.server import StorageServer
146+
147+# The following share file contents was generated with
148+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
149+# with share data == 'a'.
150+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
151+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
152+
153+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
154+
155+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
156+    @mock.patch('__builtin__.open')
157+    def test_create_server(self, mockopen):
158+        """ This tests whether a server instance can be constructed. """
159+
160+        def call_open(fname, mode):
161+            if fname == 'testdir/bucket_counter.state':
162+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
163+            elif fname == 'testdir/lease_checker.state':
164+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
165+            elif fname == 'testdir/lease_checker.history':
166+                return StringIO()
167+        mockopen.side_effect = call_open
168+
169+        # Now begin the test.
170+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
171+
172+        # You passed!
173+
174+class TestServer(unittest.TestCase, ReallyEqualMixin):
175+    @mock.patch('__builtin__.open')
176+    def setUp(self, mockopen):
177+        def call_open(fname, mode):
178+            if fname == 'testdir/bucket_counter.state':
179+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
180+            elif fname == 'testdir/lease_checker.state':
181+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
182+            elif fname == 'testdir/lease_checker.history':
183+                return StringIO()
184+        mockopen.side_effect = call_open
185+
186+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
187+
188+
189+    @mock.patch('time.time')
190+    @mock.patch('os.mkdir')
191+    @mock.patch('__builtin__.open')
192+    @mock.patch('os.listdir')
193+    @mock.patch('os.path.isdir')
194+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
195+        """Handle a report of corruption."""
196+
197+        def call_listdir(dirname):
198+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
199+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
200+
201+        mocklistdir.side_effect = call_listdir
202+
203+        class MockFile:
204+            def __init__(self):
205+                self.buffer = ''
206+                self.pos = 0
207+            def write(self, instring):
208+                begin = self.pos
209+                padlen = begin - len(self.buffer)
210+                if padlen > 0:
211+                    self.buffer += '\x00' * padlen
212+                end = self.pos + len(instring)
213+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
214+                self.pos = end
215+            def close(self):
216+                pass
217+            def seek(self, pos):
218+                self.pos = pos
219+            def read(self, numberbytes):
220+                return self.buffer[self.pos:self.pos+numberbytes]
221+            def tell(self):
222+                return self.pos
223+
224+        mocktime.return_value = 0
225+
226+        sharefile = MockFile()
227+        def call_open(fname, mode):
228+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
229+            return sharefile
230+
231+        mockopen.side_effect = call_open
232+        # Now begin the test.
233+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
234+        print bs
235+        bs[0].remote_write(0, 'a')
236+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
237+
238+
239+    @mock.patch('os.path.exists')
240+    @mock.patch('os.path.getsize')
241+    @mock.patch('__builtin__.open')
242+    @mock.patch('os.listdir')
243+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
244+        """ This tests whether the code correctly finds and reads
245+        shares written out by old (Tahoe-LAFS <= v1.8.2)
246+        servers. There is a similar test in test_download, but that one
247+        is from the perspective of the client and exercises a deeper
248+        stack of code. This one is for exercising just the
249+        StorageServer object. """
250+
251+        def call_listdir(dirname):
252+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
253+            return ['0']
254+
255+        mocklistdir.side_effect = call_listdir
256+
257+        def call_open(fname, mode):
258+            self.failUnlessReallyEqual(fname, sharefname)
259+            self.failUnless('r' in mode, mode)
260+            self.failUnless('b' in mode, mode)
261+
262+            return StringIO(share_file_data)
263+        mockopen.side_effect = call_open
264+
265+        datalen = len(share_file_data)
266+        def call_getsize(fname):
267+            self.failUnlessReallyEqual(fname, sharefname)
268+            return datalen
269+        mockgetsize.side_effect = call_getsize
270+
271+        def call_exists(fname):
272+            self.failUnlessReallyEqual(fname, sharefname)
273+            return True
274+        mockexists.side_effect = call_exists
275+
276+        # Now begin the test.
277+        bs = self.s.remote_get_buckets('teststorage_index')
278+
279+        self.failUnlessEqual(len(bs), 1)
280+        b = bs[0]
281+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
282+        # If you try to read past the end you get the as much data as is there.
283+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
284+        # If you start reading past the end of the file you get the empty string.
285+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
286}
287[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
288wilcoxjg@gmail.com**20110624202850
289 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
290 sloppy not for production
291] {
292move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
293hunk ./src/allmydata/storage/crawler.py 13
294     pass
295 
296 class ShareCrawler(service.MultiService):
297-    """A ShareCrawler subclass is attached to a StorageServer, and
298+    """A subcless of ShareCrawler is attached to a StorageServer, and
299     periodically walks all of its shares, processing each one in some
300     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
301     since large servers can easily have a terabyte of shares, in several
302hunk ./src/allmydata/storage/crawler.py 31
303     We assume that the normal upload/download/get_buckets traffic of a tahoe
304     grid will cause the prefixdir contents to be mostly cached in the kernel,
305     or that the number of buckets in each prefixdir will be small enough to
306-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
307+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
308     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
309     prefix. On this server, each prefixdir took 130ms-200ms to list the first
310     time, and 17ms to list the second time.
311hunk ./src/allmydata/storage/crawler.py 68
312     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
313     minimum_cycle_time = 300 # don't run a cycle faster than this
314 
315-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
316+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
317         service.MultiService.__init__(self)
318         if allowed_cpu_percentage is not None:
319             self.allowed_cpu_percentage = allowed_cpu_percentage
320hunk ./src/allmydata/storage/crawler.py 72
321-        self.server = server
322-        self.sharedir = server.sharedir
323-        self.statefile = statefile
324+        self.backend = backend
325         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
326                          for i in range(2**10)]
327         self.prefixes.sort()
328hunk ./src/allmydata/storage/crawler.py 446
329 
330     minimum_cycle_time = 60*60 # we don't need this more than once an hour
331 
332-    def __init__(self, server, statefile, num_sample_prefixes=1):
333-        ShareCrawler.__init__(self, server, statefile)
334+    def __init__(self, statefile, num_sample_prefixes=1):
335+        ShareCrawler.__init__(self, statefile)
336         self.num_sample_prefixes = num_sample_prefixes
337 
338     def add_initial_state(self):
339hunk ./src/allmydata/storage/expirer.py 15
340     removed.
341 
342     I collect statistics on the leases and make these available to a web
343-    status page, including::
344+    status page, including:
345 
346     Space recovered during this cycle-so-far:
347      actual (only if expiration_enabled=True):
348hunk ./src/allmydata/storage/expirer.py 51
349     slow_start = 360 # wait 6 minutes after startup
350     minimum_cycle_time = 12*60*60 # not more than twice per day
351 
352-    def __init__(self, server, statefile, historyfile,
353+    def __init__(self, statefile, historyfile,
354                  expiration_enabled, mode,
355                  override_lease_duration, # used if expiration_mode=="age"
356                  cutoff_date, # used if expiration_mode=="cutoff-date"
357hunk ./src/allmydata/storage/expirer.py 71
358         else:
359             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
360         self.sharetypes_to_expire = sharetypes
361-        ShareCrawler.__init__(self, server, statefile)
362+        ShareCrawler.__init__(self, statefile)
363 
364     def add_initial_state(self):
365         # we fill ["cycle-to-date"] here (even though they will be reset in
366hunk ./src/allmydata/storage/immutable.py 44
367     sharetype = "immutable"
368 
369     def __init__(self, filename, max_size=None, create=False):
370-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
371+        """ If max_size is not None then I won't allow more than
372+        max_size to be written to me. If create=True then max_size
373+        must not be None. """
374         precondition((max_size is not None) or (not create), max_size, create)
375         self.home = filename
376         self._max_size = max_size
377hunk ./src/allmydata/storage/immutable.py 87
378 
379     def read_share_data(self, offset, length):
380         precondition(offset >= 0)
381-        # reads beyond the end of the data are truncated. Reads that start
382-        # beyond the end of the data return an empty string. I wonder why
383-        # Python doesn't do the following computation for me?
384+        # Reads beyond the end of the data are truncated. Reads that start
385+        # beyond the end of the data return an empty string.
386         seekpos = self._data_offset+offset
387         fsize = os.path.getsize(self.home)
388         actuallength = max(0, min(length, fsize-seekpos))
389hunk ./src/allmydata/storage/immutable.py 198
390             space_freed += os.stat(self.home)[stat.ST_SIZE]
391             self.unlink()
392         return space_freed
393+class NullBucketWriter(Referenceable):
394+    implements(RIBucketWriter)
395 
396hunk ./src/allmydata/storage/immutable.py 201
397+    def remote_write(self, offset, data):
398+        return
399 
400 class BucketWriter(Referenceable):
401     implements(RIBucketWriter)
402hunk ./src/allmydata/storage/server.py 7
403 from twisted.application import service
404 
405 from zope.interface import implements
406-from allmydata.interfaces import RIStorageServer, IStatsProducer
407+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
408 from allmydata.util import fileutil, idlib, log, time_format
409 import allmydata # for __full_version__
410 
411hunk ./src/allmydata/storage/server.py 16
412 from allmydata.storage.lease import LeaseInfo
413 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
414      create_mutable_sharefile
415-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
416+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
417 from allmydata.storage.crawler import BucketCountingCrawler
418 from allmydata.storage.expirer import LeaseCheckingCrawler
419 
420hunk ./src/allmydata/storage/server.py 20
421+from zope.interface import implements
422+
423+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
424+# be started and stopped.
425+class Backend(service.MultiService):
426+    implements(IStatsProducer)
427+    def __init__(self):
428+        service.MultiService.__init__(self)
429+
430+    def get_bucket_shares(self):
431+        """XXX"""
432+        raise NotImplementedError
433+
434+    def get_share(self):
435+        """XXX"""
436+        raise NotImplementedError
437+
438+    def make_bucket_writer(self):
439+        """XXX"""
440+        raise NotImplementedError
441+
442+class NullBackend(Backend):
443+    def __init__(self):
444+        Backend.__init__(self)
445+
446+    def get_available_space(self):
447+        return None
448+
449+    def get_bucket_shares(self, storage_index):
450+        return set()
451+
452+    def get_share(self, storage_index, sharenum):
453+        return None
454+
455+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
456+        return NullBucketWriter()
457+
458+class FSBackend(Backend):
459+    def __init__(self, storedir, readonly=False, reserved_space=0):
460+        Backend.__init__(self)
461+
462+        self._setup_storage(storedir, readonly, reserved_space)
463+        self._setup_corruption_advisory()
464+        self._setup_bucket_counter()
465+        self._setup_lease_checkerf()
466+
467+    def _setup_storage(self, storedir, readonly, reserved_space):
468+        self.storedir = storedir
469+        self.readonly = readonly
470+        self.reserved_space = int(reserved_space)
471+        if self.reserved_space:
472+            if self.get_available_space() is None:
473+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
474+                        umid="0wZ27w", level=log.UNUSUAL)
475+
476+        self.sharedir = os.path.join(self.storedir, "shares")
477+        fileutil.make_dirs(self.sharedir)
478+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
479+        self._clean_incomplete()
480+
481+    def _clean_incomplete(self):
482+        fileutil.rm_dir(self.incomingdir)
483+        fileutil.make_dirs(self.incomingdir)
484+
485+    def _setup_corruption_advisory(self):
486+        # we don't actually create the corruption-advisory dir until necessary
487+        self.corruption_advisory_dir = os.path.join(self.storedir,
488+                                                    "corruption-advisories")
489+
490+    def _setup_bucket_counter(self):
491+        statefile = os.path.join(self.storedir, "bucket_counter.state")
492+        self.bucket_counter = BucketCountingCrawler(statefile)
493+        self.bucket_counter.setServiceParent(self)
494+
495+    def _setup_lease_checkerf(self):
496+        statefile = os.path.join(self.storedir, "lease_checker.state")
497+        historyfile = os.path.join(self.storedir, "lease_checker.history")
498+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
499+                                   expiration_enabled, expiration_mode,
500+                                   expiration_override_lease_duration,
501+                                   expiration_cutoff_date,
502+                                   expiration_sharetypes)
503+        self.lease_checker.setServiceParent(self)
504+
505+    def get_available_space(self):
506+        if self.readonly:
507+            return 0
508+        return fileutil.get_available_space(self.storedir, self.reserved_space)
509+
510+    def get_bucket_shares(self, storage_index):
511+        """Return a list of (shnum, pathname) tuples for files that hold
512+        shares for this storage_index. In each tuple, 'shnum' will always be
513+        the integer form of the last component of 'pathname'."""
514+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
515+        try:
516+            for f in os.listdir(storagedir):
517+                if NUM_RE.match(f):
518+                    filename = os.path.join(storagedir, f)
519+                    yield (int(f), filename)
520+        except OSError:
521+            # Commonly caused by there being no buckets at all.
522+            pass
523+
524 # storage/
525 # storage/shares/incoming
526 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
527hunk ./src/allmydata/storage/server.py 143
528     name = 'storage'
529     LeaseCheckerClass = LeaseCheckingCrawler
530 
531-    def __init__(self, storedir, nodeid, reserved_space=0,
532-                 discard_storage=False, readonly_storage=False,
533+    def __init__(self, nodeid, backend, reserved_space=0,
534+                 readonly_storage=False,
535                  stats_provider=None,
536                  expiration_enabled=False,
537                  expiration_mode="age",
538hunk ./src/allmydata/storage/server.py 155
539         assert isinstance(nodeid, str)
540         assert len(nodeid) == 20
541         self.my_nodeid = nodeid
542-        self.storedir = storedir
543-        sharedir = os.path.join(storedir, "shares")
544-        fileutil.make_dirs(sharedir)
545-        self.sharedir = sharedir
546-        # we don't actually create the corruption-advisory dir until necessary
547-        self.corruption_advisory_dir = os.path.join(storedir,
548-                                                    "corruption-advisories")
549-        self.reserved_space = int(reserved_space)
550-        self.no_storage = discard_storage
551-        self.readonly_storage = readonly_storage
552         self.stats_provider = stats_provider
553         if self.stats_provider:
554             self.stats_provider.register_producer(self)
555hunk ./src/allmydata/storage/server.py 158
556-        self.incomingdir = os.path.join(sharedir, 'incoming')
557-        self._clean_incomplete()
558-        fileutil.make_dirs(self.incomingdir)
559         self._active_writers = weakref.WeakKeyDictionary()
560hunk ./src/allmydata/storage/server.py 159
561+        self.backend = backend
562+        self.backend.setServiceParent(self)
563         log.msg("StorageServer created", facility="tahoe.storage")
564 
565hunk ./src/allmydata/storage/server.py 163
566-        if reserved_space:
567-            if self.get_available_space() is None:
568-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
569-                        umin="0wZ27w", level=log.UNUSUAL)
570-
571         self.latencies = {"allocate": [], # immutable
572                           "write": [],
573                           "close": [],
574hunk ./src/allmydata/storage/server.py 174
575                           "renew": [],
576                           "cancel": [],
577                           }
578-        self.add_bucket_counter()
579-
580-        statefile = os.path.join(self.storedir, "lease_checker.state")
581-        historyfile = os.path.join(self.storedir, "lease_checker.history")
582-        klass = self.LeaseCheckerClass
583-        self.lease_checker = klass(self, statefile, historyfile,
584-                                   expiration_enabled, expiration_mode,
585-                                   expiration_override_lease_duration,
586-                                   expiration_cutoff_date,
587-                                   expiration_sharetypes)
588-        self.lease_checker.setServiceParent(self)
589 
590     def __repr__(self):
591         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
592hunk ./src/allmydata/storage/server.py 178
593 
594-    def add_bucket_counter(self):
595-        statefile = os.path.join(self.storedir, "bucket_counter.state")
596-        self.bucket_counter = BucketCountingCrawler(self, statefile)
597-        self.bucket_counter.setServiceParent(self)
598-
599     def count(self, name, delta=1):
600         if self.stats_provider:
601             self.stats_provider.count("storage_server." + name, delta)
602hunk ./src/allmydata/storage/server.py 233
603             kwargs["facility"] = "tahoe.storage"
604         return log.msg(*args, **kwargs)
605 
606-    def _clean_incomplete(self):
607-        fileutil.rm_dir(self.incomingdir)
608-
609     def get_stats(self):
610         # remember: RIStatsProvider requires that our return dict
611         # contains numeric values.
612hunk ./src/allmydata/storage/server.py 269
613             stats['storage_server.total_bucket_count'] = bucket_count
614         return stats
615 
616-    def get_available_space(self):
617-        """Returns available space for share storage in bytes, or None if no
618-        API to get this information is available."""
619-
620-        if self.readonly_storage:
621-            return 0
622-        return fileutil.get_available_space(self.storedir, self.reserved_space)
623-
624     def allocated_size(self):
625         space = 0
626         for bw in self._active_writers:
627hunk ./src/allmydata/storage/server.py 276
628         return space
629 
630     def remote_get_version(self):
631-        remaining_space = self.get_available_space()
632+        remaining_space = self.backend.get_available_space()
633         if remaining_space is None:
634             # We're on a platform that has no API to get disk stats.
635             remaining_space = 2**64
636hunk ./src/allmydata/storage/server.py 301
637         self.count("allocate")
638         alreadygot = set()
639         bucketwriters = {} # k: shnum, v: BucketWriter
640-        si_dir = storage_index_to_dir(storage_index)
641-        si_s = si_b2a(storage_index)
642 
643hunk ./src/allmydata/storage/server.py 302
644+        si_s = si_b2a(storage_index)
645         log.msg("storage: allocate_buckets %s" % si_s)
646 
647         # in this implementation, the lease information (including secrets)
648hunk ./src/allmydata/storage/server.py 316
649 
650         max_space_per_bucket = allocated_size
651 
652-        remaining_space = self.get_available_space()
653+        remaining_space = self.backend.get_available_space()
654         limited = remaining_space is not None
655         if limited:
656             # this is a bit conservative, since some of this allocated_size()
657hunk ./src/allmydata/storage/server.py 329
658         # they asked about: this will save them a lot of work. Add or update
659         # leases for all of them: if they want us to hold shares for this
660         # file, they'll want us to hold leases for this file.
661-        for (shnum, fn) in self._get_bucket_shares(storage_index):
662+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
663             alreadygot.add(shnum)
664             sf = ShareFile(fn)
665             sf.add_or_renew_lease(lease_info)
666hunk ./src/allmydata/storage/server.py 335
667 
668         for shnum in sharenums:
669-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
670-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
671-            if os.path.exists(finalhome):
672+            share = self.backend.get_share(storage_index, shnum)
673+
674+            if not share:
675+                if (not limited) or (remaining_space >= max_space_per_bucket):
676+                    # ok! we need to create the new share file.
677+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
678+                                      max_space_per_bucket, lease_info, canary)
679+                    bucketwriters[shnum] = bw
680+                    self._active_writers[bw] = 1
681+                    if limited:
682+                        remaining_space -= max_space_per_bucket
683+                else:
684+                    # bummer! not enough space to accept this bucket
685+                    pass
686+
687+            elif share.is_complete():
688                 # great! we already have it. easy.
689                 pass
690hunk ./src/allmydata/storage/server.py 353
691-            elif os.path.exists(incominghome):
692+            elif not share.is_complete():
693                 # Note that we don't create BucketWriters for shnums that
694                 # have a partial share (in incoming/), so if a second upload
695                 # occurs while the first is still in progress, the second
696hunk ./src/allmydata/storage/server.py 359
697                 # uploader will use different storage servers.
698                 pass
699-            elif (not limited) or (remaining_space >= max_space_per_bucket):
700-                # ok! we need to create the new share file.
701-                bw = BucketWriter(self, incominghome, finalhome,
702-                                  max_space_per_bucket, lease_info, canary)
703-                if self.no_storage:
704-                    bw.throw_out_all_data = True
705-                bucketwriters[shnum] = bw
706-                self._active_writers[bw] = 1
707-                if limited:
708-                    remaining_space -= max_space_per_bucket
709-            else:
710-                # bummer! not enough space to accept this bucket
711-                pass
712-
713-        if bucketwriters:
714-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
715 
716         self.add_latency("allocate", time.time() - start)
717         return alreadygot, bucketwriters
718hunk ./src/allmydata/storage/server.py 437
719             self.stats_provider.count('storage_server.bytes_added', consumed_size)
720         del self._active_writers[bw]
721 
722-    def _get_bucket_shares(self, storage_index):
723-        """Return a list of (shnum, pathname) tuples for files that hold
724-        shares for this storage_index. In each tuple, 'shnum' will always be
725-        the integer form of the last component of 'pathname'."""
726-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
727-        try:
728-            for f in os.listdir(storagedir):
729-                if NUM_RE.match(f):
730-                    filename = os.path.join(storagedir, f)
731-                    yield (int(f), filename)
732-        except OSError:
733-            # Commonly caused by there being no buckets at all.
734-            pass
735 
736     def remote_get_buckets(self, storage_index):
737         start = time.time()
738hunk ./src/allmydata/storage/server.py 444
739         si_s = si_b2a(storage_index)
740         log.msg("storage: get_buckets %s" % si_s)
741         bucketreaders = {} # k: sharenum, v: BucketReader
742-        for shnum, filename in self._get_bucket_shares(storage_index):
743+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
744             bucketreaders[shnum] = BucketReader(self, filename,
745                                                 storage_index, shnum)
746         self.add_latency("get", time.time() - start)
747hunk ./src/allmydata/test/test_backends.py 10
748 import mock
749 
750 # This is the code that we're going to be testing.
751-from allmydata.storage.server import StorageServer
752+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
753 
754 # The following share file contents was generated with
755 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
756hunk ./src/allmydata/test/test_backends.py 21
757 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
758 
759 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
760+    @mock.patch('time.time')
761+    @mock.patch('os.mkdir')
762+    @mock.patch('__builtin__.open')
763+    @mock.patch('os.listdir')
764+    @mock.patch('os.path.isdir')
765+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
766+        """ This tests whether a server instance can be constructed
767+        with a null backend. The server instance fails the test if it
768+        tries to read or write to the file system. """
769+
770+        # Now begin the test.
771+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
772+
773+        self.failIf(mockisdir.called)
774+        self.failIf(mocklistdir.called)
775+        self.failIf(mockopen.called)
776+        self.failIf(mockmkdir.called)
777+
778+        # You passed!
779+
780+    @mock.patch('time.time')
781+    @mock.patch('os.mkdir')
782     @mock.patch('__builtin__.open')
783hunk ./src/allmydata/test/test_backends.py 44
784-    def test_create_server(self, mockopen):
785-        """ This tests whether a server instance can be constructed. """
786+    @mock.patch('os.listdir')
787+    @mock.patch('os.path.isdir')
788+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
789+        """ This tests whether a server instance can be constructed
790+        with a filesystem backend. To pass the test, it has to use the
791+        filesystem in only the prescribed ways. """
792 
793         def call_open(fname, mode):
794             if fname == 'testdir/bucket_counter.state':
795hunk ./src/allmydata/test/test_backends.py 58
796                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
797             elif fname == 'testdir/lease_checker.history':
798                 return StringIO()
799+            else:
800+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
801         mockopen.side_effect = call_open
802 
803         # Now begin the test.
804hunk ./src/allmydata/test/test_backends.py 63
805-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
806+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
807+
808+        self.failIf(mockisdir.called)
809+        self.failIf(mocklistdir.called)
810+        self.failIf(mockopen.called)
811+        self.failIf(mockmkdir.called)
812+        self.failIf(mocktime.called)
813 
814         # You passed!
815 
816hunk ./src/allmydata/test/test_backends.py 73
817-class TestServer(unittest.TestCase, ReallyEqualMixin):
818+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
819+    def setUp(self):
820+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
821+
822+    @mock.patch('os.mkdir')
823+    @mock.patch('__builtin__.open')
824+    @mock.patch('os.listdir')
825+    @mock.patch('os.path.isdir')
826+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
827+        """ Write a new share. """
828+
829+        # Now begin the test.
830+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
831+        bs[0].remote_write(0, 'a')
832+        self.failIf(mockisdir.called)
833+        self.failIf(mocklistdir.called)
834+        self.failIf(mockopen.called)
835+        self.failIf(mockmkdir.called)
836+
837+    @mock.patch('os.path.exists')
838+    @mock.patch('os.path.getsize')
839+    @mock.patch('__builtin__.open')
840+    @mock.patch('os.listdir')
841+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
842+        """ This tests whether the code correctly finds and reads
843+        shares written out by old (Tahoe-LAFS <= v1.8.2)
844+        servers. There is a similar test in test_download, but that one
845+        is from the perspective of the client and exercises a deeper
846+        stack of code. This one is for exercising just the
847+        StorageServer object. """
848+
849+        # Now begin the test.
850+        bs = self.s.remote_get_buckets('teststorage_index')
851+
852+        self.failUnlessEqual(len(bs), 0)
853+        self.failIf(mocklistdir.called)
854+        self.failIf(mockopen.called)
855+        self.failIf(mockgetsize.called)
856+        self.failIf(mockexists.called)
857+
858+
859+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
860     @mock.patch('__builtin__.open')
861     def setUp(self, mockopen):
862         def call_open(fname, mode):
863hunk ./src/allmydata/test/test_backends.py 126
864                 return StringIO()
865         mockopen.side_effect = call_open
866 
867-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
868-
869+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
870 
871     @mock.patch('time.time')
872     @mock.patch('os.mkdir')
873hunk ./src/allmydata/test/test_backends.py 134
874     @mock.patch('os.listdir')
875     @mock.patch('os.path.isdir')
876     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
877-        """Handle a report of corruption."""
878+        """ Write a new share. """
879 
880         def call_listdir(dirname):
881             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
882hunk ./src/allmydata/test/test_backends.py 173
883         mockopen.side_effect = call_open
884         # Now begin the test.
885         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
886-        print bs
887         bs[0].remote_write(0, 'a')
888         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
889 
890hunk ./src/allmydata/test/test_backends.py 176
891-
892     @mock.patch('os.path.exists')
893     @mock.patch('os.path.getsize')
894     @mock.patch('__builtin__.open')
895hunk ./src/allmydata/test/test_backends.py 218
896 
897         self.failUnlessEqual(len(bs), 1)
898         b = bs[0]
899+        # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
900         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
901         # If you try to read past the end you get the as much data as is there.
902         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
903hunk ./src/allmydata/test/test_backends.py 224
904         # If you start reading past the end of the file you get the empty string.
905         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
906+
907+
908}
909[a temp patch used as a snapshot
910wilcoxjg@gmail.com**20110626052732
911 Ignore-this: 95f05e314eaec870afa04c76d979aa44
912] {
913hunk ./docs/configuration.rst 637
914   [storage]
915   enabled = True
916   readonly = True
917-  sizelimit = 10000000000
918 
919 
920   [helper]
921hunk ./docs/garbage-collection.rst 16
922 
923 When a file or directory in the virtual filesystem is no longer referenced,
924 the space that its shares occupied on each storage server can be freed,
925-making room for other shares. Tahoe currently uses a garbage collection
926+making room for other shares. Tahoe uses a garbage collection
927 ("GC") mechanism to implement this space-reclamation process. Each share has
928 one or more "leases", which are managed by clients who want the
929 file/directory to be retained. The storage server accepts each share for a
930hunk ./docs/garbage-collection.rst 34
931 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
932 If lease renewal occurs quickly and with 100% reliability, than any renewal
933 time that is shorter than the lease duration will suffice, but a larger ratio
934-of duration-over-renewal-time will be more robust in the face of occasional
935+of lease duration to renewal time will be more robust in the face of occasional
936 delays or failures.
937 
938 The current recommended values for a small Tahoe grid are to renew the leases
939replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
940hunk ./src/allmydata/client.py 260
941             sharetypes.append("mutable")
942         expiration_sharetypes = tuple(sharetypes)
943 
944+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
945+            xyz
946+        xyz
947         ss = StorageServer(storedir, self.nodeid,
948                            reserved_space=reserved,
949                            discard_storage=discard,
950hunk ./src/allmydata/storage/crawler.py 234
951         f = open(tmpfile, "wb")
952         pickle.dump(self.state, f)
953         f.close()
954-        fileutil.move_into_place(tmpfile, self.statefile)
955+        fileutil.move_into_place(tmpfile, self.statefname)
956 
957     def startService(self):
958         # arrange things to look like we were just sleeping, so
959}
960[snapshot of progress on backend implementation (not suitable for trunk)
961wilcoxjg@gmail.com**20110626053244
962 Ignore-this: 50c764af791c2b99ada8289546806a0a
963] {
964adddir ./src/allmydata/storage/backends
965adddir ./src/allmydata/storage/backends/das
966move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
967adddir ./src/allmydata/storage/backends/null
968hunk ./src/allmydata/interfaces.py 270
969         store that on disk.
970         """
971 
972+class IStorageBackend(Interface):
973+    """
974+    Objects of this kind live on the server side and are used by the
975+    storage server object.
976+    """
977+    def get_available_space(self, reserved_space):
978+        """ Returns available space for share storage in bytes, or
979+        None if this information is not available or if the available
980+        space is unlimited.
981+
982+        If the backend is configured for read-only mode then this will
983+        return 0.
984+
985+        reserved_space is how many bytes to subtract from the answer, so
986+        you can pass how many bytes you would like to leave unused on this
987+        filesystem as reserved_space. """
988+
989+    def get_bucket_shares(self):
990+        """XXX"""
991+
992+    def get_share(self):
993+        """XXX"""
994+
995+    def make_bucket_writer(self):
996+        """XXX"""
997+
998+class IStorageBackendShare(Interface):
999+    """
1000+    This object contains as much as all of the share data.  It is intended
1001+    for lazy evaluation such that in many use cases substantially less than
1002+    all of the share data will be accessed.
1003+    """
1004+    def is_complete(self):
1005+        """
1006+        Returns the share state, or None if the share does not exist.
1007+        """
1008+
1009 class IStorageBucketWriter(Interface):
1010     """
1011     Objects of this kind live on the client side.
1012hunk ./src/allmydata/interfaces.py 2492
1013 
1014 class EmptyPathnameComponentError(Exception):
1015     """The webapi disallows empty pathname components."""
1016+
1017+class IShareStore(Interface):
1018+    pass
1019+
1020addfile ./src/allmydata/storage/backends/__init__.py
1021addfile ./src/allmydata/storage/backends/das/__init__.py
1022addfile ./src/allmydata/storage/backends/das/core.py
1023hunk ./src/allmydata/storage/backends/das/core.py 1
1024+from allmydata.interfaces import IStorageBackend
1025+from allmydata.storage.backends.base import Backend
1026+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1027+from allmydata.util.assertutil import precondition
1028+
1029+import os, re, weakref, struct, time
1030+
1031+from foolscap.api import Referenceable
1032+from twisted.application import service
1033+
1034+from zope.interface import implements
1035+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1036+from allmydata.util import fileutil, idlib, log, time_format
1037+import allmydata # for __full_version__
1038+
1039+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1040+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1041+from allmydata.storage.lease import LeaseInfo
1042+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1043+     create_mutable_sharefile
1044+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1045+from allmydata.storage.crawler import FSBucketCountingCrawler
1046+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1047+
1048+from zope.interface import implements
1049+
1050+class DASCore(Backend):
1051+    implements(IStorageBackend)
1052+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1053+        Backend.__init__(self)
1054+
1055+        self._setup_storage(storedir, readonly, reserved_space)
1056+        self._setup_corruption_advisory()
1057+        self._setup_bucket_counter()
1058+        self._setup_lease_checkerf(expiration_policy)
1059+
1060+    def _setup_storage(self, storedir, readonly, reserved_space):
1061+        self.storedir = storedir
1062+        self.readonly = readonly
1063+        self.reserved_space = int(reserved_space)
1064+        if self.reserved_space:
1065+            if self.get_available_space() is None:
1066+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1067+                        umid="0wZ27w", level=log.UNUSUAL)
1068+
1069+        self.sharedir = os.path.join(self.storedir, "shares")
1070+        fileutil.make_dirs(self.sharedir)
1071+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1072+        self._clean_incomplete()
1073+
1074+    def _clean_incomplete(self):
1075+        fileutil.rm_dir(self.incomingdir)
1076+        fileutil.make_dirs(self.incomingdir)
1077+
1078+    def _setup_corruption_advisory(self):
1079+        # we don't actually create the corruption-advisory dir until necessary
1080+        self.corruption_advisory_dir = os.path.join(self.storedir,
1081+                                                    "corruption-advisories")
1082+
1083+    def _setup_bucket_counter(self):
1084+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1085+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1086+        self.bucket_counter.setServiceParent(self)
1087+
1088+    def _setup_lease_checkerf(self, expiration_policy):
1089+        statefile = os.path.join(self.storedir, "lease_checker.state")
1090+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1091+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1092+        self.lease_checker.setServiceParent(self)
1093+
1094+    def get_available_space(self):
1095+        if self.readonly:
1096+            return 0
1097+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1098+
1099+    def get_shares(self, storage_index):
1100+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1101+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1102+        try:
1103+            for f in os.listdir(finalstoragedir):
1104+                if NUM_RE.match(f):
1105+                    filename = os.path.join(finalstoragedir, f)
1106+                    yield FSBShare(filename, int(f))
1107+        except OSError:
1108+            # Commonly caused by there being no buckets at all.
1109+            pass
1110+       
1111+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1112+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1113+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1114+        return bw
1115+       
1116+
1117+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1118+# and share data. The share data is accessed by RIBucketWriter.write and
1119+# RIBucketReader.read . The lease information is not accessible through these
1120+# interfaces.
1121+
1122+# The share file has the following layout:
1123+#  0x00: share file version number, four bytes, current version is 1
1124+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1125+#  0x08: number of leases, four bytes big-endian
1126+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1127+#  A+0x0c = B: first lease. Lease format is:
1128+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1129+#   B+0x04: renew secret, 32 bytes (SHA256)
1130+#   B+0x24: cancel secret, 32 bytes (SHA256)
1131+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1132+#   B+0x48: next lease, or end of record
1133+
1134+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1135+# but it is still filled in by storage servers in case the storage server
1136+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1137+# share file is moved from one storage server to another. The value stored in
1138+# this field is truncated, so if the actual share data length is >= 2**32,
1139+# then the value stored in this field will be the actual share data length
1140+# modulo 2**32.
1141+
1142+class ImmutableShare:
1143+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1144+    sharetype = "immutable"
1145+
1146+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1147+        """ If max_size is not None then I won't allow more than
1148+        max_size to be written to me. If create=True then max_size
1149+        must not be None. """
1150+        precondition((max_size is not None) or (not create), max_size, create)
1151+        self.shnum = shnum
1152+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1153+        self._max_size = max_size
1154+        if create:
1155+            # touch the file, so later callers will see that we're working on
1156+            # it. Also construct the metadata.
1157+            assert not os.path.exists(self.fname)
1158+            fileutil.make_dirs(os.path.dirname(self.fname))
1159+            f = open(self.fname, 'wb')
1160+            # The second field -- the four-byte share data length -- is no
1161+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1162+            # there in case someone downgrades a storage server from >=
1163+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1164+            # server to another, etc. We do saturation -- a share data length
1165+            # larger than 2**32-1 (what can fit into the field) is marked as
1166+            # the largest length that can fit into the field. That way, even
1167+            # if this does happen, the old < v1.3.0 server will still allow
1168+            # clients to read the first part of the share.
1169+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1170+            f.close()
1171+            self._lease_offset = max_size + 0x0c
1172+            self._num_leases = 0
1173+        else:
1174+            f = open(self.fname, 'rb')
1175+            filesize = os.path.getsize(self.fname)
1176+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1177+            f.close()
1178+            if version != 1:
1179+                msg = "sharefile %s had version %d but we wanted 1" % \
1180+                      (self.fname, version)
1181+                raise UnknownImmutableContainerVersionError(msg)
1182+            self._num_leases = num_leases
1183+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1184+        self._data_offset = 0xc
1185+
1186+    def unlink(self):
1187+        os.unlink(self.fname)
1188+
1189+    def read_share_data(self, offset, length):
1190+        precondition(offset >= 0)
1191+        # Reads beyond the end of the data are truncated. Reads that start
1192+        # beyond the end of the data return an empty string.
1193+        seekpos = self._data_offset+offset
1194+        fsize = os.path.getsize(self.fname)
1195+        actuallength = max(0, min(length, fsize-seekpos))
1196+        if actuallength == 0:
1197+            return ""
1198+        f = open(self.fname, 'rb')
1199+        f.seek(seekpos)
1200+        return f.read(actuallength)
1201+
1202+    def write_share_data(self, offset, data):
1203+        length = len(data)
1204+        precondition(offset >= 0, offset)
1205+        if self._max_size is not None and offset+length > self._max_size:
1206+            raise DataTooLargeError(self._max_size, offset, length)
1207+        f = open(self.fname, 'rb+')
1208+        real_offset = self._data_offset+offset
1209+        f.seek(real_offset)
1210+        assert f.tell() == real_offset
1211+        f.write(data)
1212+        f.close()
1213+
1214+    def _write_lease_record(self, f, lease_number, lease_info):
1215+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1216+        f.seek(offset)
1217+        assert f.tell() == offset
1218+        f.write(lease_info.to_immutable_data())
1219+
1220+    def _read_num_leases(self, f):
1221+        f.seek(0x08)
1222+        (num_leases,) = struct.unpack(">L", f.read(4))
1223+        return num_leases
1224+
1225+    def _write_num_leases(self, f, num_leases):
1226+        f.seek(0x08)
1227+        f.write(struct.pack(">L", num_leases))
1228+
1229+    def _truncate_leases(self, f, num_leases):
1230+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1231+
1232+    def get_leases(self):
1233+        """Yields a LeaseInfo instance for all leases."""
1234+        f = open(self.fname, 'rb')
1235+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1236+        f.seek(self._lease_offset)
1237+        for i in range(num_leases):
1238+            data = f.read(self.LEASE_SIZE)
1239+            if data:
1240+                yield LeaseInfo().from_immutable_data(data)
1241+
1242+    def add_lease(self, lease_info):
1243+        f = open(self.fname, 'rb+')
1244+        num_leases = self._read_num_leases(f)
1245+        self._write_lease_record(f, num_leases, lease_info)
1246+        self._write_num_leases(f, num_leases+1)
1247+        f.close()
1248+
1249+    def renew_lease(self, renew_secret, new_expire_time):
1250+        for i,lease in enumerate(self.get_leases()):
1251+            if constant_time_compare(lease.renew_secret, renew_secret):
1252+                # yup. See if we need to update the owner time.
1253+                if new_expire_time > lease.expiration_time:
1254+                    # yes
1255+                    lease.expiration_time = new_expire_time
1256+                    f = open(self.fname, 'rb+')
1257+                    self._write_lease_record(f, i, lease)
1258+                    f.close()
1259+                return
1260+        raise IndexError("unable to renew non-existent lease")
1261+
1262+    def add_or_renew_lease(self, lease_info):
1263+        try:
1264+            self.renew_lease(lease_info.renew_secret,
1265+                             lease_info.expiration_time)
1266+        except IndexError:
1267+            self.add_lease(lease_info)
1268+
1269+
1270+    def cancel_lease(self, cancel_secret):
1271+        """Remove a lease with the given cancel_secret. If the last lease is
1272+        cancelled, the file will be removed. Return the number of bytes that
1273+        were freed (by truncating the list of leases, and possibly by
1274+        deleting the file. Raise IndexError if there was no lease with the
1275+        given cancel_secret.
1276+        """
1277+
1278+        leases = list(self.get_leases())
1279+        num_leases_removed = 0
1280+        for i,lease in enumerate(leases):
1281+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1282+                leases[i] = None
1283+                num_leases_removed += 1
1284+        if not num_leases_removed:
1285+            raise IndexError("unable to find matching lease to cancel")
1286+        if num_leases_removed:
1287+            # pack and write out the remaining leases. We write these out in
1288+            # the same order as they were added, so that if we crash while
1289+            # doing this, we won't lose any non-cancelled leases.
1290+            leases = [l for l in leases if l] # remove the cancelled leases
1291+            f = open(self.fname, 'rb+')
1292+            for i,lease in enumerate(leases):
1293+                self._write_lease_record(f, i, lease)
1294+            self._write_num_leases(f, len(leases))
1295+            self._truncate_leases(f, len(leases))
1296+            f.close()
1297+        space_freed = self.LEASE_SIZE * num_leases_removed
1298+        if not len(leases):
1299+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1300+            self.unlink()
1301+        return space_freed
1302hunk ./src/allmydata/storage/backends/das/expirer.py 2
1303 import time, os, pickle, struct
1304-from allmydata.storage.crawler import ShareCrawler
1305-from allmydata.storage.shares import get_share_file
1306+from allmydata.storage.crawler import FSShareCrawler
1307 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1308      UnknownImmutableContainerVersionError
1309 from twisted.python import log as twlog
1310hunk ./src/allmydata/storage/backends/das/expirer.py 7
1311 
1312-class LeaseCheckingCrawler(ShareCrawler):
1313+class FSLeaseCheckingCrawler(FSShareCrawler):
1314     """I examine the leases on all shares, determining which are still valid
1315     and which have expired. I can remove the expired leases (if so
1316     configured), and the share will be deleted when the last lease is
1317hunk ./src/allmydata/storage/backends/das/expirer.py 50
1318     slow_start = 360 # wait 6 minutes after startup
1319     minimum_cycle_time = 12*60*60 # not more than twice per day
1320 
1321-    def __init__(self, statefile, historyfile,
1322-                 expiration_enabled, mode,
1323-                 override_lease_duration, # used if expiration_mode=="age"
1324-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1325-                 sharetypes):
1326+    def __init__(self, statefile, historyfile, expiration_policy):
1327         self.historyfile = historyfile
1328hunk ./src/allmydata/storage/backends/das/expirer.py 52
1329-        self.expiration_enabled = expiration_enabled
1330-        self.mode = mode
1331+        self.expiration_enabled = expiration_policy['enabled']
1332+        self.mode = expiration_policy['mode']
1333         self.override_lease_duration = None
1334         self.cutoff_date = None
1335         if self.mode == "age":
1336hunk ./src/allmydata/storage/backends/das/expirer.py 57
1337-            assert isinstance(override_lease_duration, (int, type(None)))
1338-            self.override_lease_duration = override_lease_duration # seconds
1339+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1340+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1341         elif self.mode == "cutoff-date":
1342hunk ./src/allmydata/storage/backends/das/expirer.py 60
1343-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1344+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1345             assert cutoff_date is not None
1346hunk ./src/allmydata/storage/backends/das/expirer.py 62
1347-            self.cutoff_date = cutoff_date
1348+            self.cutoff_date = expiration_policy['cutoff_date']
1349         else:
1350hunk ./src/allmydata/storage/backends/das/expirer.py 64
1351-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1352-        self.sharetypes_to_expire = sharetypes
1353-        ShareCrawler.__init__(self, statefile)
1354+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1355+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1356+        FSShareCrawler.__init__(self, statefile)
1357 
1358     def add_initial_state(self):
1359         # we fill ["cycle-to-date"] here (even though they will be reset in
1360hunk ./src/allmydata/storage/backends/das/expirer.py 156
1361 
1362     def process_share(self, sharefilename):
1363         # first, find out what kind of a share it is
1364-        sf = get_share_file(sharefilename)
1365+        f = open(sharefilename, "rb")
1366+        prefix = f.read(32)
1367+        f.close()
1368+        if prefix == MutableShareFile.MAGIC:
1369+            sf = MutableShareFile(sharefilename)
1370+        else:
1371+            # otherwise assume it's immutable
1372+            sf = FSBShare(sharefilename)
1373         sharetype = sf.sharetype
1374         now = time.time()
1375         s = self.stat(sharefilename)
1376addfile ./src/allmydata/storage/backends/null/__init__.py
1377addfile ./src/allmydata/storage/backends/null/core.py
1378hunk ./src/allmydata/storage/backends/null/core.py 1
1379+from allmydata.storage.backends.base import Backend
1380+
1381+class NullCore(Backend):
1382+    def __init__(self):
1383+        Backend.__init__(self)
1384+
1385+    def get_available_space(self):
1386+        return None
1387+
1388+    def get_shares(self, storage_index):
1389+        return set()
1390+
1391+    def get_share(self, storage_index, sharenum):
1392+        return None
1393+
1394+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1395+        return NullBucketWriter()
1396hunk ./src/allmydata/storage/crawler.py 12
1397 class TimeSliceExceeded(Exception):
1398     pass
1399 
1400-class ShareCrawler(service.MultiService):
1401+class FSShareCrawler(service.MultiService):
1402     """A subcless of ShareCrawler is attached to a StorageServer, and
1403     periodically walks all of its shares, processing each one in some
1404     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1405hunk ./src/allmydata/storage/crawler.py 68
1406     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1407     minimum_cycle_time = 300 # don't run a cycle faster than this
1408 
1409-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1410+    def __init__(self, statefname, allowed_cpu_percentage=None):
1411         service.MultiService.__init__(self)
1412         if allowed_cpu_percentage is not None:
1413             self.allowed_cpu_percentage = allowed_cpu_percentage
1414hunk ./src/allmydata/storage/crawler.py 72
1415-        self.backend = backend
1416+        self.statefname = statefname
1417         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1418                          for i in range(2**10)]
1419         self.prefixes.sort()
1420hunk ./src/allmydata/storage/crawler.py 192
1421         #                            of the last bucket to be processed, or
1422         #                            None if we are sleeping between cycles
1423         try:
1424-            f = open(self.statefile, "rb")
1425+            f = open(self.statefname, "rb")
1426             state = pickle.load(f)
1427             f.close()
1428         except EnvironmentError:
1429hunk ./src/allmydata/storage/crawler.py 230
1430         else:
1431             last_complete_prefix = self.prefixes[lcpi]
1432         self.state["last-complete-prefix"] = last_complete_prefix
1433-        tmpfile = self.statefile + ".tmp"
1434+        tmpfile = self.statefname + ".tmp"
1435         f = open(tmpfile, "wb")
1436         pickle.dump(self.state, f)
1437         f.close()
1438hunk ./src/allmydata/storage/crawler.py 433
1439         pass
1440 
1441 
1442-class BucketCountingCrawler(ShareCrawler):
1443+class FSBucketCountingCrawler(FSShareCrawler):
1444     """I keep track of how many buckets are being managed by this server.
1445     This is equivalent to the number of distributed files and directories for
1446     which I am providing storage. The actual number of files+directories in
1447hunk ./src/allmydata/storage/crawler.py 446
1448 
1449     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1450 
1451-    def __init__(self, statefile, num_sample_prefixes=1):
1452-        ShareCrawler.__init__(self, statefile)
1453+    def __init__(self, statefname, num_sample_prefixes=1):
1454+        FSShareCrawler.__init__(self, statefname)
1455         self.num_sample_prefixes = num_sample_prefixes
1456 
1457     def add_initial_state(self):
1458hunk ./src/allmydata/storage/immutable.py 14
1459 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1460      DataTooLargeError
1461 
1462-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1463-# and share data. The share data is accessed by RIBucketWriter.write and
1464-# RIBucketReader.read . The lease information is not accessible through these
1465-# interfaces.
1466-
1467-# The share file has the following layout:
1468-#  0x00: share file version number, four bytes, current version is 1
1469-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1470-#  0x08: number of leases, four bytes big-endian
1471-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1472-#  A+0x0c = B: first lease. Lease format is:
1473-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1474-#   B+0x04: renew secret, 32 bytes (SHA256)
1475-#   B+0x24: cancel secret, 32 bytes (SHA256)
1476-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1477-#   B+0x48: next lease, or end of record
1478-
1479-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1480-# but it is still filled in by storage servers in case the storage server
1481-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1482-# share file is moved from one storage server to another. The value stored in
1483-# this field is truncated, so if the actual share data length is >= 2**32,
1484-# then the value stored in this field will be the actual share data length
1485-# modulo 2**32.
1486-
1487-class ShareFile:
1488-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1489-    sharetype = "immutable"
1490-
1491-    def __init__(self, filename, max_size=None, create=False):
1492-        """ If max_size is not None then I won't allow more than
1493-        max_size to be written to me. If create=True then max_size
1494-        must not be None. """
1495-        precondition((max_size is not None) or (not create), max_size, create)
1496-        self.home = filename
1497-        self._max_size = max_size
1498-        if create:
1499-            # touch the file, so later callers will see that we're working on
1500-            # it. Also construct the metadata.
1501-            assert not os.path.exists(self.home)
1502-            fileutil.make_dirs(os.path.dirname(self.home))
1503-            f = open(self.home, 'wb')
1504-            # The second field -- the four-byte share data length -- is no
1505-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1506-            # there in case someone downgrades a storage server from >=
1507-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1508-            # server to another, etc. We do saturation -- a share data length
1509-            # larger than 2**32-1 (what can fit into the field) is marked as
1510-            # the largest length that can fit into the field. That way, even
1511-            # if this does happen, the old < v1.3.0 server will still allow
1512-            # clients to read the first part of the share.
1513-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1514-            f.close()
1515-            self._lease_offset = max_size + 0x0c
1516-            self._num_leases = 0
1517-        else:
1518-            f = open(self.home, 'rb')
1519-            filesize = os.path.getsize(self.home)
1520-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1521-            f.close()
1522-            if version != 1:
1523-                msg = "sharefile %s had version %d but we wanted 1" % \
1524-                      (filename, version)
1525-                raise UnknownImmutableContainerVersionError(msg)
1526-            self._num_leases = num_leases
1527-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1528-        self._data_offset = 0xc
1529-
1530-    def unlink(self):
1531-        os.unlink(self.home)
1532-
1533-    def read_share_data(self, offset, length):
1534-        precondition(offset >= 0)
1535-        # Reads beyond the end of the data are truncated. Reads that start
1536-        # beyond the end of the data return an empty string.
1537-        seekpos = self._data_offset+offset
1538-        fsize = os.path.getsize(self.home)
1539-        actuallength = max(0, min(length, fsize-seekpos))
1540-        if actuallength == 0:
1541-            return ""
1542-        f = open(self.home, 'rb')
1543-        f.seek(seekpos)
1544-        return f.read(actuallength)
1545-
1546-    def write_share_data(self, offset, data):
1547-        length = len(data)
1548-        precondition(offset >= 0, offset)
1549-        if self._max_size is not None and offset+length > self._max_size:
1550-            raise DataTooLargeError(self._max_size, offset, length)
1551-        f = open(self.home, 'rb+')
1552-        real_offset = self._data_offset+offset
1553-        f.seek(real_offset)
1554-        assert f.tell() == real_offset
1555-        f.write(data)
1556-        f.close()
1557-
1558-    def _write_lease_record(self, f, lease_number, lease_info):
1559-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1560-        f.seek(offset)
1561-        assert f.tell() == offset
1562-        f.write(lease_info.to_immutable_data())
1563-
1564-    def _read_num_leases(self, f):
1565-        f.seek(0x08)
1566-        (num_leases,) = struct.unpack(">L", f.read(4))
1567-        return num_leases
1568-
1569-    def _write_num_leases(self, f, num_leases):
1570-        f.seek(0x08)
1571-        f.write(struct.pack(">L", num_leases))
1572-
1573-    def _truncate_leases(self, f, num_leases):
1574-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1575-
1576-    def get_leases(self):
1577-        """Yields a LeaseInfo instance for all leases."""
1578-        f = open(self.home, 'rb')
1579-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1580-        f.seek(self._lease_offset)
1581-        for i in range(num_leases):
1582-            data = f.read(self.LEASE_SIZE)
1583-            if data:
1584-                yield LeaseInfo().from_immutable_data(data)
1585-
1586-    def add_lease(self, lease_info):
1587-        f = open(self.home, 'rb+')
1588-        num_leases = self._read_num_leases(f)
1589-        self._write_lease_record(f, num_leases, lease_info)
1590-        self._write_num_leases(f, num_leases+1)
1591-        f.close()
1592-
1593-    def renew_lease(self, renew_secret, new_expire_time):
1594-        for i,lease in enumerate(self.get_leases()):
1595-            if constant_time_compare(lease.renew_secret, renew_secret):
1596-                # yup. See if we need to update the owner time.
1597-                if new_expire_time > lease.expiration_time:
1598-                    # yes
1599-                    lease.expiration_time = new_expire_time
1600-                    f = open(self.home, 'rb+')
1601-                    self._write_lease_record(f, i, lease)
1602-                    f.close()
1603-                return
1604-        raise IndexError("unable to renew non-existent lease")
1605-
1606-    def add_or_renew_lease(self, lease_info):
1607-        try:
1608-            self.renew_lease(lease_info.renew_secret,
1609-                             lease_info.expiration_time)
1610-        except IndexError:
1611-            self.add_lease(lease_info)
1612-
1613-
1614-    def cancel_lease(self, cancel_secret):
1615-        """Remove a lease with the given cancel_secret. If the last lease is
1616-        cancelled, the file will be removed. Return the number of bytes that
1617-        were freed (by truncating the list of leases, and possibly by
1618-        deleting the file. Raise IndexError if there was no lease with the
1619-        given cancel_secret.
1620-        """
1621-
1622-        leases = list(self.get_leases())
1623-        num_leases_removed = 0
1624-        for i,lease in enumerate(leases):
1625-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1626-                leases[i] = None
1627-                num_leases_removed += 1
1628-        if not num_leases_removed:
1629-            raise IndexError("unable to find matching lease to cancel")
1630-        if num_leases_removed:
1631-            # pack and write out the remaining leases. We write these out in
1632-            # the same order as they were added, so that if we crash while
1633-            # doing this, we won't lose any non-cancelled leases.
1634-            leases = [l for l in leases if l] # remove the cancelled leases
1635-            f = open(self.home, 'rb+')
1636-            for i,lease in enumerate(leases):
1637-                self._write_lease_record(f, i, lease)
1638-            self._write_num_leases(f, len(leases))
1639-            self._truncate_leases(f, len(leases))
1640-            f.close()
1641-        space_freed = self.LEASE_SIZE * num_leases_removed
1642-        if not len(leases):
1643-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1644-            self.unlink()
1645-        return space_freed
1646-class NullBucketWriter(Referenceable):
1647-    implements(RIBucketWriter)
1648-
1649-    def remote_write(self, offset, data):
1650-        return
1651-
1652 class BucketWriter(Referenceable):
1653     implements(RIBucketWriter)
1654 
1655hunk ./src/allmydata/storage/immutable.py 17
1656-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1657+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1658         self.ss = ss
1659hunk ./src/allmydata/storage/immutable.py 19
1660-        self.incominghome = incominghome
1661-        self.finalhome = finalhome
1662         self._max_size = max_size # don't allow the client to write more than this
1663         self._canary = canary
1664         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1665hunk ./src/allmydata/storage/immutable.py 24
1666         self.closed = False
1667         self.throw_out_all_data = False
1668-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1669+        self._sharefile = immutableshare
1670         # also, add our lease to the file now, so that other ones can be
1671         # added by simultaneous uploaders
1672         self._sharefile.add_lease(lease_info)
1673hunk ./src/allmydata/storage/server.py 16
1674 from allmydata.storage.lease import LeaseInfo
1675 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1676      create_mutable_sharefile
1677-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1678-from allmydata.storage.crawler import BucketCountingCrawler
1679-from allmydata.storage.expirer import LeaseCheckingCrawler
1680 
1681 from zope.interface import implements
1682 
1683hunk ./src/allmydata/storage/server.py 19
1684-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1685-# be started and stopped.
1686-class Backend(service.MultiService):
1687-    implements(IStatsProducer)
1688-    def __init__(self):
1689-        service.MultiService.__init__(self)
1690-
1691-    def get_bucket_shares(self):
1692-        """XXX"""
1693-        raise NotImplementedError
1694-
1695-    def get_share(self):
1696-        """XXX"""
1697-        raise NotImplementedError
1698-
1699-    def make_bucket_writer(self):
1700-        """XXX"""
1701-        raise NotImplementedError
1702-
1703-class NullBackend(Backend):
1704-    def __init__(self):
1705-        Backend.__init__(self)
1706-
1707-    def get_available_space(self):
1708-        return None
1709-
1710-    def get_bucket_shares(self, storage_index):
1711-        return set()
1712-
1713-    def get_share(self, storage_index, sharenum):
1714-        return None
1715-
1716-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1717-        return NullBucketWriter()
1718-
1719-class FSBackend(Backend):
1720-    def __init__(self, storedir, readonly=False, reserved_space=0):
1721-        Backend.__init__(self)
1722-
1723-        self._setup_storage(storedir, readonly, reserved_space)
1724-        self._setup_corruption_advisory()
1725-        self._setup_bucket_counter()
1726-        self._setup_lease_checkerf()
1727-
1728-    def _setup_storage(self, storedir, readonly, reserved_space):
1729-        self.storedir = storedir
1730-        self.readonly = readonly
1731-        self.reserved_space = int(reserved_space)
1732-        if self.reserved_space:
1733-            if self.get_available_space() is None:
1734-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1735-                        umid="0wZ27w", level=log.UNUSUAL)
1736-
1737-        self.sharedir = os.path.join(self.storedir, "shares")
1738-        fileutil.make_dirs(self.sharedir)
1739-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1740-        self._clean_incomplete()
1741-
1742-    def _clean_incomplete(self):
1743-        fileutil.rm_dir(self.incomingdir)
1744-        fileutil.make_dirs(self.incomingdir)
1745-
1746-    def _setup_corruption_advisory(self):
1747-        # we don't actually create the corruption-advisory dir until necessary
1748-        self.corruption_advisory_dir = os.path.join(self.storedir,
1749-                                                    "corruption-advisories")
1750-
1751-    def _setup_bucket_counter(self):
1752-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1753-        self.bucket_counter = BucketCountingCrawler(statefile)
1754-        self.bucket_counter.setServiceParent(self)
1755-
1756-    def _setup_lease_checkerf(self):
1757-        statefile = os.path.join(self.storedir, "lease_checker.state")
1758-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1759-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1760-                                   expiration_enabled, expiration_mode,
1761-                                   expiration_override_lease_duration,
1762-                                   expiration_cutoff_date,
1763-                                   expiration_sharetypes)
1764-        self.lease_checker.setServiceParent(self)
1765-
1766-    def get_available_space(self):
1767-        if self.readonly:
1768-            return 0
1769-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1770-
1771-    def get_bucket_shares(self, storage_index):
1772-        """Return a list of (shnum, pathname) tuples for files that hold
1773-        shares for this storage_index. In each tuple, 'shnum' will always be
1774-        the integer form of the last component of 'pathname'."""
1775-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1776-        try:
1777-            for f in os.listdir(storagedir):
1778-                if NUM_RE.match(f):
1779-                    filename = os.path.join(storagedir, f)
1780-                    yield (int(f), filename)
1781-        except OSError:
1782-            # Commonly caused by there being no buckets at all.
1783-            pass
1784-
1785 # storage/
1786 # storage/shares/incoming
1787 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1788hunk ./src/allmydata/storage/server.py 32
1789 # $SHARENUM matches this regex:
1790 NUM_RE=re.compile("^[0-9]+$")
1791 
1792-
1793-
1794 class StorageServer(service.MultiService, Referenceable):
1795     implements(RIStorageServer, IStatsProducer)
1796     name = 'storage'
1797hunk ./src/allmydata/storage/server.py 35
1798-    LeaseCheckerClass = LeaseCheckingCrawler
1799 
1800     def __init__(self, nodeid, backend, reserved_space=0,
1801                  readonly_storage=False,
1802hunk ./src/allmydata/storage/server.py 38
1803-                 stats_provider=None,
1804-                 expiration_enabled=False,
1805-                 expiration_mode="age",
1806-                 expiration_override_lease_duration=None,
1807-                 expiration_cutoff_date=None,
1808-                 expiration_sharetypes=("mutable", "immutable")):
1809+                 stats_provider=None ):
1810         service.MultiService.__init__(self)
1811         assert isinstance(nodeid, str)
1812         assert len(nodeid) == 20
1813hunk ./src/allmydata/storage/server.py 217
1814         # they asked about: this will save them a lot of work. Add or update
1815         # leases for all of them: if they want us to hold shares for this
1816         # file, they'll want us to hold leases for this file.
1817-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1818-            alreadygot.add(shnum)
1819-            sf = ShareFile(fn)
1820-            sf.add_or_renew_lease(lease_info)
1821-
1822-        for shnum in sharenums:
1823-            share = self.backend.get_share(storage_index, shnum)
1824+        for share in self.backend.get_shares(storage_index):
1825+            alreadygot.add(share.shnum)
1826+            share.add_or_renew_lease(lease_info)
1827 
1828hunk ./src/allmydata/storage/server.py 221
1829-            if not share:
1830-                if (not limited) or (remaining_space >= max_space_per_bucket):
1831-                    # ok! we need to create the new share file.
1832-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1833-                                      max_space_per_bucket, lease_info, canary)
1834-                    bucketwriters[shnum] = bw
1835-                    self._active_writers[bw] = 1
1836-                    if limited:
1837-                        remaining_space -= max_space_per_bucket
1838-                else:
1839-                    # bummer! not enough space to accept this bucket
1840-                    pass
1841+        for shnum in (sharenums - alreadygot):
1842+            if (not limited) or (remaining_space >= max_space_per_bucket):
1843+                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
1844+                self.backend.set_storage_server(self)
1845+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1846+                                                     max_space_per_bucket, lease_info, canary)
1847+                bucketwriters[shnum] = bw
1848+                self._active_writers[bw] = 1
1849+                if limited:
1850+                    remaining_space -= max_space_per_bucket
1851 
1852hunk ./src/allmydata/storage/server.py 232
1853-            elif share.is_complete():
1854-                # great! we already have it. easy.
1855-                pass
1856-            elif not share.is_complete():
1857-                # Note that we don't create BucketWriters for shnums that
1858-                # have a partial share (in incoming/), so if a second upload
1859-                # occurs while the first is still in progress, the second
1860-                # uploader will use different storage servers.
1861-                pass
1862+        #XXX We SHOULD DOCUMENT LATER.
1863 
1864         self.add_latency("allocate", time.time() - start)
1865         return alreadygot, bucketwriters
1866hunk ./src/allmydata/storage/server.py 238
1867 
1868     def _iter_share_files(self, storage_index):
1869-        for shnum, filename in self._get_bucket_shares(storage_index):
1870+        for shnum, filename in self._get_shares(storage_index):
1871             f = open(filename, 'rb')
1872             header = f.read(32)
1873             f.close()
1874hunk ./src/allmydata/storage/server.py 318
1875         si_s = si_b2a(storage_index)
1876         log.msg("storage: get_buckets %s" % si_s)
1877         bucketreaders = {} # k: sharenum, v: BucketReader
1878-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1879+        for shnum, filename in self.backend.get_shares(storage_index):
1880             bucketreaders[shnum] = BucketReader(self, filename,
1881                                                 storage_index, shnum)
1882         self.add_latency("get", time.time() - start)
1883hunk ./src/allmydata/storage/server.py 334
1884         # since all shares get the same lease data, we just grab the leases
1885         # from the first share
1886         try:
1887-            shnum, filename = self._get_bucket_shares(storage_index).next()
1888+            shnum, filename = self._get_shares(storage_index).next()
1889             sf = ShareFile(filename)
1890             return sf.get_leases()
1891         except StopIteration:
1892hunk ./src/allmydata/storage/shares.py 1
1893-#! /usr/bin/python
1894-
1895-from allmydata.storage.mutable import MutableShareFile
1896-from allmydata.storage.immutable import ShareFile
1897-
1898-def get_share_file(filename):
1899-    f = open(filename, "rb")
1900-    prefix = f.read(32)
1901-    f.close()
1902-    if prefix == MutableShareFile.MAGIC:
1903-        return MutableShareFile(filename)
1904-    # otherwise assume it's immutable
1905-    return ShareFile(filename)
1906-
1907rmfile ./src/allmydata/storage/shares.py
1908hunk ./src/allmydata/test/common_util.py 20
1909 
1910 def flip_one_bit(s, offset=0, size=None):
1911     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1912-    than offset+size. """
1913+    than offset+size. Return the new string. """
1914     if size is None:
1915         size=len(s)-offset
1916     i = randrange(offset, offset+size)
1917hunk ./src/allmydata/test/test_backends.py 7
1918 
1919 from allmydata.test.common_util import ReallyEqualMixin
1920 
1921-import mock
1922+import mock, os
1923 
1924 # This is the code that we're going to be testing.
1925hunk ./src/allmydata/test/test_backends.py 10
1926-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1927+from allmydata.storage.server import StorageServer
1928+
1929+from allmydata.storage.backends.das.core import DASCore
1930+from allmydata.storage.backends.null.core import NullCore
1931+
1932 
1933 # The following share file contents was generated with
1934 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1935hunk ./src/allmydata/test/test_backends.py 22
1936 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1937 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1938 
1939-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1940+tempdir = 'teststoredir'
1941+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1942+sharefname = os.path.join(sharedirname, '0')
1943 
1944 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1945     @mock.patch('time.time')
1946hunk ./src/allmydata/test/test_backends.py 58
1947         filesystem in only the prescribed ways. """
1948 
1949         def call_open(fname, mode):
1950-            if fname == 'testdir/bucket_counter.state':
1951-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1952-            elif fname == 'testdir/lease_checker.state':
1953-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1954-            elif fname == 'testdir/lease_checker.history':
1955+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1956+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1957+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1958+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1959+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1960                 return StringIO()
1961             else:
1962                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1963hunk ./src/allmydata/test/test_backends.py 124
1964     @mock.patch('__builtin__.open')
1965     def setUp(self, mockopen):
1966         def call_open(fname, mode):
1967-            if fname == 'testdir/bucket_counter.state':
1968-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1969-            elif fname == 'testdir/lease_checker.state':
1970-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1971-            elif fname == 'testdir/lease_checker.history':
1972+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1973+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1974+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1975+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1976+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1977                 return StringIO()
1978         mockopen.side_effect = call_open
1979hunk ./src/allmydata/test/test_backends.py 131
1980-
1981-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1982+        expiration_policy = {'enabled' : False,
1983+                             'mode' : 'age',
1984+                             'override_lease_duration' : None,
1985+                             'cutoff_date' : None,
1986+                             'sharetypes' : None}
1987+        testbackend = DASCore(tempdir, expiration_policy)
1988+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1989 
1990     @mock.patch('time.time')
1991     @mock.patch('os.mkdir')
1992hunk ./src/allmydata/test/test_backends.py 148
1993         """ Write a new share. """
1994 
1995         def call_listdir(dirname):
1996-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1997-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1998+            self.failUnlessReallyEqual(dirname, sharedirname)
1999+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2000 
2001         mocklistdir.side_effect = call_listdir
2002 
2003hunk ./src/allmydata/test/test_backends.py 178
2004 
2005         sharefile = MockFile()
2006         def call_open(fname, mode):
2007-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
2008+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
2009             return sharefile
2010 
2011         mockopen.side_effect = call_open
2012hunk ./src/allmydata/test/test_backends.py 200
2013         StorageServer object. """
2014 
2015         def call_listdir(dirname):
2016-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
2017+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2018             return ['0']
2019 
2020         mocklistdir.side_effect = call_listdir
2021}
2022[checkpoint patch
2023wilcoxjg@gmail.com**20110626165715
2024 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
2025] {
2026hunk ./src/allmydata/storage/backends/das/core.py 21
2027 from allmydata.storage.lease import LeaseInfo
2028 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2029      create_mutable_sharefile
2030-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2031+from allmydata.storage.immutable import BucketWriter, BucketReader
2032 from allmydata.storage.crawler import FSBucketCountingCrawler
2033 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2034 
2035hunk ./src/allmydata/storage/backends/das/core.py 27
2036 from zope.interface import implements
2037 
2038+# $SHARENUM matches this regex:
2039+NUM_RE=re.compile("^[0-9]+$")
2040+
2041 class DASCore(Backend):
2042     implements(IStorageBackend)
2043     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2044hunk ./src/allmydata/storage/backends/das/core.py 80
2045         return fileutil.get_available_space(self.storedir, self.reserved_space)
2046 
2047     def get_shares(self, storage_index):
2048-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2049+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
2050         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2051         try:
2052             for f in os.listdir(finalstoragedir):
2053hunk ./src/allmydata/storage/backends/das/core.py 86
2054                 if NUM_RE.match(f):
2055                     filename = os.path.join(finalstoragedir, f)
2056-                    yield FSBShare(filename, int(f))
2057+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2058         except OSError:
2059             # Commonly caused by there being no buckets at all.
2060             pass
2061hunk ./src/allmydata/storage/backends/das/core.py 95
2062         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2063         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2064         return bw
2065+
2066+    def set_storage_server(self, ss):
2067+        self.ss = ss
2068         
2069 
2070 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2071hunk ./src/allmydata/storage/server.py 29
2072 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2073 # base-32 chars).
2074 
2075-# $SHARENUM matches this regex:
2076-NUM_RE=re.compile("^[0-9]+$")
2077 
2078 class StorageServer(service.MultiService, Referenceable):
2079     implements(RIStorageServer, IStatsProducer)
2080}
2081[checkpoint4
2082wilcoxjg@gmail.com**20110628202202
2083 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2084] {
2085hunk ./src/allmydata/storage/backends/das/core.py 96
2086         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2087         return bw
2088 
2089+    def make_bucket_reader(self, share):
2090+        return BucketReader(self.ss, share)
2091+
2092     def set_storage_server(self, ss):
2093         self.ss = ss
2094         
2095hunk ./src/allmydata/storage/backends/das/core.py 138
2096         must not be None. """
2097         precondition((max_size is not None) or (not create), max_size, create)
2098         self.shnum = shnum
2099+        self.storage_index = storageindex
2100         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2101         self._max_size = max_size
2102         if create:
2103hunk ./src/allmydata/storage/backends/das/core.py 173
2104             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2105         self._data_offset = 0xc
2106 
2107+    def get_shnum(self):
2108+        return self.shnum
2109+
2110     def unlink(self):
2111         os.unlink(self.fname)
2112 
2113hunk ./src/allmydata/storage/backends/null/core.py 2
2114 from allmydata.storage.backends.base import Backend
2115+from allmydata.storage.immutable import BucketWriter, BucketReader
2116 
2117 class NullCore(Backend):
2118     def __init__(self):
2119hunk ./src/allmydata/storage/backends/null/core.py 17
2120     def get_share(self, storage_index, sharenum):
2121         return None
2122 
2123-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2124-        return NullBucketWriter()
2125+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2126+       
2127+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2128+
2129+    def set_storage_server(self, ss):
2130+        self.ss = ss
2131+
2132+class ImmutableShare:
2133+    sharetype = "immutable"
2134+
2135+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2136+        """ If max_size is not None then I won't allow more than
2137+        max_size to be written to me. If create=True then max_size
2138+        must not be None. """
2139+        precondition((max_size is not None) or (not create), max_size, create)
2140+        self.shnum = shnum
2141+        self.storage_index = storageindex
2142+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2143+        self._max_size = max_size
2144+        if create:
2145+            # touch the file, so later callers will see that we're working on
2146+            # it. Also construct the metadata.
2147+            assert not os.path.exists(self.fname)
2148+            fileutil.make_dirs(os.path.dirname(self.fname))
2149+            f = open(self.fname, 'wb')
2150+            # The second field -- the four-byte share data length -- is no
2151+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2152+            # there in case someone downgrades a storage server from >=
2153+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2154+            # server to another, etc. We do saturation -- a share data length
2155+            # larger than 2**32-1 (what can fit into the field) is marked as
2156+            # the largest length that can fit into the field. That way, even
2157+            # if this does happen, the old < v1.3.0 server will still allow
2158+            # clients to read the first part of the share.
2159+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2160+            f.close()
2161+            self._lease_offset = max_size + 0x0c
2162+            self._num_leases = 0
2163+        else:
2164+            f = open(self.fname, 'rb')
2165+            filesize = os.path.getsize(self.fname)
2166+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2167+            f.close()
2168+            if version != 1:
2169+                msg = "sharefile %s had version %d but we wanted 1" % \
2170+                      (self.fname, version)
2171+                raise UnknownImmutableContainerVersionError(msg)
2172+            self._num_leases = num_leases
2173+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2174+        self._data_offset = 0xc
2175+
2176+    def get_shnum(self):
2177+        return self.shnum
2178+
2179+    def unlink(self):
2180+        os.unlink(self.fname)
2181+
2182+    def read_share_data(self, offset, length):
2183+        precondition(offset >= 0)
2184+        # Reads beyond the end of the data are truncated. Reads that start
2185+        # beyond the end of the data return an empty string.
2186+        seekpos = self._data_offset+offset
2187+        fsize = os.path.getsize(self.fname)
2188+        actuallength = max(0, min(length, fsize-seekpos))
2189+        if actuallength == 0:
2190+            return ""
2191+        f = open(self.fname, 'rb')
2192+        f.seek(seekpos)
2193+        return f.read(actuallength)
2194+
2195+    def write_share_data(self, offset, data):
2196+        length = len(data)
2197+        precondition(offset >= 0, offset)
2198+        if self._max_size is not None and offset+length > self._max_size:
2199+            raise DataTooLargeError(self._max_size, offset, length)
2200+        f = open(self.fname, 'rb+')
2201+        real_offset = self._data_offset+offset
2202+        f.seek(real_offset)
2203+        assert f.tell() == real_offset
2204+        f.write(data)
2205+        f.close()
2206+
2207+    def _write_lease_record(self, f, lease_number, lease_info):
2208+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2209+        f.seek(offset)
2210+        assert f.tell() == offset
2211+        f.write(lease_info.to_immutable_data())
2212+
2213+    def _read_num_leases(self, f):
2214+        f.seek(0x08)
2215+        (num_leases,) = struct.unpack(">L", f.read(4))
2216+        return num_leases
2217+
2218+    def _write_num_leases(self, f, num_leases):
2219+        f.seek(0x08)
2220+        f.write(struct.pack(">L", num_leases))
2221+
2222+    def _truncate_leases(self, f, num_leases):
2223+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2224+
2225+    def get_leases(self):
2226+        """Yields a LeaseInfo instance for all leases."""
2227+        f = open(self.fname, 'rb')
2228+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2229+        f.seek(self._lease_offset)
2230+        for i in range(num_leases):
2231+            data = f.read(self.LEASE_SIZE)
2232+            if data:
2233+                yield LeaseInfo().from_immutable_data(data)
2234+
2235+    def add_lease(self, lease_info):
2236+        f = open(self.fname, 'rb+')
2237+        num_leases = self._read_num_leases(f)
2238+        self._write_lease_record(f, num_leases, lease_info)
2239+        self._write_num_leases(f, num_leases+1)
2240+        f.close()
2241+
2242+    def renew_lease(self, renew_secret, new_expire_time):
2243+        for i,lease in enumerate(self.get_leases()):
2244+            if constant_time_compare(lease.renew_secret, renew_secret):
2245+                # yup. See if we need to update the owner time.
2246+                if new_expire_time > lease.expiration_time:
2247+                    # yes
2248+                    lease.expiration_time = new_expire_time
2249+                    f = open(self.fname, 'rb+')
2250+                    self._write_lease_record(f, i, lease)
2251+                    f.close()
2252+                return
2253+        raise IndexError("unable to renew non-existent lease")
2254+
2255+    def add_or_renew_lease(self, lease_info):
2256+        try:
2257+            self.renew_lease(lease_info.renew_secret,
2258+                             lease_info.expiration_time)
2259+        except IndexError:
2260+            self.add_lease(lease_info)
2261+
2262+
2263+    def cancel_lease(self, cancel_secret):
2264+        """Remove a lease with the given cancel_secret. If the last lease is
2265+        cancelled, the file will be removed. Return the number of bytes that
2266+        were freed (by truncating the list of leases, and possibly by
2267+        deleting the file. Raise IndexError if there was no lease with the
2268+        given cancel_secret.
2269+        """
2270+
2271+        leases = list(self.get_leases())
2272+        num_leases_removed = 0
2273+        for i,lease in enumerate(leases):
2274+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2275+                leases[i] = None
2276+                num_leases_removed += 1
2277+        if not num_leases_removed:
2278+            raise IndexError("unable to find matching lease to cancel")
2279+        if num_leases_removed:
2280+            # pack and write out the remaining leases. We write these out in
2281+            # the same order as they were added, so that if we crash while
2282+            # doing this, we won't lose any non-cancelled leases.
2283+            leases = [l for l in leases if l] # remove the cancelled leases
2284+            f = open(self.fname, 'rb+')
2285+            for i,lease in enumerate(leases):
2286+                self._write_lease_record(f, i, lease)
2287+            self._write_num_leases(f, len(leases))
2288+            self._truncate_leases(f, len(leases))
2289+            f.close()
2290+        space_freed = self.LEASE_SIZE * num_leases_removed
2291+        if not len(leases):
2292+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2293+            self.unlink()
2294+        return space_freed
2295hunk ./src/allmydata/storage/immutable.py 114
2296 class BucketReader(Referenceable):
2297     implements(RIBucketReader)
2298 
2299-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2300+    def __init__(self, ss, share):
2301         self.ss = ss
2302hunk ./src/allmydata/storage/immutable.py 116
2303-        self._share_file = ShareFile(sharefname)
2304-        self.storage_index = storage_index
2305-        self.shnum = shnum
2306+        self._share_file = share
2307+        self.storage_index = share.storage_index
2308+        self.shnum = share.shnum
2309 
2310     def __repr__(self):
2311         return "<%s %s %s>" % (self.__class__.__name__,
2312hunk ./src/allmydata/storage/server.py 316
2313         si_s = si_b2a(storage_index)
2314         log.msg("storage: get_buckets %s" % si_s)
2315         bucketreaders = {} # k: sharenum, v: BucketReader
2316-        for shnum, filename in self.backend.get_shares(storage_index):
2317-            bucketreaders[shnum] = BucketReader(self, filename,
2318-                                                storage_index, shnum)
2319+        self.backend.set_storage_server(self)
2320+        for share in self.backend.get_shares(storage_index):
2321+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2322         self.add_latency("get", time.time() - start)
2323         return bucketreaders
2324 
2325hunk ./src/allmydata/test/test_backends.py 25
2326 tempdir = 'teststoredir'
2327 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2328 sharefname = os.path.join(sharedirname, '0')
2329+expiration_policy = {'enabled' : False,
2330+                     'mode' : 'age',
2331+                     'override_lease_duration' : None,
2332+                     'cutoff_date' : None,
2333+                     'sharetypes' : None}
2334 
2335 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2336     @mock.patch('time.time')
2337hunk ./src/allmydata/test/test_backends.py 43
2338         tries to read or write to the file system. """
2339 
2340         # Now begin the test.
2341-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2342+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2343 
2344         self.failIf(mockisdir.called)
2345         self.failIf(mocklistdir.called)
2346hunk ./src/allmydata/test/test_backends.py 74
2347         mockopen.side_effect = call_open
2348 
2349         # Now begin the test.
2350-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2351+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2352 
2353         self.failIf(mockisdir.called)
2354         self.failIf(mocklistdir.called)
2355hunk ./src/allmydata/test/test_backends.py 86
2356 
2357 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2358     def setUp(self):
2359-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2360+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2361 
2362     @mock.patch('os.mkdir')
2363     @mock.patch('__builtin__.open')
2364hunk ./src/allmydata/test/test_backends.py 136
2365             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2366                 return StringIO()
2367         mockopen.side_effect = call_open
2368-        expiration_policy = {'enabled' : False,
2369-                             'mode' : 'age',
2370-                             'override_lease_duration' : None,
2371-                             'cutoff_date' : None,
2372-                             'sharetypes' : None}
2373         testbackend = DASCore(tempdir, expiration_policy)
2374         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2375 
2376}
2377[checkpoint5
2378wilcoxjg@gmail.com**20110705034626
2379 Ignore-this: 255780bd58299b0aa33c027e9d008262
2380] {
2381addfile ./src/allmydata/storage/backends/base.py
2382hunk ./src/allmydata/storage/backends/base.py 1
2383+from twisted.application import service
2384+
2385+class Backend(service.MultiService):
2386+    def __init__(self):
2387+        service.MultiService.__init__(self)
2388hunk ./src/allmydata/storage/backends/null/core.py 19
2389 
2390     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2391         
2392+        immutableshare = ImmutableShare()
2393         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2394 
2395     def set_storage_server(self, ss):
2396hunk ./src/allmydata/storage/backends/null/core.py 28
2397 class ImmutableShare:
2398     sharetype = "immutable"
2399 
2400-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2401+    def __init__(self):
2402         """ If max_size is not None then I won't allow more than
2403         max_size to be written to me. If create=True then max_size
2404         must not be None. """
2405hunk ./src/allmydata/storage/backends/null/core.py 32
2406-        precondition((max_size is not None) or (not create), max_size, create)
2407-        self.shnum = shnum
2408-        self.storage_index = storageindex
2409-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2410-        self._max_size = max_size
2411-        if create:
2412-            # touch the file, so later callers will see that we're working on
2413-            # it. Also construct the metadata.
2414-            assert not os.path.exists(self.fname)
2415-            fileutil.make_dirs(os.path.dirname(self.fname))
2416-            f = open(self.fname, 'wb')
2417-            # The second field -- the four-byte share data length -- is no
2418-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2419-            # there in case someone downgrades a storage server from >=
2420-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2421-            # server to another, etc. We do saturation -- a share data length
2422-            # larger than 2**32-1 (what can fit into the field) is marked as
2423-            # the largest length that can fit into the field. That way, even
2424-            # if this does happen, the old < v1.3.0 server will still allow
2425-            # clients to read the first part of the share.
2426-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2427-            f.close()
2428-            self._lease_offset = max_size + 0x0c
2429-            self._num_leases = 0
2430-        else:
2431-            f = open(self.fname, 'rb')
2432-            filesize = os.path.getsize(self.fname)
2433-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2434-            f.close()
2435-            if version != 1:
2436-                msg = "sharefile %s had version %d but we wanted 1" % \
2437-                      (self.fname, version)
2438-                raise UnknownImmutableContainerVersionError(msg)
2439-            self._num_leases = num_leases
2440-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2441-        self._data_offset = 0xc
2442+        pass
2443 
2444     def get_shnum(self):
2445         return self.shnum
2446hunk ./src/allmydata/storage/backends/null/core.py 54
2447         return f.read(actuallength)
2448 
2449     def write_share_data(self, offset, data):
2450-        length = len(data)
2451-        precondition(offset >= 0, offset)
2452-        if self._max_size is not None and offset+length > self._max_size:
2453-            raise DataTooLargeError(self._max_size, offset, length)
2454-        f = open(self.fname, 'rb+')
2455-        real_offset = self._data_offset+offset
2456-        f.seek(real_offset)
2457-        assert f.tell() == real_offset
2458-        f.write(data)
2459-        f.close()
2460+        pass
2461 
2462     def _write_lease_record(self, f, lease_number, lease_info):
2463         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2464hunk ./src/allmydata/storage/backends/null/core.py 84
2465             if data:
2466                 yield LeaseInfo().from_immutable_data(data)
2467 
2468-    def add_lease(self, lease_info):
2469-        f = open(self.fname, 'rb+')
2470-        num_leases = self._read_num_leases(f)
2471-        self._write_lease_record(f, num_leases, lease_info)
2472-        self._write_num_leases(f, num_leases+1)
2473-        f.close()
2474+    def add_lease(self, lease):
2475+        pass
2476 
2477     def renew_lease(self, renew_secret, new_expire_time):
2478         for i,lease in enumerate(self.get_leases()):
2479hunk ./src/allmydata/test/test_backends.py 32
2480                      'sharetypes' : None}
2481 
2482 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2483-    @mock.patch('time.time')
2484-    @mock.patch('os.mkdir')
2485-    @mock.patch('__builtin__.open')
2486-    @mock.patch('os.listdir')
2487-    @mock.patch('os.path.isdir')
2488-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2489-        """ This tests whether a server instance can be constructed
2490-        with a null backend. The server instance fails the test if it
2491-        tries to read or write to the file system. """
2492-
2493-        # Now begin the test.
2494-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2495-
2496-        self.failIf(mockisdir.called)
2497-        self.failIf(mocklistdir.called)
2498-        self.failIf(mockopen.called)
2499-        self.failIf(mockmkdir.called)
2500-
2501-        # You passed!
2502-
2503     @mock.patch('time.time')
2504     @mock.patch('os.mkdir')
2505     @mock.patch('__builtin__.open')
2506hunk ./src/allmydata/test/test_backends.py 53
2507                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2508         mockopen.side_effect = call_open
2509 
2510-        # Now begin the test.
2511-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2512-
2513-        self.failIf(mockisdir.called)
2514-        self.failIf(mocklistdir.called)
2515-        self.failIf(mockopen.called)
2516-        self.failIf(mockmkdir.called)
2517-        self.failIf(mocktime.called)
2518-
2519-        # You passed!
2520-
2521-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2522-    def setUp(self):
2523-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2524-
2525-    @mock.patch('os.mkdir')
2526-    @mock.patch('__builtin__.open')
2527-    @mock.patch('os.listdir')
2528-    @mock.patch('os.path.isdir')
2529-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2530-        """ Write a new share. """
2531-
2532-        # Now begin the test.
2533-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2534-        bs[0].remote_write(0, 'a')
2535-        self.failIf(mockisdir.called)
2536-        self.failIf(mocklistdir.called)
2537-        self.failIf(mockopen.called)
2538-        self.failIf(mockmkdir.called)
2539+        def call_isdir(fname):
2540+            if fname == os.path.join(tempdir,'shares'):
2541+                return True
2542+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2543+                return True
2544+            else:
2545+                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
2546+        mockisdir.side_effect = call_isdir
2547 
2548hunk ./src/allmydata/test/test_backends.py 62
2549-    @mock.patch('os.path.exists')
2550-    @mock.patch('os.path.getsize')
2551-    @mock.patch('__builtin__.open')
2552-    @mock.patch('os.listdir')
2553-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2554-        """ This tests whether the code correctly finds and reads
2555-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2556-        servers. There is a similar test in test_download, but that one
2557-        is from the perspective of the client and exercises a deeper
2558-        stack of code. This one is for exercising just the
2559-        StorageServer object. """
2560+        def call_mkdir(fname, mode):
2561+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2562+            self.failUnlessEqual(0777, mode)
2563+            if fname == tempdir:
2564+                return None
2565+            elif fname == os.path.join(tempdir,'shares'):
2566+                return None
2567+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2568+                return None
2569+            else:
2570+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2571+        mockmkdir.side_effect = call_mkdir
2572 
2573         # Now begin the test.
2574hunk ./src/allmydata/test/test_backends.py 76
2575-        bs = self.s.remote_get_buckets('teststorage_index')
2576+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2577 
2578hunk ./src/allmydata/test/test_backends.py 78
2579-        self.failUnlessEqual(len(bs), 0)
2580-        self.failIf(mocklistdir.called)
2581-        self.failIf(mockopen.called)
2582-        self.failIf(mockgetsize.called)
2583-        self.failIf(mockexists.called)
2584+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2585 
2586 
2587 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2588hunk ./src/allmydata/test/test_backends.py 193
2589         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2590 
2591 
2592+
2593+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2594+    @mock.patch('time.time')
2595+    @mock.patch('os.mkdir')
2596+    @mock.patch('__builtin__.open')
2597+    @mock.patch('os.listdir')
2598+    @mock.patch('os.path.isdir')
2599+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2600+        """ This tests whether a file system backend instance can be
2601+        constructed. To pass the test, it has to use the
2602+        filesystem in only the prescribed ways. """
2603+
2604+        def call_open(fname, mode):
2605+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2606+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2607+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2608+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2609+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2610+                return StringIO()
2611+            else:
2612+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2613+        mockopen.side_effect = call_open
2614+
2615+        def call_isdir(fname):
2616+            if fname == os.path.join(tempdir,'shares'):
2617+                return True
2618+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2619+                return True
2620+            else:
2621+                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
2622+        mockisdir.side_effect = call_isdir
2623+
2624+        def call_mkdir(fname, mode):
2625+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2626+            self.failUnlessEqual(0777, mode)
2627+            if fname == tempdir:
2628+                return None
2629+            elif fname == os.path.join(tempdir,'shares'):
2630+                return None
2631+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2632+                return None
2633+            else:
2634+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2635+        mockmkdir.side_effect = call_mkdir
2636+
2637+        # Now begin the test.
2638+        DASCore('teststoredir', expiration_policy)
2639+
2640+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2641}
2642[checkpoint 6
2643wilcoxjg@gmail.com**20110706190824
2644 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2645] {
2646hunk ./src/allmydata/interfaces.py 100
2647                          renew_secret=LeaseRenewSecret,
2648                          cancel_secret=LeaseCancelSecret,
2649                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2650-                         allocated_size=Offset, canary=Referenceable):
2651+                         allocated_size=Offset,
2652+                         canary=Referenceable):
2653         """
2654hunk ./src/allmydata/interfaces.py 103
2655-        @param storage_index: the index of the bucket to be created or
2656+        @param storage_index: the index of the shares to be created or
2657                               increfed.
2658hunk ./src/allmydata/interfaces.py 105
2659-        @param sharenums: these are the share numbers (probably between 0 and
2660-                          99) that the sender is proposing to store on this
2661-                          server.
2662-        @param renew_secret: This is the secret used to protect bucket refresh
2663+        @param renew_secret: This is the secret used to protect shares refresh
2664                              This secret is generated by the client and
2665                              stored for later comparison by the server. Each
2666                              server is given a different secret.
2667hunk ./src/allmydata/interfaces.py 109
2668-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2669-        @param canary: If the canary is lost before close(), the bucket is
2670+        @param cancel_secret: Like renew_secret, but protects shares decref.
2671+        @param sharenums: these are the share numbers (probably between 0 and
2672+                          99) that the sender is proposing to store on this
2673+                          server.
2674+        @param allocated_size: XXX The size of the shares the client wishes to store.
2675+        @param canary: If the canary is lost before close(), the shares are
2676                        deleted.
2677hunk ./src/allmydata/interfaces.py 116
2678+
2679         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2680                  already have and allocated is what we hereby agree to accept.
2681                  New leases are added for shares in both lists.
2682hunk ./src/allmydata/interfaces.py 128
2683                   renew_secret=LeaseRenewSecret,
2684                   cancel_secret=LeaseCancelSecret):
2685         """
2686-        Add a new lease on the given bucket. If the renew_secret matches an
2687+        Add a new lease on the given shares. If the renew_secret matches an
2688         existing lease, that lease will be renewed instead. If there is no
2689         bucket for the given storage_index, return silently. (note that in
2690         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2691hunk ./src/allmydata/storage/server.py 17
2692 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2693      create_mutable_sharefile
2694 
2695-from zope.interface import implements
2696-
2697 # storage/
2698 # storage/shares/incoming
2699 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2700hunk ./src/allmydata/test/test_backends.py 6
2701 from StringIO import StringIO
2702 
2703 from allmydata.test.common_util import ReallyEqualMixin
2704+from allmydata.util.assertutil import _assert
2705 
2706 import mock, os
2707 
2708hunk ./src/allmydata/test/test_backends.py 92
2709                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2710             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2711                 return StringIO()
2712+            else:
2713+                _assert(False, "The tester code doesn't recognize this case.") 
2714+
2715         mockopen.side_effect = call_open
2716         testbackend = DASCore(tempdir, expiration_policy)
2717         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2718hunk ./src/allmydata/test/test_backends.py 109
2719 
2720         def call_listdir(dirname):
2721             self.failUnlessReallyEqual(dirname, sharedirname)
2722-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2723+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2724 
2725         mocklistdir.side_effect = call_listdir
2726 
2727hunk ./src/allmydata/test/test_backends.py 113
2728+        def call_isdir(dirname):
2729+            self.failUnlessReallyEqual(dirname, sharedirname)
2730+            return True
2731+
2732+        mockisdir.side_effect = call_isdir
2733+
2734+        def call_mkdir(dirname, permissions):
2735+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2736+                self.Fail
2737+            else:
2738+                return True
2739+
2740+        mockmkdir.side_effect = call_mkdir
2741+
2742         class MockFile:
2743             def __init__(self):
2744                 self.buffer = ''
2745hunk ./src/allmydata/test/test_backends.py 156
2746             return sharefile
2747 
2748         mockopen.side_effect = call_open
2749+
2750         # Now begin the test.
2751         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2752         bs[0].remote_write(0, 'a')
2753hunk ./src/allmydata/test/test_backends.py 161
2754         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2755+       
2756+        # Now test the allocated_size method.
2757+        spaceint = self.s.allocated_size()
2758 
2759     @mock.patch('os.path.exists')
2760     @mock.patch('os.path.getsize')
2761}
2762[checkpoint 7
2763wilcoxjg@gmail.com**20110706200820
2764 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2765] hunk ./src/allmydata/test/test_backends.py 164
2766         
2767         # Now test the allocated_size method.
2768         spaceint = self.s.allocated_size()
2769+        self.failUnlessReallyEqual(spaceint, 1)
2770 
2771     @mock.patch('os.path.exists')
2772     @mock.patch('os.path.getsize')
2773[checkpoint8
2774wilcoxjg@gmail.com**20110706223126
2775 Ignore-this: 97336180883cb798b16f15411179f827
2776   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2777] hunk ./src/allmydata/test/test_backends.py 32
2778                      'cutoff_date' : None,
2779                      'sharetypes' : None}
2780 
2781+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2782+    def setUp(self):
2783+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2784+
2785+    @mock.patch('os.mkdir')
2786+    @mock.patch('__builtin__.open')
2787+    @mock.patch('os.listdir')
2788+    @mock.patch('os.path.isdir')
2789+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2790+        """ Write a new share. """
2791+
2792+        # Now begin the test.
2793+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2794+        bs[0].remote_write(0, 'a')
2795+        self.failIf(mockisdir.called)
2796+        self.failIf(mocklistdir.called)
2797+        self.failIf(mockopen.called)
2798+        self.failIf(mockmkdir.called)
2799+
2800 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2801     @mock.patch('time.time')
2802     @mock.patch('os.mkdir')
2803[checkpoint 9
2804wilcoxjg@gmail.com**20110707042942
2805 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2806] {
2807hunk ./src/allmydata/storage/backends/das/core.py 88
2808                     filename = os.path.join(finalstoragedir, f)
2809                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2810         except OSError:
2811-            # Commonly caused by there being no buckets at all.
2812+            # Commonly caused by there being no shares at all.
2813             pass
2814         
2815     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2816hunk ./src/allmydata/storage/backends/das/core.py 141
2817         self.storage_index = storageindex
2818         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2819         self._max_size = max_size
2820+        self.incomingdir = os.path.join(sharedir, 'incoming')
2821+        si_dir = storage_index_to_dir(storageindex)
2822+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2823+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2824         if create:
2825             # touch the file, so later callers will see that we're working on
2826             # it. Also construct the metadata.
2827hunk ./src/allmydata/storage/backends/das/core.py 177
2828             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2829         self._data_offset = 0xc
2830 
2831+    def close(self):
2832+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2833+        fileutil.rename(self.incominghome, self.finalhome)
2834+        try:
2835+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2836+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2837+            # these directories lying around forever, but the delete might
2838+            # fail if we're working on another share for the same storage
2839+            # index (like ab/abcde/5). The alternative approach would be to
2840+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2841+            # ShareWriter), each of which is responsible for a single
2842+            # directory on disk, and have them use reference counting of
2843+            # their children to know when they should do the rmdir. This
2844+            # approach is simpler, but relies on os.rmdir refusing to delete
2845+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2846+            os.rmdir(os.path.dirname(self.incominghome))
2847+            # we also delete the grandparent (prefix) directory, .../ab ,
2848+            # again to avoid leaving directories lying around. This might
2849+            # fail if there is another bucket open that shares a prefix (like
2850+            # ab/abfff).
2851+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2852+            # we leave the great-grandparent (incoming/) directory in place.
2853+        except EnvironmentError:
2854+            # ignore the "can't rmdir because the directory is not empty"
2855+            # exceptions, those are normal consequences of the
2856+            # above-mentioned conditions.
2857+            pass
2858+        pass
2859+       
2860+    def stat(self):
2861+        return os.stat(self.finalhome)[stat.ST_SIZE]
2862+
2863     def get_shnum(self):
2864         return self.shnum
2865 
2866hunk ./src/allmydata/storage/immutable.py 7
2867 
2868 from zope.interface import implements
2869 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2870-from allmydata.util import base32, fileutil, log
2871+from allmydata.util import base32, log
2872 from allmydata.util.assertutil import precondition
2873 from allmydata.util.hashutil import constant_time_compare
2874 from allmydata.storage.lease import LeaseInfo
2875hunk ./src/allmydata/storage/immutable.py 44
2876     def remote_close(self):
2877         precondition(not self.closed)
2878         start = time.time()
2879-
2880-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2881-        fileutil.rename(self.incominghome, self.finalhome)
2882-        try:
2883-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2884-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2885-            # these directories lying around forever, but the delete might
2886-            # fail if we're working on another share for the same storage
2887-            # index (like ab/abcde/5). The alternative approach would be to
2888-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2889-            # ShareWriter), each of which is responsible for a single
2890-            # directory on disk, and have them use reference counting of
2891-            # their children to know when they should do the rmdir. This
2892-            # approach is simpler, but relies on os.rmdir refusing to delete
2893-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2894-            os.rmdir(os.path.dirname(self.incominghome))
2895-            # we also delete the grandparent (prefix) directory, .../ab ,
2896-            # again to avoid leaving directories lying around. This might
2897-            # fail if there is another bucket open that shares a prefix (like
2898-            # ab/abfff).
2899-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2900-            # we leave the great-grandparent (incoming/) directory in place.
2901-        except EnvironmentError:
2902-            # ignore the "can't rmdir because the directory is not empty"
2903-            # exceptions, those are normal consequences of the
2904-            # above-mentioned conditions.
2905-            pass
2906+        self._sharefile.close()
2907         self._sharefile = None
2908         self.closed = True
2909         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2910hunk ./src/allmydata/storage/immutable.py 49
2911 
2912-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2913+        filelen = self._sharefile.stat()
2914         self.ss.bucket_writer_closed(self, filelen)
2915         self.ss.add_latency("close", time.time() - start)
2916         self.ss.count("close")
2917hunk ./src/allmydata/storage/server.py 45
2918         self._active_writers = weakref.WeakKeyDictionary()
2919         self.backend = backend
2920         self.backend.setServiceParent(self)
2921+        self.backend.set_storage_server(self)
2922         log.msg("StorageServer created", facility="tahoe.storage")
2923 
2924         self.latencies = {"allocate": [], # immutable
2925hunk ./src/allmydata/storage/server.py 220
2926 
2927         for shnum in (sharenums - alreadygot):
2928             if (not limited) or (remaining_space >= max_space_per_bucket):
2929-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2930-                self.backend.set_storage_server(self)
2931                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2932                                                      max_space_per_bucket, lease_info, canary)
2933                 bucketwriters[shnum] = bw
2934hunk ./src/allmydata/test/test_backends.py 117
2935         mockopen.side_effect = call_open
2936         testbackend = DASCore(tempdir, expiration_policy)
2937         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2938-
2939+   
2940+    @mock.patch('allmydata.util.fileutil.get_available_space')
2941     @mock.patch('time.time')
2942     @mock.patch('os.mkdir')
2943     @mock.patch('__builtin__.open')
2944hunk ./src/allmydata/test/test_backends.py 124
2945     @mock.patch('os.listdir')
2946     @mock.patch('os.path.isdir')
2947-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2948+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2949+                             mockget_available_space):
2950         """ Write a new share. """
2951 
2952         def call_listdir(dirname):
2953hunk ./src/allmydata/test/test_backends.py 148
2954 
2955         mockmkdir.side_effect = call_mkdir
2956 
2957+        def call_get_available_space(storedir, reserved_space):
2958+            self.failUnlessReallyEqual(storedir, tempdir)
2959+            return 1
2960+
2961+        mockget_available_space.side_effect = call_get_available_space
2962+
2963         class MockFile:
2964             def __init__(self):
2965                 self.buffer = ''
2966hunk ./src/allmydata/test/test_backends.py 188
2967         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2968         bs[0].remote_write(0, 'a')
2969         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2970-       
2971+
2972+        # What happens when there's not enough space for the client's request?
2973+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2974+
2975         # Now test the allocated_size method.
2976         spaceint = self.s.allocated_size()
2977         self.failUnlessReallyEqual(spaceint, 1)
2978}
2979[checkpoint10
2980wilcoxjg@gmail.com**20110707172049
2981 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2982] {
2983hunk ./src/allmydata/test/test_backends.py 20
2984 # The following share file contents was generated with
2985 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2986 # with share data == 'a'.
2987-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2988+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2989+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2990+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2991 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2992 
2993hunk ./src/allmydata/test/test_backends.py 25
2994+testnodeid = 'testnodeidxxxxxxxxxx'
2995 tempdir = 'teststoredir'
2996 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2997 sharefname = os.path.join(sharedirname, '0')
2998hunk ./src/allmydata/test/test_backends.py 37
2999 
3000 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
3001     def setUp(self):
3002-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
3003+        self.s = StorageServer(testnodeid, backend=NullCore())
3004 
3005     @mock.patch('os.mkdir')
3006     @mock.patch('__builtin__.open')
3007hunk ./src/allmydata/test/test_backends.py 99
3008         mockmkdir.side_effect = call_mkdir
3009 
3010         # Now begin the test.
3011-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
3012+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
3013 
3014         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
3015 
3016hunk ./src/allmydata/test/test_backends.py 119
3017 
3018         mockopen.side_effect = call_open
3019         testbackend = DASCore(tempdir, expiration_policy)
3020-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
3021-   
3022+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3023+       
3024+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3025     @mock.patch('allmydata.util.fileutil.get_available_space')
3026     @mock.patch('time.time')
3027     @mock.patch('os.mkdir')
3028hunk ./src/allmydata/test/test_backends.py 129
3029     @mock.patch('os.listdir')
3030     @mock.patch('os.path.isdir')
3031     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3032-                             mockget_available_space):
3033+                             mockget_available_space, mockget_shares):
3034         """ Write a new share. """
3035 
3036         def call_listdir(dirname):
3037hunk ./src/allmydata/test/test_backends.py 139
3038         mocklistdir.side_effect = call_listdir
3039 
3040         def call_isdir(dirname):
3041+            #XXX Should there be any other tests here?
3042             self.failUnlessReallyEqual(dirname, sharedirname)
3043             return True
3044 
3045hunk ./src/allmydata/test/test_backends.py 159
3046 
3047         mockget_available_space.side_effect = call_get_available_space
3048 
3049+        mocktime.return_value = 0
3050+        class MockShare:
3051+            def __init__(self):
3052+                self.shnum = 1
3053+               
3054+            def add_or_renew_lease(elf, lease_info):
3055+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3056+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3057+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3058+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3059+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3060+               
3061+
3062+        share = MockShare()
3063+        def call_get_shares(storageindex):
3064+            return [share]
3065+
3066+        mockget_shares.side_effect = call_get_shares
3067+
3068         class MockFile:
3069             def __init__(self):
3070                 self.buffer = ''
3071hunk ./src/allmydata/test/test_backends.py 199
3072             def tell(self):
3073                 return self.pos
3074 
3075-        mocktime.return_value = 0
3076 
3077         sharefile = MockFile()
3078         def call_open(fname, mode):
3079}
3080[jacp 11
3081wilcoxjg@gmail.com**20110708213919
3082 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3083] {
3084hunk ./src/allmydata/storage/backends/das/core.py 144
3085         self.incomingdir = os.path.join(sharedir, 'incoming')
3086         si_dir = storage_index_to_dir(storageindex)
3087         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3088+        #XXX  self.fname and self.finalhome need to be resolve/merged.
3089         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3090         if create:
3091             # touch the file, so later callers will see that we're working on
3092hunk ./src/allmydata/storage/backends/das/core.py 208
3093         pass
3094         
3095     def stat(self):
3096-        return os.stat(self.finalhome)[stat.ST_SIZE]
3097+        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3098 
3099     def get_shnum(self):
3100         return self.shnum
3101hunk ./src/allmydata/storage/immutable.py 44
3102     def remote_close(self):
3103         precondition(not self.closed)
3104         start = time.time()
3105+
3106         self._sharefile.close()
3107hunk ./src/allmydata/storage/immutable.py 46
3108+        filelen = self._sharefile.stat()
3109         self._sharefile = None
3110hunk ./src/allmydata/storage/immutable.py 48
3111+
3112         self.closed = True
3113         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3114 
3115hunk ./src/allmydata/storage/immutable.py 52
3116-        filelen = self._sharefile.stat()
3117         self.ss.bucket_writer_closed(self, filelen)
3118         self.ss.add_latency("close", time.time() - start)
3119         self.ss.count("close")
3120hunk ./src/allmydata/storage/server.py 220
3121 
3122         for shnum in (sharenums - alreadygot):
3123             if (not limited) or (remaining_space >= max_space_per_bucket):
3124-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3125-                                                     max_space_per_bucket, lease_info, canary)
3126+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3127                 bucketwriters[shnum] = bw
3128                 self._active_writers[bw] = 1
3129                 if limited:
3130hunk ./src/allmydata/test/test_backends.py 20
3131 # The following share file contents was generated with
3132 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3133 # with share data == 'a'.
3134-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3135-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3136+renew_secret  = 'x'*32
3137+cancel_secret = 'y'*32
3138 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3139 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3140 
3141hunk ./src/allmydata/test/test_backends.py 27
3142 testnodeid = 'testnodeidxxxxxxxxxx'
3143 tempdir = 'teststoredir'
3144-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3145-sharefname = os.path.join(sharedirname, '0')
3146+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3147+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3148+shareincomingname = os.path.join(sharedirincomingname, '0')
3149+sharefname = os.path.join(sharedirfinalname, '0')
3150+
3151 expiration_policy = {'enabled' : False,
3152                      'mode' : 'age',
3153                      'override_lease_duration' : None,
3154hunk ./src/allmydata/test/test_backends.py 123
3155         mockopen.side_effect = call_open
3156         testbackend = DASCore(tempdir, expiration_policy)
3157         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3158-       
3159+
3160+    @mock.patch('allmydata.util.fileutil.rename')
3161+    @mock.patch('allmydata.util.fileutil.make_dirs')
3162+    @mock.patch('os.path.exists')
3163+    @mock.patch('os.stat')
3164     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3165     @mock.patch('allmydata.util.fileutil.get_available_space')
3166     @mock.patch('time.time')
3167hunk ./src/allmydata/test/test_backends.py 136
3168     @mock.patch('os.listdir')
3169     @mock.patch('os.path.isdir')
3170     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3171-                             mockget_available_space, mockget_shares):
3172+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3173+                             mockmake_dirs, mockrename):
3174         """ Write a new share. """
3175 
3176         def call_listdir(dirname):
3177hunk ./src/allmydata/test/test_backends.py 141
3178-            self.failUnlessReallyEqual(dirname, sharedirname)
3179+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3180             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3181 
3182         mocklistdir.side_effect = call_listdir
3183hunk ./src/allmydata/test/test_backends.py 148
3184 
3185         def call_isdir(dirname):
3186             #XXX Should there be any other tests here?
3187-            self.failUnlessReallyEqual(dirname, sharedirname)
3188+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3189             return True
3190 
3191         mockisdir.side_effect = call_isdir
3192hunk ./src/allmydata/test/test_backends.py 154
3193 
3194         def call_mkdir(dirname, permissions):
3195-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3196+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3197                 self.Fail
3198             else:
3199                 return True
3200hunk ./src/allmydata/test/test_backends.py 208
3201                 return self.pos
3202 
3203 
3204-        sharefile = MockFile()
3205+        fobj = MockFile()
3206         def call_open(fname, mode):
3207             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3208hunk ./src/allmydata/test/test_backends.py 211
3209-            return sharefile
3210+            return fobj
3211 
3212         mockopen.side_effect = call_open
3213 
3214hunk ./src/allmydata/test/test_backends.py 215
3215+        def call_make_dirs(dname):
3216+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3217+           
3218+        mockmake_dirs.side_effect = call_make_dirs
3219+
3220+        def call_rename(src, dst):
3221+           self.failUnlessReallyEqual(src, shareincomingname)
3222+           self.failUnlessReallyEqual(dst, sharefname)
3223+           
3224+        mockrename.side_effect = call_rename
3225+
3226+        def call_exists(fname):
3227+            self.failUnlessReallyEqual(fname, sharefname)
3228+
3229+        mockexists.side_effect = call_exists
3230+
3231         # Now begin the test.
3232         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3233         bs[0].remote_write(0, 'a')
3234hunk ./src/allmydata/test/test_backends.py 234
3235-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3236+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3237+        spaceint = self.s.allocated_size()
3238+        self.failUnlessReallyEqual(spaceint, 1)
3239+
3240+        bs[0].remote_close()
3241 
3242         # What happens when there's not enough space for the client's request?
3243hunk ./src/allmydata/test/test_backends.py 241
3244-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3245+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3246 
3247         # Now test the allocated_size method.
3248hunk ./src/allmydata/test/test_backends.py 244
3249-        spaceint = self.s.allocated_size()
3250-        self.failUnlessReallyEqual(spaceint, 1)
3251+        #self.failIf(mockexists.called, mockexists.call_args_list)
3252+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3253+        #self.failIf(mockrename.called, mockrename.call_args_list)
3254+        #self.failIf(mockstat.called, mockstat.call_args_list)
3255 
3256     @mock.patch('os.path.exists')
3257     @mock.patch('os.path.getsize')
3258}
3259[checkpoint12 testing correct behavior with regard to incoming and final
3260wilcoxjg@gmail.com**20110710191915
3261 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3262] {
3263hunk ./src/allmydata/storage/backends/das/core.py 74
3264         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3265         self.lease_checker.setServiceParent(self)
3266 
3267+    def get_incoming(self, storageindex):
3268+        return set((1,))
3269+
3270     def get_available_space(self):
3271         if self.readonly:
3272             return 0
3273hunk ./src/allmydata/storage/server.py 77
3274         """Return a dict, indexed by category, that contains a dict of
3275         latency numbers for each category. If there are sufficient samples
3276         for unambiguous interpretation, each dict will contain the
3277-        following keys: mean, 01_0_percentile, 10_0_percentile,
3278+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3279         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3280         99_0_percentile, 99_9_percentile.  If there are insufficient
3281         samples for a given percentile to be interpreted unambiguously
3282hunk ./src/allmydata/storage/server.py 120
3283 
3284     def get_stats(self):
3285         # remember: RIStatsProvider requires that our return dict
3286-        # contains numeric values.
3287+        # contains numeric, or None values.
3288         stats = { 'storage_server.allocated': self.allocated_size(), }
3289         stats['storage_server.reserved_space'] = self.reserved_space
3290         for category,ld in self.get_latencies().items():
3291hunk ./src/allmydata/storage/server.py 185
3292         start = time.time()
3293         self.count("allocate")
3294         alreadygot = set()
3295+        incoming = set()
3296         bucketwriters = {} # k: shnum, v: BucketWriter
3297 
3298         si_s = si_b2a(storage_index)
3299hunk ./src/allmydata/storage/server.py 219
3300             alreadygot.add(share.shnum)
3301             share.add_or_renew_lease(lease_info)
3302 
3303-        for shnum in (sharenums - alreadygot):
3304+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3305+        incoming = self.backend.get_incoming(storageindex)
3306+
3307+        for shnum in ((sharenums - alreadygot) - incoming):
3308             if (not limited) or (remaining_space >= max_space_per_bucket):
3309                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3310                 bucketwriters[shnum] = bw
3311hunk ./src/allmydata/storage/server.py 229
3312                 self._active_writers[bw] = 1
3313                 if limited:
3314                     remaining_space -= max_space_per_bucket
3315-
3316-        #XXX We SHOULD DOCUMENT LATER.
3317+            else:
3318+                # Bummer not enough space to accept this share.
3319+                pass
3320 
3321         self.add_latency("allocate", time.time() - start)
3322         return alreadygot, bucketwriters
3323hunk ./src/allmydata/storage/server.py 323
3324         self.add_latency("get", time.time() - start)
3325         return bucketreaders
3326 
3327-    def get_leases(self, storage_index):
3328+    def remote_get_incoming(self, storageindex):
3329+        incoming_share_set = self.backend.get_incoming(storageindex)
3330+        return incoming_share_set
3331+
3332+    def get_leases(self, storageindex):
3333         """Provide an iterator that yields all of the leases attached to this
3334         bucket. Each lease is returned as a LeaseInfo instance.
3335 
3336hunk ./src/allmydata/storage/server.py 337
3337         # since all shares get the same lease data, we just grab the leases
3338         # from the first share
3339         try:
3340-            shnum, filename = self._get_shares(storage_index).next()
3341+            shnum, filename = self._get_shares(storageindex).next()
3342             sf = ShareFile(filename)
3343             return sf.get_leases()
3344         except StopIteration:
3345hunk ./src/allmydata/test/test_backends.py 182
3346 
3347         share = MockShare()
3348         def call_get_shares(storageindex):
3349-            return [share]
3350+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3351+            return []#share]
3352 
3353         mockget_shares.side_effect = call_get_shares
3354 
3355hunk ./src/allmydata/test/test_backends.py 222
3356         mockmake_dirs.side_effect = call_make_dirs
3357 
3358         def call_rename(src, dst):
3359-           self.failUnlessReallyEqual(src, shareincomingname)
3360-           self.failUnlessReallyEqual(dst, sharefname)
3361+            self.failUnlessReallyEqual(src, shareincomingname)
3362+            self.failUnlessReallyEqual(dst, sharefname)
3363             
3364         mockrename.side_effect = call_rename
3365 
3366hunk ./src/allmydata/test/test_backends.py 233
3367         mockexists.side_effect = call_exists
3368 
3369         # Now begin the test.
3370+
3371+        # XXX (0) ???  Fail unless something is not properly set-up?
3372         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3373hunk ./src/allmydata/test/test_backends.py 236
3374+
3375+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3376+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3377+
3378+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3379+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3380+        # with the same si, until BucketWriter.remote_close() has been called.
3381+        # self.failIf(bsa)
3382+
3383+        # XXX (3) Inspect final and fail unless there's nothing there.
3384         bs[0].remote_write(0, 'a')
3385hunk ./src/allmydata/test/test_backends.py 247
3386+        # XXX (4a) Inspect final and fail unless share 0 is there.
3387+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3388         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3389         spaceint = self.s.allocated_size()
3390         self.failUnlessReallyEqual(spaceint, 1)
3391hunk ./src/allmydata/test/test_backends.py 253
3392 
3393+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3394         bs[0].remote_close()
3395 
3396         # What happens when there's not enough space for the client's request?
3397hunk ./src/allmydata/test/test_backends.py 260
3398         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3399 
3400         # Now test the allocated_size method.
3401-        #self.failIf(mockexists.called, mockexists.call_args_list)
3402+        # self.failIf(mockexists.called, mockexists.call_args_list)
3403         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3404         #self.failIf(mockrename.called, mockrename.call_args_list)
3405         #self.failIf(mockstat.called, mockstat.call_args_list)
3406}
3407[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3408wilcoxjg@gmail.com**20110710195139
3409 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3410] {
3411hunk ./src/allmydata/storage/server.py 220
3412             share.add_or_renew_lease(lease_info)
3413 
3414         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3415-        incoming = self.backend.get_incoming(storageindex)
3416+        incoming = self.backend.get_incoming(storage_index)
3417 
3418         for shnum in ((sharenums - alreadygot) - incoming):
3419             if (not limited) or (remaining_space >= max_space_per_bucket):
3420hunk ./src/allmydata/storage/server.py 323
3421         self.add_latency("get", time.time() - start)
3422         return bucketreaders
3423 
3424-    def remote_get_incoming(self, storageindex):
3425-        incoming_share_set = self.backend.get_incoming(storageindex)
3426+    def remote_get_incoming(self, storage_index):
3427+        incoming_share_set = self.backend.get_incoming(storage_index)
3428         return incoming_share_set
3429 
3430hunk ./src/allmydata/storage/server.py 327
3431-    def get_leases(self, storageindex):
3432+    def get_leases(self, storage_index):
3433         """Provide an iterator that yields all of the leases attached to this
3434         bucket. Each lease is returned as a LeaseInfo instance.
3435 
3436hunk ./src/allmydata/storage/server.py 337
3437         # since all shares get the same lease data, we just grab the leases
3438         # from the first share
3439         try:
3440-            shnum, filename = self._get_shares(storageindex).next()
3441+            shnum, filename = self._get_shares(storage_index).next()
3442             sf = ShareFile(filename)
3443             return sf.get_leases()
3444         except StopIteration:
3445replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3446}
3447[adding comments to clarify what I'm about to do.
3448wilcoxjg@gmail.com**20110710220623
3449 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3450] {
3451hunk ./src/allmydata/storage/backends/das/core.py 8
3452 
3453 import os, re, weakref, struct, time
3454 
3455-from foolscap.api import Referenceable
3456+#from foolscap.api import Referenceable
3457 from twisted.application import service
3458 
3459 from zope.interface import implements
3460hunk ./src/allmydata/storage/backends/das/core.py 12
3461-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3462+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3463 from allmydata.util import fileutil, idlib, log, time_format
3464 import allmydata # for __full_version__
3465 
3466hunk ./src/allmydata/storage/server.py 219
3467             alreadygot.add(share.shnum)
3468             share.add_or_renew_lease(lease_info)
3469 
3470-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3471+        # fill incoming with all shares that are incoming use a set operation
3472+        # since there's no need to operate on individual pieces
3473         incoming = self.backend.get_incoming(storageindex)
3474 
3475         for shnum in ((sharenums - alreadygot) - incoming):
3476hunk ./src/allmydata/test/test_backends.py 245
3477         # with the same si, until BucketWriter.remote_close() has been called.
3478         # self.failIf(bsa)
3479 
3480-        # XXX (3) Inspect final and fail unless there's nothing there.
3481         bs[0].remote_write(0, 'a')
3482hunk ./src/allmydata/test/test_backends.py 246
3483-        # XXX (4a) Inspect final and fail unless share 0 is there.
3484-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3485         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3486         spaceint = self.s.allocated_size()
3487         self.failUnlessReallyEqual(spaceint, 1)
3488hunk ./src/allmydata/test/test_backends.py 250
3489 
3490-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3491+        # XXX (3) Inspect final and fail unless there's nothing there.
3492         bs[0].remote_close()
3493hunk ./src/allmydata/test/test_backends.py 252
3494+        # XXX (4a) Inspect final and fail unless share 0 is there.
3495+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3496 
3497         # What happens when there's not enough space for the client's request?
3498         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3499}
3500[branching back, no longer attempting to mock inside TestServerFSBackend
3501wilcoxjg@gmail.com**20110711190849
3502 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3503] {
3504hunk ./src/allmydata/storage/backends/das/core.py 75
3505         self.lease_checker.setServiceParent(self)
3506 
3507     def get_incoming(self, storageindex):
3508-        return set((1,))
3509-
3510-    def get_available_space(self):
3511-        if self.readonly:
3512-            return 0
3513-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3514+        """Return the set of incoming shnums."""
3515+        return set(os.listdir(self.incomingdir))
3516 
3517     def get_shares(self, storage_index):
3518         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3519hunk ./src/allmydata/storage/backends/das/core.py 90
3520             # Commonly caused by there being no shares at all.
3521             pass
3522         
3523+    def get_available_space(self):
3524+        if self.readonly:
3525+            return 0
3526+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3527+
3528     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3529         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3530         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3531hunk ./src/allmydata/test/test_backends.py 27
3532 
3533 testnodeid = 'testnodeidxxxxxxxxxx'
3534 tempdir = 'teststoredir'
3535-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3536-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3537+basedir = os.path.join(tempdir, 'shares')
3538+baseincdir = os.path.join(basedir, 'incoming')
3539+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3540+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3541 shareincomingname = os.path.join(sharedirincomingname, '0')
3542 sharefname = os.path.join(sharedirfinalname, '0')
3543 
3544hunk ./src/allmydata/test/test_backends.py 142
3545                              mockmake_dirs, mockrename):
3546         """ Write a new share. """
3547 
3548-        def call_listdir(dirname):
3549-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3550-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3551-
3552-        mocklistdir.side_effect = call_listdir
3553-
3554-        def call_isdir(dirname):
3555-            #XXX Should there be any other tests here?
3556-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3557-            return True
3558-
3559-        mockisdir.side_effect = call_isdir
3560-
3561-        def call_mkdir(dirname, permissions):
3562-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3563-                self.Fail
3564-            else:
3565-                return True
3566-
3567-        mockmkdir.side_effect = call_mkdir
3568-
3569-        def call_get_available_space(storedir, reserved_space):
3570-            self.failUnlessReallyEqual(storedir, tempdir)
3571-            return 1
3572-
3573-        mockget_available_space.side_effect = call_get_available_space
3574-
3575-        mocktime.return_value = 0
3576         class MockShare:
3577             def __init__(self):
3578                 self.shnum = 1
3579hunk ./src/allmydata/test/test_backends.py 152
3580                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3581                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3582                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3583-               
3584 
3585         share = MockShare()
3586hunk ./src/allmydata/test/test_backends.py 154
3587-        def call_get_shares(storageindex):
3588-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3589-            return []#share]
3590-
3591-        mockget_shares.side_effect = call_get_shares
3592 
3593         class MockFile:
3594             def __init__(self):
3595hunk ./src/allmydata/test/test_backends.py 176
3596             def tell(self):
3597                 return self.pos
3598 
3599-
3600         fobj = MockFile()
3601hunk ./src/allmydata/test/test_backends.py 177
3602+
3603+        directories = {}
3604+        def call_listdir(dirname):
3605+            if dirname not in directories:
3606+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3607+            else:
3608+                return directories[dirname].get_contents()
3609+
3610+        mocklistdir.side_effect = call_listdir
3611+
3612+        class MockDir:
3613+            def __init__(self, dirname):
3614+                self.name = dirname
3615+                self.contents = []
3616+   
3617+            def get_contents(self):
3618+                return self.contents
3619+
3620+        def call_isdir(dirname):
3621+            #XXX Should there be any other tests here?
3622+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3623+            return True
3624+
3625+        mockisdir.side_effect = call_isdir
3626+
3627+        def call_mkdir(dirname, permissions):
3628+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3629+                self.Fail
3630+            if dirname in directories:
3631+                raise OSError(17, "File exists: '%s'" % dirname)
3632+                self.Fail
3633+            elif dirname not in directories:
3634+                directories[dirname] = MockDir(dirname)
3635+                return True
3636+
3637+        mockmkdir.side_effect = call_mkdir
3638+
3639+        def call_get_available_space(storedir, reserved_space):
3640+            self.failUnlessReallyEqual(storedir, tempdir)
3641+            return 1
3642+
3643+        mockget_available_space.side_effect = call_get_available_space
3644+
3645+        mocktime.return_value = 0
3646+        def call_get_shares(storageindex):
3647+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3648+            return []#share]
3649+
3650+        mockget_shares.side_effect = call_get_shares
3651+
3652         def call_open(fname, mode):
3653             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3654             return fobj
3655}
3656[checkpoint12 TestServerFSBackend no longer mocks filesystem
3657wilcoxjg@gmail.com**20110711193357
3658 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3659] {
3660hunk ./src/allmydata/storage/backends/das/core.py 23
3661      create_mutable_sharefile
3662 from allmydata.storage.immutable import BucketWriter, BucketReader
3663 from allmydata.storage.crawler import FSBucketCountingCrawler
3664+from allmydata.util.hashutil import constant_time_compare
3665 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3666 
3667 from zope.interface import implements
3668hunk ./src/allmydata/storage/backends/das/core.py 28
3669 
3670+# storage/
3671+# storage/shares/incoming
3672+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3673+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3674+# storage/shares/$START/$STORAGEINDEX
3675+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3676+
3677+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3678+# base-32 chars).
3679 # $SHARENUM matches this regex:
3680 NUM_RE=re.compile("^[0-9]+$")
3681 
3682hunk ./src/allmydata/test/test_backends.py 126
3683         testbackend = DASCore(tempdir, expiration_policy)
3684         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3685 
3686-    @mock.patch('allmydata.util.fileutil.rename')
3687-    @mock.patch('allmydata.util.fileutil.make_dirs')
3688-    @mock.patch('os.path.exists')
3689-    @mock.patch('os.stat')
3690-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3691-    @mock.patch('allmydata.util.fileutil.get_available_space')
3692     @mock.patch('time.time')
3693hunk ./src/allmydata/test/test_backends.py 127
3694-    @mock.patch('os.mkdir')
3695-    @mock.patch('__builtin__.open')
3696-    @mock.patch('os.listdir')
3697-    @mock.patch('os.path.isdir')
3698-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3699-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3700-                             mockmake_dirs, mockrename):
3701+    def test_write_share(self, mocktime):
3702         """ Write a new share. """
3703 
3704         class MockShare:
3705hunk ./src/allmydata/test/test_backends.py 143
3706 
3707         share = MockShare()
3708 
3709-        class MockFile:
3710-            def __init__(self):
3711-                self.buffer = ''
3712-                self.pos = 0
3713-            def write(self, instring):
3714-                begin = self.pos
3715-                padlen = begin - len(self.buffer)
3716-                if padlen > 0:
3717-                    self.buffer += '\x00' * padlen
3718-                end = self.pos + len(instring)
3719-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3720-                self.pos = end
3721-            def close(self):
3722-                pass
3723-            def seek(self, pos):
3724-                self.pos = pos
3725-            def read(self, numberbytes):
3726-                return self.buffer[self.pos:self.pos+numberbytes]
3727-            def tell(self):
3728-                return self.pos
3729-
3730-        fobj = MockFile()
3731-
3732-        directories = {}
3733-        def call_listdir(dirname):
3734-            if dirname not in directories:
3735-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3736-            else:
3737-                return directories[dirname].get_contents()
3738-
3739-        mocklistdir.side_effect = call_listdir
3740-
3741-        class MockDir:
3742-            def __init__(self, dirname):
3743-                self.name = dirname
3744-                self.contents = []
3745-   
3746-            def get_contents(self):
3747-                return self.contents
3748-
3749-        def call_isdir(dirname):
3750-            #XXX Should there be any other tests here?
3751-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3752-            return True
3753-
3754-        mockisdir.side_effect = call_isdir
3755-
3756-        def call_mkdir(dirname, permissions):
3757-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3758-                self.Fail
3759-            if dirname in directories:
3760-                raise OSError(17, "File exists: '%s'" % dirname)
3761-                self.Fail
3762-            elif dirname not in directories:
3763-                directories[dirname] = MockDir(dirname)
3764-                return True
3765-
3766-        mockmkdir.side_effect = call_mkdir
3767-
3768-        def call_get_available_space(storedir, reserved_space):
3769-            self.failUnlessReallyEqual(storedir, tempdir)
3770-            return 1
3771-
3772-        mockget_available_space.side_effect = call_get_available_space
3773-
3774-        mocktime.return_value = 0
3775-        def call_get_shares(storageindex):
3776-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3777-            return []#share]
3778-
3779-        mockget_shares.side_effect = call_get_shares
3780-
3781-        def call_open(fname, mode):
3782-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3783-            return fobj
3784-
3785-        mockopen.side_effect = call_open
3786-
3787-        def call_make_dirs(dname):
3788-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3789-           
3790-        mockmake_dirs.side_effect = call_make_dirs
3791-
3792-        def call_rename(src, dst):
3793-            self.failUnlessReallyEqual(src, shareincomingname)
3794-            self.failUnlessReallyEqual(dst, sharefname)
3795-           
3796-        mockrename.side_effect = call_rename
3797-
3798-        def call_exists(fname):
3799-            self.failUnlessReallyEqual(fname, sharefname)
3800-
3801-        mockexists.side_effect = call_exists
3802-
3803         # Now begin the test.
3804 
3805         # XXX (0) ???  Fail unless something is not properly set-up?
3806}
3807[JACP
3808wilcoxjg@gmail.com**20110711194407
3809 Ignore-this: b54745de777c4bb58d68d708f010bbb
3810] {
3811hunk ./src/allmydata/storage/backends/das/core.py 86
3812 
3813     def get_incoming(self, storageindex):
3814         """Return the set of incoming shnums."""
3815-        return set(os.listdir(self.incomingdir))
3816+        try:
3817+            incominglist = os.listdir(self.incomingdir)
3818+            print "incominglist: ", incominglist
3819+            return set(incominglist)
3820+        except OSError:
3821+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3822+            pass
3823 
3824     def get_shares(self, storage_index):
3825         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3826hunk ./src/allmydata/storage/server.py 17
3827 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3828      create_mutable_sharefile
3829 
3830-# storage/
3831-# storage/shares/incoming
3832-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3833-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3834-# storage/shares/$START/$STORAGEINDEX
3835-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3836-
3837-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3838-# base-32 chars).
3839-
3840-
3841 class StorageServer(service.MultiService, Referenceable):
3842     implements(RIStorageServer, IStatsProducer)
3843     name = 'storage'
3844}
3845[testing get incoming
3846wilcoxjg@gmail.com**20110711210224
3847 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3848] {
3849hunk ./src/allmydata/storage/backends/das/core.py 87
3850     def get_incoming(self, storageindex):
3851         """Return the set of incoming shnums."""
3852         try:
3853-            incominglist = os.listdir(self.incomingdir)
3854+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3855+            incominglist = os.listdir(incomingsharesdir)
3856             print "incominglist: ", incominglist
3857             return set(incominglist)
3858         except OSError:
3859hunk ./src/allmydata/storage/backends/das/core.py 92
3860-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3861-            pass
3862-
3863+            # XXX I'd like to make this more specific. If there are no shares at all.
3864+            return set()
3865+           
3866     def get_shares(self, storage_index):
3867         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3868         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3869hunk ./src/allmydata/test/test_backends.py 149
3870         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3871 
3872         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3873+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3874         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3875 
3876hunk ./src/allmydata/test/test_backends.py 152
3877-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3878         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3879         # with the same si, until BucketWriter.remote_close() has been called.
3880         # self.failIf(bsa)
3881}
3882[ImmutableShareFile does not know its StorageIndex
3883wilcoxjg@gmail.com**20110711211424
3884 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3885] {
3886hunk ./src/allmydata/storage/backends/das/core.py 112
3887             return 0
3888         return fileutil.get_available_space(self.storedir, self.reserved_space)
3889 
3890-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3891-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3892+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3893+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3894+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3895+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3896         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3897         return bw
3898 
3899hunk ./src/allmydata/storage/backends/das/core.py 155
3900     LEASE_SIZE = struct.calcsize(">L32s32sL")
3901     sharetype = "immutable"
3902 
3903-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3904+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3905         """ If max_size is not None then I won't allow more than
3906         max_size to be written to me. If create=True then max_size
3907         must not be None. """
3908}
3909[get_incoming correctly reports the 0 share after it has arrived
3910wilcoxjg@gmail.com**20110712025157
3911 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3912] {
3913hunk ./src/allmydata/storage/backends/das/core.py 1
3914+import os, re, weakref, struct, time, stat
3915+
3916 from allmydata.interfaces import IStorageBackend
3917 from allmydata.storage.backends.base import Backend
3918 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3919hunk ./src/allmydata/storage/backends/das/core.py 8
3920 from allmydata.util.assertutil import precondition
3921 
3922-import os, re, weakref, struct, time
3923-
3924 #from foolscap.api import Referenceable
3925 from twisted.application import service
3926 
3927hunk ./src/allmydata/storage/backends/das/core.py 89
3928         try:
3929             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3930             incominglist = os.listdir(incomingsharesdir)
3931-            print "incominglist: ", incominglist
3932-            return set(incominglist)
3933+            incomingshnums = [int(x) for x in incominglist]
3934+            return set(incomingshnums)
3935         except OSError:
3936             # XXX I'd like to make this more specific. If there are no shares at all.
3937             return set()
3938hunk ./src/allmydata/storage/backends/das/core.py 113
3939         return fileutil.get_available_space(self.storedir, self.reserved_space)
3940 
3941     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3942-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3943-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3944-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3945+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3946+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3947+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3948         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3949         return bw
3950 
3951hunk ./src/allmydata/storage/backends/das/core.py 160
3952         max_size to be written to me. If create=True then max_size
3953         must not be None. """
3954         precondition((max_size is not None) or (not create), max_size, create)
3955-        self.shnum = shnum
3956-        self.storage_index = storageindex
3957-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3958         self._max_size = max_size
3959hunk ./src/allmydata/storage/backends/das/core.py 161
3960-        self.incomingdir = os.path.join(sharedir, 'incoming')
3961-        si_dir = storage_index_to_dir(storageindex)
3962-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3963-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3964-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3965+        self.incominghome = incominghome
3966+        self.finalhome = finalhome
3967         if create:
3968             # touch the file, so later callers will see that we're working on
3969             # it. Also construct the metadata.
3970hunk ./src/allmydata/storage/backends/das/core.py 166
3971-            assert not os.path.exists(self.fname)
3972-            fileutil.make_dirs(os.path.dirname(self.fname))
3973-            f = open(self.fname, 'wb')
3974+            assert not os.path.exists(self.finalhome)
3975+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3976+            f = open(self.incominghome, 'wb')
3977             # The second field -- the four-byte share data length -- is no
3978             # longer used as of Tahoe v1.3.0, but we continue to write it in
3979             # there in case someone downgrades a storage server from >=
3980hunk ./src/allmydata/storage/backends/das/core.py 183
3981             self._lease_offset = max_size + 0x0c
3982             self._num_leases = 0
3983         else:
3984-            f = open(self.fname, 'rb')
3985-            filesize = os.path.getsize(self.fname)
3986+            f = open(self.finalhome, 'rb')
3987+            filesize = os.path.getsize(self.finalhome)
3988             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3989             f.close()
3990             if version != 1:
3991hunk ./src/allmydata/storage/backends/das/core.py 189
3992                 msg = "sharefile %s had version %d but we wanted 1" % \
3993-                      (self.fname, version)
3994+                      (self.finalhome, version)
3995                 raise UnknownImmutableContainerVersionError(msg)
3996             self._num_leases = num_leases
3997             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3998hunk ./src/allmydata/storage/backends/das/core.py 225
3999         pass
4000         
4001     def stat(self):
4002-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
4003+        return os.stat(self.finalhome)[stat.ST_SIZE]
4004+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
4005 
4006     def get_shnum(self):
4007         return self.shnum
4008hunk ./src/allmydata/storage/backends/das/core.py 232
4009 
4010     def unlink(self):
4011-        os.unlink(self.fname)
4012+        os.unlink(self.finalhome)
4013 
4014     def read_share_data(self, offset, length):
4015         precondition(offset >= 0)
4016hunk ./src/allmydata/storage/backends/das/core.py 239
4017         # Reads beyond the end of the data are truncated. Reads that start
4018         # beyond the end of the data return an empty string.
4019         seekpos = self._data_offset+offset
4020-        fsize = os.path.getsize(self.fname)
4021+        fsize = os.path.getsize(self.finalhome)
4022         actuallength = max(0, min(length, fsize-seekpos))
4023         if actuallength == 0:
4024             return ""
4025hunk ./src/allmydata/storage/backends/das/core.py 243
4026-        f = open(self.fname, 'rb')
4027+        f = open(self.finalhome, 'rb')
4028         f.seek(seekpos)
4029         return f.read(actuallength)
4030 
4031hunk ./src/allmydata/storage/backends/das/core.py 252
4032         precondition(offset >= 0, offset)
4033         if self._max_size is not None and offset+length > self._max_size:
4034             raise DataTooLargeError(self._max_size, offset, length)
4035-        f = open(self.fname, 'rb+')
4036+        f = open(self.incominghome, 'rb+')
4037         real_offset = self._data_offset+offset
4038         f.seek(real_offset)
4039         assert f.tell() == real_offset
4040hunk ./src/allmydata/storage/backends/das/core.py 279
4041 
4042     def get_leases(self):
4043         """Yields a LeaseInfo instance for all leases."""
4044-        f = open(self.fname, 'rb')
4045+        f = open(self.finalhome, 'rb')
4046         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4047         f.seek(self._lease_offset)
4048         for i in range(num_leases):
4049hunk ./src/allmydata/storage/backends/das/core.py 288
4050                 yield LeaseInfo().from_immutable_data(data)
4051 
4052     def add_lease(self, lease_info):
4053-        f = open(self.fname, 'rb+')
4054+        f = open(self.incominghome, 'rb+')
4055         num_leases = self._read_num_leases(f)
4056         self._write_lease_record(f, num_leases, lease_info)
4057         self._write_num_leases(f, num_leases+1)
4058hunk ./src/allmydata/storage/backends/das/core.py 301
4059                 if new_expire_time > lease.expiration_time:
4060                     # yes
4061                     lease.expiration_time = new_expire_time
4062-                    f = open(self.fname, 'rb+')
4063+                    f = open(self.finalhome, 'rb+')
4064                     self._write_lease_record(f, i, lease)
4065                     f.close()
4066                 return
4067hunk ./src/allmydata/storage/backends/das/core.py 336
4068             # the same order as they were added, so that if we crash while
4069             # doing this, we won't lose any non-cancelled leases.
4070             leases = [l for l in leases if l] # remove the cancelled leases
4071-            f = open(self.fname, 'rb+')
4072+            f = open(self.finalhome, 'rb+')
4073             for i,lease in enumerate(leases):
4074                 self._write_lease_record(f, i, lease)
4075             self._write_num_leases(f, len(leases))
4076hunk ./src/allmydata/storage/backends/das/core.py 344
4077             f.close()
4078         space_freed = self.LEASE_SIZE * num_leases_removed
4079         if not len(leases):
4080-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4081+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4082             self.unlink()
4083         return space_freed
4084hunk ./src/allmydata/test/test_backends.py 129
4085     @mock.patch('time.time')
4086     def test_write_share(self, mocktime):
4087         """ Write a new share. """
4088-
4089-        class MockShare:
4090-            def __init__(self):
4091-                self.shnum = 1
4092-               
4093-            def add_or_renew_lease(elf, lease_info):
4094-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4095-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4096-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4097-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4098-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4099-
4100-        share = MockShare()
4101-
4102         # Now begin the test.
4103 
4104         # XXX (0) ???  Fail unless something is not properly set-up?
4105hunk ./src/allmydata/test/test_backends.py 143
4106         # self.failIf(bsa)
4107 
4108         bs[0].remote_write(0, 'a')
4109-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4110+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4111         spaceint = self.s.allocated_size()
4112         self.failUnlessReallyEqual(spaceint, 1)
4113 
4114hunk ./src/allmydata/test/test_backends.py 161
4115         #self.failIf(mockrename.called, mockrename.call_args_list)
4116         #self.failIf(mockstat.called, mockstat.call_args_list)
4117 
4118+    def test_handle_incoming(self):
4119+        incomingset = self.s.backend.get_incoming('teststorage_index')
4120+        self.failUnlessReallyEqual(incomingset, set())
4121+
4122+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4123+       
4124+        incomingset = self.s.backend.get_incoming('teststorage_index')
4125+        self.failUnlessReallyEqual(incomingset, set((0,)))
4126+
4127+        bs[0].remote_close()
4128+        self.failUnlessReallyEqual(incomingset, set())
4129+
4130     @mock.patch('os.path.exists')
4131     @mock.patch('os.path.getsize')
4132     @mock.patch('__builtin__.open')
4133hunk ./src/allmydata/test/test_backends.py 223
4134         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4135 
4136 
4137-
4138 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4139     @mock.patch('time.time')
4140     @mock.patch('os.mkdir')
4141hunk ./src/allmydata/test/test_backends.py 271
4142         DASCore('teststoredir', expiration_policy)
4143 
4144         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4145+
4146}
4147[jacp14
4148wilcoxjg@gmail.com**20110712061211
4149 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4150] {
4151hunk ./src/allmydata/storage/backends/das/core.py 95
4152             # XXX I'd like to make this more specific. If there are no shares at all.
4153             return set()
4154             
4155-    def get_shares(self, storage_index):
4156+    def get_shares(self, storageindex):
4157         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
4158hunk ./src/allmydata/storage/backends/das/core.py 97
4159-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4160+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4161         try:
4162             for f in os.listdir(finalstoragedir):
4163                 if NUM_RE.match(f):
4164hunk ./src/allmydata/storage/backends/das/core.py 102
4165                     filename = os.path.join(finalstoragedir, f)
4166-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4167+                    yield ImmutableShare(filename, storageindex, f)
4168         except OSError:
4169             # Commonly caused by there being no shares at all.
4170             pass
4171hunk ./src/allmydata/storage/backends/das/core.py 115
4172     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4173         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4174         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4175-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4176+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4177         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4178         return bw
4179 
4180hunk ./src/allmydata/storage/backends/das/core.py 155
4181     LEASE_SIZE = struct.calcsize(">L32s32sL")
4182     sharetype = "immutable"
4183 
4184-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4185+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4186         """ If max_size is not None then I won't allow more than
4187         max_size to be written to me. If create=True then max_size
4188         must not be None. """
4189hunk ./src/allmydata/storage/backends/das/core.py 160
4190         precondition((max_size is not None) or (not create), max_size, create)
4191+        self.storageindex = storageindex
4192         self._max_size = max_size
4193         self.incominghome = incominghome
4194         self.finalhome = finalhome
4195hunk ./src/allmydata/storage/backends/das/core.py 164
4196+        self.shnum = shnum
4197         if create:
4198             # touch the file, so later callers will see that we're working on
4199             # it. Also construct the metadata.
4200hunk ./src/allmydata/storage/backends/das/core.py 212
4201             # their children to know when they should do the rmdir. This
4202             # approach is simpler, but relies on os.rmdir refusing to delete
4203             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4204+            #print "os.path.dirname(self.incominghome): "
4205+            #print os.path.dirname(self.incominghome)
4206             os.rmdir(os.path.dirname(self.incominghome))
4207             # we also delete the grandparent (prefix) directory, .../ab ,
4208             # again to avoid leaving directories lying around. This might
4209hunk ./src/allmydata/storage/immutable.py 93
4210     def __init__(self, ss, share):
4211         self.ss = ss
4212         self._share_file = share
4213-        self.storage_index = share.storage_index
4214+        self.storageindex = share.storageindex
4215         self.shnum = share.shnum
4216 
4217     def __repr__(self):
4218hunk ./src/allmydata/storage/immutable.py 98
4219         return "<%s %s %s>" % (self.__class__.__name__,
4220-                               base32.b2a_l(self.storage_index[:8], 60),
4221+                               base32.b2a_l(self.storageindex[:8], 60),
4222                                self.shnum)
4223 
4224     def remote_read(self, offset, length):
4225hunk ./src/allmydata/storage/immutable.py 110
4226 
4227     def remote_advise_corrupt_share(self, reason):
4228         return self.ss.remote_advise_corrupt_share("immutable",
4229-                                                   self.storage_index,
4230+                                                   self.storageindex,
4231                                                    self.shnum,
4232                                                    reason)
4233hunk ./src/allmydata/test/test_backends.py 20
4234 # The following share file contents was generated with
4235 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4236 # with share data == 'a'.
4237-renew_secret  = 'x'*32
4238-cancel_secret = 'y'*32
4239-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4240-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4241+shareversionnumber = '\x00\x00\x00\x01'
4242+sharedatalength = '\x00\x00\x00\x01'
4243+numberofleases = '\x00\x00\x00\x01'
4244+shareinputdata = 'a'
4245+ownernumber = '\x00\x00\x00\x00'
4246+renewsecret  = 'x'*32
4247+cancelsecret = 'y'*32
4248+expirationtime = '\x00(\xde\x80'
4249+nextlease = ''
4250+containerdata = shareversionnumber + sharedatalength + numberofleases
4251+client_data = shareinputdata + ownernumber + renewsecret + \
4252+    cancelsecret + expirationtime + nextlease
4253+share_data = containerdata + client_data
4254+
4255 
4256 testnodeid = 'testnodeidxxxxxxxxxx'
4257 tempdir = 'teststoredir'
4258hunk ./src/allmydata/test/test_backends.py 52
4259 
4260 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4261     def setUp(self):
4262-        self.s = StorageServer(testnodeid, backend=NullCore())
4263+        self.ss = StorageServer(testnodeid, backend=NullCore())
4264 
4265     @mock.patch('os.mkdir')
4266     @mock.patch('__builtin__.open')
4267hunk ./src/allmydata/test/test_backends.py 62
4268         """ Write a new share. """
4269 
4270         # Now begin the test.
4271-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4272+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4273         bs[0].remote_write(0, 'a')
4274         self.failIf(mockisdir.called)
4275         self.failIf(mocklistdir.called)
4276hunk ./src/allmydata/test/test_backends.py 133
4277                 _assert(False, "The tester code doesn't recognize this case.") 
4278 
4279         mockopen.side_effect = call_open
4280-        testbackend = DASCore(tempdir, expiration_policy)
4281-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4282+        self.backend = DASCore(tempdir, expiration_policy)
4283+        self.ss = StorageServer(testnodeid, self.backend)
4284+        self.ssinf = StorageServer(testnodeid, self.backend)
4285 
4286     @mock.patch('time.time')
4287     def test_write_share(self, mocktime):
4288hunk ./src/allmydata/test/test_backends.py 142
4289         """ Write a new share. """
4290         # Now begin the test.
4291 
4292-        # XXX (0) ???  Fail unless something is not properly set-up?
4293-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4294+        mocktime.return_value = 0
4295+        # Inspect incoming and fail unless it's empty.
4296+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4297+        self.failUnlessReallyEqual(incomingset, set())
4298+       
4299+        # Among other things, populate incoming with the sharenum: 0.
4300+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4301 
4302hunk ./src/allmydata/test/test_backends.py 150
4303-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4304-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4305-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4306+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4307+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4308+       
4309+        # Attempt to create a second share writer with the same share.
4310+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4311 
4312hunk ./src/allmydata/test/test_backends.py 156
4313-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4314+        # Show that no sharewriter results from a remote_allocate_buckets
4315         # with the same si, until BucketWriter.remote_close() has been called.
4316hunk ./src/allmydata/test/test_backends.py 158
4317-        # self.failIf(bsa)
4318+        self.failIf(bsa)
4319 
4320hunk ./src/allmydata/test/test_backends.py 160
4321+        # Write 'a' to shnum 0. Only tested together with close and read.
4322         bs[0].remote_write(0, 'a')
4323hunk ./src/allmydata/test/test_backends.py 162
4324-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4325-        spaceint = self.s.allocated_size()
4326+
4327+        # Test allocated size.
4328+        spaceint = self.ss.allocated_size()
4329         self.failUnlessReallyEqual(spaceint, 1)
4330 
4331         # XXX (3) Inspect final and fail unless there's nothing there.
4332hunk ./src/allmydata/test/test_backends.py 168
4333+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4334         bs[0].remote_close()
4335         # XXX (4a) Inspect final and fail unless share 0 is there.
4336hunk ./src/allmydata/test/test_backends.py 171
4337+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4338+        #contents = sharesinfinal[0].read_share_data(0,999)
4339+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4340         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4341 
4342         # What happens when there's not enough space for the client's request?
4343hunk ./src/allmydata/test/test_backends.py 177
4344-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4345+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4346 
4347         # Now test the allocated_size method.
4348         # self.failIf(mockexists.called, mockexists.call_args_list)
4349hunk ./src/allmydata/test/test_backends.py 185
4350         #self.failIf(mockrename.called, mockrename.call_args_list)
4351         #self.failIf(mockstat.called, mockstat.call_args_list)
4352 
4353-    def test_handle_incoming(self):
4354-        incomingset = self.s.backend.get_incoming('teststorage_index')
4355-        self.failUnlessReallyEqual(incomingset, set())
4356-
4357-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4358-       
4359-        incomingset = self.s.backend.get_incoming('teststorage_index')
4360-        self.failUnlessReallyEqual(incomingset, set((0,)))
4361-
4362-        bs[0].remote_close()
4363-        self.failUnlessReallyEqual(incomingset, set())
4364-
4365     @mock.patch('os.path.exists')
4366     @mock.patch('os.path.getsize')
4367     @mock.patch('__builtin__.open')
4368hunk ./src/allmydata/test/test_backends.py 208
4369             self.failUnless('r' in mode, mode)
4370             self.failUnless('b' in mode, mode)
4371 
4372-            return StringIO(share_file_data)
4373+            return StringIO(share_data)
4374         mockopen.side_effect = call_open
4375 
4376hunk ./src/allmydata/test/test_backends.py 211
4377-        datalen = len(share_file_data)
4378+        datalen = len(share_data)
4379         def call_getsize(fname):
4380             self.failUnlessReallyEqual(fname, sharefname)
4381             return datalen
4382hunk ./src/allmydata/test/test_backends.py 223
4383         mockexists.side_effect = call_exists
4384 
4385         # Now begin the test.
4386-        bs = self.s.remote_get_buckets('teststorage_index')
4387+        bs = self.ss.remote_get_buckets('teststorage_index')
4388 
4389         self.failUnlessEqual(len(bs), 1)
4390hunk ./src/allmydata/test/test_backends.py 226
4391-        b = bs[0]
4392+        b = bs['0']
4393         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4394hunk ./src/allmydata/test/test_backends.py 228
4395-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4396+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4397         # If you try to read past the end you get the as much data as is there.
4398hunk ./src/allmydata/test/test_backends.py 230
4399-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4400+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4401         # If you start reading past the end of the file you get the empty string.
4402         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4403 
4404}
4405[jacp14 or so
4406wilcoxjg@gmail.com**20110713060346
4407 Ignore-this: 7026810f60879d65b525d450e43ff87a
4408] {
4409hunk ./src/allmydata/storage/backends/das/core.py 102
4410             for f in os.listdir(finalstoragedir):
4411                 if NUM_RE.match(f):
4412                     filename = os.path.join(finalstoragedir, f)
4413-                    yield ImmutableShare(filename, storageindex, f)
4414+                    yield ImmutableShare(filename, storageindex, int(f))
4415         except OSError:
4416             # Commonly caused by there being no shares at all.
4417             pass
4418hunk ./src/allmydata/storage/backends/null/core.py 25
4419     def set_storage_server(self, ss):
4420         self.ss = ss
4421 
4422+    def get_incoming(self, storageindex):
4423+        return set()
4424+
4425 class ImmutableShare:
4426     sharetype = "immutable"
4427 
4428hunk ./src/allmydata/storage/immutable.py 19
4429 
4430     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4431         self.ss = ss
4432-        self._max_size = max_size # don't allow the client to write more than this
4433+        self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
4434+
4435         self._canary = canary
4436         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4437         self.closed = False
4438hunk ./src/allmydata/test/test_backends.py 135
4439         mockopen.side_effect = call_open
4440         self.backend = DASCore(tempdir, expiration_policy)
4441         self.ss = StorageServer(testnodeid, self.backend)
4442-        self.ssinf = StorageServer(testnodeid, self.backend)
4443+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4444+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4445 
4446     @mock.patch('time.time')
4447     def test_write_share(self, mocktime):
4448hunk ./src/allmydata/test/test_backends.py 161
4449         # with the same si, until BucketWriter.remote_close() has been called.
4450         self.failIf(bsa)
4451 
4452-        # Write 'a' to shnum 0. Only tested together with close and read.
4453-        bs[0].remote_write(0, 'a')
4454-
4455         # Test allocated size.
4456         spaceint = self.ss.allocated_size()
4457         self.failUnlessReallyEqual(spaceint, 1)
4458hunk ./src/allmydata/test/test_backends.py 165
4459 
4460-        # XXX (3) Inspect final and fail unless there's nothing there.
4461+        # Write 'a' to shnum 0. Only tested together with close and read.
4462+        bs[0].remote_write(0, 'a')
4463+       
4464+        # Preclose: Inspect final, failUnless nothing there.
4465         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4466         bs[0].remote_close()
4467hunk ./src/allmydata/test/test_backends.py 171
4468-        # XXX (4a) Inspect final and fail unless share 0 is there.
4469-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4470-        #contents = sharesinfinal[0].read_share_data(0,999)
4471-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4472-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4473 
4474hunk ./src/allmydata/test/test_backends.py 172
4475-        # What happens when there's not enough space for the client's request?
4476-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4477+        # Postclose: (Omnibus) failUnless written data is in final.
4478+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4479+        contents = sharesinfinal[0].read_share_data(0,73)
4480+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4481 
4482hunk ./src/allmydata/test/test_backends.py 177
4483-        # Now test the allocated_size method.
4484-        # self.failIf(mockexists.called, mockexists.call_args_list)
4485-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4486-        #self.failIf(mockrename.called, mockrename.call_args_list)
4487-        #self.failIf(mockstat.called, mockstat.call_args_list)
4488+        # Cover interior of for share in get_shares loop.
4489+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4490+       
4491+    @mock.patch('time.time')
4492+    @mock.patch('allmydata.util.fileutil.get_available_space')
4493+    def test_out_of_space(self, mockget_available_space, mocktime):
4494+        mocktime.return_value = 0
4495+       
4496+        def call_get_available_space(dir, reserve):
4497+            return 0
4498+
4499+        mockget_available_space.side_effect = call_get_available_space
4500+       
4501+       
4502+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4503 
4504     @mock.patch('os.path.exists')
4505     @mock.patch('os.path.getsize')
4506hunk ./src/allmydata/test/test_backends.py 234
4507         bs = self.ss.remote_get_buckets('teststorage_index')
4508 
4509         self.failUnlessEqual(len(bs), 1)
4510-        b = bs['0']
4511+        b = bs[0]
4512         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4513         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4514         # If you try to read past the end you get the as much data as is there.
4515}
4516[temporary work-in-progress patch to be unrecorded
4517zooko@zooko.com**20110714003008
4518 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4519 tidy up a few tests, work done in pair-programming with Zancas
4520] {
4521hunk ./src/allmydata/storage/backends/das/core.py 65
4522         self._clean_incomplete()
4523 
4524     def _clean_incomplete(self):
4525-        fileutil.rm_dir(self.incomingdir)
4526+        fileutil.rmtree(self.incomingdir)
4527         fileutil.make_dirs(self.incomingdir)
4528 
4529     def _setup_corruption_advisory(self):
4530hunk ./src/allmydata/storage/immutable.py 1
4531-import os, stat, struct, time
4532+import os, time
4533 
4534 from foolscap.api import Referenceable
4535 
4536hunk ./src/allmydata/storage/server.py 1
4537-import os, re, weakref, struct, time
4538+import os, weakref, struct, time
4539 
4540 from foolscap.api import Referenceable
4541 from twisted.application import service
4542hunk ./src/allmydata/storage/server.py 7
4543 
4544 from zope.interface import implements
4545-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4546+from allmydata.interfaces import RIStorageServer, IStatsProducer
4547 from allmydata.util import fileutil, idlib, log, time_format
4548 import allmydata # for __full_version__
4549 
4550hunk ./src/allmydata/storage/server.py 313
4551         self.add_latency("get", time.time() - start)
4552         return bucketreaders
4553 
4554-    def remote_get_incoming(self, storageindex):
4555-        incoming_share_set = self.backend.get_incoming(storageindex)
4556-        return incoming_share_set
4557-
4558     def get_leases(self, storageindex):
4559         """Provide an iterator that yields all of the leases attached to this
4560         bucket. Each lease is returned as a LeaseInfo instance.
4561hunk ./src/allmydata/test/test_backends.py 3
4562 from twisted.trial import unittest
4563 
4564+from twisted.path.filepath import FilePath
4565+
4566 from StringIO import StringIO
4567 
4568 from allmydata.test.common_util import ReallyEqualMixin
4569hunk ./src/allmydata/test/test_backends.py 38
4570 
4571 
4572 testnodeid = 'testnodeidxxxxxxxxxx'
4573-tempdir = 'teststoredir'
4574-basedir = os.path.join(tempdir, 'shares')
4575+storedir = 'teststoredir'
4576+storedirfp = FilePath(storedir)
4577+basedir = os.path.join(storedir, 'shares')
4578 baseincdir = os.path.join(basedir, 'incoming')
4579 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4580 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4581hunk ./src/allmydata/test/test_backends.py 53
4582                      'cutoff_date' : None,
4583                      'sharetypes' : None}
4584 
4585-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4586+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4587+    """ NullBackend is just for testing and executable documentation, so
4588+    this test is actually a test of StorageServer in which we're using
4589+    NullBackend as helper code for the test, rather than a test of
4590+    NullBackend. """
4591     def setUp(self):
4592         self.ss = StorageServer(testnodeid, backend=NullCore())
4593 
4594hunk ./src/allmydata/test/test_backends.py 62
4595     @mock.patch('os.mkdir')
4596+
4597     @mock.patch('__builtin__.open')
4598     @mock.patch('os.listdir')
4599     @mock.patch('os.path.isdir')
4600hunk ./src/allmydata/test/test_backends.py 69
4601     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4602         """ Write a new share. """
4603 
4604-        # Now begin the test.
4605         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4606         bs[0].remote_write(0, 'a')
4607         self.failIf(mockisdir.called)
4608hunk ./src/allmydata/test/test_backends.py 83
4609     @mock.patch('os.listdir')
4610     @mock.patch('os.path.isdir')
4611     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4612-        """ This tests whether a server instance can be constructed
4613-        with a filesystem backend. To pass the test, it has to use the
4614-        filesystem in only the prescribed ways. """
4615+        """ This tests whether a server instance can be constructed with a
4616+        filesystem backend. To pass the test, it mustn't use the filesystem
4617+        outside of its configured storedir. """
4618 
4619         def call_open(fname, mode):
4620hunk ./src/allmydata/test/test_backends.py 88
4621-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4622-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4623-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4624-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4625-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4626+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4627+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4628+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4629+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4630+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4631                 return StringIO()
4632             else:
4633hunk ./src/allmydata/test/test_backends.py 95
4634-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4635+                fnamefp = FilePath(fname)
4636+                self.failUnless(storedirfp in fnamefp.parents(),
4637+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4638         mockopen.side_effect = call_open
4639 
4640         def call_isdir(fname):
4641hunk ./src/allmydata/test/test_backends.py 101
4642-            if fname == os.path.join(tempdir,'shares'):
4643+            if fname == os.path.join(storedir, 'shares'):
4644                 return True
4645hunk ./src/allmydata/test/test_backends.py 103
4646-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4647+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4648                 return True
4649             else:
4650                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4651hunk ./src/allmydata/test/test_backends.py 109
4652         mockisdir.side_effect = call_isdir
4653 
4654+        mocklistdir.return_value = []
4655+
4656         def call_mkdir(fname, mode):
4657hunk ./src/allmydata/test/test_backends.py 112
4658-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4659             self.failUnlessEqual(0777, mode)
4660hunk ./src/allmydata/test/test_backends.py 113
4661-            if fname == tempdir:
4662-                return None
4663-            elif fname == os.path.join(tempdir,'shares'):
4664-                return None
4665-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4666-                return None
4667-            else:
4668-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4669+            self.failUnlessIn(fname,
4670+                              [storedir,
4671+                               os.path.join(storedir, 'shares'),
4672+                               os.path.join(storedir, 'shares', 'incoming')],
4673+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4674         mockmkdir.side_effect = call_mkdir
4675 
4676         # Now begin the test.
4677hunk ./src/allmydata/test/test_backends.py 121
4678-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4679+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4680 
4681         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4682 
4683hunk ./src/allmydata/test/test_backends.py 126
4684 
4685-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4686+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4687+    """ This tests both the StorageServer xyz """
4688     @mock.patch('__builtin__.open')
4689     def setUp(self, mockopen):
4690         def call_open(fname, mode):
4691hunk ./src/allmydata/test/test_backends.py 131
4692-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4693-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4694-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4695-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4696-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4697+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4698+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4699+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4700+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4701+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4702                 return StringIO()
4703             else:
4704                 _assert(False, "The tester code doesn't recognize this case.") 
4705hunk ./src/allmydata/test/test_backends.py 141
4706 
4707         mockopen.side_effect = call_open
4708-        self.backend = DASCore(tempdir, expiration_policy)
4709+        self.backend = DASCore(storedir, expiration_policy)
4710         self.ss = StorageServer(testnodeid, self.backend)
4711hunk ./src/allmydata/test/test_backends.py 143
4712-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4713+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4714         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4715 
4716     @mock.patch('time.time')
4717hunk ./src/allmydata/test/test_backends.py 147
4718-    def test_write_share(self, mocktime):
4719-        """ Write a new share. """
4720-        # Now begin the test.
4721+    def test_write_and_read_share(self, mocktime):
4722+        """
4723+        Write a new share, read it, and test the server's (and FS backend's)
4724+        handling of simultaneous and successive attempts to write the same
4725+        share.
4726+        """
4727 
4728         mocktime.return_value = 0
4729         # Inspect incoming and fail unless it's empty.
4730hunk ./src/allmydata/test/test_backends.py 159
4731         incomingset = self.ss.backend.get_incoming('teststorage_index')
4732         self.failUnlessReallyEqual(incomingset, set())
4733         
4734-        # Among other things, populate incoming with the sharenum: 0.
4735+        # Populate incoming with the sharenum: 0.
4736         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4737 
4738         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4739hunk ./src/allmydata/test/test_backends.py 163
4740-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4741+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4742         
4743hunk ./src/allmydata/test/test_backends.py 165
4744-        # Attempt to create a second share writer with the same share.
4745+        # Attempt to create a second share writer with the same sharenum.
4746         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4747 
4748         # Show that no sharewriter results from a remote_allocate_buckets
4749hunk ./src/allmydata/test/test_backends.py 169
4750-        # with the same si, until BucketWriter.remote_close() has been called.
4751+        # with the same si and sharenum, until BucketWriter.remote_close()
4752+        # has been called.
4753         self.failIf(bsa)
4754 
4755         # Test allocated size.
4756hunk ./src/allmydata/test/test_backends.py 187
4757         # Postclose: (Omnibus) failUnless written data is in final.
4758         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4759         contents = sharesinfinal[0].read_share_data(0,73)
4760-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4761+        self.failUnlessReallyEqual(contents, client_data)
4762 
4763hunk ./src/allmydata/test/test_backends.py 189
4764-        # Cover interior of for share in get_shares loop.
4765-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4766+        # Exercise the case that the share we're asking to allocate is
4767+        # already (completely) uploaded.
4768+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4769         
4770     @mock.patch('time.time')
4771     @mock.patch('allmydata.util.fileutil.get_available_space')
4772hunk ./src/allmydata/test/test_backends.py 210
4773     @mock.patch('os.path.getsize')
4774     @mock.patch('__builtin__.open')
4775     @mock.patch('os.listdir')
4776-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4777+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4778         """ This tests whether the code correctly finds and reads
4779         shares written out by old (Tahoe-LAFS <= v1.8.2)
4780         servers. There is a similar test in test_download, but that one
4781hunk ./src/allmydata/test/test_backends.py 219
4782         StorageServer object. """
4783 
4784         def call_listdir(dirname):
4785-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4786+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4787             return ['0']
4788 
4789         mocklistdir.side_effect = call_listdir
4790hunk ./src/allmydata/test/test_backends.py 226
4791 
4792         def call_open(fname, mode):
4793             self.failUnlessReallyEqual(fname, sharefname)
4794-            self.failUnless('r' in mode, mode)
4795+            self.failUnlessEqual(mode[0], 'r', mode)
4796             self.failUnless('b' in mode, mode)
4797 
4798             return StringIO(share_data)
4799hunk ./src/allmydata/test/test_backends.py 268
4800         filesystem in only the prescribed ways. """
4801 
4802         def call_open(fname, mode):
4803-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4804-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4805-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4806-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4807-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4808+            if fname == os.path.join(storedir,'bucket_counter.state'):
4809+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4810+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4811+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4812+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4813                 return StringIO()
4814             else:
4815                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4816hunk ./src/allmydata/test/test_backends.py 279
4817         mockopen.side_effect = call_open
4818 
4819         def call_isdir(fname):
4820-            if fname == os.path.join(tempdir,'shares'):
4821+            if fname == os.path.join(storedir,'shares'):
4822                 return True
4823hunk ./src/allmydata/test/test_backends.py 281
4824-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4825+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4826                 return True
4827             else:
4828                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4829hunk ./src/allmydata/test/test_backends.py 290
4830         def call_mkdir(fname, mode):
4831             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4832             self.failUnlessEqual(0777, mode)
4833-            if fname == tempdir:
4834+            if fname == storedir:
4835                 return None
4836hunk ./src/allmydata/test/test_backends.py 292
4837-            elif fname == os.path.join(tempdir,'shares'):
4838+            elif fname == os.path.join(storedir,'shares'):
4839                 return None
4840hunk ./src/allmydata/test/test_backends.py 294
4841-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4842+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4843                 return None
4844             else:
4845                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4846hunk ./src/allmydata/util/fileutil.py 5
4847 Futz with files like a pro.
4848 """
4849 
4850-import sys, exceptions, os, stat, tempfile, time, binascii
4851+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4852 
4853 from twisted.python import log
4854 
4855hunk ./src/allmydata/util/fileutil.py 186
4856             raise tx
4857         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4858 
4859-def rm_dir(dirname):
4860+def rmtree(dirname):
4861     """
4862     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4863     already gone, do nothing and return without raising an exception.  If this
4864hunk ./src/allmydata/util/fileutil.py 205
4865             else:
4866                 remove(fullname)
4867         os.rmdir(dirname)
4868-    except Exception, le:
4869-        # Ignore "No such file or directory"
4870-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4871+    except EnvironmentError, le:
4872+        # Ignore "No such file or directory", collect any other exception.
4873+        if (le.args[0] != 2 and le.args[0] != 3) or (le.args[0] != errno.ENOENT):
4874             excs.append(le)
4875hunk ./src/allmydata/util/fileutil.py 209
4876+    except Exception, le:
4877+        excs.append(le)
4878 
4879     # Okay, now we've recursively removed everything, ignoring any "No
4880     # such file or directory" errors, and collecting any other errors.
4881hunk ./src/allmydata/util/fileutil.py 222
4882             raise OSError, "Failed to remove dir for unknown reason."
4883         raise OSError, excs
4884 
4885+def rm_dir(dirname):
4886+    # Renamed to be like shutil.rmtree and unlike rmdir.
4887+    return rmtree(dirname)
4888 
4889 def remove_if_possible(f):
4890     try:
4891}
4892[work in progress intended to be unrecorded and never committed to trunk
4893zooko@zooko.com**20110714212139
4894 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4895 switch from os.path.join to filepath
4896 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4897 
4898] {
4899hunk ./src/allmydata/test/test_backends.py 3
4900 from twisted.trial import unittest
4901 
4902-from twisted.path.filepath import FilePath
4903+from twisted.python.filepath import FilePath
4904 
4905 from StringIO import StringIO
4906 
4907hunk ./src/allmydata/test/test_backends.py 10
4908 from allmydata.test.common_util import ReallyEqualMixin
4909 from allmydata.util.assertutil import _assert
4910 
4911-import mock, os
4912+import mock
4913 
4914 # This is the code that we're going to be testing.
4915 from allmydata.storage.server import StorageServer
4916hunk ./src/allmydata/test/test_backends.py 25
4917 shareversionnumber = '\x00\x00\x00\x01'
4918 sharedatalength = '\x00\x00\x00\x01'
4919 numberofleases = '\x00\x00\x00\x01'
4920+
4921 shareinputdata = 'a'
4922 ownernumber = '\x00\x00\x00\x00'
4923 renewsecret  = 'x'*32
4924hunk ./src/allmydata/test/test_backends.py 39
4925 
4926 
4927 testnodeid = 'testnodeidxxxxxxxxxx'
4928-storedir = 'teststoredir'
4929-storedirfp = FilePath(storedir)
4930-basedir = os.path.join(storedir, 'shares')
4931-baseincdir = os.path.join(basedir, 'incoming')
4932-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4933-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4934-shareincomingname = os.path.join(sharedirincomingname, '0')
4935-sharefname = os.path.join(sharedirfinalname, '0')
4936+
4937+class TestFilesMixin(unittest.TestCase):
4938+    def setUp(self):
4939+        self.storedir = FilePath('teststoredir')
4940+        self.basedir = self.storedir.child('shares')
4941+        self.baseincdir = self.basedir.child('incoming')
4942+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4943+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4944+        self.shareincomingname = self.sharedirincomingname.child('0')
4945+        self.sharefname = self.sharedirfinalname.child('0')
4946+
4947+    def call_open(self, fname, mode):
4948+        fnamefp = FilePath(fname)
4949+        if fnamefp == self.storedir.child('bucket_counter.state'):
4950+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4951+        elif fnamefp == self.storedir.child('lease_checker.state'):
4952+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4953+        elif fnamefp == self.storedir.child('lease_checker.history'):
4954+            return StringIO()
4955+        else:
4956+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4957+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4958+
4959+    def call_isdir(self, fname):
4960+        fnamefp = FilePath(fname)
4961+        if fnamefp == self.storedir.child('shares'):
4962+            return True
4963+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4964+            return True
4965+        else:
4966+            self.failUnless(self.storedir in fnamefp.parents(),
4967+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4968+
4969+    def call_mkdir(self, fname, mode):
4970+        self.failUnlessEqual(0777, mode)
4971+        fnamefp = FilePath(fname)
4972+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4973+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4974+
4975+
4976+    @mock.patch('os.mkdir')
4977+    @mock.patch('__builtin__.open')
4978+    @mock.patch('os.listdir')
4979+    @mock.patch('os.path.isdir')
4980+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4981+        mocklistdir.return_value = []
4982+        mockmkdir.side_effect = self.call_mkdir
4983+        mockisdir.side_effect = self.call_isdir
4984+        mockopen.side_effect = self.call_open
4985+        mocklistdir.return_value = []
4986+       
4987+        test_func()
4988+       
4989+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4990 
4991 expiration_policy = {'enabled' : False,
4992                      'mode' : 'age',
4993hunk ./src/allmydata/test/test_backends.py 123
4994         self.failIf(mockopen.called)
4995         self.failIf(mockmkdir.called)
4996 
4997-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4998-    @mock.patch('time.time')
4999-    @mock.patch('os.mkdir')
5000-    @mock.patch('__builtin__.open')
5001-    @mock.patch('os.listdir')
5002-    @mock.patch('os.path.isdir')
5003-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5004+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5005+    def test_create_server_fs_backend(self):
5006         """ This tests whether a server instance can be constructed with a
5007         filesystem backend. To pass the test, it mustn't use the filesystem
5008         outside of its configured storedir. """
5009hunk ./src/allmydata/test/test_backends.py 129
5010 
5011-        def call_open(fname, mode):
5012-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5013-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5014-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5015-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5016-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5017-                return StringIO()
5018-            else:
5019-                fnamefp = FilePath(fname)
5020-                self.failUnless(storedirfp in fnamefp.parents(),
5021-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5022-        mockopen.side_effect = call_open
5023+        def _f():
5024+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5025 
5026hunk ./src/allmydata/test/test_backends.py 132
5027-        def call_isdir(fname):
5028-            if fname == os.path.join(storedir, 'shares'):
5029-                return True
5030-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5031-                return True
5032-            else:
5033-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5034-        mockisdir.side_effect = call_isdir
5035-
5036-        mocklistdir.return_value = []
5037-
5038-        def call_mkdir(fname, mode):
5039-            self.failUnlessEqual(0777, mode)
5040-            self.failUnlessIn(fname,
5041-                              [storedir,
5042-                               os.path.join(storedir, 'shares'),
5043-                               os.path.join(storedir, 'shares', 'incoming')],
5044-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5045-        mockmkdir.side_effect = call_mkdir
5046-
5047-        # Now begin the test.
5048-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5049-
5050-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5051+        self._help_test_stay_in_your_subtree(_f)
5052 
5053 
5054 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5055}
5056[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5057zooko@zooko.com**20110715191500
5058 Ignore-this: af33336789041800761e80510ea2f583
5059 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete converstion to filepath. The latter has still a lot of work to go.
5060] {
5061hunk ./src/allmydata/storage/backends/das/core.py 59
5062                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5063                         umid="0wZ27w", level=log.UNUSUAL)
5064 
5065-        self.sharedir = os.path.join(self.storedir, "shares")
5066-        fileutil.make_dirs(self.sharedir)
5067-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5068+        self.sharedir = self.storedir.child("shares")
5069+        fileutil.fp_make_dirs(self.sharedir)
5070+        self.incomingdir = self.sharedir.child('incoming')
5071         self._clean_incomplete()
5072 
5073     def _clean_incomplete(self):
5074hunk ./src/allmydata/storage/backends/das/core.py 65
5075-        fileutil.rmtree(self.incomingdir)
5076-        fileutil.make_dirs(self.incomingdir)
5077+        fileutil.fp_remove(self.incomingdir)
5078+        fileutil.fp_make_dirs(self.incomingdir)
5079 
5080     def _setup_corruption_advisory(self):
5081         # we don't actually create the corruption-advisory dir until necessary
5082hunk ./src/allmydata/storage/backends/das/core.py 70
5083-        self.corruption_advisory_dir = os.path.join(self.storedir,
5084-                                                    "corruption-advisories")
5085+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5086 
5087     def _setup_bucket_counter(self):
5088hunk ./src/allmydata/storage/backends/das/core.py 73
5089-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5090+        statefname = self.storedir.child("bucket_counter.state")
5091         self.bucket_counter = FSBucketCountingCrawler(statefname)
5092         self.bucket_counter.setServiceParent(self)
5093 
5094hunk ./src/allmydata/storage/backends/das/core.py 78
5095     def _setup_lease_checkerf(self, expiration_policy):
5096-        statefile = os.path.join(self.storedir, "lease_checker.state")
5097-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5098+        statefile = self.storedir.child("lease_checker.state")
5099+        historyfile = self.storedir.child("lease_checker.history")
5100         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5101         self.lease_checker.setServiceParent(self)
5102 
5103hunk ./src/allmydata/storage/backends/das/core.py 83
5104-    def get_incoming(self, storageindex):
5105+    def get_incoming_shnums(self, storageindex):
5106         """Return the set of incoming shnums."""
5107         try:
5108hunk ./src/allmydata/storage/backends/das/core.py 86
5109-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5110-            incominglist = os.listdir(incomingsharesdir)
5111-            incomingshnums = [int(x) for x in incominglist]
5112-            return set(incomingshnums)
5113-        except OSError:
5114-            # XXX I'd like to make this more specific. If there are no shares at all.
5115-            return set()
5116+           
5117+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5118+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5119+            return frozenset(incomingshnums)
5120+        except UnlistableError:
5121+            # There is no shares directory at all.
5122+            return frozenset()
5123             
5124     def get_shares(self, storageindex):
5125         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5126hunk ./src/allmydata/storage/backends/das/core.py 96
5127-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5128+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5129         try:
5130hunk ./src/allmydata/storage/backends/das/core.py 98
5131-            for f in os.listdir(finalstoragedir):
5132-                if NUM_RE.match(f):
5133-                    filename = os.path.join(finalstoragedir, f)
5134-                    yield ImmutableShare(filename, storageindex, int(f))
5135-        except OSError:
5136-            # Commonly caused by there being no shares at all.
5137+            for f in finalstoragedir.listdir():
5138+                if NUM_RE.match(f.basename):
5139+                    yield ImmutableShare(f, storageindex, int(f))
5140+        except UnlistableError:
5141+            # There is no shares directory at all.
5142             pass
5143         
5144     def get_available_space(self):
5145hunk ./src/allmydata/storage/backends/das/core.py 149
5146 # then the value stored in this field will be the actual share data length
5147 # modulo 2**32.
5148 
5149-class ImmutableShare:
5150+class ImmutableShare(object):
5151     LEASE_SIZE = struct.calcsize(">L32s32sL")
5152     sharetype = "immutable"
5153 
5154hunk ./src/allmydata/storage/backends/das/core.py 166
5155         if create:
5156             # touch the file, so later callers will see that we're working on
5157             # it. Also construct the metadata.
5158-            assert not os.path.exists(self.finalhome)
5159-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5160+            assert not finalhome.exists()
5161+            fp_make_dirs(self.incominghome)
5162             f = open(self.incominghome, 'wb')
5163             # The second field -- the four-byte share data length -- is no
5164             # longer used as of Tahoe v1.3.0, but we continue to write it in
5165hunk ./src/allmydata/storage/backends/das/core.py 316
5166         except IndexError:
5167             self.add_lease(lease_info)
5168 
5169-
5170     def cancel_lease(self, cancel_secret):
5171         """Remove a lease with the given cancel_secret. If the last lease is
5172         cancelled, the file will be removed. Return the number of bytes that
5173hunk ./src/allmydata/storage/common.py 19
5174 def si_a2b(ascii_storageindex):
5175     return base32.a2b(ascii_storageindex)
5176 
5177-def storage_index_to_dir(storageindex):
5178+def storage_index_to_dir(startfp, storageindex):
5179     sia = si_b2a(storageindex)
5180     return os.path.join(sia[:2], sia)
5181hunk ./src/allmydata/storage/server.py 210
5182 
5183         # fill incoming with all shares that are incoming use a set operation
5184         # since there's no need to operate on individual pieces
5185-        incoming = self.backend.get_incoming(storageindex)
5186+        incoming = self.backend.get_incoming_shnums(storageindex)
5187 
5188         for shnum in ((sharenums - alreadygot) - incoming):
5189             if (not limited) or (remaining_space >= max_space_per_bucket):
5190hunk ./src/allmydata/test/test_backends.py 5
5191 
5192 from twisted.python.filepath import FilePath
5193 
5194+from allmydata.util.log import msg
5195+
5196 from StringIO import StringIO
5197 
5198 from allmydata.test.common_util import ReallyEqualMixin
5199hunk ./src/allmydata/test/test_backends.py 42
5200 
5201 testnodeid = 'testnodeidxxxxxxxxxx'
5202 
5203-class TestFilesMixin(unittest.TestCase):
5204-    def setUp(self):
5205-        self.storedir = FilePath('teststoredir')
5206-        self.basedir = self.storedir.child('shares')
5207-        self.baseincdir = self.basedir.child('incoming')
5208-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5209-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5210-        self.shareincomingname = self.sharedirincomingname.child('0')
5211-        self.sharefname = self.sharedirfinalname.child('0')
5212+class MockStat:
5213+    def __init__(self):
5214+        self.st_mode = None
5215 
5216hunk ./src/allmydata/test/test_backends.py 46
5217+class MockFiles(unittest.TestCase):
5218+    """ I simulate a filesystem that the code under test can use. I flag the
5219+    code under test if it reads or writes outside of its prescribed
5220+    subtree. I simulate just the parts of the filesystem that the current
5221+    implementation of DAS backend needs. """
5222     def call_open(self, fname, mode):
5223         fnamefp = FilePath(fname)
5224hunk ./src/allmydata/test/test_backends.py 53
5225+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5226+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5227+
5228         if fnamefp == self.storedir.child('bucket_counter.state'):
5229             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5230         elif fnamefp == self.storedir.child('lease_checker.state'):
5231hunk ./src/allmydata/test/test_backends.py 61
5232             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5233         elif fnamefp == self.storedir.child('lease_checker.history'):
5234+            # This is separated out from the else clause below just because
5235+            # we know this particular file is going to be used by the
5236+            # current implementation of DAS backend, and we might want to
5237+            # use this information in this test in the future...
5238             return StringIO()
5239         else:
5240hunk ./src/allmydata/test/test_backends.py 67
5241-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5242-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5243+            # Anything else you open inside your subtree appears to be an
5244+            # empty file.
5245+            return StringIO()
5246 
5247     def call_isdir(self, fname):
5248         fnamefp = FilePath(fname)
5249hunk ./src/allmydata/test/test_backends.py 73
5250-        if fnamefp == self.storedir.child('shares'):
5251+        return fnamefp.isdir()
5252+
5253+        self.failUnless(self.storedir == self or self.storedir in self.parents(),
5254+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
5255+
5256+        # The first two cases are separate from the else clause below just
5257+        # because we know that the current implementation of the DAS backend
5258+        # inspects these two directories and we might want to make use of
5259+        # that information in the tests in the future...
5260+        if self == self.storedir.child('shares'):
5261             return True
5262hunk ./src/allmydata/test/test_backends.py 84
5263-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5264+        elif self == self.storedir.child('shares').child('incoming'):
5265             return True
5266         else:
5267hunk ./src/allmydata/test/test_backends.py 87
5268-            self.failUnless(self.storedir in fnamefp.parents(),
5269-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5270+            # Anything else you open inside your subtree appears to be a
5271+            # directory.
5272+            return True
5273 
5274     def call_mkdir(self, fname, mode):
5275hunk ./src/allmydata/test/test_backends.py 92
5276-        self.failUnlessEqual(0777, mode)
5277         fnamefp = FilePath(fname)
5278         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5279                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5280hunk ./src/allmydata/test/test_backends.py 95
5281+        self.failUnlessEqual(0777, mode)
5282 
5283hunk ./src/allmydata/test/test_backends.py 97
5284+    def call_listdir(self, fname):
5285+        fnamefp = FilePath(fname)
5286+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5287+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5288 
5289hunk ./src/allmydata/test/test_backends.py 102
5290-    @mock.patch('os.mkdir')
5291-    @mock.patch('__builtin__.open')
5292-    @mock.patch('os.listdir')
5293-    @mock.patch('os.path.isdir')
5294-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5295-        mocklistdir.return_value = []
5296+    def call_stat(self, fname):
5297+        fnamefp = FilePath(fname)
5298+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5299+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5300+
5301+        msg("%s.call_stat(%s)" % (self, fname,))
5302+        mstat = MockStat()
5303+        mstat.st_mode = 16893 # a directory
5304+        return mstat
5305+
5306+    def setUp(self):
5307+        msg( "%s.setUp()" % (self,))
5308+        self.storedir = FilePath('teststoredir')
5309+        self.basedir = self.storedir.child('shares')
5310+        self.baseincdir = self.basedir.child('incoming')
5311+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5312+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5313+        self.shareincomingname = self.sharedirincomingname.child('0')
5314+        self.sharefname = self.sharedirfinalname.child('0')
5315+
5316+        self.mocklistdirp = mock.patch('os.listdir')
5317+        mocklistdir = self.mocklistdirp.__enter__()
5318+        mocklistdir.side_effect = self.call_listdir
5319+
5320+        self.mockmkdirp = mock.patch('os.mkdir')
5321+        mockmkdir = self.mockmkdirp.__enter__()
5322         mockmkdir.side_effect = self.call_mkdir
5323hunk ./src/allmydata/test/test_backends.py 129
5324+
5325+        self.mockisdirp = mock.patch('os.path.isdir')
5326+        mockisdir = self.mockisdirp.__enter__()
5327         mockisdir.side_effect = self.call_isdir
5328hunk ./src/allmydata/test/test_backends.py 133
5329+
5330+        self.mockopenp = mock.patch('__builtin__.open')
5331+        mockopen = self.mockopenp.__enter__()
5332         mockopen.side_effect = self.call_open
5333hunk ./src/allmydata/test/test_backends.py 137
5334-        mocklistdir.return_value = []
5335-       
5336-        test_func()
5337-       
5338-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5339+
5340+        self.mockstatp = mock.patch('os.stat')
5341+        mockstat = self.mockstatp.__enter__()
5342+        mockstat.side_effect = self.call_stat
5343+
5344+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5345+        mockfpstat = self.mockfpstatp.__enter__()
5346+        mockfpstat.side_effect = self.call_stat
5347+
5348+    def tearDown(self):
5349+        msg( "%s.tearDown()" % (self,))
5350+        self.mockfpstatp.__exit__()
5351+        self.mockstatp.__exit__()
5352+        self.mockopenp.__exit__()
5353+        self.mockisdirp.__exit__()
5354+        self.mockmkdirp.__exit__()
5355+        self.mocklistdirp.__exit__()
5356 
5357 expiration_policy = {'enabled' : False,
5358                      'mode' : 'age',
5359hunk ./src/allmydata/test/test_backends.py 184
5360         self.failIf(mockopen.called)
5361         self.failIf(mockmkdir.called)
5362 
5363-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5364+class TestServerConstruction(MockFiles, ReallyEqualMixin):
5365     def test_create_server_fs_backend(self):
5366         """ This tests whether a server instance can be constructed with a
5367         filesystem backend. To pass the test, it mustn't use the filesystem
5368hunk ./src/allmydata/test/test_backends.py 190
5369         outside of its configured storedir. """
5370 
5371-        def _f():
5372-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5373+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5374 
5375hunk ./src/allmydata/test/test_backends.py 192
5376-        self._help_test_stay_in_your_subtree(_f)
5377-
5378-
5379-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5380-    """ This tests both the StorageServer xyz """
5381-    @mock.patch('__builtin__.open')
5382-    def setUp(self, mockopen):
5383-        def call_open(fname, mode):
5384-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5385-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5386-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5387-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5388-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5389-                return StringIO()
5390-            else:
5391-                _assert(False, "The tester code doesn't recognize this case.") 
5392-
5393-        mockopen.side_effect = call_open
5394-        self.backend = DASCore(storedir, expiration_policy)
5395-        self.ss = StorageServer(testnodeid, self.backend)
5396-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
5397-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5398+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
5399+    """ This tests both the StorageServer and the DAS backend together. """
5400+    def setUp(self):
5401+        MockFiles.setUp(self)
5402+        try:
5403+            self.backend = DASCore(self.storedir, expiration_policy)
5404+            self.ss = StorageServer(testnodeid, self.backend)
5405+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
5406+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5407+        except:
5408+            MockFiles.tearDown(self)
5409+            raise
5410 
5411     @mock.patch('time.time')
5412     def test_write_and_read_share(self, mocktime):
5413hunk ./src/allmydata/util/fileutil.py 8
5414 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5415 
5416 from twisted.python import log
5417+from twisted.python.filepath import UnlistableError
5418 
5419 from pycryptopp.cipher.aes import AES
5420 
5421hunk ./src/allmydata/util/fileutil.py 187
5422             raise tx
5423         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5424 
5425+def fp_make_dirs(dirfp):
5426+    """
5427+    An idempotent version of FilePath.makedirs().  If the dir already
5428+    exists, do nothing and return without raising an exception.  If this
5429+    call creates the dir, return without raising an exception.  If there is
5430+    an error that prevents creation or if the directory gets deleted after
5431+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5432+    exists, raise an exception.
5433+    """
5434+    log.msg( "xxx 0 %s" % (dirfp,))
5435+    tx = None
5436+    try:
5437+        dirfp.makedirs()
5438+    except OSError, x:
5439+        tx = x
5440+
5441+    if not dirfp.isdir():
5442+        if tx:
5443+            raise tx
5444+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5445+
5446 def rmtree(dirname):
5447     """
5448     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5449hunk ./src/allmydata/util/fileutil.py 244
5450             raise OSError, "Failed to remove dir for unknown reason."
5451         raise OSError, excs
5452 
5453+def fp_remove(dirfp):
5454+    try:
5455+        dirfp.remove()
5456+    except UnlistableError, e:
5457+        if e.originalException.errno != errno.ENOENT:
5458+            raise
5459+
5460 def rm_dir(dirname):
5461     # Renamed to be like shutil.rmtree and unlike rmdir.
5462     return rmtree(dirname)
5463}
5464[another temporary patch for sharing work-in-progress
5465zooko@zooko.com**20110720055918
5466 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
5467 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
5468 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
5469 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
5470 
5471] {
5472hunk ./src/allmydata/storage/backends/das/core.py 5
5473 
5474 from allmydata.interfaces import IStorageBackend
5475 from allmydata.storage.backends.base import Backend
5476-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5477+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5478 from allmydata.util.assertutil import precondition
5479 
5480 #from foolscap.api import Referenceable
5481hunk ./src/allmydata/storage/backends/das/core.py 10
5482 from twisted.application import service
5483+from twisted.python.filepath import UnlistableError
5484 
5485 from zope.interface import implements
5486 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
5487hunk ./src/allmydata/storage/backends/das/core.py 17
5488 from allmydata.util import fileutil, idlib, log, time_format
5489 import allmydata # for __full_version__
5490 
5491-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5492-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5493+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5494+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5495 from allmydata.storage.lease import LeaseInfo
5496 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5497      create_mutable_sharefile
5498hunk ./src/allmydata/storage/backends/das/core.py 41
5499 # $SHARENUM matches this regex:
5500 NUM_RE=re.compile("^[0-9]+$")
5501 
5502+def is_num(fp):
5503+    return NUM_RE.match(fp.basename)
5504+
5505 class DASCore(Backend):
5506     implements(IStorageBackend)
5507     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5508hunk ./src/allmydata/storage/backends/das/core.py 58
5509         self.storedir = storedir
5510         self.readonly = readonly
5511         self.reserved_space = int(reserved_space)
5512-        if self.reserved_space:
5513-            if self.get_available_space() is None:
5514-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5515-                        umid="0wZ27w", level=log.UNUSUAL)
5516-
5517         self.sharedir = self.storedir.child("shares")
5518         fileutil.fp_make_dirs(self.sharedir)
5519         self.incomingdir = self.sharedir.child('incoming')
5520hunk ./src/allmydata/storage/backends/das/core.py 62
5521         self._clean_incomplete()
5522+        if self.reserved_space and (self.get_available_space() is None):
5523+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5524+                    umid="0wZ27w", level=log.UNUSUAL)
5525+
5526 
5527     def _clean_incomplete(self):
5528         fileutil.fp_remove(self.incomingdir)
5529hunk ./src/allmydata/storage/backends/das/core.py 87
5530         self.lease_checker.setServiceParent(self)
5531 
5532     def get_incoming_shnums(self, storageindex):
5533-        """Return the set of incoming shnums."""
5534+        """ Return a frozenset of the shnum (as ints) of incoming shares. """
5535+        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
5536         try:
5537hunk ./src/allmydata/storage/backends/das/core.py 90
5538-           
5539-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5540-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5541-            return frozenset(incomingshnums)
5542+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
5543+            shnums = [ int(fp.basename) for fp in childfps ]
5544+            return frozenset(shnums)
5545         except UnlistableError:
5546             # There is no shares directory at all.
5547             return frozenset()
5548hunk ./src/allmydata/storage/backends/das/core.py 98
5549             
5550     def get_shares(self, storageindex):
5551-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5552+        """ Generate ImmutableShare objects for shares we have for this
5553+        storageindex. ("Shares we have" means completed ones, excluding
5554+        incoming ones.)"""
5555         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5556         try:
5557hunk ./src/allmydata/storage/backends/das/core.py 103
5558-            for f in finalstoragedir.listdir():
5559-                if NUM_RE.match(f.basename):
5560-                    yield ImmutableShare(f, storageindex, int(f))
5561+            for fp in finalstoragedir.children():
5562+                if is_num(fp):
5563+                    yield ImmutableShare(fp, storageindex)
5564         except UnlistableError:
5565             # There is no shares directory at all.
5566             pass
5567hunk ./src/allmydata/storage/backends/das/core.py 116
5568         return fileutil.get_available_space(self.storedir, self.reserved_space)
5569 
5570     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
5571-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
5572-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
5573+        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
5574+        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
5575         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
5576         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
5577         return bw
5578hunk ./src/allmydata/storage/backends/das/expirer.py 50
5579     slow_start = 360 # wait 6 minutes after startup
5580     minimum_cycle_time = 12*60*60 # not more than twice per day
5581 
5582-    def __init__(self, statefile, historyfile, expiration_policy):
5583-        self.historyfile = historyfile
5584+    def __init__(self, statefile, historyfp, expiration_policy):
5585+        self.historyfp = historyfp
5586         self.expiration_enabled = expiration_policy['enabled']
5587         self.mode = expiration_policy['mode']
5588         self.override_lease_duration = None
5589hunk ./src/allmydata/storage/backends/das/expirer.py 80
5590             self.state["cycle-to-date"].setdefault(k, so_far[k])
5591 
5592         # initialize history
5593-        if not os.path.exists(self.historyfile):
5594+        if not self.historyfp.exists():
5595             history = {} # cyclenum -> dict
5596hunk ./src/allmydata/storage/backends/das/expirer.py 82
5597-            f = open(self.historyfile, "wb")
5598-            pickle.dump(history, f)
5599-            f.close()
5600+            self.historyfp.setContent(pickle.dumps(history))
5601 
5602     def create_empty_cycle_dict(self):
5603         recovered = self.create_empty_recovered_dict()
5604hunk ./src/allmydata/storage/backends/das/expirer.py 305
5605         # copy() needs to become a deepcopy
5606         h["space-recovered"] = s["space-recovered"].copy()
5607 
5608-        history = pickle.load(open(self.historyfile, "rb"))
5609+        history = pickle.load(self.historyfp.getContent())
5610         history[cycle] = h
5611         while len(history) > 10:
5612             oldcycles = sorted(history.keys())
5613hunk ./src/allmydata/storage/backends/das/expirer.py 310
5614             del history[oldcycles[0]]
5615-        f = open(self.historyfile, "wb")
5616-        pickle.dump(history, f)
5617-        f.close()
5618+        self.historyfp.setContent(pickle.dumps(history))
5619 
5620     def get_state(self):
5621         """In addition to the crawler state described in
5622hunk ./src/allmydata/storage/backends/das/expirer.py 379
5623         progress = self.get_progress()
5624 
5625         state = ShareCrawler.get_state(self) # does a shallow copy
5626-        history = pickle.load(open(self.historyfile, "rb"))
5627+        history = pickle.load(self.historyfp.getContent())
5628         state["history"] = history
5629 
5630         if not progress["cycle-in-progress"]:
5631hunk ./src/allmydata/storage/common.py 19
5632 def si_a2b(ascii_storageindex):
5633     return base32.a2b(ascii_storageindex)
5634 
5635-def storage_index_to_dir(startfp, storageindex):
5636+def si_dir(startfp, storageindex):
5637     sia = si_b2a(storageindex)
5638hunk ./src/allmydata/storage/common.py 21
5639-    return os.path.join(sia[:2], sia)
5640+    return startfp.child(sia[:2]).child(sia)
5641hunk ./src/allmydata/storage/crawler.py 68
5642     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
5643     minimum_cycle_time = 300 # don't run a cycle faster than this
5644 
5645-    def __init__(self, statefname, allowed_cpu_percentage=None):
5646+    def __init__(self, statefp, allowed_cpu_percentage=None):
5647         service.MultiService.__init__(self)
5648         if allowed_cpu_percentage is not None:
5649             self.allowed_cpu_percentage = allowed_cpu_percentage
5650hunk ./src/allmydata/storage/crawler.py 72
5651-        self.statefname = statefname
5652+        self.statefp = statefp
5653         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
5654                          for i in range(2**10)]
5655         self.prefixes.sort()
5656hunk ./src/allmydata/storage/crawler.py 192
5657         #                            of the last bucket to be processed, or
5658         #                            None if we are sleeping between cycles
5659         try:
5660-            f = open(self.statefname, "rb")
5661-            state = pickle.load(f)
5662-            f.close()
5663+            state = pickle.loads(self.statefp.getContent())
5664         except EnvironmentError:
5665             state = {"version": 1,
5666                      "last-cycle-finished": None,
5667hunk ./src/allmydata/storage/crawler.py 228
5668         else:
5669             last_complete_prefix = self.prefixes[lcpi]
5670         self.state["last-complete-prefix"] = last_complete_prefix
5671-        tmpfile = self.statefname + ".tmp"
5672-        f = open(tmpfile, "wb")
5673-        pickle.dump(self.state, f)
5674-        f.close()
5675-        fileutil.move_into_place(tmpfile, self.statefname)
5676+        self.statefp.setContent(pickle.dumps(self.state))
5677 
5678     def startService(self):
5679         # arrange things to look like we were just sleeping, so
5680hunk ./src/allmydata/storage/crawler.py 440
5681 
5682     minimum_cycle_time = 60*60 # we don't need this more than once an hour
5683 
5684-    def __init__(self, statefname, num_sample_prefixes=1):
5685-        FSShareCrawler.__init__(self, statefname)
5686+    def __init__(self, statefp, num_sample_prefixes=1):
5687+        FSShareCrawler.__init__(self, statefp)
5688         self.num_sample_prefixes = num_sample_prefixes
5689 
5690     def add_initial_state(self):
5691hunk ./src/allmydata/storage/server.py 11
5692 from allmydata.util import fileutil, idlib, log, time_format
5693 import allmydata # for __full_version__
5694 
5695-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5696-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5697+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5698+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5699 from allmydata.storage.lease import LeaseInfo
5700 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5701      create_mutable_sharefile
5702hunk ./src/allmydata/storage/server.py 173
5703         # to a particular owner.
5704         start = time.time()
5705         self.count("allocate")
5706-        alreadygot = set()
5707         incoming = set()
5708         bucketwriters = {} # k: shnum, v: BucketWriter
5709 
5710hunk ./src/allmydata/storage/server.py 199
5711             remaining_space -= self.allocated_size()
5712         # self.readonly_storage causes remaining_space <= 0
5713 
5714-        # fill alreadygot with all shares that we have, not just the ones
5715+        # Fill alreadygot with all shares that we have, not just the ones
5716         # they asked about: this will save them a lot of work. Add or update
5717         # leases for all of them: if they want us to hold shares for this
5718hunk ./src/allmydata/storage/server.py 202
5719-        # file, they'll want us to hold leases for this file.
5720+        # file, they'll want us to hold leases for all the shares of it.
5721+        alreadygot = set()
5722         for share in self.backend.get_shares(storageindex):
5723hunk ./src/allmydata/storage/server.py 205
5724-            alreadygot.add(share.shnum)
5725             share.add_or_renew_lease(lease_info)
5726hunk ./src/allmydata/storage/server.py 206
5727+            alreadygot.add(share.shnum)
5728 
5729hunk ./src/allmydata/storage/server.py 208
5730-        # fill incoming with all shares that are incoming use a set operation
5731-        # since there's no need to operate on individual pieces
5732+        # all share numbers that are incoming
5733         incoming = self.backend.get_incoming_shnums(storageindex)
5734 
5735         for shnum in ((sharenums - alreadygot) - incoming):
5736hunk ./src/allmydata/storage/server.py 282
5737             total_space_freed += sf.cancel_lease(cancel_secret)
5738 
5739         if found_buckets:
5740-            storagedir = os.path.join(self.sharedir,
5741-                                      storage_index_to_dir(storageindex))
5742-            if not os.listdir(storagedir):
5743-                os.rmdir(storagedir)
5744+            storagedir = si_dir(self.sharedir, storageindex)
5745+            fp_rmdir_if_empty(storagedir)
5746 
5747         if self.stats_provider:
5748             self.stats_provider.count('storage_server.bytes_freed',
5749hunk ./src/allmydata/test/test_backends.py 52
5750     subtree. I simulate just the parts of the filesystem that the current
5751     implementation of DAS backend needs. """
5752     def call_open(self, fname, mode):
5753+        assert isinstance(fname, basestring), fname
5754         fnamefp = FilePath(fname)
5755         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5756                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5757hunk ./src/allmydata/test/test_backends.py 104
5758                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5759 
5760     def call_stat(self, fname):
5761+        assert isinstance(fname, basestring), fname
5762         fnamefp = FilePath(fname)
5763         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5764                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5765hunk ./src/allmydata/test/test_backends.py 217
5766 
5767         mocktime.return_value = 0
5768         # Inspect incoming and fail unless it's empty.
5769-        incomingset = self.ss.backend.get_incoming('teststorage_index')
5770-        self.failUnlessReallyEqual(incomingset, set())
5771+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
5772+        self.failUnlessReallyEqual(incomingset, frozenset())
5773         
5774         # Populate incoming with the sharenum: 0.
5775hunk ./src/allmydata/test/test_backends.py 221
5776-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5777+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5778 
5779         # Inspect incoming and fail unless the sharenum: 0 is listed there.
5780hunk ./src/allmydata/test/test_backends.py 224
5781-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
5782+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
5783         
5784         # Attempt to create a second share writer with the same sharenum.
5785hunk ./src/allmydata/test/test_backends.py 227
5786-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5787+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5788 
5789         # Show that no sharewriter results from a remote_allocate_buckets
5790         # with the same si and sharenum, until BucketWriter.remote_close()
5791hunk ./src/allmydata/test/test_backends.py 280
5792         StorageServer object. """
5793 
5794         def call_listdir(dirname):
5795+            precondition(isinstance(dirname, basestring), dirname)
5796             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
5797             return ['0']
5798 
5799hunk ./src/allmydata/test/test_backends.py 287
5800         mocklistdir.side_effect = call_listdir
5801 
5802         def call_open(fname, mode):
5803+            precondition(isinstance(fname, basestring), fname)
5804             self.failUnlessReallyEqual(fname, sharefname)
5805             self.failUnlessEqual(mode[0], 'r', mode)
5806             self.failUnless('b' in mode, mode)
5807hunk ./src/allmydata/test/test_backends.py 297
5808 
5809         datalen = len(share_data)
5810         def call_getsize(fname):
5811+            precondition(isinstance(fname, basestring), fname)
5812             self.failUnlessReallyEqual(fname, sharefname)
5813             return datalen
5814         mockgetsize.side_effect = call_getsize
5815hunk ./src/allmydata/test/test_backends.py 303
5816 
5817         def call_exists(fname):
5818+            precondition(isinstance(fname, basestring), fname)
5819             self.failUnlessReallyEqual(fname, sharefname)
5820             return True
5821         mockexists.side_effect = call_exists
5822hunk ./src/allmydata/test/test_backends.py 321
5823         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
5824 
5825 
5826-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
5827-    @mock.patch('time.time')
5828-    @mock.patch('os.mkdir')
5829-    @mock.patch('__builtin__.open')
5830-    @mock.patch('os.listdir')
5831-    @mock.patch('os.path.isdir')
5832-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5833+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
5834+    def test_create_fs_backend(self):
5835         """ This tests whether a file system backend instance can be
5836         constructed. To pass the test, it has to use the
5837         filesystem in only the prescribed ways. """
5838hunk ./src/allmydata/test/test_backends.py 327
5839 
5840-        def call_open(fname, mode):
5841-            if fname == os.path.join(storedir,'bucket_counter.state'):
5842-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5843-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5844-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5845-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5846-                return StringIO()
5847-            else:
5848-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5849-        mockopen.side_effect = call_open
5850-
5851-        def call_isdir(fname):
5852-            if fname == os.path.join(storedir,'shares'):
5853-                return True
5854-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5855-                return True
5856-            else:
5857-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5858-        mockisdir.side_effect = call_isdir
5859-
5860-        def call_mkdir(fname, mode):
5861-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
5862-            self.failUnlessEqual(0777, mode)
5863-            if fname == storedir:
5864-                return None
5865-            elif fname == os.path.join(storedir,'shares'):
5866-                return None
5867-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5868-                return None
5869-            else:
5870-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
5871-        mockmkdir.side_effect = call_mkdir
5872-
5873         # Now begin the test.
5874hunk ./src/allmydata/test/test_backends.py 328
5875-        DASCore('teststoredir', expiration_policy)
5876-
5877-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5878-
5879+        DASCore(self.storedir, expiration_policy)
5880hunk ./src/allmydata/util/fileutil.py 7
5881 
5882 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5883 
5884+from allmydata.util.assertutil import precondition
5885+
5886 from twisted.python import log
5887hunk ./src/allmydata/util/fileutil.py 10
5888-from twisted.python.filepath import UnlistableError
5889+from twisted.python.filepath import FilePath, UnlistableError
5890 
5891 from pycryptopp.cipher.aes import AES
5892 
5893hunk ./src/allmydata/util/fileutil.py 210
5894             raise tx
5895         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5896 
5897+def fp_rmdir_if_empty(dirfp):
5898+    """ Remove the directory if it is empty. """
5899+    try:
5900+        os.rmdir(dirfp.path)
5901+    except OSError, e:
5902+        if e.errno != errno.ENOTEMPTY:
5903+            raise
5904+    else:
5905+        dirfp.changed()
5906+
5907 def rmtree(dirname):
5908     """
5909     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5910hunk ./src/allmydata/util/fileutil.py 257
5911         raise OSError, excs
5912 
5913 def fp_remove(dirfp):
5914+    """
5915+    An idempotent version of shutil.rmtree().  If the dir is already gone,
5916+    do nothing and return without raising an exception.  If this call
5917+    removes the dir, return without raising an exception.  If there is an
5918+    error that prevents removal or if the directory gets created again by
5919+    someone else after this deletes it and before this checks that it is
5920+    gone, raise an exception.
5921+    """
5922     try:
5923         dirfp.remove()
5924     except UnlistableError, e:
5925hunk ./src/allmydata/util/fileutil.py 270
5926         if e.originalException.errno != errno.ENOENT:
5927             raise
5928+    except OSError, e:
5929+        if e.errno != errno.ENOENT:
5930+            raise
5931 
5932 def rm_dir(dirname):
5933     # Renamed to be like shutil.rmtree and unlike rmdir.
5934hunk ./src/allmydata/util/fileutil.py 387
5935         import traceback
5936         traceback.print_exc()
5937 
5938-def get_disk_stats(whichdir, reserved_space=0):
5939+def get_disk_stats(whichdirfp, reserved_space=0):
5940     """Return disk statistics for the storage disk, in the form of a dict
5941     with the following fields.
5942       total:            total bytes on disk
5943hunk ./src/allmydata/util/fileutil.py 408
5944     you can pass how many bytes you would like to leave unused on this
5945     filesystem as reserved_space.
5946     """
5947+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5948 
5949     if have_GetDiskFreeSpaceExW:
5950         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5951hunk ./src/allmydata/util/fileutil.py 419
5952         n_free_for_nonroot = c_ulonglong(0)
5953         n_total            = c_ulonglong(0)
5954         n_free_for_root    = c_ulonglong(0)
5955-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5956+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5957                                                byref(n_total),
5958                                                byref(n_free_for_root))
5959         if retval == 0:
5960hunk ./src/allmydata/util/fileutil.py 424
5961             raise OSError("Windows error %d attempting to get disk statistics for %r"
5962-                          % (GetLastError(), whichdir))
5963+                          % (GetLastError(), whichdirfp.path))
5964         free_for_nonroot = n_free_for_nonroot.value
5965         total            = n_total.value
5966         free_for_root    = n_free_for_root.value
5967hunk ./src/allmydata/util/fileutil.py 433
5968         # <http://docs.python.org/library/os.html#os.statvfs>
5969         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5970         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5971-        s = os.statvfs(whichdir)
5972+        s = os.statvfs(whichdirfp.path)
5973 
5974         # on my mac laptop:
5975         #  statvfs(2) is a wrapper around statfs(2).
5976hunk ./src/allmydata/util/fileutil.py 460
5977              'avail': avail,
5978            }
5979 
5980-def get_available_space(whichdir, reserved_space):
5981+def get_available_space(whichdirfp, reserved_space):
5982     """Returns available space for share storage in bytes, or None if no
5983     API to get this information is available.
5984 
5985hunk ./src/allmydata/util/fileutil.py 472
5986     you can pass how many bytes you would like to leave unused on this
5987     filesystem as reserved_space.
5988     """
5989+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5990     try:
5991hunk ./src/allmydata/util/fileutil.py 474
5992-        return get_disk_stats(whichdir, reserved_space)['avail']
5993+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5994     except AttributeError:
5995         return None
5996hunk ./src/allmydata/util/fileutil.py 477
5997-    except EnvironmentError:
5998-        log.msg("OS call to get disk statistics failed")
5999-        return 0
6000}
6001[jacp16 or so
6002wilcoxjg@gmail.com**20110722070036
6003 Ignore-this: 7548785cad146056eede9a16b93b569f
6004] {
6005merger 0.0 (
6006hunk ./src/allmydata/_auto_deps.py 21
6007-    "Twisted >= 2.4.0",
6008+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6009+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6010+    # support asynchronous close.
6011+    "Twisted >= 10.1.0",
6012hunk ./src/allmydata/_auto_deps.py 21
6013-    "Twisted >= 2.4.0",
6014+    "Twisted >= 11.0",
6015)
6016hunk ./src/allmydata/storage/backends/das/core.py 2
6017 import os, re, weakref, struct, time, stat
6018+from twisted.application import service
6019+from twisted.python.filepath import UnlistableError
6020+from twisted.python.filepath import FilePath
6021+from zope.interface import implements
6022 
6023hunk ./src/allmydata/storage/backends/das/core.py 7
6024+import allmydata # for __full_version__
6025 from allmydata.interfaces import IStorageBackend
6026 from allmydata.storage.backends.base import Backend
6027hunk ./src/allmydata/storage/backends/das/core.py 10
6028-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6029+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6030 from allmydata.util.assertutil import precondition
6031hunk ./src/allmydata/storage/backends/das/core.py 12
6032-
6033-#from foolscap.api import Referenceable
6034-from twisted.application import service
6035-from twisted.python.filepath import UnlistableError
6036-
6037-from zope.interface import implements
6038 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6039 from allmydata.util import fileutil, idlib, log, time_format
6040hunk ./src/allmydata/storage/backends/das/core.py 14
6041-import allmydata # for __full_version__
6042-
6043-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6044-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6045 from allmydata.storage.lease import LeaseInfo
6046 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6047      create_mutable_sharefile
6048hunk ./src/allmydata/storage/backends/das/core.py 21
6049 from allmydata.storage.crawler import FSBucketCountingCrawler
6050 from allmydata.util.hashutil import constant_time_compare
6051 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6052-
6053-from zope.interface import implements
6054+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6055 
6056 # storage/
6057 # storage/shares/incoming
6058hunk ./src/allmydata/storage/backends/das/core.py 49
6059         self._setup_lease_checkerf(expiration_policy)
6060 
6061     def _setup_storage(self, storedir, readonly, reserved_space):
6062+        precondition(isinstance(storedir, FilePath)) 
6063         self.storedir = storedir
6064         self.readonly = readonly
6065         self.reserved_space = int(reserved_space)
6066hunk ./src/allmydata/storage/backends/das/core.py 83
6067 
6068     def get_incoming_shnums(self, storageindex):
6069         """ Return a frozenset of the shnum (as ints) of incoming shares. """
6070-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
6071+        incomingdir = si_si2dir(self.incomingdir, storageindex)
6072         try:
6073             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
6074             shnums = [ int(fp.basename) for fp in childfps ]
6075hunk ./src/allmydata/storage/backends/das/core.py 96
6076         """ Generate ImmutableShare objects for shares we have for this
6077         storageindex. ("Shares we have" means completed ones, excluding
6078         incoming ones.)"""
6079-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
6080+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
6081         try:
6082             for fp in finalstoragedir.children():
6083                 if is_num(fp):
6084hunk ./src/allmydata/storage/backends/das/core.py 111
6085         return fileutil.get_available_space(self.storedir, self.reserved_space)
6086 
6087     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6088-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
6089-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6090+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
6091+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6092         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
6093         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
6094         return bw
6095hunk ./src/allmydata/storage/backends/null/core.py 18
6096         return None
6097 
6098     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6099-       
6100-        immutableshare = ImmutableShare()
6101+        immutableshare = ImmutableShare()
6102         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
6103 
6104     def set_storage_server(self, ss):
6105hunk ./src/allmydata/storage/backends/null/core.py 24
6106         self.ss = ss
6107 
6108-    def get_incoming(self, storageindex):
6109-        return set()
6110+    def get_incoming_shnums(self, storageindex):
6111+        return frozenset()
6112 
6113 class ImmutableShare:
6114     sharetype = "immutable"
6115hunk ./src/allmydata/storage/common.py 19
6116 def si_a2b(ascii_storageindex):
6117     return base32.a2b(ascii_storageindex)
6118 
6119-def si_dir(startfp, storageindex):
6120+def si_si2dir(startfp, storageindex):
6121     sia = si_b2a(storageindex)
6122     return startfp.child(sia[:2]).child(sia)
6123hunk ./src/allmydata/storage/immutable.py 20
6124     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
6125         self.ss = ss
6126         self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
6127-
6128         self._canary = canary
6129         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
6130         self.closed = False
6131hunk ./src/allmydata/storage/lease.py 17
6132 
6133     def get_expiration_time(self):
6134         return self.expiration_time
6135+
6136     def get_grant_renew_time_time(self):
6137         # hack, based upon fixed 31day expiration period
6138         return self.expiration_time - 31*24*60*60
6139hunk ./src/allmydata/storage/lease.py 21
6140+
6141     def get_age(self):
6142         return time.time() - self.get_grant_renew_time_time()
6143 
6144hunk ./src/allmydata/storage/lease.py 32
6145          self.expiration_time) = struct.unpack(">L32s32sL", data)
6146         self.nodeid = None
6147         return self
6148+
6149     def to_immutable_data(self):
6150         return struct.pack(">L32s32sL",
6151                            self.owner_num,
6152hunk ./src/allmydata/storage/lease.py 45
6153                            int(self.expiration_time),
6154                            self.renew_secret, self.cancel_secret,
6155                            self.nodeid)
6156+
6157     def from_mutable_data(self, data):
6158         (self.owner_num,
6159          self.expiration_time,
6160hunk ./src/allmydata/storage/server.py 11
6161 from allmydata.util import fileutil, idlib, log, time_format
6162 import allmydata # for __full_version__
6163 
6164-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6165-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6166+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6167+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6168 from allmydata.storage.lease import LeaseInfo
6169 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6170      create_mutable_sharefile
6171hunk ./src/allmydata/storage/server.py 88
6172             else:
6173                 stats["mean"] = None
6174 
6175-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
6176-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
6177-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
6178+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
6179+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
6180+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
6181                              (0.999, "99_9_percentile", 1000)]
6182 
6183             for percentile, percentilestring, minnumtoobserve in orderstatlist:
6184hunk ./src/allmydata/storage/server.py 231
6185             header = f.read(32)
6186             f.close()
6187             if header[:32] == MutableShareFile.MAGIC:
6188+                # XXX  Can I exploit this code?
6189                 sf = MutableShareFile(filename, self)
6190                 # note: if the share has been migrated, the renew_lease()
6191                 # call will throw an exception, with information to help the
6192hunk ./src/allmydata/storage/server.py 237
6193                 # client update the lease.
6194             elif header[:4] == struct.pack(">L", 1):
6195+                # Check if version number is "1".
6196+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
6197                 sf = ShareFile(filename)
6198             else:
6199                 continue # non-sharefile
6200hunk ./src/allmydata/storage/server.py 285
6201             total_space_freed += sf.cancel_lease(cancel_secret)
6202 
6203         if found_buckets:
6204-            storagedir = si_dir(self.sharedir, storageindex)
6205+            # XXX  Yikes looks like code that shouldn't be in the server!
6206+            storagedir = si_si2dir(self.sharedir, storageindex)
6207             fp_rmdir_if_empty(storagedir)
6208 
6209         if self.stats_provider:
6210hunk ./src/allmydata/storage/server.py 301
6211             self.stats_provider.count('storage_server.bytes_added', consumed_size)
6212         del self._active_writers[bw]
6213 
6214-
6215     def remote_get_buckets(self, storageindex):
6216         start = time.time()
6217         self.count("get")
6218hunk ./src/allmydata/storage/server.py 329
6219         except StopIteration:
6220             return iter([])
6221 
6222+    #  XXX  As far as Zancas' grockery has gotten.
6223     def remote_slot_testv_and_readv_and_writev(self, storageindex,
6224                                                secrets,
6225                                                test_and_write_vectors,
6226hunk ./src/allmydata/storage/server.py 338
6227         self.count("writev")
6228         si_s = si_b2a(storageindex)
6229         log.msg("storage: slot_writev %s" % si_s)
6230-        si_dir = storage_index_to_dir(storageindex)
6231+       
6232         (write_enabler, renew_secret, cancel_secret) = secrets
6233         # shares exist if there is a file for them
6234hunk ./src/allmydata/storage/server.py 341
6235-        bucketdir = os.path.join(self.sharedir, si_dir)
6236+        bucketdir = si_si2dir(self.sharedir, storageindex)
6237         shares = {}
6238         if os.path.isdir(bucketdir):
6239             for sharenum_s in os.listdir(bucketdir):
6240hunk ./src/allmydata/storage/server.py 430
6241         si_s = si_b2a(storageindex)
6242         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
6243                      facility="tahoe.storage", level=log.OPERATIONAL)
6244-        si_dir = storage_index_to_dir(storageindex)
6245         # shares exist if there is a file for them
6246hunk ./src/allmydata/storage/server.py 431
6247-        bucketdir = os.path.join(self.sharedir, si_dir)
6248+        bucketdir = si_si2dir(self.sharedir, storageindex)
6249         if not os.path.isdir(bucketdir):
6250             self.add_latency("readv", time.time() - start)
6251             return {}
6252hunk ./src/allmydata/test/test_backends.py 2
6253 from twisted.trial import unittest
6254-
6255 from twisted.python.filepath import FilePath
6256hunk ./src/allmydata/test/test_backends.py 3
6257-
6258 from allmydata.util.log import msg
6259hunk ./src/allmydata/test/test_backends.py 4
6260-
6261 from StringIO import StringIO
6262hunk ./src/allmydata/test/test_backends.py 5
6263-
6264 from allmydata.test.common_util import ReallyEqualMixin
6265 from allmydata.util.assertutil import _assert
6266hunk ./src/allmydata/test/test_backends.py 7
6267-
6268 import mock
6269 
6270 # This is the code that we're going to be testing.
6271hunk ./src/allmydata/test/test_backends.py 11
6272 from allmydata.storage.server import StorageServer
6273-
6274 from allmydata.storage.backends.das.core import DASCore
6275 from allmydata.storage.backends.null.core import NullCore
6276 
6277hunk ./src/allmydata/test/test_backends.py 14
6278-
6279-# The following share file contents was generated with
6280+# The following share file content was generated with
6281 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
6282hunk ./src/allmydata/test/test_backends.py 16
6283-# with share data == 'a'.
6284+# with share data == 'a'. The total size of this input
6285+# is 85 bytes.
6286 shareversionnumber = '\x00\x00\x00\x01'
6287 sharedatalength = '\x00\x00\x00\x01'
6288 numberofleases = '\x00\x00\x00\x01'
6289hunk ./src/allmydata/test/test_backends.py 21
6290-
6291 shareinputdata = 'a'
6292 ownernumber = '\x00\x00\x00\x00'
6293 renewsecret  = 'x'*32
6294hunk ./src/allmydata/test/test_backends.py 31
6295 client_data = shareinputdata + ownernumber + renewsecret + \
6296     cancelsecret + expirationtime + nextlease
6297 share_data = containerdata + client_data
6298-
6299-
6300 testnodeid = 'testnodeidxxxxxxxxxx'
6301 
6302 class MockStat:
6303hunk ./src/allmydata/test/test_backends.py 105
6304         mstat.st_mode = 16893 # a directory
6305         return mstat
6306 
6307+    def call_get_available_space(self, storedir, reservedspace):
6308+        # The input vector has an input size of 85.
6309+        return 85 - reservedspace
6310+
6311+    def call_exists(self):
6312+        # I'm only called in the ImmutableShareFile constructor.
6313+        return False
6314+
6315     def setUp(self):
6316         msg( "%s.setUp()" % (self,))
6317         self.storedir = FilePath('teststoredir')
6318hunk ./src/allmydata/test/test_backends.py 147
6319         mockfpstat = self.mockfpstatp.__enter__()
6320         mockfpstat.side_effect = self.call_stat
6321 
6322+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6323+        mockget_available_space = self.mockget_available_space.__enter__()
6324+        mockget_available_space.side_effect = self.call_get_available_space
6325+
6326+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6327+        mockfpexists = self.mockfpexists.__enter__()
6328+        mockfpexists.side_effect = self.call_exists
6329+
6330     def tearDown(self):
6331         msg( "%s.tearDown()" % (self,))
6332hunk ./src/allmydata/test/test_backends.py 157
6333+        self.mockfpexists.__exit__()
6334+        self.mockget_available_space.__exit__()
6335         self.mockfpstatp.__exit__()
6336         self.mockstatp.__exit__()
6337         self.mockopenp.__exit__()
6338hunk ./src/allmydata/test/test_backends.py 166
6339         self.mockmkdirp.__exit__()
6340         self.mocklistdirp.__exit__()
6341 
6342+
6343 expiration_policy = {'enabled' : False,
6344                      'mode' : 'age',
6345                      'override_lease_duration' : None,
6346hunk ./src/allmydata/test/test_backends.py 182
6347         self.ss = StorageServer(testnodeid, backend=NullCore())
6348 
6349     @mock.patch('os.mkdir')
6350-
6351     @mock.patch('__builtin__.open')
6352     @mock.patch('os.listdir')
6353     @mock.patch('os.path.isdir')
6354hunk ./src/allmydata/test/test_backends.py 201
6355         filesystem backend. To pass the test, it mustn't use the filesystem
6356         outside of its configured storedir. """
6357 
6358-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
6359+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
6360 
6361 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
6362     """ This tests both the StorageServer and the DAS backend together. """
6363hunk ./src/allmydata/test/test_backends.py 205
6364+   
6365     def setUp(self):
6366         MockFiles.setUp(self)
6367         try:
6368hunk ./src/allmydata/test/test_backends.py 211
6369             self.backend = DASCore(self.storedir, expiration_policy)
6370             self.ss = StorageServer(testnodeid, self.backend)
6371-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6372-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
6373+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6374+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
6375         except:
6376             MockFiles.tearDown(self)
6377             raise
6378hunk ./src/allmydata/test/test_backends.py 233
6379         # Populate incoming with the sharenum: 0.
6380         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6381 
6382-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
6383-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6384+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
6385+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6386         
6387         # Attempt to create a second share writer with the same sharenum.
6388         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6389hunk ./src/allmydata/test/test_backends.py 257
6390 
6391         # Postclose: (Omnibus) failUnless written data is in final.
6392         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
6393-        contents = sharesinfinal[0].read_share_data(0,73)
6394+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
6395+        contents = sharesinfinal[0].read_share_data(0, 73)
6396         self.failUnlessReallyEqual(contents, client_data)
6397 
6398         # Exercise the case that the share we're asking to allocate is
6399hunk ./src/allmydata/test/test_backends.py 276
6400         mockget_available_space.side_effect = call_get_available_space
6401         
6402         
6403-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6404+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6405 
6406     @mock.patch('os.path.exists')
6407     @mock.patch('os.path.getsize')
6408}
6409[jacp17
6410wilcoxjg@gmail.com**20110722203244
6411 Ignore-this: e79a5924fb2eb786ee4e9737a8228f87
6412] {
6413hunk ./src/allmydata/storage/backends/das/core.py 14
6414 from allmydata.util.assertutil import precondition
6415 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6416 from allmydata.util import fileutil, idlib, log, time_format
6417+from allmydata.util.fileutil import fp_make_dirs
6418 from allmydata.storage.lease import LeaseInfo
6419 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6420      create_mutable_sharefile
6421hunk ./src/allmydata/storage/backends/das/core.py 19
6422 from allmydata.storage.immutable import BucketWriter, BucketReader
6423-from allmydata.storage.crawler import FSBucketCountingCrawler
6424+from allmydata.storage.crawler import BucketCountingCrawler
6425 from allmydata.util.hashutil import constant_time_compare
6426hunk ./src/allmydata/storage/backends/das/core.py 21
6427-from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6428+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
6429 _pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6430 
6431 # storage/
6432hunk ./src/allmydata/storage/backends/das/core.py 43
6433     implements(IStorageBackend)
6434     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
6435         Backend.__init__(self)
6436-
6437         self._setup_storage(storedir, readonly, reserved_space)
6438         self._setup_corruption_advisory()
6439         self._setup_bucket_counter()
6440hunk ./src/allmydata/storage/backends/das/core.py 72
6441 
6442     def _setup_bucket_counter(self):
6443         statefname = self.storedir.child("bucket_counter.state")
6444-        self.bucket_counter = FSBucketCountingCrawler(statefname)
6445+        self.bucket_counter = BucketCountingCrawler(statefname)
6446         self.bucket_counter.setServiceParent(self)
6447 
6448     def _setup_lease_checkerf(self, expiration_policy):
6449hunk ./src/allmydata/storage/backends/das/core.py 78
6450         statefile = self.storedir.child("lease_checker.state")
6451         historyfile = self.storedir.child("lease_checker.history")
6452-        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6453+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6454         self.lease_checker.setServiceParent(self)
6455 
6456     def get_incoming_shnums(self, storageindex):
6457hunk ./src/allmydata/storage/backends/das/core.py 168
6458             # it. Also construct the metadata.
6459             assert not finalhome.exists()
6460             fp_make_dirs(self.incominghome)
6461-            f = open(self.incominghome, 'wb')
6462+            f = self.incominghome.child(str(self.shnum))
6463             # The second field -- the four-byte share data length -- is no
6464             # longer used as of Tahoe v1.3.0, but we continue to write it in
6465             # there in case someone downgrades a storage server from >=
6466hunk ./src/allmydata/storage/backends/das/core.py 178
6467             # the largest length that can fit into the field. That way, even
6468             # if this does happen, the old < v1.3.0 server will still allow
6469             # clients to read the first part of the share.
6470-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6471-            f.close()
6472+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6473+            #f.close()
6474             self._lease_offset = max_size + 0x0c
6475             self._num_leases = 0
6476         else:
6477hunk ./src/allmydata/storage/backends/das/core.py 261
6478         f.write(data)
6479         f.close()
6480 
6481-    def _write_lease_record(self, f, lease_number, lease_info):
6482+    def _write_lease_record(self, lease_number, lease_info):
6483         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6484         f.seek(offset)
6485         assert f.tell() == offset
6486hunk ./src/allmydata/storage/backends/das/core.py 290
6487                 yield LeaseInfo().from_immutable_data(data)
6488 
6489     def add_lease(self, lease_info):
6490-        f = open(self.incominghome, 'rb+')
6491+        self.incominghome, 'rb+')
6492         num_leases = self._read_num_leases(f)
6493         self._write_lease_record(f, num_leases, lease_info)
6494         self._write_num_leases(f, num_leases+1)
6495hunk ./src/allmydata/storage/backends/das/expirer.py 1
6496-import time, os, pickle, struct
6497-from allmydata.storage.crawler import FSShareCrawler
6498+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
6499+from allmydata.storage.crawler import ShareCrawler
6500 from allmydata.storage.common import UnknownMutableContainerVersionError, \
6501      UnknownImmutableContainerVersionError
6502 from twisted.python import log as twlog
6503hunk ./src/allmydata/storage/backends/das/expirer.py 7
6504 
6505-class FSLeaseCheckingCrawler(FSShareCrawler):
6506+class LeaseCheckingCrawler(ShareCrawler):
6507     """I examine the leases on all shares, determining which are still valid
6508     and which have expired. I can remove the expired leases (if so
6509     configured), and the share will be deleted when the last lease is
6510hunk ./src/allmydata/storage/backends/das/expirer.py 66
6511         else:
6512             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
6513         self.sharetypes_to_expire = expiration_policy['sharetypes']
6514-        FSShareCrawler.__init__(self, statefile)
6515+        ShareCrawler.__init__(self, statefile)
6516 
6517     def add_initial_state(self):
6518         # we fill ["cycle-to-date"] here (even though they will be reset in
6519hunk ./src/allmydata/storage/crawler.py 1
6520-
6521 import os, time, struct
6522 import cPickle as pickle
6523 from twisted.internet import reactor
6524hunk ./src/allmydata/storage/crawler.py 11
6525 class TimeSliceExceeded(Exception):
6526     pass
6527 
6528-class FSShareCrawler(service.MultiService):
6529-    """A subcless of ShareCrawler is attached to a StorageServer, and
6530+class ShareCrawler(service.MultiService):
6531+    """A subclass of ShareCrawler is attached to a StorageServer, and
6532     periodically walks all of its shares, processing each one in some
6533     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
6534     since large servers can easily have a terabyte of shares, in several
6535hunk ./src/allmydata/storage/crawler.py 426
6536         pass
6537 
6538 
6539-class FSBucketCountingCrawler(FSShareCrawler):
6540+class BucketCountingCrawler(ShareCrawler):
6541     """I keep track of how many buckets are being managed by this server.
6542     This is equivalent to the number of distributed files and directories for
6543     which I am providing storage. The actual number of files+directories in
6544hunk ./src/allmydata/storage/crawler.py 440
6545     minimum_cycle_time = 60*60 # we don't need this more than once an hour
6546 
6547     def __init__(self, statefp, num_sample_prefixes=1):
6548-        FSShareCrawler.__init__(self, statefp)
6549+        ShareCrawler.__init__(self, statefp)
6550         self.num_sample_prefixes = num_sample_prefixes
6551 
6552     def add_initial_state(self):
6553hunk ./src/allmydata/test/test_backends.py 113
6554         # I'm only called in the ImmutableShareFile constructor.
6555         return False
6556 
6557+    def call_setContent(self, inputstring):
6558+        # XXX Good enough for expirer, not sure about elsewhere...
6559+        return True
6560+
6561     def setUp(self):
6562         msg( "%s.setUp()" % (self,))
6563         self.storedir = FilePath('teststoredir')
6564hunk ./src/allmydata/test/test_backends.py 159
6565         mockfpexists = self.mockfpexists.__enter__()
6566         mockfpexists.side_effect = self.call_exists
6567 
6568+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6569+        mocksetContent = self.mocksetContent.__enter__()
6570+        mocksetContent.side_effect = self.call_setContent
6571+
6572     def tearDown(self):
6573         msg( "%s.tearDown()" % (self,))
6574hunk ./src/allmydata/test/test_backends.py 165
6575+        self.mocksetContent.__exit__()
6576         self.mockfpexists.__exit__()
6577         self.mockget_available_space.__exit__()
6578         self.mockfpstatp.__exit__()
6579}
6580[jacp18
6581wilcoxjg@gmail.com**20110723031915
6582 Ignore-this: 21e7f22ac20e3f8af22ea2e9b755d6a5
6583] {
6584hunk ./src/allmydata/_auto_deps.py 21
6585     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6586     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6587 
6588-    "Twisted >= 2.4.0",
6589+v v v v v v v
6590+    "Twisted >= 11.0",
6591+*************
6592+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6593+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6594+    # support asynchronous close.
6595+    "Twisted >= 10.1.0",
6596+^ ^ ^ ^ ^ ^ ^
6597 
6598     # foolscap < 0.5.1 had a performance bug which spent
6599     # O(N**2) CPU for transferring large mutable files
6600hunk ./src/allmydata/storage/backends/das/core.py 168
6601             # it. Also construct the metadata.
6602             assert not finalhome.exists()
6603             fp_make_dirs(self.incominghome)
6604-            f = self.incominghome.child(str(self.shnum))
6605+            f = self.incominghome
6606             # The second field -- the four-byte share data length -- is no
6607             # longer used as of Tahoe v1.3.0, but we continue to write it in
6608             # there in case someone downgrades a storage server from >=
6609hunk ./src/allmydata/storage/backends/das/core.py 178
6610             # the largest length that can fit into the field. That way, even
6611             # if this does happen, the old < v1.3.0 server will still allow
6612             # clients to read the first part of the share.
6613-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6614-            #f.close()
6615+            print 'f: ',f
6616+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6617             self._lease_offset = max_size + 0x0c
6618             self._num_leases = 0
6619         else:
6620hunk ./src/allmydata/storage/backends/das/core.py 263
6621 
6622     def _write_lease_record(self, lease_number, lease_info):
6623         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6624-        f.seek(offset)
6625-        assert f.tell() == offset
6626-        f.write(lease_info.to_immutable_data())
6627+        fh = f.open()
6628+        try:
6629+            fh.seek(offset)
6630+            assert fh.tell() == offset
6631+            fh.write(lease_info.to_immutable_data())
6632+        finally:
6633+            fh.close()
6634 
6635     def _read_num_leases(self, f):
6636hunk ./src/allmydata/storage/backends/das/core.py 272
6637-        f.seek(0x08)
6638-        (num_leases,) = struct.unpack(">L", f.read(4))
6639+        fh = f.open()
6640+        try:
6641+            fh.seek(0x08)
6642+            ro = fh.read(4)
6643+            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6644+            (num_leases,) = struct.unpack(">L", ro)
6645+        finally:
6646+            fh.close()
6647         return num_leases
6648 
6649     def _write_num_leases(self, f, num_leases):
6650hunk ./src/allmydata/storage/backends/das/core.py 283
6651-        f.seek(0x08)
6652-        f.write(struct.pack(">L", num_leases))
6653+        fh = f.open()
6654+        try:
6655+            fh.seek(0x08)
6656+            fh.write(struct.pack(">L", num_leases))
6657+        finally:
6658+            fh.close()
6659 
6660     def _truncate_leases(self, f, num_leases):
6661         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
6662hunk ./src/allmydata/storage/backends/das/core.py 304
6663                 yield LeaseInfo().from_immutable_data(data)
6664 
6665     def add_lease(self, lease_info):
6666-        self.incominghome, 'rb+')
6667-        num_leases = self._read_num_leases(f)
6668+        f = self.incominghome
6669+        num_leases = self._read_num_leases(self.incominghome)
6670         self._write_lease_record(f, num_leases, lease_info)
6671         self._write_num_leases(f, num_leases+1)
6672hunk ./src/allmydata/storage/backends/das/core.py 308
6673-        f.close()
6674-
6675+       
6676     def renew_lease(self, renew_secret, new_expire_time):
6677         for i,lease in enumerate(self.get_leases()):
6678             if constant_time_compare(lease.renew_secret, renew_secret):
6679hunk ./src/allmydata/test/test_backends.py 33
6680 share_data = containerdata + client_data
6681 testnodeid = 'testnodeidxxxxxxxxxx'
6682 
6683+
6684 class MockStat:
6685     def __init__(self):
6686         self.st_mode = None
6687hunk ./src/allmydata/test/test_backends.py 43
6688     code under test if it reads or writes outside of its prescribed
6689     subtree. I simulate just the parts of the filesystem that the current
6690     implementation of DAS backend needs. """
6691+
6692+    def setUp(self):
6693+        msg( "%s.setUp()" % (self,))
6694+        self.storedir = FilePath('teststoredir')
6695+        self.basedir = self.storedir.child('shares')
6696+        self.baseincdir = self.basedir.child('incoming')
6697+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6698+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6699+        self.shareincomingname = self.sharedirincomingname.child('0')
6700+        self.sharefilename = self.sharedirfinalname.child('0')
6701+        self.sharefilecontents = StringIO(share_data)
6702+
6703+        self.mocklistdirp = mock.patch('os.listdir')
6704+        mocklistdir = self.mocklistdirp.__enter__()
6705+        mocklistdir.side_effect = self.call_listdir
6706+
6707+        self.mockmkdirp = mock.patch('os.mkdir')
6708+        mockmkdir = self.mockmkdirp.__enter__()
6709+        mockmkdir.side_effect = self.call_mkdir
6710+
6711+        self.mockisdirp = mock.patch('os.path.isdir')
6712+        mockisdir = self.mockisdirp.__enter__()
6713+        mockisdir.side_effect = self.call_isdir
6714+
6715+        self.mockopenp = mock.patch('__builtin__.open')
6716+        mockopen = self.mockopenp.__enter__()
6717+        mockopen.side_effect = self.call_open
6718+
6719+        self.mockstatp = mock.patch('os.stat')
6720+        mockstat = self.mockstatp.__enter__()
6721+        mockstat.side_effect = self.call_stat
6722+
6723+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6724+        mockfpstat = self.mockfpstatp.__enter__()
6725+        mockfpstat.side_effect = self.call_stat
6726+
6727+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6728+        mockget_available_space = self.mockget_available_space.__enter__()
6729+        mockget_available_space.side_effect = self.call_get_available_space
6730+
6731+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6732+        mockfpexists = self.mockfpexists.__enter__()
6733+        mockfpexists.side_effect = self.call_exists
6734+
6735+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6736+        mocksetContent = self.mocksetContent.__enter__()
6737+        mocksetContent.side_effect = self.call_setContent
6738+
6739     def call_open(self, fname, mode):
6740         assert isinstance(fname, basestring), fname
6741         fnamefp = FilePath(fname)
6742hunk ./src/allmydata/test/test_backends.py 107
6743             # current implementation of DAS backend, and we might want to
6744             # use this information in this test in the future...
6745             return StringIO()
6746+        elif fnamefp == self.shareincomingname:
6747+            print "repr(fnamefp): ", repr(fnamefp)
6748         else:
6749             # Anything else you open inside your subtree appears to be an
6750             # empty file.
6751hunk ./src/allmydata/test/test_backends.py 168
6752         # XXX Good enough for expirer, not sure about elsewhere...
6753         return True
6754 
6755-    def setUp(self):
6756-        msg( "%s.setUp()" % (self,))
6757-        self.storedir = FilePath('teststoredir')
6758-        self.basedir = self.storedir.child('shares')
6759-        self.baseincdir = self.basedir.child('incoming')
6760-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6761-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6762-        self.shareincomingname = self.sharedirincomingname.child('0')
6763-        self.sharefname = self.sharedirfinalname.child('0')
6764-
6765-        self.mocklistdirp = mock.patch('os.listdir')
6766-        mocklistdir = self.mocklistdirp.__enter__()
6767-        mocklistdir.side_effect = self.call_listdir
6768-
6769-        self.mockmkdirp = mock.patch('os.mkdir')
6770-        mockmkdir = self.mockmkdirp.__enter__()
6771-        mockmkdir.side_effect = self.call_mkdir
6772-
6773-        self.mockisdirp = mock.patch('os.path.isdir')
6774-        mockisdir = self.mockisdirp.__enter__()
6775-        mockisdir.side_effect = self.call_isdir
6776-
6777-        self.mockopenp = mock.patch('__builtin__.open')
6778-        mockopen = self.mockopenp.__enter__()
6779-        mockopen.side_effect = self.call_open
6780-
6781-        self.mockstatp = mock.patch('os.stat')
6782-        mockstat = self.mockstatp.__enter__()
6783-        mockstat.side_effect = self.call_stat
6784-
6785-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6786-        mockfpstat = self.mockfpstatp.__enter__()
6787-        mockfpstat.side_effect = self.call_stat
6788-
6789-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6790-        mockget_available_space = self.mockget_available_space.__enter__()
6791-        mockget_available_space.side_effect = self.call_get_available_space
6792-
6793-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6794-        mockfpexists = self.mockfpexists.__enter__()
6795-        mockfpexists.side_effect = self.call_exists
6796-
6797-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6798-        mocksetContent = self.mocksetContent.__enter__()
6799-        mocksetContent.side_effect = self.call_setContent
6800 
6801     def tearDown(self):
6802         msg( "%s.tearDown()" % (self,))
6803hunk ./src/allmydata/test/test_backends.py 239
6804         handling of simultaneous and successive attempts to write the same
6805         share.
6806         """
6807-
6808         mocktime.return_value = 0
6809         # Inspect incoming and fail unless it's empty.
6810         incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
6811}
6812[jacp19orso
6813wilcoxjg@gmail.com**20110724034230
6814 Ignore-this: f001093c467225c289489636a61935fe
6815] {
6816hunk ./src/allmydata/_auto_deps.py 21
6817     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6818     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6819 
6820-v v v v v v v
6821-    "Twisted >= 11.0",
6822-*************
6823+
6824     # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6825     # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6826     # support asynchronous close.
6827hunk ./src/allmydata/_auto_deps.py 26
6828     "Twisted >= 10.1.0",
6829-^ ^ ^ ^ ^ ^ ^
6830+
6831 
6832     # foolscap < 0.5.1 had a performance bug which spent
6833     # O(N**2) CPU for transferring large mutable files
6834hunk ./src/allmydata/storage/backends/das/core.py 153
6835     LEASE_SIZE = struct.calcsize(">L32s32sL")
6836     sharetype = "immutable"
6837 
6838-    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
6839+    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
6840         """ If max_size is not None then I won't allow more than
6841         max_size to be written to me. If create=True then max_size
6842         must not be None. """
6843hunk ./src/allmydata/storage/backends/das/core.py 167
6844             # touch the file, so later callers will see that we're working on
6845             # it. Also construct the metadata.
6846             assert not finalhome.exists()
6847-            fp_make_dirs(self.incominghome)
6848-            f = self.incominghome
6849+            fp_make_dirs(self.incominghome.parent())
6850             # The second field -- the four-byte share data length -- is no
6851             # longer used as of Tahoe v1.3.0, but we continue to write it in
6852             # there in case someone downgrades a storage server from >=
6853hunk ./src/allmydata/storage/backends/das/core.py 177
6854             # the largest length that can fit into the field. That way, even
6855             # if this does happen, the old < v1.3.0 server will still allow
6856             # clients to read the first part of the share.
6857-            print 'f: ',f
6858-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6859+            self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6860             self._lease_offset = max_size + 0x0c
6861             self._num_leases = 0
6862         else:
6863hunk ./src/allmydata/storage/backends/das/core.py 182
6864             f = open(self.finalhome, 'rb')
6865-            filesize = os.path.getsize(self.finalhome)
6866             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
6867             f.close()
6868hunk ./src/allmydata/storage/backends/das/core.py 184
6869+            filesize = self.finalhome.getsize()
6870             if version != 1:
6871                 msg = "sharefile %s had version %d but we wanted 1" % \
6872                       (self.finalhome, version)
6873hunk ./src/allmydata/storage/backends/das/core.py 259
6874         f.write(data)
6875         f.close()
6876 
6877-    def _write_lease_record(self, lease_number, lease_info):
6878+    def _write_lease_record(self, f, lease_number, lease_info):
6879         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6880         fh = f.open()
6881hunk ./src/allmydata/storage/backends/das/core.py 262
6882+        print fh
6883         try:
6884             fh.seek(offset)
6885             assert fh.tell() == offset
6886hunk ./src/allmydata/storage/backends/das/core.py 271
6887             fh.close()
6888 
6889     def _read_num_leases(self, f):
6890-        fh = f.open()
6891+        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
6892         try:
6893             fh.seek(0x08)
6894             ro = fh.read(4)
6895hunk ./src/allmydata/storage/backends/das/core.py 275
6896-            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6897             (num_leases,) = struct.unpack(">L", ro)
6898         finally:
6899             fh.close()
6900hunk ./src/allmydata/storage/backends/das/core.py 302
6901                 yield LeaseInfo().from_immutable_data(data)
6902 
6903     def add_lease(self, lease_info):
6904-        f = self.incominghome
6905         num_leases = self._read_num_leases(self.incominghome)
6906hunk ./src/allmydata/storage/backends/das/core.py 303
6907-        self._write_lease_record(f, num_leases, lease_info)
6908-        self._write_num_leases(f, num_leases+1)
6909+        self._write_lease_record(self.incominghome, num_leases, lease_info)
6910+        self._write_num_leases(self.incominghome, num_leases+1)
6911         
6912     def renew_lease(self, renew_secret, new_expire_time):
6913         for i,lease in enumerate(self.get_leases()):
6914hunk ./src/allmydata/test/test_backends.py 52
6915         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6916         self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6917         self.shareincomingname = self.sharedirincomingname.child('0')
6918-        self.sharefilename = self.sharedirfinalname.child('0')
6919-        self.sharefilecontents = StringIO(share_data)
6920+        self.sharefinalname = self.sharedirfinalname.child('0')
6921 
6922hunk ./src/allmydata/test/test_backends.py 54
6923-        self.mocklistdirp = mock.patch('os.listdir')
6924-        mocklistdir = self.mocklistdirp.__enter__()
6925-        mocklistdir.side_effect = self.call_listdir
6926+        # Make patcher, patch, and make effects for fs using functions.
6927+        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
6928+        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
6929+        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
6930 
6931hunk ./src/allmydata/test/test_backends.py 59
6932-        self.mockmkdirp = mock.patch('os.mkdir')
6933-        mockmkdir = self.mockmkdirp.__enter__()
6934-        mockmkdir.side_effect = self.call_mkdir
6935+        #self.mockmkdirp = mock.patch('os.mkdir')
6936+        #mockmkdir = self.mockmkdirp.__enter__()
6937+        #mockmkdir.side_effect = self.call_mkdir
6938 
6939hunk ./src/allmydata/test/test_backends.py 63
6940-        self.mockisdirp = mock.patch('os.path.isdir')
6941+        self.mockisdirp = mock.patch('FilePath.isdir')
6942         mockisdir = self.mockisdirp.__enter__()
6943         mockisdir.side_effect = self.call_isdir
6944 
6945hunk ./src/allmydata/test/test_backends.py 67
6946-        self.mockopenp = mock.patch('__builtin__.open')
6947+        self.mockopenp = mock.patch('FilePath.open')
6948         mockopen = self.mockopenp.__enter__()
6949         mockopen.side_effect = self.call_open
6950 
6951hunk ./src/allmydata/test/test_backends.py 71
6952-        self.mockstatp = mock.patch('os.stat')
6953+        self.mockstatp = mock.patch('filepath.stat')
6954         mockstat = self.mockstatp.__enter__()
6955         mockstat.side_effect = self.call_stat
6956 
6957hunk ./src/allmydata/test/test_backends.py 91
6958         mocksetContent = self.mocksetContent.__enter__()
6959         mocksetContent.side_effect = self.call_setContent
6960 
6961+    #  The behavior of mocked filesystem using functions
6962     def call_open(self, fname, mode):
6963         assert isinstance(fname, basestring), fname
6964         fnamefp = FilePath(fname)
6965hunk ./src/allmydata/test/test_backends.py 109
6966             # use this information in this test in the future...
6967             return StringIO()
6968         elif fnamefp == self.shareincomingname:
6969-            print "repr(fnamefp): ", repr(fnamefp)
6970+            self.incomingsharefilecontents.closed = False
6971+            return self.incomingsharefilecontents
6972         else:
6973             # Anything else you open inside your subtree appears to be an
6974             # empty file.
6975hunk ./src/allmydata/test/test_backends.py 152
6976         fnamefp = FilePath(fname)
6977         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
6978                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
6979-
6980         msg("%s.call_stat(%s)" % (self, fname,))
6981         mstat = MockStat()
6982         mstat.st_mode = 16893 # a directory
6983hunk ./src/allmydata/test/test_backends.py 166
6984         return False
6985 
6986     def call_setContent(self, inputstring):
6987-        # XXX Good enough for expirer, not sure about elsewhere...
6988-        return True
6989-
6990+        self.incomingsharefilecontents = StringIO(inputstring)
6991 
6992     def tearDown(self):
6993         msg( "%s.tearDown()" % (self,))
6994}
6995[jacp19
6996wilcoxjg@gmail.com**20110727080553
6997 Ignore-this: 851b1ebdeeee712abfbda557af142726
6998] {
6999hunk ./src/allmydata/storage/backends/das/core.py 1
7000-import os, re, weakref, struct, time, stat
7001+import re, weakref, struct, time, stat
7002 from twisted.application import service
7003 from twisted.python.filepath import UnlistableError
7004hunk ./src/allmydata/storage/backends/das/core.py 4
7005+from twisted.python import filepath
7006 from twisted.python.filepath import FilePath
7007 from zope.interface import implements
7008 
7009hunk ./src/allmydata/storage/backends/das/core.py 50
7010         self._setup_lease_checkerf(expiration_policy)
7011 
7012     def _setup_storage(self, storedir, readonly, reserved_space):
7013-        precondition(isinstance(storedir, FilePath)) 
7014+        precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7015         self.storedir = storedir
7016         self.readonly = readonly
7017         self.reserved_space = int(reserved_space)
7018hunk ./src/allmydata/storage/backends/das/core.py 195
7019         self._data_offset = 0xc
7020 
7021     def close(self):
7022-        fileutil.make_dirs(os.path.dirname(self.finalhome))
7023-        fileutil.rename(self.incominghome, self.finalhome)
7024+        fileutil.fp_make_dirs(self.finalhome.parent())
7025+        self.incominghome.moveTo(self.finalhome)
7026         try:
7027             # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
7028             # We try to delete the parent (.../ab/abcde) to avoid leaving
7029hunk ./src/allmydata/storage/backends/das/core.py 209
7030             # their children to know when they should do the rmdir. This
7031             # approach is simpler, but relies on os.rmdir refusing to delete
7032             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
7033-            #print "os.path.dirname(self.incominghome): "
7034-            #print os.path.dirname(self.incominghome)
7035-            os.rmdir(os.path.dirname(self.incominghome))
7036+            fileutil.fp_rmdir_if_empty(self.incominghome.parent())
7037             # we also delete the grandparent (prefix) directory, .../ab ,
7038             # again to avoid leaving directories lying around. This might
7039             # fail if there is another bucket open that shares a prefix (like
7040hunk ./src/allmydata/storage/backends/das/core.py 214
7041             # ab/abfff).
7042-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
7043+            fileutil.fp_rmdir_if_empty(self.incominghome.parent().parent())
7044             # we leave the great-grandparent (incoming/) directory in place.
7045         except EnvironmentError:
7046             # ignore the "can't rmdir because the directory is not empty"
7047hunk ./src/allmydata/storage/backends/das/core.py 224
7048         pass
7049         
7050     def stat(self):
7051-        return os.stat(self.finalhome)[stat.ST_SIZE]
7052-        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
7053+        return filepath.stat(self.finalhome)[stat.ST_SIZE]
7054 
7055     def get_shnum(self):
7056         return self.shnum
7057hunk ./src/allmydata/storage/backends/das/core.py 230
7058 
7059     def unlink(self):
7060-        os.unlink(self.finalhome)
7061+        self.finalhome.remove()
7062 
7063     def read_share_data(self, offset, length):
7064         precondition(offset >= 0)
7065hunk ./src/allmydata/storage/backends/das/core.py 237
7066         # Reads beyond the end of the data are truncated. Reads that start
7067         # beyond the end of the data return an empty string.
7068         seekpos = self._data_offset+offset
7069-        fsize = os.path.getsize(self.finalhome)
7070+        fsize = self.finalhome.getsize()
7071         actuallength = max(0, min(length, fsize-seekpos))
7072         if actuallength == 0:
7073             return ""
7074hunk ./src/allmydata/storage/backends/das/core.py 241
7075-        f = open(self.finalhome, 'rb')
7076-        f.seek(seekpos)
7077-        return f.read(actuallength)
7078+        try:
7079+            fh = open(self.finalhome, 'rb')
7080+            fh.seek(seekpos)
7081+            sharedata = fh.read(actuallength)
7082+        finally:
7083+            fh.close()
7084+        return sharedata
7085 
7086     def write_share_data(self, offset, data):
7087         length = len(data)
7088hunk ./src/allmydata/storage/backends/das/core.py 264
7089     def _write_lease_record(self, f, lease_number, lease_info):
7090         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7091         fh = f.open()
7092-        print fh
7093         try:
7094             fh.seek(offset)
7095             assert fh.tell() == offset
7096hunk ./src/allmydata/storage/backends/das/core.py 269
7097             fh.write(lease_info.to_immutable_data())
7098         finally:
7099+            print dir(fh)
7100             fh.close()
7101 
7102     def _read_num_leases(self, f):
7103hunk ./src/allmydata/storage/backends/das/core.py 273
7104-        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
7105+        fh = f.open() #XXX  Should be mocking FilePath.open()
7106         try:
7107             fh.seek(0x08)
7108             ro = fh.read(4)
7109hunk ./src/allmydata/storage/backends/das/core.py 280
7110             (num_leases,) = struct.unpack(">L", ro)
7111         finally:
7112             fh.close()
7113+            print "end of _read_num_leases"
7114         return num_leases
7115 
7116     def _write_num_leases(self, f, num_leases):
7117hunk ./src/allmydata/storage/crawler.py 6
7118 from twisted.internet import reactor
7119 from twisted.application import service
7120 from allmydata.storage.common import si_b2a
7121-from allmydata.util import fileutil
7122 
7123 class TimeSliceExceeded(Exception):
7124     pass
7125hunk ./src/allmydata/storage/crawler.py 478
7126             old_cycle,buckets = self.state["storage-index-samples"][prefix]
7127             if old_cycle != cycle:
7128                 del self.state["storage-index-samples"][prefix]
7129-
7130hunk ./src/allmydata/test/test_backends.py 1
7131+import os
7132 from twisted.trial import unittest
7133 from twisted.python.filepath import FilePath
7134 from allmydata.util.log import msg
7135hunk ./src/allmydata/test/test_backends.py 9
7136 from allmydata.test.common_util import ReallyEqualMixin
7137 from allmydata.util.assertutil import _assert
7138 import mock
7139+from mock import Mock
7140 
7141 # This is the code that we're going to be testing.
7142 from allmydata.storage.server import StorageServer
7143hunk ./src/allmydata/test/test_backends.py 40
7144     def __init__(self):
7145         self.st_mode = None
7146 
7147+class MockFilePath:
7148+    def __init__(self, PathString):
7149+        self.PathName = PathString
7150+    def child(self, ChildString):
7151+        return MockFilePath(os.path.join(self.PathName, ChildString))
7152+    def parent(self):
7153+        return MockFilePath(os.path.dirname(self.PathName))
7154+    def makedirs(self):
7155+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7156+        pass
7157+    def isdir(self):
7158+        return True
7159+    def remove(self):
7160+        pass
7161+    def children(self):
7162+        return []
7163+    def exists(self):
7164+        return False
7165+    def setContent(self, ContentString):
7166+        self.File = MockFile(ContentString)
7167+    def open(self):
7168+        return self.File.open()
7169+
7170+class MockFile:
7171+    def __init__(self, ContentString):
7172+        self.Contents = ContentString
7173+    def open(self):
7174+        return self
7175+    def close(self):
7176+        pass
7177+    def seek(self, position):
7178+        pass
7179+    def read(self, amount):
7180+        pass
7181+
7182+
7183+class MockBCC:
7184+    def setServiceParent(self, Parent):
7185+        pass
7186+
7187+class MockLCC:
7188+    def setServiceParent(self, Parent):
7189+        pass
7190+
7191 class MockFiles(unittest.TestCase):
7192     """ I simulate a filesystem that the code under test can use. I flag the
7193     code under test if it reads or writes outside of its prescribed
7194hunk ./src/allmydata/test/test_backends.py 91
7195     implementation of DAS backend needs. """
7196 
7197     def setUp(self):
7198+        # Make patcher, patch, and make effects for fs using functions.
7199         msg( "%s.setUp()" % (self,))
7200hunk ./src/allmydata/test/test_backends.py 93
7201-        self.storedir = FilePath('teststoredir')
7202+        self.storedir = MockFilePath('teststoredir')
7203         self.basedir = self.storedir.child('shares')
7204         self.baseincdir = self.basedir.child('incoming')
7205         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
7206hunk ./src/allmydata/test/test_backends.py 101
7207         self.shareincomingname = self.sharedirincomingname.child('0')
7208         self.sharefinalname = self.sharedirfinalname.child('0')
7209 
7210-        # Make patcher, patch, and make effects for fs using functions.
7211-        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
7212-        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
7213-        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
7214-
7215-        #self.mockmkdirp = mock.patch('os.mkdir')
7216-        #mockmkdir = self.mockmkdirp.__enter__()
7217-        #mockmkdir.side_effect = self.call_mkdir
7218-
7219-        self.mockisdirp = mock.patch('FilePath.isdir')
7220-        mockisdir = self.mockisdirp.__enter__()
7221-        mockisdir.side_effect = self.call_isdir
7222+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
7223+        FakePath = self.FilePathFake.__enter__()
7224 
7225hunk ./src/allmydata/test/test_backends.py 104
7226-        self.mockopenp = mock.patch('FilePath.open')
7227-        mockopen = self.mockopenp.__enter__()
7228-        mockopen.side_effect = self.call_open
7229+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
7230+        FakeBCC = self.BCountingCrawler.__enter__()
7231+        FakeBCC.side_effect = self.call_FakeBCC
7232 
7233hunk ./src/allmydata/test/test_backends.py 108
7234-        self.mockstatp = mock.patch('filepath.stat')
7235-        mockstat = self.mockstatp.__enter__()
7236-        mockstat.side_effect = self.call_stat
7237+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
7238+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
7239+        FakeLCC.side_effect = self.call_FakeLCC
7240 
7241hunk ./src/allmydata/test/test_backends.py 112
7242-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
7243-        mockfpstat = self.mockfpstatp.__enter__()
7244-        mockfpstat.side_effect = self.call_stat
7245+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7246+        GetSpace = self.get_available_space.__enter__()
7247+        GetSpace.side_effect = self.call_get_available_space
7248 
7249hunk ./src/allmydata/test/test_backends.py 116
7250-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7251-        mockget_available_space = self.mockget_available_space.__enter__()
7252-        mockget_available_space.side_effect = self.call_get_available_space
7253+    def call_FakeBCC(self, StateFile):
7254+        return MockBCC()
7255 
7256hunk ./src/allmydata/test/test_backends.py 119
7257-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
7258-        mockfpexists = self.mockfpexists.__enter__()
7259-        mockfpexists.side_effect = self.call_exists
7260-
7261-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
7262-        mocksetContent = self.mocksetContent.__enter__()
7263-        mocksetContent.side_effect = self.call_setContent
7264-
7265-    #  The behavior of mocked filesystem using functions
7266-    def call_open(self, fname, mode):
7267-        assert isinstance(fname, basestring), fname
7268-        fnamefp = FilePath(fname)
7269-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7270-                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
7271-
7272-        if fnamefp == self.storedir.child('bucket_counter.state'):
7273-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
7274-        elif fnamefp == self.storedir.child('lease_checker.state'):
7275-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
7276-        elif fnamefp == self.storedir.child('lease_checker.history'):
7277-            # This is separated out from the else clause below just because
7278-            # we know this particular file is going to be used by the
7279-            # current implementation of DAS backend, and we might want to
7280-            # use this information in this test in the future...
7281-            return StringIO()
7282-        elif fnamefp == self.shareincomingname:
7283-            self.incomingsharefilecontents.closed = False
7284-            return self.incomingsharefilecontents
7285-        else:
7286-            # Anything else you open inside your subtree appears to be an
7287-            # empty file.
7288-            return StringIO()
7289-
7290-    def call_isdir(self, fname):
7291-        fnamefp = FilePath(fname)
7292-        return fnamefp.isdir()
7293-
7294-        self.failUnless(self.storedir == self or self.storedir in self.parents(),
7295-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
7296-
7297-        # The first two cases are separate from the else clause below just
7298-        # because we know that the current implementation of the DAS backend
7299-        # inspects these two directories and we might want to make use of
7300-        # that information in the tests in the future...
7301-        if self == self.storedir.child('shares'):
7302-            return True
7303-        elif self == self.storedir.child('shares').child('incoming'):
7304-            return True
7305-        else:
7306-            # Anything else you open inside your subtree appears to be a
7307-            # directory.
7308-            return True
7309-
7310-    def call_mkdir(self, fname, mode):
7311-        fnamefp = FilePath(fname)
7312-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7313-                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
7314-        self.failUnlessEqual(0777, mode)
7315+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
7316+        return MockLCC()
7317 
7318     def call_listdir(self, fname):
7319         fnamefp = FilePath(fname)
7320hunk ./src/allmydata/test/test_backends.py 150
7321 
7322     def tearDown(self):
7323         msg( "%s.tearDown()" % (self,))
7324-        self.mocksetContent.__exit__()
7325-        self.mockfpexists.__exit__()
7326-        self.mockget_available_space.__exit__()
7327-        self.mockfpstatp.__exit__()
7328-        self.mockstatp.__exit__()
7329-        self.mockopenp.__exit__()
7330-        self.mockisdirp.__exit__()
7331-        self.mockmkdirp.__exit__()
7332-        self.mocklistdirp.__exit__()
7333-
7334+        FakePath = self.FilePathFake.__exit__()       
7335+        FakeBCC = self.BCountingCrawler.__exit__()
7336 
7337 expiration_policy = {'enabled' : False,
7338                      'mode' : 'age',
7339hunk ./src/allmydata/test/test_backends.py 222
7340         # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7341         
7342         # Attempt to create a second share writer with the same sharenum.
7343-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7344+        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7345 
7346         # Show that no sharewriter results from a remote_allocate_buckets
7347         # with the same si and sharenum, until BucketWriter.remote_close()
7348hunk ./src/allmydata/test/test_backends.py 227
7349         # has been called.
7350-        self.failIf(bsa)
7351+        # self.failIf(bsa)
7352 
7353         # Test allocated size.
7354hunk ./src/allmydata/test/test_backends.py 230
7355-        spaceint = self.ss.allocated_size()
7356-        self.failUnlessReallyEqual(spaceint, 1)
7357+        # spaceint = self.ss.allocated_size()
7358+        # self.failUnlessReallyEqual(spaceint, 1)
7359 
7360         # Write 'a' to shnum 0. Only tested together with close and read.
7361hunk ./src/allmydata/test/test_backends.py 234
7362-        bs[0].remote_write(0, 'a')
7363+        # bs[0].remote_write(0, 'a')
7364         
7365         # Preclose: Inspect final, failUnless nothing there.
7366hunk ./src/allmydata/test/test_backends.py 237
7367-        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7368-        bs[0].remote_close()
7369+        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7370+        # bs[0].remote_close()
7371 
7372         # Postclose: (Omnibus) failUnless written data is in final.
7373hunk ./src/allmydata/test/test_backends.py 241
7374-        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7375-        self.failUnlessReallyEqual(len(sharesinfinal), 1)
7376-        contents = sharesinfinal[0].read_share_data(0, 73)
7377-        self.failUnlessReallyEqual(contents, client_data)
7378+        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7379+        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
7380+        # contents = sharesinfinal[0].read_share_data(0, 73)
7381+        # self.failUnlessReallyEqual(contents, client_data)
7382 
7383         # Exercise the case that the share we're asking to allocate is
7384         # already (completely) uploaded.
7385hunk ./src/allmydata/test/test_backends.py 248
7386-        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7387+        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7388         
7389     @mock.patch('time.time')
7390     @mock.patch('allmydata.util.fileutil.get_available_space')
7391}
7392[jacp20
7393wilcoxjg@gmail.com**20110728072514
7394 Ignore-this: 6a03289023c3c79b8d09e2711183ea82
7395] {
7396hunk ./src/allmydata/storage/backends/das/core.py 52
7397     def _setup_storage(self, storedir, readonly, reserved_space):
7398         precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7399         self.storedir = storedir
7400+        print "self.storedir: ", self.storedir
7401         self.readonly = readonly
7402         self.reserved_space = int(reserved_space)
7403         self.sharedir = self.storedir.child("shares")
7404hunk ./src/allmydata/storage/backends/das/core.py 85
7405 
7406     def get_incoming_shnums(self, storageindex):
7407         """ Return a frozenset of the shnum (as ints) of incoming shares. """
7408-        incomingdir = si_si2dir(self.incomingdir, storageindex)
7409+        print "self.incomingdir.children(): ", self.incomingdir.children()
7410+        print "self.incomingdir.pathname: ", self.incomingdir.pathname
7411+        incomingthissi = si_si2dir(self.incomingdir, storageindex)
7412+        print "incomingthissi.children(): ", incomingthissi.children()
7413         try:
7414hunk ./src/allmydata/storage/backends/das/core.py 90
7415-            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
7416+            childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
7417             shnums = [ int(fp.basename) for fp in childfps ]
7418             return frozenset(shnums)
7419         except UnlistableError:
7420hunk ./src/allmydata/storage/backends/das/core.py 117
7421 
7422     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
7423         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
7424-        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
7425+        incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
7426         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
7427         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
7428         return bw
7429hunk ./src/allmydata/storage/backends/das/core.py 183
7430             # if this does happen, the old < v1.3.0 server will still allow
7431             # clients to read the first part of the share.
7432             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
7433+            print "We got here right?"
7434             self._lease_offset = max_size + 0x0c
7435             self._num_leases = 0
7436         else:
7437hunk ./src/allmydata/storage/backends/das/core.py 274
7438             assert fh.tell() == offset
7439             fh.write(lease_info.to_immutable_data())
7440         finally:
7441-            print dir(fh)
7442             fh.close()
7443 
7444     def _read_num_leases(self, f):
7445hunk ./src/allmydata/storage/backends/das/core.py 284
7446             (num_leases,) = struct.unpack(">L", ro)
7447         finally:
7448             fh.close()
7449-            print "end of _read_num_leases"
7450         return num_leases
7451 
7452     def _write_num_leases(self, f, num_leases):
7453hunk ./src/allmydata/storage/common.py 21
7454 
7455 def si_si2dir(startfp, storageindex):
7456     sia = si_b2a(storageindex)
7457-    return startfp.child(sia[:2]).child(sia)
7458+    print "I got here right?  sia =", sia
7459+    print "What the fuck is startfp? ", startfp
7460+    print "What the fuck is startfp.pathname? ", startfp.pathname
7461+    newfp = startfp.child(sia[:2])
7462+    print "Did I get here?"
7463+    return newfp.child(sia)
7464hunk ./src/allmydata/test/test_backends.py 5
7465 from twisted.trial import unittest
7466 from twisted.python.filepath import FilePath
7467 from allmydata.util.log import msg
7468-from StringIO import StringIO
7469+from tempfile import TemporaryFile
7470 from allmydata.test.common_util import ReallyEqualMixin
7471 from allmydata.util.assertutil import _assert
7472 import mock
7473hunk ./src/allmydata/test/test_backends.py 34
7474     cancelsecret + expirationtime + nextlease
7475 share_data = containerdata + client_data
7476 testnodeid = 'testnodeidxxxxxxxxxx'
7477+fakefilepaths = {}
7478 
7479 
7480 class MockStat:
7481hunk ./src/allmydata/test/test_backends.py 41
7482     def __init__(self):
7483         self.st_mode = None
7484 
7485+
7486 class MockFilePath:
7487hunk ./src/allmydata/test/test_backends.py 43
7488-    def __init__(self, PathString):
7489-        self.PathName = PathString
7490-    def child(self, ChildString):
7491-        return MockFilePath(os.path.join(self.PathName, ChildString))
7492+    def __init__(self, pathstring):
7493+        self.pathname = pathstring
7494+        self.spawn = {}
7495+        self.antecedent = os.path.dirname(self.pathname)
7496+    def child(self, childstring):
7497+        arg2child = os.path.join(self.pathname, childstring)
7498+        print "arg2child: ", arg2child
7499+        if fakefilepaths.has_key(arg2child):
7500+            child = fakefilepaths[arg2child]
7501+            print "Should have gotten here."
7502+        else:
7503+            child = MockFilePath(arg2child)
7504+        return child
7505     def parent(self):
7506hunk ./src/allmydata/test/test_backends.py 57
7507-        return MockFilePath(os.path.dirname(self.PathName))
7508+        if fakefilepaths.has_key(self.antecedent):
7509+            parent = fakefilepaths[self.antecedent]
7510+        else:
7511+            parent = MockFilePath(self.antecedent)
7512+        return parent
7513+    def children(self):
7514+        childrenfromffs = frozenset(fakefilepaths.values())
7515+        return list(childrenfromffs | frozenset(self.spawn.values())) 
7516     def makedirs(self):
7517         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7518         pass
7519hunk ./src/allmydata/test/test_backends.py 72
7520         return True
7521     def remove(self):
7522         pass
7523-    def children(self):
7524-        return []
7525     def exists(self):
7526         return False
7527hunk ./src/allmydata/test/test_backends.py 74
7528-    def setContent(self, ContentString):
7529-        self.File = MockFile(ContentString)
7530     def open(self):
7531         return self.File.open()
7532hunk ./src/allmydata/test/test_backends.py 76
7533+    def setparents(self):
7534+        antecedents = []
7535+        def f(fps, antecedents):
7536+            newfps = os.path.split(fps)[0]
7537+            if newfps:
7538+                antecedents.append(newfps)
7539+                f(newfps, antecedents)
7540+        f(self.pathname, antecedents)
7541+        for fps in antecedents:
7542+            if not fakefilepaths.has_key(fps):
7543+                fakefilepaths[fps] = MockFilePath(fps)
7544+    def setContent(self, contentstring):
7545+        print "I am self.pathname: ", self.pathname
7546+        fakefilepaths[self.pathname] = self
7547+        self.File = MockFile(contentstring)
7548+        self.setparents()
7549+    def create(self):
7550+        fakefilepaths[self.pathname] = self
7551+        self.setparents()
7552+           
7553 
7554 class MockFile:
7555hunk ./src/allmydata/test/test_backends.py 98
7556-    def __init__(self, ContentString):
7557-        self.Contents = ContentString
7558+    def __init__(self, contentstring):
7559+        self.buffer = contentstring
7560+        self.pos = 0
7561     def open(self):
7562         return self
7563hunk ./src/allmydata/test/test_backends.py 103
7564+    def write(self, instring):
7565+        begin = self.pos
7566+        padlen = begin - len(self.buffer)
7567+        if padlen > 0:
7568+            self.buffer += '\x00' * padlen
7569+            end = self.pos + len(instring)
7570+            self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7571+            self.pos = end
7572     def close(self):
7573         pass
7574hunk ./src/allmydata/test/test_backends.py 113
7575-    def seek(self, position):
7576-        pass
7577-    def read(self, amount):
7578-        pass
7579+    def seek(self, pos):
7580+        self.pos = pos
7581+    def read(self, numberbytes):
7582+        return self.buffer[self.pos:self.pos+numberbytes]
7583+    def tell(self):
7584+        return self.pos
7585 
7586 
7587 class MockBCC:
7588hunk ./src/allmydata/test/test_backends.py 125
7589     def setServiceParent(self, Parent):
7590         pass
7591 
7592+
7593 class MockLCC:
7594     def setServiceParent(self, Parent):
7595         pass
7596hunk ./src/allmydata/test/test_backends.py 130
7597 
7598+
7599 class MockFiles(unittest.TestCase):
7600     """ I simulate a filesystem that the code under test can use. I flag the
7601     code under test if it reads or writes outside of its prescribed
7602hunk ./src/allmydata/test/test_backends.py 193
7603         return False
7604 
7605     def call_setContent(self, inputstring):
7606-        self.incomingsharefilecontents = StringIO(inputstring)
7607+        self.incomingsharefilecontents = TemporaryFile(inputstring)
7608 
7609     def tearDown(self):
7610         msg( "%s.tearDown()" % (self,))
7611hunk ./src/allmydata/test/test_backends.py 206
7612                      'cutoff_date' : None,
7613                      'sharetypes' : None}
7614 
7615+
7616 class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
7617     """ NullBackend is just for testing and executable documentation, so
7618     this test is actually a test of StorageServer in which we're using
7619hunk ./src/allmydata/test/test_backends.py 229
7620         self.failIf(mockopen.called)
7621         self.failIf(mockmkdir.called)
7622 
7623+
7624 class TestServerConstruction(MockFiles, ReallyEqualMixin):
7625     def test_create_server_fs_backend(self):
7626         """ This tests whether a server instance can be constructed with a
7627hunk ./src/allmydata/test/test_backends.py 238
7628 
7629         StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
7630 
7631+
7632 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
7633     """ This tests both the StorageServer and the DAS backend together. """
7634     
7635hunk ./src/allmydata/test/test_backends.py 262
7636         """
7637         mocktime.return_value = 0
7638         # Inspect incoming and fail unless it's empty.
7639-        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7640-        self.failUnlessReallyEqual(incomingset, frozenset())
7641+        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7642+        # self.failUnlessReallyEqual(incomingset, frozenset())
7643         
7644         # Populate incoming with the sharenum: 0.
7645         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7646hunk ./src/allmydata/test/test_backends.py 269
7647 
7648         # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
7649-        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7650+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7651         
7652         # Attempt to create a second share writer with the same sharenum.
7653         # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7654hunk ./src/allmydata/test/test_backends.py 274
7655 
7656+        # print bsa
7657         # Show that no sharewriter results from a remote_allocate_buckets
7658         # with the same si and sharenum, until BucketWriter.remote_close()
7659         # has been called.
7660hunk ./src/allmydata/test/test_backends.py 339
7661             self.failUnlessEqual(mode[0], 'r', mode)
7662             self.failUnless('b' in mode, mode)
7663 
7664-            return StringIO(share_data)
7665+            return TemporaryFile(share_data)
7666         mockopen.side_effect = call_open
7667 
7668         datalen = len(share_data)
7669}
7670[Completed FilePath based test_write_and_read_share
7671wilcoxjg@gmail.com**20110729043830
7672 Ignore-this: 2c32adb041f0344394927cd3ce8f3b36
7673] {
7674hunk ./src/allmydata/storage/backends/das/core.py 38
7675 NUM_RE=re.compile("^[0-9]+$")
7676 
7677 def is_num(fp):
7678-    return NUM_RE.match(fp.basename)
7679+    return NUM_RE.match(fp.basename())
7680 
7681 class DASCore(Backend):
7682     implements(IStorageBackend)
7683hunk ./src/allmydata/storage/backends/das/core.py 52
7684     def _setup_storage(self, storedir, readonly, reserved_space):
7685         precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7686         self.storedir = storedir
7687-        print "self.storedir: ", self.storedir
7688         self.readonly = readonly
7689         self.reserved_space = int(reserved_space)
7690         self.sharedir = self.storedir.child("shares")
7691hunk ./src/allmydata/storage/backends/das/core.py 84
7692 
7693     def get_incoming_shnums(self, storageindex):
7694         """ Return a frozenset of the shnum (as ints) of incoming shares. """
7695-        print "self.incomingdir.children(): ", self.incomingdir.children()
7696-        print "self.incomingdir.pathname: ", self.incomingdir.pathname
7697         incomingthissi = si_si2dir(self.incomingdir, storageindex)
7698hunk ./src/allmydata/storage/backends/das/core.py 85
7699-        print "incomingthissi.children(): ", incomingthissi.children()
7700         try:
7701             childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
7702hunk ./src/allmydata/storage/backends/das/core.py 87
7703-            shnums = [ int(fp.basename) for fp in childfps ]
7704+            shnums = [ int(fp.basename()) for fp in childfps ]
7705             return frozenset(shnums)
7706         except UnlistableError:
7707             # There is no shares directory at all.
7708hunk ./src/allmydata/storage/backends/das/core.py 101
7709         try:
7710             for fp in finalstoragedir.children():
7711                 if is_num(fp):
7712-                    yield ImmutableShare(fp, storageindex)
7713+                    finalhome = finalstoragedir.child(str(fp.basename()))
7714+                    yield ImmutableShare(storageindex, fp, finalhome)
7715         except UnlistableError:
7716             # There is no shares directory at all.
7717             pass
7718hunk ./src/allmydata/storage/backends/das/core.py 115
7719     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
7720         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
7721         incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
7722-        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
7723+        immsh = ImmutableShare(storageindex, shnum, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
7724         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
7725         return bw
7726 
7727hunk ./src/allmydata/storage/backends/das/core.py 155
7728     LEASE_SIZE = struct.calcsize(">L32s32sL")
7729     sharetype = "immutable"
7730 
7731-    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
7732+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
7733         """ If max_size is not None then I won't allow more than
7734         max_size to be written to me. If create=True then max_size
7735         must not be None. """
7736hunk ./src/allmydata/storage/backends/das/core.py 180
7737             # if this does happen, the old < v1.3.0 server will still allow
7738             # clients to read the first part of the share.
7739             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
7740-            print "We got here right?"
7741             self._lease_offset = max_size + 0x0c
7742             self._num_leases = 0
7743         else:
7744hunk ./src/allmydata/storage/backends/das/core.py 183
7745-            f = open(self.finalhome, 'rb')
7746-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
7747-            f.close()
7748+            fh = self.finalhome.open(mode='rb')
7749+            try:
7750+                (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
7751+            finally:
7752+                fh.close()
7753             filesize = self.finalhome.getsize()
7754             if version != 1:
7755                 msg = "sharefile %s had version %d but we wanted 1" % \
7756hunk ./src/allmydata/storage/backends/das/core.py 227
7757         pass
7758         
7759     def stat(self):
7760-        return filepath.stat(self.finalhome)[stat.ST_SIZE]
7761+        return filepath.stat(self.finalhome.path)[stat.ST_SIZE]
7762 
7763     def get_shnum(self):
7764         return self.shnum
7765hunk ./src/allmydata/storage/backends/das/core.py 244
7766         actuallength = max(0, min(length, fsize-seekpos))
7767         if actuallength == 0:
7768             return ""
7769+        fh = self.finalhome.open(mode='rb')
7770         try:
7771hunk ./src/allmydata/storage/backends/das/core.py 246
7772-            fh = open(self.finalhome, 'rb')
7773             fh.seek(seekpos)
7774             sharedata = fh.read(actuallength)
7775         finally:
7776hunk ./src/allmydata/storage/backends/das/core.py 257
7777         precondition(offset >= 0, offset)
7778         if self._max_size is not None and offset+length > self._max_size:
7779             raise DataTooLargeError(self._max_size, offset, length)
7780-        f = open(self.incominghome, 'rb+')
7781-        real_offset = self._data_offset+offset
7782-        f.seek(real_offset)
7783-        assert f.tell() == real_offset
7784-        f.write(data)
7785-        f.close()
7786+        fh = self.incominghome.open(mode='rb+')
7787+        try:
7788+            real_offset = self._data_offset+offset
7789+            fh.seek(real_offset)
7790+            assert fh.tell() == real_offset
7791+            fh.write(data)
7792+        finally:
7793+            fh.close()
7794 
7795     def _write_lease_record(self, f, lease_number, lease_info):
7796         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7797hunk ./src/allmydata/storage/backends/das/core.py 299
7798 
7799     def get_leases(self):
7800         """Yields a LeaseInfo instance for all leases."""
7801-        f = open(self.finalhome, 'rb')
7802-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
7803-        f.seek(self._lease_offset)
7804+        fh = self.finalhome.open(mode='rb')
7805+        (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
7806+        fh.seek(self._lease_offset)
7807         for i in range(num_leases):
7808hunk ./src/allmydata/storage/backends/das/core.py 303
7809-            data = f.read(self.LEASE_SIZE)
7810+            data = fh.read(self.LEASE_SIZE)
7811             if data:
7812                 yield LeaseInfo().from_immutable_data(data)
7813 
7814hunk ./src/allmydata/storage/common.py 21
7815 
7816 def si_si2dir(startfp, storageindex):
7817     sia = si_b2a(storageindex)
7818-    print "I got here right?  sia =", sia
7819-    print "What the fuck is startfp? ", startfp
7820-    print "What the fuck is startfp.pathname? ", startfp.pathname
7821     newfp = startfp.child(sia[:2])
7822hunk ./src/allmydata/storage/common.py 22
7823-    print "Did I get here?"
7824     return newfp.child(sia)
7825hunk ./src/allmydata/test/test_backends.py 1
7826-import os
7827+import os, stat
7828 from twisted.trial import unittest
7829 from twisted.python.filepath import FilePath
7830 from allmydata.util.log import msg
7831hunk ./src/allmydata/test/test_backends.py 44
7832 
7833 class MockFilePath:
7834     def __init__(self, pathstring):
7835-        self.pathname = pathstring
7836+        self.path = pathstring
7837         self.spawn = {}
7838hunk ./src/allmydata/test/test_backends.py 46
7839-        self.antecedent = os.path.dirname(self.pathname)
7840+        self.antecedent = os.path.dirname(self.path)
7841     def child(self, childstring):
7842hunk ./src/allmydata/test/test_backends.py 48
7843-        arg2child = os.path.join(self.pathname, childstring)
7844-        print "arg2child: ", arg2child
7845+        arg2child = os.path.join(self.path, childstring)
7846         if fakefilepaths.has_key(arg2child):
7847             child = fakefilepaths[arg2child]
7848hunk ./src/allmydata/test/test_backends.py 51
7849-            print "Should have gotten here."
7850         else:
7851             child = MockFilePath(arg2child)
7852         return child
7853hunk ./src/allmydata/test/test_backends.py 61
7854             parent = MockFilePath(self.antecedent)
7855         return parent
7856     def children(self):
7857-        childrenfromffs = frozenset(fakefilepaths.values())
7858+        childrenfromffs = [ffp for ffp in fakefilepaths.values() if ffp.path.startswith(self.path)]
7859+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
7860+        childrenfromffs = frozenset(childrenfromffs)
7861         return list(childrenfromffs | frozenset(self.spawn.values())) 
7862     def makedirs(self):
7863         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7864hunk ./src/allmydata/test/test_backends.py 74
7865         pass
7866     def exists(self):
7867         return False
7868-    def open(self):
7869-        return self.File.open()
7870+    def open(self, mode='r'):
7871+        return self.fileobject.open(mode)
7872     def setparents(self):
7873         antecedents = []
7874         def f(fps, antecedents):
7875hunk ./src/allmydata/test/test_backends.py 83
7876             if newfps:
7877                 antecedents.append(newfps)
7878                 f(newfps, antecedents)
7879-        f(self.pathname, antecedents)
7880+        f(self.path, antecedents)
7881         for fps in antecedents:
7882             if not fakefilepaths.has_key(fps):
7883                 fakefilepaths[fps] = MockFilePath(fps)
7884hunk ./src/allmydata/test/test_backends.py 88
7885     def setContent(self, contentstring):
7886-        print "I am self.pathname: ", self.pathname
7887-        fakefilepaths[self.pathname] = self
7888-        self.File = MockFile(contentstring)
7889+        fakefilepaths[self.path] = self
7890+        self.fileobject = MockFileObject(contentstring)
7891         self.setparents()
7892     def create(self):
7893hunk ./src/allmydata/test/test_backends.py 92
7894-        fakefilepaths[self.pathname] = self
7895+        fakefilepaths[self.path] = self
7896         self.setparents()
7897hunk ./src/allmydata/test/test_backends.py 94
7898-           
7899+    def basename(self):
7900+        return os.path.split(self.path)[1]
7901+    def moveTo(self, newffp):
7902+        #  XXX Makes no distinction between file and directory arguments, this is deviation from filepath.moveTo
7903+        if fakefilepaths.has_key(newffp.path):
7904+            raise OSError
7905+        else:
7906+            fakefilepaths[newffp.path] = self
7907+            self.path = newffp.path
7908+    def getsize(self):
7909+        return self.fileobject.getsize()
7910 
7911hunk ./src/allmydata/test/test_backends.py 106
7912-class MockFile:
7913+class MockFileObject:
7914     def __init__(self, contentstring):
7915         self.buffer = contentstring
7916         self.pos = 0
7917hunk ./src/allmydata/test/test_backends.py 110
7918-    def open(self):
7919+    def open(self, mode='r'):
7920         return self
7921     def write(self, instring):
7922         begin = self.pos
7923hunk ./src/allmydata/test/test_backends.py 117
7924         padlen = begin - len(self.buffer)
7925         if padlen > 0:
7926             self.buffer += '\x00' * padlen
7927-            end = self.pos + len(instring)
7928-            self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7929-            self.pos = end
7930+        end = self.pos + len(instring)
7931+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7932+        self.pos = end
7933     def close(self):
7934hunk ./src/allmydata/test/test_backends.py 121
7935-        pass
7936+        self.pos = 0
7937     def seek(self, pos):
7938         self.pos = pos
7939     def read(self, numberbytes):
7940hunk ./src/allmydata/test/test_backends.py 128
7941         return self.buffer[self.pos:self.pos+numberbytes]
7942     def tell(self):
7943         return self.pos
7944-
7945+    def size(self):
7946+        # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat!
7947+        # XXX Finally we shall hopefully use a getsize method soon, must consult first though.
7948+        return {stat.ST_SIZE:len(self.buffer)}
7949+    def getsize(self):
7950+        return len(self.buffer)
7951 
7952 class MockBCC:
7953     def setServiceParent(self, Parent):
7954hunk ./src/allmydata/test/test_backends.py 177
7955         GetSpace = self.get_available_space.__enter__()
7956         GetSpace.side_effect = self.call_get_available_space
7957 
7958+        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
7959+        getsize = self.statforsize.__enter__()
7960+        getsize.side_effect = self.call_statforsize
7961+
7962+    def call_statforsize(self, fakefpname):
7963+        return fakefilepaths[fakefpname].fileobject.size()
7964+
7965     def call_FakeBCC(self, StateFile):
7966         return MockBCC()
7967 
7968hunk ./src/allmydata/test/test_backends.py 220
7969         msg( "%s.tearDown()" % (self,))
7970         FakePath = self.FilePathFake.__exit__()       
7971         FakeBCC = self.BCountingCrawler.__exit__()
7972+        getsize = self.statforsize.__exit__()
7973 
7974 expiration_policy = {'enabled' : False,
7975                      'mode' : 'age',
7976hunk ./src/allmydata/test/test_backends.py 284
7977         """
7978         mocktime.return_value = 0
7979         # Inspect incoming and fail unless it's empty.
7980-        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7981-        # self.failUnlessReallyEqual(incomingset, frozenset())
7982+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7983+        self.failUnlessReallyEqual(incomingset, frozenset())
7984         
7985         # Populate incoming with the sharenum: 0.
7986         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7987hunk ./src/allmydata/test/test_backends.py 294
7988         self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7989         
7990         # Attempt to create a second share writer with the same sharenum.
7991-        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7992+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7993 
7994hunk ./src/allmydata/test/test_backends.py 296
7995-        # print bsa
7996         # Show that no sharewriter results from a remote_allocate_buckets
7997         # with the same si and sharenum, until BucketWriter.remote_close()
7998         # has been called.
7999hunk ./src/allmydata/test/test_backends.py 299
8000-        # self.failIf(bsa)
8001+        self.failIf(bsa)
8002 
8003         # Test allocated size.
8004hunk ./src/allmydata/test/test_backends.py 302
8005-        # spaceint = self.ss.allocated_size()
8006-        # self.failUnlessReallyEqual(spaceint, 1)
8007+        spaceint = self.ss.allocated_size()
8008+        self.failUnlessReallyEqual(spaceint, 1)
8009 
8010         # Write 'a' to shnum 0. Only tested together with close and read.
8011hunk ./src/allmydata/test/test_backends.py 306
8012-        # bs[0].remote_write(0, 'a')
8013+        bs[0].remote_write(0, 'a')
8014         
8015         # Preclose: Inspect final, failUnless nothing there.
8016hunk ./src/allmydata/test/test_backends.py 309
8017-        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
8018-        # bs[0].remote_close()
8019+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
8020+        bs[0].remote_close()
8021 
8022         # Postclose: (Omnibus) failUnless written data is in final.
8023hunk ./src/allmydata/test/test_backends.py 313
8024-        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
8025-        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
8026-        # contents = sharesinfinal[0].read_share_data(0, 73)
8027-        # self.failUnlessReallyEqual(contents, client_data)
8028+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
8029+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
8030+        contents = sharesinfinal[0].read_share_data(0, 73)
8031+        self.failUnlessReallyEqual(contents, client_data)
8032 
8033         # Exercise the case that the share we're asking to allocate is
8034         # already (completely) uploaded.
8035hunk ./src/allmydata/test/test_backends.py 320
8036-        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8037+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8038         
8039     @mock.patch('time.time')
8040     @mock.patch('allmydata.util.fileutil.get_available_space')
8041}
8042[TestServerAndFSBackend.test_read_old_share passes
8043wilcoxjg@gmail.com**20110729235356
8044 Ignore-this: 574636c959ea58d4609bea2428ff51d3
8045] {
8046hunk ./src/allmydata/storage/backends/das/core.py 37
8047 # $SHARENUM matches this regex:
8048 NUM_RE=re.compile("^[0-9]+$")
8049 
8050-def is_num(fp):
8051-    return NUM_RE.match(fp.basename())
8052-
8053 class DASCore(Backend):
8054     implements(IStorageBackend)
8055     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
8056hunk ./src/allmydata/storage/backends/das/core.py 97
8057         finalstoragedir = si_si2dir(self.sharedir, storageindex)
8058         try:
8059             for fp in finalstoragedir.children():
8060-                if is_num(fp):
8061-                    finalhome = finalstoragedir.child(str(fp.basename()))
8062-                    yield ImmutableShare(storageindex, fp, finalhome)
8063+                fpshnumstr = fp.basename()
8064+                if NUM_RE.match(fpshnumstr):
8065+                    finalhome = finalstoragedir.child(fpshnumstr)
8066+                    yield ImmutableShare(storageindex, fpshnumstr, finalhome)
8067         except UnlistableError:
8068             # There is no shares directory at all.
8069             pass
8070hunk ./src/allmydata/test/test_backends.py 15
8071 from allmydata.storage.server import StorageServer
8072 from allmydata.storage.backends.das.core import DASCore
8073 from allmydata.storage.backends.null.core import NullCore
8074+from allmydata.storage.common import si_si2dir
8075 
8076 # The following share file content was generated with
8077 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
8078hunk ./src/allmydata/test/test_backends.py 155
8079     def setUp(self):
8080         # Make patcher, patch, and make effects for fs using functions.
8081         msg( "%s.setUp()" % (self,))
8082+        fakefilepaths = {}
8083         self.storedir = MockFilePath('teststoredir')
8084         self.basedir = self.storedir.child('shares')
8085         self.baseincdir = self.basedir.child('incoming')
8086hunk ./src/allmydata/test/test_backends.py 223
8087         FakePath = self.FilePathFake.__exit__()       
8088         FakeBCC = self.BCountingCrawler.__exit__()
8089         getsize = self.statforsize.__exit__()
8090+        fakefilepaths = {}
8091 
8092 expiration_policy = {'enabled' : False,
8093                      'mode' : 'age',
8094hunk ./src/allmydata/test/test_backends.py 334
8095             return 0
8096 
8097         mockget_available_space.side_effect = call_get_available_space
8098-       
8099-       
8100         alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8101 
8102hunk ./src/allmydata/test/test_backends.py 336
8103-    @mock.patch('os.path.exists')
8104-    @mock.patch('os.path.getsize')
8105-    @mock.patch('__builtin__.open')
8106-    @mock.patch('os.listdir')
8107-    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
8108+    def test_read_old_share(self):
8109         """ This tests whether the code correctly finds and reads
8110         shares written out by old (Tahoe-LAFS <= v1.8.2)
8111         servers. There is a similar test in test_download, but that one
8112hunk ./src/allmydata/test/test_backends.py 344
8113         stack of code. This one is for exercising just the
8114         StorageServer object. """
8115 
8116-        def call_listdir(dirname):
8117-            precondition(isinstance(dirname, basestring), dirname)
8118-            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
8119-            return ['0']
8120-
8121-        mocklistdir.side_effect = call_listdir
8122-
8123-        def call_open(fname, mode):
8124-            precondition(isinstance(fname, basestring), fname)
8125-            self.failUnlessReallyEqual(fname, sharefname)
8126-            self.failUnlessEqual(mode[0], 'r', mode)
8127-            self.failUnless('b' in mode, mode)
8128-
8129-            return TemporaryFile(share_data)
8130-        mockopen.side_effect = call_open
8131-
8132         datalen = len(share_data)
8133hunk ./src/allmydata/test/test_backends.py 345
8134-        def call_getsize(fname):
8135-            precondition(isinstance(fname, basestring), fname)
8136-            self.failUnlessReallyEqual(fname, sharefname)
8137-            return datalen
8138-        mockgetsize.side_effect = call_getsize
8139-
8140-        def call_exists(fname):
8141-            precondition(isinstance(fname, basestring), fname)
8142-            self.failUnlessReallyEqual(fname, sharefname)
8143-            return True
8144-        mockexists.side_effect = call_exists
8145+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(3))
8146+        finalhome.setContent(share_data)
8147 
8148         # Now begin the test.
8149         bs = self.ss.remote_get_buckets('teststorage_index')
8150hunk ./src/allmydata/test/test_backends.py 352
8151 
8152         self.failUnlessEqual(len(bs), 1)
8153-        b = bs[0]
8154+        b = bs['3']
8155         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
8156         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
8157         # If you try to read past the end you get the as much data as is there.
8158}
8159[TestServerAndFSBackend passes en total!
8160wilcoxjg@gmail.com**20110730010025
8161 Ignore-this: fdc92e08674af1da5708c30557ac5860
8162] {
8163hunk ./src/allmydata/storage/backends/das/core.py 83
8164         """ Return a frozenset of the shnum (as ints) of incoming shares. """
8165         incomingthissi = si_si2dir(self.incomingdir, storageindex)
8166         try:
8167-            childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
8168+            childfps = [ fp for fp in incomingthissi.children() if NUM_RE.match(fp.basename()) ]
8169             shnums = [ int(fp.basename()) for fp in childfps ]
8170             return frozenset(shnums)
8171         except UnlistableError:
8172hunk ./src/allmydata/test/test_backends.py 35
8173     cancelsecret + expirationtime + nextlease
8174 share_data = containerdata + client_data
8175 testnodeid = 'testnodeidxxxxxxxxxx'
8176-fakefilepaths = {}
8177 
8178 
8179hunk ./src/allmydata/test/test_backends.py 37
8180+class MockFiles(unittest.TestCase):
8181+    """ I simulate a filesystem that the code under test can use. I flag the
8182+    code under test if it reads or writes outside of its prescribed
8183+    subtree. I simulate just the parts of the filesystem that the current
8184+    implementation of DAS backend needs. """
8185+
8186+    def setUp(self):
8187+        # Make patcher, patch, and make effects for fs using functions.
8188+        msg( "%s.setUp()" % (self,))
8189+        self.fakefilepaths = {}
8190+        self.storedir = MockFilePath('teststoredir', self.fakefilepaths)
8191+        self.basedir = self.storedir.child('shares')
8192+        self.baseincdir = self.basedir.child('incoming')
8193+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8194+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8195+        self.shareincomingname = self.sharedirincomingname.child('0')
8196+        self.sharefinalname = self.sharedirfinalname.child('0')
8197+
8198+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
8199+        FakePath = self.FilePathFake.__enter__()
8200+
8201+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
8202+        FakeBCC = self.BCountingCrawler.__enter__()
8203+        FakeBCC.side_effect = self.call_FakeBCC
8204+
8205+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
8206+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
8207+        FakeLCC.side_effect = self.call_FakeLCC
8208+
8209+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
8210+        GetSpace = self.get_available_space.__enter__()
8211+        GetSpace.side_effect = self.call_get_available_space
8212+
8213+        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
8214+        getsize = self.statforsize.__enter__()
8215+        getsize.side_effect = self.call_statforsize
8216+
8217+    def call_statforsize(self, fakefpname):
8218+        return self.fakefilepaths[fakefpname].fileobject.size()
8219+
8220+    def call_FakeBCC(self, StateFile):
8221+        return MockBCC()
8222+
8223+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
8224+        return MockLCC()
8225+
8226+    def call_listdir(self, fname):
8227+        fnamefp = FilePath(fname)
8228+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8229+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8230+
8231+    def call_stat(self, fname):
8232+        assert isinstance(fname, basestring), fname
8233+        fnamefp = FilePath(fname)
8234+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8235+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8236+        msg("%s.call_stat(%s)" % (self, fname,))
8237+        mstat = MockStat()
8238+        mstat.st_mode = 16893 # a directory
8239+        return mstat
8240+
8241+    def call_get_available_space(self, storedir, reservedspace):
8242+        # The input vector has an input size of 85.
8243+        return 85 - reservedspace
8244+
8245+    def call_exists(self):
8246+        # I'm only called in the ImmutableShareFile constructor.
8247+        return False
8248+
8249+    def call_setContent(self, inputstring):
8250+        self.incomingsharefilecontents = TemporaryFile(inputstring)
8251+
8252+    def tearDown(self):
8253+        msg( "%s.tearDown()" % (self,))
8254+        FakePath = self.FilePathFake.__exit__()       
8255+        FakeBCC = self.BCountingCrawler.__exit__()
8256+        getsize = self.statforsize.__exit__()
8257+        self.fakefilepaths = {}
8258+
8259 class MockStat:
8260     def __init__(self):
8261         self.st_mode = None
8262hunk ./src/allmydata/test/test_backends.py 122
8263 
8264 
8265 class MockFilePath:
8266-    def __init__(self, pathstring):
8267+    def __init__(self, pathstring, ffpathsenvironment):
8268+        self.fakefilepaths = ffpathsenvironment
8269         self.path = pathstring
8270         self.spawn = {}
8271         self.antecedent = os.path.dirname(self.path)
8272hunk ./src/allmydata/test/test_backends.py 129
8273     def child(self, childstring):
8274         arg2child = os.path.join(self.path, childstring)
8275-        if fakefilepaths.has_key(arg2child):
8276-            child = fakefilepaths[arg2child]
8277+        if self.fakefilepaths.has_key(arg2child):
8278+            child = self.fakefilepaths[arg2child]
8279         else:
8280hunk ./src/allmydata/test/test_backends.py 132
8281-            child = MockFilePath(arg2child)
8282+            child = MockFilePath(arg2child, self.fakefilepaths)
8283         return child
8284     def parent(self):
8285hunk ./src/allmydata/test/test_backends.py 135
8286-        if fakefilepaths.has_key(self.antecedent):
8287-            parent = fakefilepaths[self.antecedent]
8288+        if self.fakefilepaths.has_key(self.antecedent):
8289+            parent = self.fakefilepaths[self.antecedent]
8290         else:
8291hunk ./src/allmydata/test/test_backends.py 138
8292-            parent = MockFilePath(self.antecedent)
8293+            parent = MockFilePath(self.antecedent, self.fakefilepaths)
8294         return parent
8295     def children(self):
8296hunk ./src/allmydata/test/test_backends.py 141
8297-        childrenfromffs = [ffp for ffp in fakefilepaths.values() if ffp.path.startswith(self.path)]
8298+        childrenfromffs = [ffp for ffp in self.fakefilepaths.values() if ffp.path.startswith(self.path)]
8299         childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
8300         childrenfromffs = frozenset(childrenfromffs)
8301         return list(childrenfromffs | frozenset(self.spawn.values())) 
8302hunk ./src/allmydata/test/test_backends.py 165
8303                 f(newfps, antecedents)
8304         f(self.path, antecedents)
8305         for fps in antecedents:
8306-            if not fakefilepaths.has_key(fps):
8307-                fakefilepaths[fps] = MockFilePath(fps)
8308+            if not self.fakefilepaths.has_key(fps):
8309+                self.fakefilepaths[fps] = MockFilePath(fps, self.fakefilepaths)
8310     def setContent(self, contentstring):
8311hunk ./src/allmydata/test/test_backends.py 168
8312-        fakefilepaths[self.path] = self
8313+        self.fakefilepaths[self.path] = self
8314         self.fileobject = MockFileObject(contentstring)
8315         self.setparents()
8316     def create(self):
8317hunk ./src/allmydata/test/test_backends.py 172
8318-        fakefilepaths[self.path] = self
8319+        self.fakefilepaths[self.path] = self
8320         self.setparents()
8321     def basename(self):
8322         return os.path.split(self.path)[1]
8323hunk ./src/allmydata/test/test_backends.py 178
8324     def moveTo(self, newffp):
8325         #  XXX Makes no distinction between file and directory arguments, this is deviation from filepath.moveTo
8326-        if fakefilepaths.has_key(newffp.path):
8327+        if self.fakefilepaths.has_key(newffp.path):
8328             raise OSError
8329         else:
8330hunk ./src/allmydata/test/test_backends.py 181
8331-            fakefilepaths[newffp.path] = self
8332+            self.fakefilepaths[newffp.path] = self
8333             self.path = newffp.path
8334     def getsize(self):
8335         return self.fileobject.getsize()
8336hunk ./src/allmydata/test/test_backends.py 225
8337         pass
8338 
8339 
8340-class MockFiles(unittest.TestCase):
8341-    """ I simulate a filesystem that the code under test can use. I flag the
8342-    code under test if it reads or writes outside of its prescribed
8343-    subtree. I simulate just the parts of the filesystem that the current
8344-    implementation of DAS backend needs. """
8345-
8346-    def setUp(self):
8347-        # Make patcher, patch, and make effects for fs using functions.
8348-        msg( "%s.setUp()" % (self,))
8349-        fakefilepaths = {}
8350-        self.storedir = MockFilePath('teststoredir')
8351-        self.basedir = self.storedir.child('shares')
8352-        self.baseincdir = self.basedir.child('incoming')
8353-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8354-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8355-        self.shareincomingname = self.sharedirincomingname.child('0')
8356-        self.sharefinalname = self.sharedirfinalname.child('0')
8357-
8358-        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
8359-        FakePath = self.FilePathFake.__enter__()
8360-
8361-        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
8362-        FakeBCC = self.BCountingCrawler.__enter__()
8363-        FakeBCC.side_effect = self.call_FakeBCC
8364-
8365-        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
8366-        FakeLCC = self.LeaseCheckingCrawler.__enter__()
8367-        FakeLCC.side_effect = self.call_FakeLCC
8368-
8369-        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
8370-        GetSpace = self.get_available_space.__enter__()
8371-        GetSpace.side_effect = self.call_get_available_space
8372-
8373-        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
8374-        getsize = self.statforsize.__enter__()
8375-        getsize.side_effect = self.call_statforsize
8376-
8377-    def call_statforsize(self, fakefpname):
8378-        return fakefilepaths[fakefpname].fileobject.size()
8379-
8380-    def call_FakeBCC(self, StateFile):
8381-        return MockBCC()
8382-
8383-    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
8384-        return MockLCC()
8385 
8386hunk ./src/allmydata/test/test_backends.py 226
8387-    def call_listdir(self, fname):
8388-        fnamefp = FilePath(fname)
8389-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8390-                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8391-
8392-    def call_stat(self, fname):
8393-        assert isinstance(fname, basestring), fname
8394-        fnamefp = FilePath(fname)
8395-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8396-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8397-        msg("%s.call_stat(%s)" % (self, fname,))
8398-        mstat = MockStat()
8399-        mstat.st_mode = 16893 # a directory
8400-        return mstat
8401-
8402-    def call_get_available_space(self, storedir, reservedspace):
8403-        # The input vector has an input size of 85.
8404-        return 85 - reservedspace
8405-
8406-    def call_exists(self):
8407-        # I'm only called in the ImmutableShareFile constructor.
8408-        return False
8409-
8410-    def call_setContent(self, inputstring):
8411-        self.incomingsharefilecontents = TemporaryFile(inputstring)
8412-
8413-    def tearDown(self):
8414-        msg( "%s.tearDown()" % (self,))
8415-        FakePath = self.FilePathFake.__exit__()       
8416-        FakeBCC = self.BCountingCrawler.__exit__()
8417-        getsize = self.statforsize.__exit__()
8418-        fakefilepaths = {}
8419 
8420 expiration_policy = {'enabled' : False,
8421                      'mode' : 'age',
8422}
8423[current test_backend tests pass
8424wilcoxjg@gmail.com**20110730034159
8425 Ignore-this: 4bcf2566404f7b38c464512b82e8b722
8426] {
8427hunk ./src/allmydata/test/test_backends.py 7
8428 from allmydata.util.log import msg
8429 from tempfile import TemporaryFile
8430 from allmydata.test.common_util import ReallyEqualMixin
8431-from allmydata.util.assertutil import _assert
8432 import mock
8433hunk ./src/allmydata/test/test_backends.py 8
8434-from mock import Mock
8435-
8436 # This is the code that we're going to be testing.
8437 from allmydata.storage.server import StorageServer
8438 from allmydata.storage.backends.das.core import DASCore
8439hunk ./src/allmydata/test/test_backends.py 13
8440 from allmydata.storage.backends.null.core import NullCore
8441 from allmydata.storage.common import si_si2dir
8442-
8443 # The following share file content was generated with
8444 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
8445 # with share data == 'a'. The total size of this input
8446hunk ./src/allmydata/test/test_backends.py 31
8447     cancelsecret + expirationtime + nextlease
8448 share_data = containerdata + client_data
8449 testnodeid = 'testnodeidxxxxxxxxxx'
8450+expiration_policy = {'enabled' : False,
8451+                     'mode' : 'age',
8452+                     'override_lease_duration' : None,
8453+                     'cutoff_date' : None,
8454+                     'sharetypes' : None}
8455 
8456 
8457 class MockFiles(unittest.TestCase):
8458hunk ./src/allmydata/test/test_backends.py 75
8459         getsize = self.statforsize.__enter__()
8460         getsize.side_effect = self.call_statforsize
8461 
8462-    def call_statforsize(self, fakefpname):
8463-        return self.fakefilepaths[fakefpname].fileobject.size()
8464-
8465     def call_FakeBCC(self, StateFile):
8466         return MockBCC()
8467 
8468hunk ./src/allmydata/test/test_backends.py 81
8469     def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
8470         return MockLCC()
8471 
8472-    def call_listdir(self, fname):
8473-        fnamefp = FilePath(fname)
8474-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8475-                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8476-
8477-    def call_stat(self, fname):
8478-        assert isinstance(fname, basestring), fname
8479-        fnamefp = FilePath(fname)
8480-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8481-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8482-        msg("%s.call_stat(%s)" % (self, fname,))
8483-        mstat = MockStat()
8484-        mstat.st_mode = 16893 # a directory
8485-        return mstat
8486-
8487     def call_get_available_space(self, storedir, reservedspace):
8488         # The input vector has an input size of 85.
8489         return 85 - reservedspace
8490hunk ./src/allmydata/test/test_backends.py 85
8491 
8492-    def call_exists(self):
8493-        # I'm only called in the ImmutableShareFile constructor.
8494-        return False
8495-
8496-    def call_setContent(self, inputstring):
8497-        self.incomingsharefilecontents = TemporaryFile(inputstring)
8498+    def call_statforsize(self, fakefpname):
8499+        return self.fakefilepaths[fakefpname].fileobject.size()
8500 
8501     def tearDown(self):
8502         msg( "%s.tearDown()" % (self,))
8503hunk ./src/allmydata/test/test_backends.py 91
8504         FakePath = self.FilePathFake.__exit__()       
8505-        FakeBCC = self.BCountingCrawler.__exit__()
8506-        getsize = self.statforsize.__exit__()
8507         self.fakefilepaths = {}
8508 
8509hunk ./src/allmydata/test/test_backends.py 93
8510-class MockStat:
8511-    def __init__(self):
8512-        self.st_mode = None
8513-
8514 
8515 class MockFilePath:
8516     def __init__(self, pathstring, ffpathsenvironment):
8517hunk ./src/allmydata/test/test_backends.py 128
8518     def exists(self):
8519         return False
8520     def open(self, mode='r'):
8521+        # XXX Makes no use of mode.
8522         return self.fileobject.open(mode)
8523hunk ./src/allmydata/test/test_backends.py 130
8524-    def setparents(self):
8525+    def parents(self):
8526         antecedents = []
8527         def f(fps, antecedents):
8528             newfps = os.path.split(fps)[0]
8529hunk ./src/allmydata/test/test_backends.py 138
8530                 antecedents.append(newfps)
8531                 f(newfps, antecedents)
8532         f(self.path, antecedents)
8533-        for fps in antecedents:
8534+        return antecedents
8535+    def setparents(self):
8536+        for fps in self.parents():
8537             if not self.fakefilepaths.has_key(fps):
8538                 self.fakefilepaths[fps] = MockFilePath(fps, self.fakefilepaths)
8539     def setContent(self, contentstring):
8540hunk ./src/allmydata/test/test_backends.py 187
8541     def size(self):
8542         # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat!
8543         # XXX Finally we shall hopefully use a getsize method soon, must consult first though.
8544+        # Hmmm...  perhaps we need to sometimes stat the address when there's not a mockfileobject present?
8545         return {stat.ST_SIZE:len(self.buffer)}
8546     def getsize(self):
8547         return len(self.buffer)
8548hunk ./src/allmydata/test/test_backends.py 202
8549         pass
8550 
8551 
8552-
8553-
8554-expiration_policy = {'enabled' : False,
8555-                     'mode' : 'age',
8556-                     'override_lease_duration' : None,
8557-                     'cutoff_date' : None,
8558-                     'sharetypes' : None}
8559-
8560-
8561 class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
8562     """ NullBackend is just for testing and executable documentation, so
8563     this test is actually a test of StorageServer in which we're using
8564hunk ./src/allmydata/test/test_backends.py 314
8565         stack of code. This one is for exercising just the
8566         StorageServer object. """
8567 
8568+        # Contruct a file with the appropriate contents in the mockfilesystem.
8569         datalen = len(share_data)
8570         finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(3))
8571         finalhome.setContent(share_data)
8572hunk ./src/allmydata/test/test_backends.py 330
8573         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
8574         # If you start reading past the end of the file you get the empty string.
8575         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
8576-
8577-
8578-class TestBackendConstruction(MockFiles, ReallyEqualMixin):
8579-    def test_create_fs_backend(self):
8580-        """ This tests whether a file system backend instance can be
8581-        constructed. To pass the test, it has to use the
8582-        filesystem in only the prescribed ways. """
8583-
8584-        # Now begin the test.
8585-        DASCore(self.storedir, expiration_policy)
8586}
8587
8588Context:
8589
8590[cli: make 'tahoe cp' overwrite mutable files in-place
8591Kevan Carstensen <kevan@isnotajoke.com>**20110729202039
8592 Ignore-this: b2ad21a19439722f05c49bfd35b01855
8593]
8594[SFTP: write an error message to standard error for unrecognized shell commands. Change the existing message for shell sessions to be written to standard error, and refactor some duplicated code. Also change the lines of the error messages to end in CRLF, and take into account Kevan's review comments. fixes #1442, #1446
8595david-sarah@jacaranda.org**20110729233102
8596 Ignore-this: d2f2bb4664f25007d1602bf7333e2cdd
8597]
8598[src/allmydata/scripts/cli.py: fix pyflakes warning.
8599david-sarah@jacaranda.org**20110728021402
8600 Ignore-this: 94050140ddb99865295973f49927c509
8601]
8602[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
8603david-sarah@jacaranda.org**20110724225440
8604 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
8605]
8606[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
8607david-sarah@jacaranda.org**20110629185356
8608 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
8609]
8610[docs/man/tahoe.1: add man page. fixes #1420
8611david-sarah@jacaranda.org**20110724171728
8612 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
8613]
8614[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
8615david-sarah@jacaranda.org**20110721234941
8616 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
8617]
8618[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
8619david-sarah@jacaranda.org**20110722000320
8620 Ignore-this: 55cd558b791526113db3f83c00ec328a
8621]
8622[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
8623david-sarah@jacaranda.org**20110721233658
8624 Ignore-this: 81b41745477163c9b39c0b59db91cc62
8625]
8626[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
8627david-sarah@jacaranda.org**20110722035402
8628 Ignore-this: 5d03f544c4154f088e26c7107494bf39
8629]
8630[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
8631david-sarah@jacaranda.org**20110722024907
8632 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
8633]
8634[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
8635david-sarah@jacaranda.org**20110718005949
8636 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
8637]
8638[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
8639david-sarah@jacaranda.org**20110717194315
8640 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
8641]
8642[README.txt: say that quickstart.rst is in the docs directory.
8643david-sarah@jacaranda.org**20110717192400
8644 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
8645]
8646[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
8647zooko@zooko.com**20110717114226
8648 Ignore-this: df222120d41447ce4102616921626c82
8649 fixes #1383
8650]
8651[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
8652david-sarah@jacaranda.org**20110716181813
8653 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
8654]
8655[docs: add missing link in NEWS.rst
8656zooko@zooko.com**20110712153307
8657 Ignore-this: be7b7eb81c03700b739daa1027d72b35
8658]
8659[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
8660zooko@zooko.com**20110712153229
8661 Ignore-this: 723c4f9e2211027c79d711715d972c5
8662 Also remove a couple of vestigial references to figleaf, which is long gone.
8663 fixes #1409 (remove contrib/fuse)
8664]
8665[add Protovis.js-based download-status timeline visualization
8666Brian Warner <warner@lothar.com>**20110629222606
8667 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
8668 
8669 provide status overlap info on the webapi t=json output, add decode/decrypt
8670 rate tooltips, add zoomin/zoomout buttons
8671]
8672[add more download-status data, fix tests
8673Brian Warner <warner@lothar.com>**20110629222555
8674 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
8675]
8676[prepare for viz: improve DownloadStatus events
8677Brian Warner <warner@lothar.com>**20110629222542
8678 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
8679 
8680 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
8681]
8682[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
8683zooko@zooko.com**20110629185711
8684 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
8685]
8686[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
8687david-sarah@jacaranda.org**20110130235809
8688 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
8689]
8690[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
8691david-sarah@jacaranda.org**20110626054124
8692 Ignore-this: abb864427a1b91bd10d5132b4589fd90
8693]
8694[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
8695david-sarah@jacaranda.org**20110623205528
8696 Ignore-this: c63e23146c39195de52fb17c7c49b2da
8697]
8698[Rename test_package_initialization.py to (much shorter) test_import.py .
8699Brian Warner <warner@lothar.com>**20110611190234
8700 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
8701 
8702 The former name was making my 'ls' listings hard to read, by forcing them
8703 down to just two columns.
8704]
8705[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
8706zooko@zooko.com**20110611163741
8707 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
8708 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
8709 fixes #1412
8710]
8711[wui: right-align the size column in the WUI
8712zooko@zooko.com**20110611153758
8713 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
8714 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
8715 fixes #1412
8716]
8717[docs: three minor fixes
8718zooko@zooko.com**20110610121656
8719 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
8720 CREDITS for arc for stats tweak
8721 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
8722 English usage tweak
8723]
8724[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
8725david-sarah@jacaranda.org**20110609223719
8726 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
8727]
8728[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
8729wilcoxjg@gmail.com**20110527120135
8730 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
8731 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
8732 NEWS.rst, stats.py: documentation of change to get_latencies
8733 stats.rst: now documents percentile modification in get_latencies
8734 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
8735 fixes #1392
8736]
8737[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
8738david-sarah@jacaranda.org**20110517011214
8739 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
8740]
8741[docs: convert NEWS to NEWS.rst and change all references to it.
8742david-sarah@jacaranda.org**20110517010255
8743 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
8744]
8745[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
8746david-sarah@jacaranda.org**20110512140559
8747 Ignore-this: 784548fc5367fac5450df1c46890876d
8748]
8749[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
8750david-sarah@jacaranda.org**20110130164923
8751 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
8752]
8753[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
8754zooko@zooko.com**20110128142006
8755 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
8756 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
8757]
8758[M-x whitespace-cleanup
8759zooko@zooko.com**20110510193653
8760 Ignore-this: dea02f831298c0f65ad096960e7df5c7
8761]
8762[docs: fix typo in running.rst, thanks to arch_o_median
8763zooko@zooko.com**20110510193633
8764 Ignore-this: ca06de166a46abbc61140513918e79e8
8765]
8766[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
8767david-sarah@jacaranda.org**20110204204902
8768 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
8769]
8770[relnotes.txt: forseeable -> foreseeable. refs #1342
8771david-sarah@jacaranda.org**20110204204116
8772 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
8773]
8774[replace remaining .html docs with .rst docs
8775zooko@zooko.com**20110510191650
8776 Ignore-this: d557d960a986d4ac8216d1677d236399
8777 Remove install.html (long since deprecated).
8778 Also replace some obsolete references to install.html with references to quickstart.rst.
8779 Fix some broken internal references within docs/historical/historical_known_issues.txt.
8780 Thanks to Ravi Pinjala and Patrick McDonald.
8781 refs #1227
8782]
8783[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
8784zooko@zooko.com**20110428055232
8785 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
8786]
8787[munin tahoe_files plugin: fix incorrect file count
8788francois@ctrlaltdel.ch**20110428055312
8789 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
8790 fixes #1391
8791]
8792[corrected "k must never be smaller than N" to "k must never be greater than N"
8793secorp@allmydata.org**20110425010308
8794 Ignore-this: 233129505d6c70860087f22541805eac
8795]
8796[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
8797david-sarah@jacaranda.org**20110411190738
8798 Ignore-this: 7847d26bc117c328c679f08a7baee519
8799]
8800[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
8801david-sarah@jacaranda.org**20110410155844
8802 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
8803]
8804[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
8805david-sarah@jacaranda.org**20110410155705
8806 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
8807]
8808[remove unused variable detected by pyflakes
8809zooko@zooko.com**20110407172231
8810 Ignore-this: 7344652d5e0720af822070d91f03daf9
8811]
8812[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
8813david-sarah@jacaranda.org**20110401202750
8814 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
8815]
8816[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
8817Brian Warner <warner@lothar.com>**20110325232511
8818 Ignore-this: d5307faa6900f143193bfbe14e0f01a
8819]
8820[control.py: remove all uses of s.get_serverid()
8821warner@lothar.com**20110227011203
8822 Ignore-this: f80a787953bd7fa3d40e828bde00e855
8823]
8824[web: remove some uses of s.get_serverid(), not all
8825warner@lothar.com**20110227011159
8826 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
8827]
8828[immutable/downloader/fetcher.py: remove all get_serverid() calls
8829warner@lothar.com**20110227011156
8830 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
8831]
8832[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
8833warner@lothar.com**20110227011153
8834 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
8835 
8836 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
8837 _shares_from_server dict was being popped incorrectly (using shnum as the
8838 index instead of serverid). I'm still thinking through the consequences of
8839 this bug. It was probably benign and really hard to detect. I think it would
8840 cause us to incorrectly believe that we're pulling too many shares from a
8841 server, and thus prefer a different server rather than asking for a second
8842 share from the first server. The diversity code is intended to spread out the
8843 number of shares simultaneously being requested from each server, but with
8844 this bug, it might be spreading out the total number of shares requested at
8845 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
8846 segment, so the effect doesn't last very long).
8847]
8848[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
8849warner@lothar.com**20110227011150
8850 Ignore-this: d8d56dd8e7b280792b40105e13664554
8851 
8852 test_download.py: create+check MyShare instances better, make sure they share
8853 Server objects, now that finder.py cares
8854]
8855[immutable/downloader/finder.py: reduce use of get_serverid(), one left
8856warner@lothar.com**20110227011146
8857 Ignore-this: 5785be173b491ae8a78faf5142892020
8858]
8859[immutable/offloaded.py: reduce use of get_serverid() a bit more
8860warner@lothar.com**20110227011142
8861 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
8862]
8863[immutable/upload.py: reduce use of get_serverid()
8864warner@lothar.com**20110227011138
8865 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
8866]
8867[immutable/checker.py: remove some uses of s.get_serverid(), not all
8868warner@lothar.com**20110227011134
8869 Ignore-this: e480a37efa9e94e8016d826c492f626e
8870]
8871[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
8872warner@lothar.com**20110227011132
8873 Ignore-this: 6078279ddf42b179996a4b53bee8c421
8874 MockIServer stubs
8875]
8876[upload.py: rearrange _make_trackers a bit, no behavior changes
8877warner@lothar.com**20110227011128
8878 Ignore-this: 296d4819e2af452b107177aef6ebb40f
8879]
8880[happinessutil.py: finally rename merge_peers to merge_servers
8881warner@lothar.com**20110227011124
8882 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
8883]
8884[test_upload.py: factor out FakeServerTracker
8885warner@lothar.com**20110227011120
8886 Ignore-this: 6c182cba90e908221099472cc159325b
8887]
8888[test_upload.py: server-vs-tracker cleanup
8889warner@lothar.com**20110227011115
8890 Ignore-this: 2915133be1a3ba456e8603885437e03
8891]
8892[happinessutil.py: server-vs-tracker cleanup
8893warner@lothar.com**20110227011111
8894 Ignore-this: b856c84033562d7d718cae7cb01085a9
8895]
8896[upload.py: more tracker-vs-server cleanup
8897warner@lothar.com**20110227011107
8898 Ignore-this: bb75ed2afef55e47c085b35def2de315
8899]
8900[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
8901warner@lothar.com**20110227011103
8902 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
8903]
8904[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
8905warner@lothar.com**20110227011100
8906 Ignore-this: 7ea858755cbe5896ac212a925840fe68
8907 
8908 No behavioral changes, just updating variable/method names and log messages.
8909 The effects outside these three files should be minimal: some exception
8910 messages changed (to say "server" instead of "peer"), and some internal class
8911 names were changed. A few things still use "peer" to minimize external
8912 changes, like UploadResults.timings["peer_selection"] and
8913 happinessutil.merge_peers, which can be changed later.
8914]
8915[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
8916warner@lothar.com**20110227011056
8917 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
8918]
8919[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
8920warner@lothar.com**20110227011051
8921 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
8922]
8923[test: increase timeout on a network test because Francois's ARM machine hit that timeout
8924zooko@zooko.com**20110317165909
8925 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
8926 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
8927]
8928[docs/configuration.rst: add a "Frontend Configuration" section
8929Brian Warner <warner@lothar.com>**20110222014323
8930 Ignore-this: 657018aa501fe4f0efef9851628444ca
8931 
8932 this points to docs/frontends/*.rst, which were previously underlinked
8933]
8934[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
8935"Brian Warner <warner@lothar.com>"**20110221061544
8936 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
8937]
8938[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
8939david-sarah@jacaranda.org**20110221015817
8940 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
8941]
8942[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
8943david-sarah@jacaranda.org**20110221020125
8944 Ignore-this: b0744ed58f161bf188e037bad077fc48
8945]
8946[Refactor StorageFarmBroker handling of servers
8947Brian Warner <warner@lothar.com>**20110221015804
8948 Ignore-this: 842144ed92f5717699b8f580eab32a51
8949 
8950 Pass around IServer instance instead of (peerid, rref) tuple. Replace
8951 "descriptor" with "server". Other replacements:
8952 
8953  get_all_servers -> get_connected_servers/get_known_servers
8954  get_servers_for_index -> get_servers_for_psi (now returns IServers)
8955 
8956 This change still needs to be pushed further down: lots of code is now
8957 getting the IServer and then distributing (peerid, rref) internally.
8958 Instead, it ought to distribute the IServer internally and delay
8959 extracting a serverid or rref until the last moment.
8960 
8961 no_network.py was updated to retain parallelism.
8962]
8963[TAG allmydata-tahoe-1.8.2
8964warner@lothar.com**20110131020101]
8965Patch bundle hash:
8966759af3db235ac9d7a57837c385ffc2ca9c87012a