Ticket #999: TestServerandFSBackPasses_Zancas20110729.darcs.patch

File TestServerandFSBackPasses_Zancas20110729.darcs.patch, 383.5 KB (added by Zancas at 2011-07-30T00:59:39Z)

TestServerAndFSBackend passes all (3) tests

1Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
2  * storage: new mocking tests of storage server read and write
3  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
4
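
    A minimal sketch of the mocking approach this first patch describes, assuming only the `mock` library that the patch itself imports: Python 2's `__builtin__.open` is patched so the code under test never touches a real filesystem, and the test controls exactly what an attempted open() returns.

        import mock
        from StringIO import StringIO

        def read_greeting(fname):
            # code under test: would normally hit the real filesystem
            f = open(fname)
            try:
                return f.read()
            finally:
                f.close()

        # Patch the builtin open() (the __builtin__ module, as in the patch)
        # so no real file is ever opened.
        with mock.patch('__builtin__.open') as mockopen:
            mockopen.return_value = StringIO('hello')
            assert read_greeting('does/not/exist') == 'hello'
            mockopen.assert_called_once_with('does/not/exist')
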
5Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
6  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
7  sloppy not for production
8
9Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
10  * a temp patch used as a snapshot
11
12Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
13  * snapshot of progress on backend implementation (not suitable for trunk)
14
15Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
16  * checkpoint patch
17
18Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
19  * checkpoint4
20
21Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
22  * checkpoint5
23
24Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
25  * checkpoint 6
26
27Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
28  * checkpoint 7
29
30Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
31  * checkpoint8
32    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
33
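
    A rough illustration of the "mock-like object" idea behind the null backend, again assuming only the `mock` library: it answers None for available space (read as "no limit") and owns no shares, so tests can exercise the unlimited-space path without any disk.

        import mock

        # Hypothetical stand-in for the NullBackend introduced later in this patch.
        nullbackend = mock.Mock()
        nullbackend.get_available_space.return_value = None   # None is treated as "unlimited"
        nullbackend.get_bucket_shares.return_value = set()    # a null backend holds no shares

        assert nullbackend.get_available_space() is None
        assert nullbackend.get_bucket_shares('a' * 16) == set()
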
34Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
35  * checkpoint 9
36
37Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
38  * checkpoint10
39
40Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
41  * jacp 11
42
43Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
44  * checkpoint12 testing correct behavior with regard to incoming and final
45
46Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
47  * fix inconsistent naming of storage_index vs storageindex in storage/server.py
48
49Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
50  * adding comments to clarify what I'm about to do.
51
52Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
53  * branching back, no longer attempting to mock inside TestServerFSBackend
54
55Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
56  * checkpoint12 TestServerFSBackend no longer mocks filesystem
57
58Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
59  * JACP
60
61Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
62  * testing get incoming
63
64Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
65  * ImmutableShareFile does not know its StorageIndex
66
67Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
68  * get_incoming correctly reports the 0 share after it has arrived
69
70Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
71  * jacp14
72
73Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
74  * jacp14 or so
75
76Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
77  * temporary work-in-progress patch to be unrecorded
78  tidy up a few tests, work done in pair-programming with Zancas
79
80Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
81  * work in progress intended to be unrecorded and never committed to trunk
82  switch from os.path.join to filepath
83  incomplete refactoring of common "stay in your subtree" tester code into a superclass
84 
85
86Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
87  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
88  In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
89
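
    A small sketch of the conversion this note describes, from os.path.join-style string handling to twisted.python.filepath.FilePath. The directory names here are illustrative only, not taken from the patch.

        import os
        from twisted.python.filepath import FilePath

        storedir = 'teststoredir'  # illustrative name

        # os.path style: paths are plain strings, joined by hand.
        incoming_str = os.path.join(storedir, 'shares', 'incoming')

        # FilePath style: the path is an object; children, existence checks and
        # directory creation hang off of it, which is what makes the
        # "stay in your subtree" checks easier to factor into a superclass.
        incoming_fp = FilePath(storedir).child('shares').child('incoming')
        assert incoming_fp.path == os.path.abspath(incoming_str)
        if not incoming_fp.exists():
            incoming_fp.makedirs()
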
90Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
91  * another temporary patch for sharing work-in-progress
92  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
93  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
94  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units insofar as possible...)
95 
96
97Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
98  * jacp16 or so
99
100Fri Jul 22 14:32:44 MDT 2011  wilcoxjg@gmail.com
101  * jacp17
102
103Fri Jul 22 21:19:15 MDT 2011  wilcoxjg@gmail.com
104  * jacp18
105
106Sat Jul 23 21:42:30 MDT 2011  wilcoxjg@gmail.com
107  * jacp19orso
108
109Wed Jul 27 02:05:53 MDT 2011  wilcoxjg@gmail.com
110  * jacp19
111
112Thu Jul 28 01:25:14 MDT 2011  wilcoxjg@gmail.com
113  * jacp20
114
115Thu Jul 28 22:38:30 MDT 2011  wilcoxjg@gmail.com
116  * Completed FilePath based test_write_and_read_share
117
118Fri Jul 29 17:53:56 MDT 2011  wilcoxjg@gmail.com
119  * TestServerAndFSBackend.test_read_old_share passes
120
121Fri Jul 29 19:00:25 MDT 2011  wilcoxjg@gmail.com
122  * TestServerAndFSBackend passes in total!
123
124New patches:
125
126[storage: new mocking tests of storage server read and write
127wilcoxjg@gmail.com**20110325203514
128 Ignore-this: df65c3c4f061dd1516f88662023fdb41
129 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
130] {
131addfile ./src/allmydata/test/test_server.py
132hunk ./src/allmydata/test/test_server.py 1
133+from twisted.trial import unittest
134+
135+from StringIO import StringIO
136+
137+from allmydata.test.common_util import ReallyEqualMixin
138+
139+import mock
140+
141+# This is the code that we're going to be testing.
142+from allmydata.storage.server import StorageServer
143+
144+# The following share file contents was generated with
145+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
146+# with share data == 'a'.
147+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
148+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
149+
150+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
151+
152+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
153+    @mock.patch('__builtin__.open')
154+    def test_create_server(self, mockopen):
155+        """ This tests whether a server instance can be constructed. """
156+
157+        def call_open(fname, mode):
158+            if fname == 'testdir/bucket_counter.state':
159+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
160+            elif fname == 'testdir/lease_checker.state':
161+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
162+            elif fname == 'testdir/lease_checker.history':
163+                return StringIO()
164+        mockopen.side_effect = call_open
165+
166+        # Now begin the test.
167+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
168+
169+        # You passed!
170+
171+class TestServer(unittest.TestCase, ReallyEqualMixin):
172+    @mock.patch('__builtin__.open')
173+    def setUp(self, mockopen):
174+        def call_open(fname, mode):
175+            if fname == 'testdir/bucket_counter.state':
176+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
177+            elif fname == 'testdir/lease_checker.state':
178+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
179+            elif fname == 'testdir/lease_checker.history':
180+                return StringIO()
181+        mockopen.side_effect = call_open
182+
183+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
184+
185+
186+    @mock.patch('time.time')
187+    @mock.patch('os.mkdir')
188+    @mock.patch('__builtin__.open')
189+    @mock.patch('os.listdir')
190+    @mock.patch('os.path.isdir')
191+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
192+        """Handle a report of corruption."""
193+
194+        def call_listdir(dirname):
195+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
196+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
197+
198+        mocklistdir.side_effect = call_listdir
199+
200+        class MockFile:
201+            def __init__(self):
202+                self.buffer = ''
203+                self.pos = 0
204+            def write(self, instring):
205+                begin = self.pos
206+                padlen = begin - len(self.buffer)
207+                if padlen > 0:
208+                    self.buffer += '\x00' * padlen
209+                end = self.pos + len(instring)
210+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
211+                self.pos = end
212+            def close(self):
213+                pass
214+            def seek(self, pos):
215+                self.pos = pos
216+            def read(self, numberbytes):
217+                return self.buffer[self.pos:self.pos+numberbytes]
218+            def tell(self):
219+                return self.pos
220+
221+        mocktime.return_value = 0
222+
223+        sharefile = MockFile()
224+        def call_open(fname, mode):
225+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
226+            return sharefile
227+
228+        mockopen.side_effect = call_open
229+        # Now begin the test.
230+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
231+        print bs
232+        bs[0].remote_write(0, 'a')
233+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
234+
235+
236+    @mock.patch('os.path.exists')
237+    @mock.patch('os.path.getsize')
238+    @mock.patch('__builtin__.open')
239+    @mock.patch('os.listdir')
240+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
241+        """ This tests whether the code correctly finds and reads
242+        shares written out by old (Tahoe-LAFS <= v1.8.2)
243+        servers. There is a similar test in test_download, but that one
244+        is from the perspective of the client and exercises a deeper
245+        stack of code. This one is for exercising just the
246+        StorageServer object. """
247+
248+        def call_listdir(dirname):
249+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
250+            return ['0']
251+
252+        mocklistdir.side_effect = call_listdir
253+
254+        def call_open(fname, mode):
255+            self.failUnlessReallyEqual(fname, sharefname)
256+            self.failUnless('r' in mode, mode)
257+            self.failUnless('b' in mode, mode)
258+
259+            return StringIO(share_file_data)
260+        mockopen.side_effect = call_open
261+
262+        datalen = len(share_file_data)
263+        def call_getsize(fname):
264+            self.failUnlessReallyEqual(fname, sharefname)
265+            return datalen
266+        mockgetsize.side_effect = call_getsize
267+
268+        def call_exists(fname):
269+            self.failUnlessReallyEqual(fname, sharefname)
270+            return True
271+        mockexists.side_effect = call_exists
272+
273+        # Now begin the test.
274+        bs = self.s.remote_get_buckets('teststorage_index')
275+
276+        self.failUnlessEqual(len(bs), 1)
277+        b = bs[0]
278+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
279+        # If you try to read past the end you get as much data as is there.
280+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
281+        # If you start reading past the end of the file you get the empty string.
282+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
283}
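
    To make the constants asserted in the tests above easier to follow, here is a short decoding sketch. It assumes only the Python standard library, and that Tahoe's lowercase base32 alphabet matches the stdlib's RFC 3548 alphabet lowercased; the header fields follow the v1 immutable share layout documented later in this patch series (4-byte version, 4-byte saturated data length, 4-byte lease count, then share data and lease records).

        import struct
        from base64 import b32encode

        share_data = 'a\x00\x00\x00\x00' + 'x' * 32 + 'y' * 32 + '\x00(\xde\x80'
        share_file_data = '\x00\x00\x00\x01' * 3 + share_data

        # v1 immutable share header: version, (saturated) data length, lease count.
        version, length, num_leases = struct.unpack(">LLL", share_file_data[:0xc])
        assert (version, length, num_leases) == (1, 1, 1)

        # After the single byte of share data ('a') comes one lease record:
        # owner number, renew secret, cancel secret, expiration time.
        owner, renew, cancel, expiration = struct.unpack(
            ">L32s32sL", share_file_data[0xc + length:])
        assert (owner, renew, cancel) == (0, 'x' * 32, 'y' * 32)
        assert expiration == 0x0028de80   # 2678400 seconds = 31 days

        # The on-disk location the tests assert: a two-character prefix dir,
        # then the full lowercase base32 of the storage index, then the share number.
        si = 'teststorage_index'
        sidir = b32encode(si).rstrip('=').lower()
        assert sidir == 'orsxg5dtorxxeylhmvpws3temv4a'
        assert 'testdir/shares/%s/%s/0' % (sidir[:2], sidir) == \
               'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
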
284[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
285wilcoxjg@gmail.com**20110624202850
286 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
287 sloppy not for production
288] {
289move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
290hunk ./src/allmydata/storage/crawler.py 13
291     pass
292 
293 class ShareCrawler(service.MultiService):
294-    """A ShareCrawler subclass is attached to a StorageServer, and
295+    """A subclass of ShareCrawler is attached to a StorageServer, and
296     periodically walks all of its shares, processing each one in some
297     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
298     since large servers can easily have a terabyte of shares, in several
299hunk ./src/allmydata/storage/crawler.py 31
300     We assume that the normal upload/download/get_buckets traffic of a tahoe
301     grid will cause the prefixdir contents to be mostly cached in the kernel,
302     or that the number of buckets in each prefixdir will be small enough to
303-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
304+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
305     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
306     prefix. On this server, each prefixdir took 130ms-200ms to list the first
307     time, and 17ms to list the second time.
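
    The 1024 prefixdirs mentioned in that comment come from the crawler's prefix list, visible as context in the crawler hunks just below: the top 10 bits of a 16-bit value are base32-encoded and the first two characters kept. A quick sketch, with the stdlib's base32 (lowercased) standing in for Tahoe's si_b2a, which is an assumption about the alphabet:

        import struct
        from base64 import b32encode

        # Mirror of the crawler's prefix computation.
        prefixes = sorted(set(
            b32encode(struct.pack(">H", i << (16 - 10)))[:2].lower()
            for i in range(2 ** 10)
        ))
        # Ten bits map to two base32 characters, so there are exactly 1024 prefixdirs.
        assert len(prefixes) == 1024
        assert 'or' in prefixes   # the prefix used by the mocked tests above
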
308hunk ./src/allmydata/storage/crawler.py 68
309     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
310     minimum_cycle_time = 300 # don't run a cycle faster than this
311 
312-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
313+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
314         service.MultiService.__init__(self)
315         if allowed_cpu_percentage is not None:
316             self.allowed_cpu_percentage = allowed_cpu_percentage
317hunk ./src/allmydata/storage/crawler.py 72
318-        self.server = server
319-        self.sharedir = server.sharedir
320-        self.statefile = statefile
321+        self.backend = backend
322         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
323                          for i in range(2**10)]
324         self.prefixes.sort()
325hunk ./src/allmydata/storage/crawler.py 446
326 
327     minimum_cycle_time = 60*60 # we don't need this more than once an hour
328 
329-    def __init__(self, server, statefile, num_sample_prefixes=1):
330-        ShareCrawler.__init__(self, server, statefile)
331+    def __init__(self, statefile, num_sample_prefixes=1):
332+        ShareCrawler.__init__(self, statefile)
333         self.num_sample_prefixes = num_sample_prefixes
334 
335     def add_initial_state(self):
336hunk ./src/allmydata/storage/expirer.py 15
337     removed.
338 
339     I collect statistics on the leases and make these available to a web
340-    status page, including::
341+    status page, including:
342 
343     Space recovered during this cycle-so-far:
344      actual (only if expiration_enabled=True):
345hunk ./src/allmydata/storage/expirer.py 51
346     slow_start = 360 # wait 6 minutes after startup
347     minimum_cycle_time = 12*60*60 # not more than twice per day
348 
349-    def __init__(self, server, statefile, historyfile,
350+    def __init__(self, statefile, historyfile,
351                  expiration_enabled, mode,
352                  override_lease_duration, # used if expiration_mode=="age"
353                  cutoff_date, # used if expiration_mode=="cutoff-date"
354hunk ./src/allmydata/storage/expirer.py 71
355         else:
356             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
357         self.sharetypes_to_expire = sharetypes
358-        ShareCrawler.__init__(self, server, statefile)
359+        ShareCrawler.__init__(self, statefile)
360 
361     def add_initial_state(self):
362         # we fill ["cycle-to-date"] here (even though they will be reset in
363hunk ./src/allmydata/storage/immutable.py 44
364     sharetype = "immutable"
365 
366     def __init__(self, filename, max_size=None, create=False):
367-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
368+        """ If max_size is not None then I won't allow more than
369+        max_size to be written to me. If create=True then max_size
370+        must not be None. """
371         precondition((max_size is not None) or (not create), max_size, create)
372         self.home = filename
373         self._max_size = max_size
374hunk ./src/allmydata/storage/immutable.py 87
375 
376     def read_share_data(self, offset, length):
377         precondition(offset >= 0)
378-        # reads beyond the end of the data are truncated. Reads that start
379-        # beyond the end of the data return an empty string. I wonder why
380-        # Python doesn't do the following computation for me?
381+        # Reads beyond the end of the data are truncated. Reads that start
382+        # beyond the end of the data return an empty string.
383         seekpos = self._data_offset+offset
384         fsize = os.path.getsize(self.home)
385         actuallength = max(0, min(length, fsize-seekpos))
386hunk ./src/allmydata/storage/immutable.py 198
387             space_freed += os.stat(self.home)[stat.ST_SIZE]
388             self.unlink()
389         return space_freed
390+class NullBucketWriter(Referenceable):
391+    implements(RIBucketWriter)
392 
393hunk ./src/allmydata/storage/immutable.py 201
394+    def remote_write(self, offset, data):
395+        return
396 
397 class BucketWriter(Referenceable):
398     implements(RIBucketWriter)
399hunk ./src/allmydata/storage/server.py 7
400 from twisted.application import service
401 
402 from zope.interface import implements
403-from allmydata.interfaces import RIStorageServer, IStatsProducer
404+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
405 from allmydata.util import fileutil, idlib, log, time_format
406 import allmydata # for __full_version__
407 
408hunk ./src/allmydata/storage/server.py 16
409 from allmydata.storage.lease import LeaseInfo
410 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
411      create_mutable_sharefile
412-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
413+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
414 from allmydata.storage.crawler import BucketCountingCrawler
415 from allmydata.storage.expirer import LeaseCheckingCrawler
416 
417hunk ./src/allmydata/storage/server.py 20
418+from zope.interface import implements
419+
420+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
421+# be started and stopped.
422+class Backend(service.MultiService):
423+    implements(IStatsProducer)
424+    def __init__(self):
425+        service.MultiService.__init__(self)
426+
427+    def get_bucket_shares(self):
428+        """XXX"""
429+        raise NotImplementedError
430+
431+    def get_share(self):
432+        """XXX"""
433+        raise NotImplementedError
434+
435+    def make_bucket_writer(self):
436+        """XXX"""
437+        raise NotImplementedError
438+
439+class NullBackend(Backend):
440+    def __init__(self):
441+        Backend.__init__(self)
442+
443+    def get_available_space(self):
444+        return None
445+
446+    def get_bucket_shares(self, storage_index):
447+        return set()
448+
449+    def get_share(self, storage_index, sharenum):
450+        return None
451+
452+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
453+        return NullBucketWriter()
454+
455+class FSBackend(Backend):
456+    def __init__(self, storedir, readonly=False, reserved_space=0):
457+        Backend.__init__(self)
458+
459+        self._setup_storage(storedir, readonly, reserved_space)
460+        self._setup_corruption_advisory()
461+        self._setup_bucket_counter()
462+        self._setup_lease_checkerf()
463+
464+    def _setup_storage(self, storedir, readonly, reserved_space):
465+        self.storedir = storedir
466+        self.readonly = readonly
467+        self.reserved_space = int(reserved_space)
468+        if self.reserved_space:
469+            if self.get_available_space() is None:
470+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
471+                        umid="0wZ27w", level=log.UNUSUAL)
472+
473+        self.sharedir = os.path.join(self.storedir, "shares")
474+        fileutil.make_dirs(self.sharedir)
475+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
476+        self._clean_incomplete()
477+
478+    def _clean_incomplete(self):
479+        fileutil.rm_dir(self.incomingdir)
480+        fileutil.make_dirs(self.incomingdir)
481+
482+    def _setup_corruption_advisory(self):
483+        # we don't actually create the corruption-advisory dir until necessary
484+        self.corruption_advisory_dir = os.path.join(self.storedir,
485+                                                    "corruption-advisories")
486+
487+    def _setup_bucket_counter(self):
488+        statefile = os.path.join(self.storedir, "bucket_counter.state")
489+        self.bucket_counter = BucketCountingCrawler(statefile)
490+        self.bucket_counter.setServiceParent(self)
491+
492+    def _setup_lease_checkerf(self):
493+        statefile = os.path.join(self.storedir, "lease_checker.state")
494+        historyfile = os.path.join(self.storedir, "lease_checker.history")
495+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
496+                                   expiration_enabled, expiration_mode,
497+                                   expiration_override_lease_duration,
498+                                   expiration_cutoff_date,
499+                                   expiration_sharetypes)
500+        self.lease_checker.setServiceParent(self)
501+
502+    def get_available_space(self):
503+        if self.readonly:
504+            return 0
505+        return fileutil.get_available_space(self.storedir, self.reserved_space)
506+
507+    def get_bucket_shares(self, storage_index):
508+        """Return a list of (shnum, pathname) tuples for files that hold
509+        shares for this storage_index. In each tuple, 'shnum' will always be
510+        the integer form of the last component of 'pathname'."""
511+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
512+        try:
513+            for f in os.listdir(storagedir):
514+                if NUM_RE.match(f):
515+                    filename = os.path.join(storagedir, f)
516+                    yield (int(f), filename)
517+        except OSError:
518+            # Commonly caused by there being no buckets at all.
519+            pass
520+
521 # storage/
522 # storage/shares/incoming
523 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
524hunk ./src/allmydata/storage/server.py 143
525     name = 'storage'
526     LeaseCheckerClass = LeaseCheckingCrawler
527 
528-    def __init__(self, storedir, nodeid, reserved_space=0,
529-                 discard_storage=False, readonly_storage=False,
530+    def __init__(self, nodeid, backend, reserved_space=0,
531+                 readonly_storage=False,
532                  stats_provider=None,
533                  expiration_enabled=False,
534                  expiration_mode="age",
535hunk ./src/allmydata/storage/server.py 155
536         assert isinstance(nodeid, str)
537         assert len(nodeid) == 20
538         self.my_nodeid = nodeid
539-        self.storedir = storedir
540-        sharedir = os.path.join(storedir, "shares")
541-        fileutil.make_dirs(sharedir)
542-        self.sharedir = sharedir
543-        # we don't actually create the corruption-advisory dir until necessary
544-        self.corruption_advisory_dir = os.path.join(storedir,
545-                                                    "corruption-advisories")
546-        self.reserved_space = int(reserved_space)
547-        self.no_storage = discard_storage
548-        self.readonly_storage = readonly_storage
549         self.stats_provider = stats_provider
550         if self.stats_provider:
551             self.stats_provider.register_producer(self)
552hunk ./src/allmydata/storage/server.py 158
553-        self.incomingdir = os.path.join(sharedir, 'incoming')
554-        self._clean_incomplete()
555-        fileutil.make_dirs(self.incomingdir)
556         self._active_writers = weakref.WeakKeyDictionary()
557hunk ./src/allmydata/storage/server.py 159
558+        self.backend = backend
559+        self.backend.setServiceParent(self)
560         log.msg("StorageServer created", facility="tahoe.storage")
561 
562hunk ./src/allmydata/storage/server.py 163
563-        if reserved_space:
564-            if self.get_available_space() is None:
565-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
566-                        umin="0wZ27w", level=log.UNUSUAL)
567-
568         self.latencies = {"allocate": [], # immutable
569                           "write": [],
570                           "close": [],
571hunk ./src/allmydata/storage/server.py 174
572                           "renew": [],
573                           "cancel": [],
574                           }
575-        self.add_bucket_counter()
576-
577-        statefile = os.path.join(self.storedir, "lease_checker.state")
578-        historyfile = os.path.join(self.storedir, "lease_checker.history")
579-        klass = self.LeaseCheckerClass
580-        self.lease_checker = klass(self, statefile, historyfile,
581-                                   expiration_enabled, expiration_mode,
582-                                   expiration_override_lease_duration,
583-                                   expiration_cutoff_date,
584-                                   expiration_sharetypes)
585-        self.lease_checker.setServiceParent(self)
586 
587     def __repr__(self):
588         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
589hunk ./src/allmydata/storage/server.py 178
590 
591-    def add_bucket_counter(self):
592-        statefile = os.path.join(self.storedir, "bucket_counter.state")
593-        self.bucket_counter = BucketCountingCrawler(self, statefile)
594-        self.bucket_counter.setServiceParent(self)
595-
596     def count(self, name, delta=1):
597         if self.stats_provider:
598             self.stats_provider.count("storage_server." + name, delta)
599hunk ./src/allmydata/storage/server.py 233
600             kwargs["facility"] = "tahoe.storage"
601         return log.msg(*args, **kwargs)
602 
603-    def _clean_incomplete(self):
604-        fileutil.rm_dir(self.incomingdir)
605-
606     def get_stats(self):
607         # remember: RIStatsProvider requires that our return dict
608         # contains numeric values.
609hunk ./src/allmydata/storage/server.py 269
610             stats['storage_server.total_bucket_count'] = bucket_count
611         return stats
612 
613-    def get_available_space(self):
614-        """Returns available space for share storage in bytes, or None if no
615-        API to get this information is available."""
616-
617-        if self.readonly_storage:
618-            return 0
619-        return fileutil.get_available_space(self.storedir, self.reserved_space)
620-
621     def allocated_size(self):
622         space = 0
623         for bw in self._active_writers:
624hunk ./src/allmydata/storage/server.py 276
625         return space
626 
627     def remote_get_version(self):
628-        remaining_space = self.get_available_space()
629+        remaining_space = self.backend.get_available_space()
630         if remaining_space is None:
631             # We're on a platform that has no API to get disk stats.
632             remaining_space = 2**64
633hunk ./src/allmydata/storage/server.py 301
634         self.count("allocate")
635         alreadygot = set()
636         bucketwriters = {} # k: shnum, v: BucketWriter
637-        si_dir = storage_index_to_dir(storage_index)
638-        si_s = si_b2a(storage_index)
639 
640hunk ./src/allmydata/storage/server.py 302
641+        si_s = si_b2a(storage_index)
642         log.msg("storage: allocate_buckets %s" % si_s)
643 
644         # in this implementation, the lease information (including secrets)
645hunk ./src/allmydata/storage/server.py 316
646 
647         max_space_per_bucket = allocated_size
648 
649-        remaining_space = self.get_available_space()
650+        remaining_space = self.backend.get_available_space()
651         limited = remaining_space is not None
652         if limited:
653             # this is a bit conservative, since some of this allocated_size()
654hunk ./src/allmydata/storage/server.py 329
655         # they asked about: this will save them a lot of work. Add or update
656         # leases for all of them: if they want us to hold shares for this
657         # file, they'll want us to hold leases for this file.
658-        for (shnum, fn) in self._get_bucket_shares(storage_index):
659+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
660             alreadygot.add(shnum)
661             sf = ShareFile(fn)
662             sf.add_or_renew_lease(lease_info)
663hunk ./src/allmydata/storage/server.py 335
664 
665         for shnum in sharenums:
666-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
667-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
668-            if os.path.exists(finalhome):
669+            share = self.backend.get_share(storage_index, shnum)
670+
671+            if not share:
672+                if (not limited) or (remaining_space >= max_space_per_bucket):
673+                    # ok! we need to create the new share file.
674+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
675+                                      max_space_per_bucket, lease_info, canary)
676+                    bucketwriters[shnum] = bw
677+                    self._active_writers[bw] = 1
678+                    if limited:
679+                        remaining_space -= max_space_per_bucket
680+                else:
681+                    # bummer! not enough space to accept this bucket
682+                    pass
683+
684+            elif share.is_complete():
685                 # great! we already have it. easy.
686                 pass
687hunk ./src/allmydata/storage/server.py 353
688-            elif os.path.exists(incominghome):
689+            elif not share.is_complete():
690                 # Note that we don't create BucketWriters for shnums that
691                 # have a partial share (in incoming/), so if a second upload
692                 # occurs while the first is still in progress, the second
693hunk ./src/allmydata/storage/server.py 359
694                 # uploader will use different storage servers.
695                 pass
696-            elif (not limited) or (remaining_space >= max_space_per_bucket):
697-                # ok! we need to create the new share file.
698-                bw = BucketWriter(self, incominghome, finalhome,
699-                                  max_space_per_bucket, lease_info, canary)
700-                if self.no_storage:
701-                    bw.throw_out_all_data = True
702-                bucketwriters[shnum] = bw
703-                self._active_writers[bw] = 1
704-                if limited:
705-                    remaining_space -= max_space_per_bucket
706-            else:
707-                # bummer! not enough space to accept this bucket
708-                pass
709-
710-        if bucketwriters:
711-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
712 
713         self.add_latency("allocate", time.time() - start)
714         return alreadygot, bucketwriters
715hunk ./src/allmydata/storage/server.py 437
716             self.stats_provider.count('storage_server.bytes_added', consumed_size)
717         del self._active_writers[bw]
718 
719-    def _get_bucket_shares(self, storage_index):
720-        """Return a list of (shnum, pathname) tuples for files that hold
721-        shares for this storage_index. In each tuple, 'shnum' will always be
722-        the integer form of the last component of 'pathname'."""
723-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
724-        try:
725-            for f in os.listdir(storagedir):
726-                if NUM_RE.match(f):
727-                    filename = os.path.join(storagedir, f)
728-                    yield (int(f), filename)
729-        except OSError:
730-            # Commonly caused by there being no buckets at all.
731-            pass
732 
733     def remote_get_buckets(self, storage_index):
734         start = time.time()
735hunk ./src/allmydata/storage/server.py 444
736         si_s = si_b2a(storage_index)
737         log.msg("storage: get_buckets %s" % si_s)
738         bucketreaders = {} # k: sharenum, v: BucketReader
739-        for shnum, filename in self._get_bucket_shares(storage_index):
740+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
741             bucketreaders[shnum] = BucketReader(self, filename,
742                                                 storage_index, shnum)
743         self.add_latency("get", time.time() - start)
744hunk ./src/allmydata/test/test_backends.py 10
745 import mock
746 
747 # This is the code that we're going to be testing.
748-from allmydata.storage.server import StorageServer
749+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
750 
751 # The following share file contents was generated with
752 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
753hunk ./src/allmydata/test/test_backends.py 21
754 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
755 
756 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
757+    @mock.patch('time.time')
758+    @mock.patch('os.mkdir')
759+    @mock.patch('__builtin__.open')
760+    @mock.patch('os.listdir')
761+    @mock.patch('os.path.isdir')
762+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
763+        """ This tests whether a server instance can be constructed
764+        with a null backend. The server instance fails the test if it
765+        tries to read or write to the file system. """
766+
767+        # Now begin the test.
768+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
769+
770+        self.failIf(mockisdir.called)
771+        self.failIf(mocklistdir.called)
772+        self.failIf(mockopen.called)
773+        self.failIf(mockmkdir.called)
774+
775+        # You passed!
776+
777+    @mock.patch('time.time')
778+    @mock.patch('os.mkdir')
779     @mock.patch('__builtin__.open')
780hunk ./src/allmydata/test/test_backends.py 44
781-    def test_create_server(self, mockopen):
782-        """ This tests whether a server instance can be constructed. """
783+    @mock.patch('os.listdir')
784+    @mock.patch('os.path.isdir')
785+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
786+        """ This tests whether a server instance can be constructed
787+        with a filesystem backend. To pass the test, it has to use the
788+        filesystem in only the prescribed ways. """
789 
790         def call_open(fname, mode):
791             if fname == 'testdir/bucket_counter.state':
792hunk ./src/allmydata/test/test_backends.py 58
793                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
794             elif fname == 'testdir/lease_checker.history':
795                 return StringIO()
796+            else:
797+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
798         mockopen.side_effect = call_open
799 
800         # Now begin the test.
801hunk ./src/allmydata/test/test_backends.py 63
802-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
803+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
804+
805+        self.failIf(mockisdir.called)
806+        self.failIf(mocklistdir.called)
807+        self.failIf(mockopen.called)
808+        self.failIf(mockmkdir.called)
809+        self.failIf(mocktime.called)
810 
811         # You passed!
812 
813hunk ./src/allmydata/test/test_backends.py 73
814-class TestServer(unittest.TestCase, ReallyEqualMixin):
815+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
816+    def setUp(self):
817+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
818+
819+    @mock.patch('os.mkdir')
820+    @mock.patch('__builtin__.open')
821+    @mock.patch('os.listdir')
822+    @mock.patch('os.path.isdir')
823+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
824+        """ Write a new share. """
825+
826+        # Now begin the test.
827+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
828+        bs[0].remote_write(0, 'a')
829+        self.failIf(mockisdir.called)
830+        self.failIf(mocklistdir.called)
831+        self.failIf(mockopen.called)
832+        self.failIf(mockmkdir.called)
833+
834+    @mock.patch('os.path.exists')
835+    @mock.patch('os.path.getsize')
836+    @mock.patch('__builtin__.open')
837+    @mock.patch('os.listdir')
838+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
839+        """ This tests whether the code correctly finds and reads
840+        shares written out by old (Tahoe-LAFS <= v1.8.2)
841+        servers. There is a similar test in test_download, but that one
842+        is from the perspective of the client and exercises a deeper
843+        stack of code. This one is for exercising just the
844+        StorageServer object. """
845+
846+        # Now begin the test.
847+        bs = self.s.remote_get_buckets('teststorage_index')
848+
849+        self.failUnlessEqual(len(bs), 0)
850+        self.failIf(mocklistdir.called)
851+        self.failIf(mockopen.called)
852+        self.failIf(mockgetsize.called)
853+        self.failIf(mockexists.called)
854+
855+
856+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
857     @mock.patch('__builtin__.open')
858     def setUp(self, mockopen):
859         def call_open(fname, mode):
860hunk ./src/allmydata/test/test_backends.py 126
861                 return StringIO()
862         mockopen.side_effect = call_open
863 
864-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
865-
866+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
867 
868     @mock.patch('time.time')
869     @mock.patch('os.mkdir')
870hunk ./src/allmydata/test/test_backends.py 134
871     @mock.patch('os.listdir')
872     @mock.patch('os.path.isdir')
873     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
874-        """Handle a report of corruption."""
875+        """ Write a new share. """
876 
877         def call_listdir(dirname):
878             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
879hunk ./src/allmydata/test/test_backends.py 173
880         mockopen.side_effect = call_open
881         # Now begin the test.
882         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
883-        print bs
884         bs[0].remote_write(0, 'a')
885         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
886 
887hunk ./src/allmydata/test/test_backends.py 176
888-
889     @mock.patch('os.path.exists')
890     @mock.patch('os.path.getsize')
891     @mock.patch('__builtin__.open')
892hunk ./src/allmydata/test/test_backends.py 218
893 
894         self.failUnlessEqual(len(bs), 1)
895         b = bs[0]
896+        # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
897         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
898         # If you try to read past the end you get as much data as is there.
899         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
900hunk ./src/allmydata/test/test_backends.py 224
901         # If you start reading past the end of the file you get the empty string.
902         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
903+
904+
905}
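
    A condensed sketch (illustrative classes only, not the patch's real ones) of the delegation pattern this patch moves toward: the server no longer computes disk space or lists share directories itself, it asks whatever backend it was constructed with, and it treats a None answer as unlimited space, just as remote_get_version does in the hunk above.

        class SketchNullBackend(object):
            """Backend that owns no storage at all (illustrative only)."""
            def get_available_space(self):
                return None            # None means "no limit / unknown"
            def get_bucket_shares(self, storage_index):
                return set()           # never holds any shares

        class SketchStorageServer(object):
            def __init__(self, nodeid, backend):
                self.my_nodeid = nodeid
                self.backend = backend         # all filesystem knowledge lives here
            def remaining_space(self):
                remaining = self.backend.get_available_space()
                if remaining is None:
                    remaining = 2 ** 64        # same fallback as remote_get_version
                return remaining
            def get_buckets(self, storage_index):
                return dict((shnum, fn)
                            for shnum, fn in self.backend.get_bucket_shares(storage_index))

        server = SketchStorageServer('x' * 20, SketchNullBackend())
        assert server.remaining_space() == 2 ** 64
        assert server.get_buckets('teststorage_index') == {}
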
906[a temp patch used as a snapshot
907wilcoxjg@gmail.com**20110626052732
908 Ignore-this: 95f05e314eaec870afa04c76d979aa44
909] {
910hunk ./docs/configuration.rst 637
911   [storage]
912   enabled = True
913   readonly = True
914-  sizelimit = 10000000000
915 
916 
917   [helper]
918hunk ./docs/garbage-collection.rst 16
919 
920 When a file or directory in the virtual filesystem is no longer referenced,
921 the space that its shares occupied on each storage server can be freed,
922-making room for other shares. Tahoe currently uses a garbage collection
923+making room for other shares. Tahoe uses a garbage collection
924 ("GC") mechanism to implement this space-reclamation process. Each share has
925 one or more "leases", which are managed by clients who want the
926 file/directory to be retained. The storage server accepts each share for a
927hunk ./docs/garbage-collection.rst 34
928 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
929 If lease renewal occurs quickly and with 100% reliability, than any renewal
930 time that is shorter than the lease duration will suffice, but a larger ratio
931-of duration-over-renewal-time will be more robust in the face of occasional
932+of lease duration to renewal time will be more robust in the face of occasional
933 delays or failures.
934 
935 The current recommended values for a small Tahoe grid are to renew the leases
936replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
937hunk ./src/allmydata/client.py 260
938             sharetypes.append("mutable")
939         expiration_sharetypes = tuple(sharetypes)
940 
941+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
942+            xyz
943+        xyz
944         ss = StorageServer(storedir, self.nodeid,
945                            reserved_space=reserved,
946                            discard_storage=discard,
947hunk ./src/allmydata/storage/crawler.py 234
948         f = open(tmpfile, "wb")
949         pickle.dump(self.state, f)
950         f.close()
951-        fileutil.move_into_place(tmpfile, self.statefile)
952+        fileutil.move_into_place(tmpfile, self.statefname)
953 
954     def startService(self):
955         # arrange things to look like we were just sleeping, so
956}
957[snapshot of progress on backend implementation (not suitable for trunk)
958wilcoxjg@gmail.com**20110626053244
959 Ignore-this: 50c764af791c2b99ada8289546806a0a
960] {
961adddir ./src/allmydata/storage/backends
962adddir ./src/allmydata/storage/backends/das
963move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
964adddir ./src/allmydata/storage/backends/null
965hunk ./src/allmydata/interfaces.py 270
966         store that on disk.
967         """
968 
969+class IStorageBackend(Interface):
970+    """
971+    Objects of this kind live on the server side and are used by the
972+    storage server object.
973+    """
974+    def get_available_space(self, reserved_space):
975+        """ Returns available space for share storage in bytes, or
976+        None if this information is not available or if the available
977+        space is unlimited.
978+
979+        If the backend is configured for read-only mode then this will
980+        return 0.
981+
982+        reserved_space is how many bytes to subtract from the answer, so
983+        you can pass how many bytes you would like to leave unused on this
984+        filesystem as reserved_space. """
985+
986+    def get_bucket_shares(self):
987+        """XXX"""
988+
989+    def get_share(self):
990+        """XXX"""
991+
992+    def make_bucket_writer(self):
993+        """XXX"""
994+
995+class IStorageBackendShare(Interface):
996+    """
997+    This object contains as much as all of the share data.  It is intended
998+    for lazy evaluation such that in many use cases substantially less than
999+    all of the share data will be accessed.
1000+    """
1001+    def is_complete(self):
1002+        """
1003+        Returns the share state, or None if the share does not exist.
1004+        """
1005+
1006 class IStorageBucketWriter(Interface):
1007     """
1008     Objects of this kind live on the client side.
1009hunk ./src/allmydata/interfaces.py 2492
1010 
1011 class EmptyPathnameComponentError(Exception):
1012     """The webapi disallows empty pathname components."""
1013+
1014+class IShareStore(Interface):
1015+    pass
1016+
1017addfile ./src/allmydata/storage/backends/__init__.py
1018addfile ./src/allmydata/storage/backends/das/__init__.py
1019addfile ./src/allmydata/storage/backends/das/core.py
1020hunk ./src/allmydata/storage/backends/das/core.py 1
1021+from allmydata.interfaces import IStorageBackend
1022+from allmydata.storage.backends.base import Backend
1023+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1024+from allmydata.util.assertutil import precondition
1025+
1026+import os, re, weakref, struct, time
1027+
1028+from foolscap.api import Referenceable
1029+from twisted.application import service
1030+
1031+from zope.interface import implements
1032+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1033+from allmydata.util import fileutil, idlib, log, time_format
1034+import allmydata # for __full_version__
1035+
1036+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1037+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1038+from allmydata.storage.lease import LeaseInfo
1039+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1040+     create_mutable_sharefile
1041+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1042+from allmydata.storage.crawler import FSBucketCountingCrawler
1043+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1044+
1045+from zope.interface import implements
1046+
1047+class DASCore(Backend):
1048+    implements(IStorageBackend)
1049+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1050+        Backend.__init__(self)
1051+
1052+        self._setup_storage(storedir, readonly, reserved_space)
1053+        self._setup_corruption_advisory()
1054+        self._setup_bucket_counter()
1055+        self._setup_lease_checkerf(expiration_policy)
1056+
1057+    def _setup_storage(self, storedir, readonly, reserved_space):
1058+        self.storedir = storedir
1059+        self.readonly = readonly
1060+        self.reserved_space = int(reserved_space)
1061+        if self.reserved_space:
1062+            if self.get_available_space() is None:
1063+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1064+                        umid="0wZ27w", level=log.UNUSUAL)
1065+
1066+        self.sharedir = os.path.join(self.storedir, "shares")
1067+        fileutil.make_dirs(self.sharedir)
1068+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1069+        self._clean_incomplete()
1070+
1071+    def _clean_incomplete(self):
1072+        fileutil.rm_dir(self.incomingdir)
1073+        fileutil.make_dirs(self.incomingdir)
1074+
1075+    def _setup_corruption_advisory(self):
1076+        # we don't actually create the corruption-advisory dir until necessary
1077+        self.corruption_advisory_dir = os.path.join(self.storedir,
1078+                                                    "corruption-advisories")
1079+
1080+    def _setup_bucket_counter(self):
1081+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1082+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1083+        self.bucket_counter.setServiceParent(self)
1084+
1085+    def _setup_lease_checkerf(self, expiration_policy):
1086+        statefile = os.path.join(self.storedir, "lease_checker.state")
1087+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1088+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1089+        self.lease_checker.setServiceParent(self)
1090+
1091+    def get_available_space(self):
1092+        if self.readonly:
1093+            return 0
1094+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1095+
1096+    def get_shares(self, storage_index):
1097+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1098+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1099+        try:
1100+            for f in os.listdir(finalstoragedir):
1101+                if NUM_RE.match(f):
1102+                    filename = os.path.join(finalstoragedir, f)
1103+                    yield FSBShare(filename, int(f))
1104+        except OSError:
1105+            # Commonly caused by there being no buckets at all.
1106+            pass
1107+       
1108+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1109+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1110+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1111+        return bw
1112+       
1113+
1114+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1115+# and share data. The share data is accessed by RIBucketWriter.write and
1116+# RIBucketReader.read . The lease information is not accessible through these
1117+# interfaces.
1118+
1119+# The share file has the following layout:
1120+#  0x00: share file version number, four bytes, current version is 1
1121+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1122+#  0x08: number of leases, four bytes big-endian
1123+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1124+#  A+0x0c = B: first lease. Lease format is:
1125+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1126+#   B+0x04: renew secret, 32 bytes (SHA256)
1127+#   B+0x24: cancel secret, 32 bytes (SHA256)
1128+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1129+#   B+0x48: next lease, or end of record
1130+
1131+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1132+# but it is still filled in by storage servers in case the storage server
1133+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1134+# share file is moved from one storage server to another. The value stored in
1135+# this field is truncated, so if the actual share data length is >= 2**32,
1136+# then the value stored in this field will be the actual share data length
1137+# modulo 2**32.
1138+
1139+class ImmutableShare:
1140+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1141+    sharetype = "immutable"
1142+
1143+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1144+        """ If max_size is not None then I won't allow more than
1145+        max_size to be written to me. If create=True then max_size
1146+        must not be None. """
1147+        precondition((max_size is not None) or (not create), max_size, create)
1148+        self.shnum = shnum
1149+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1150+        self._max_size = max_size
1151+        if create:
1152+            # touch the file, so later callers will see that we're working on
1153+            # it. Also construct the metadata.
1154+            assert not os.path.exists(self.fname)
1155+            fileutil.make_dirs(os.path.dirname(self.fname))
1156+            f = open(self.fname, 'wb')
1157+            # The second field -- the four-byte share data length -- is no
1158+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1159+            # there in case someone downgrades a storage server from >=
1160+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1161+            # server to another, etc. We do saturation -- a share data length
1162+            # larger than 2**32-1 (what can fit into the field) is marked as
1163+            # the largest length that can fit into the field. That way, even
1164+            # if this does happen, the old < v1.3.0 server will still allow
1165+            # clients to read the first part of the share.
1166+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1167+            f.close()
1168+            self._lease_offset = max_size + 0x0c
1169+            self._num_leases = 0
1170+        else:
1171+            f = open(self.fname, 'rb')
1172+            filesize = os.path.getsize(self.fname)
1173+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1174+            f.close()
1175+            if version != 1:
1176+                msg = "sharefile %s had version %d but we wanted 1" % \
1177+                      (self.fname, version)
1178+                raise UnknownImmutableContainerVersionError(msg)
1179+            self._num_leases = num_leases
1180+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1181+        self._data_offset = 0xc
1182+
1183+    def unlink(self):
1184+        os.unlink(self.fname)
1185+
1186+    def read_share_data(self, offset, length):
1187+        precondition(offset >= 0)
1188+        # Reads beyond the end of the data are truncated. Reads that start
1189+        # beyond the end of the data return an empty string.
1190+        seekpos = self._data_offset+offset
1191+        fsize = os.path.getsize(self.fname)
1192+        actuallength = max(0, min(length, fsize-seekpos))
1193+        if actuallength == 0:
1194+            return ""
1195+        f = open(self.fname, 'rb')
1196+        f.seek(seekpos)
1197+        return f.read(actuallength)
1198+
1199+    def write_share_data(self, offset, data):
1200+        length = len(data)
1201+        precondition(offset >= 0, offset)
1202+        if self._max_size is not None and offset+length > self._max_size:
1203+            raise DataTooLargeError(self._max_size, offset, length)
1204+        f = open(self.fname, 'rb+')
1205+        real_offset = self._data_offset+offset
1206+        f.seek(real_offset)
1207+        assert f.tell() == real_offset
1208+        f.write(data)
1209+        f.close()
1210+
1211+    def _write_lease_record(self, f, lease_number, lease_info):
1212+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1213+        f.seek(offset)
1214+        assert f.tell() == offset
1215+        f.write(lease_info.to_immutable_data())
1216+
1217+    def _read_num_leases(self, f):
1218+        f.seek(0x08)
1219+        (num_leases,) = struct.unpack(">L", f.read(4))
1220+        return num_leases
1221+
1222+    def _write_num_leases(self, f, num_leases):
1223+        f.seek(0x08)
1224+        f.write(struct.pack(">L", num_leases))
1225+
1226+    def _truncate_leases(self, f, num_leases):
1227+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1228+
1229+    def get_leases(self):
1230+        """Yields a LeaseInfo instance for all leases."""
1231+        f = open(self.fname, 'rb')
1232+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1233+        f.seek(self._lease_offset)
1234+        for i in range(num_leases):
1235+            data = f.read(self.LEASE_SIZE)
1236+            if data:
1237+                yield LeaseInfo().from_immutable_data(data)
1238+
1239+    def add_lease(self, lease_info):
1240+        f = open(self.fname, 'rb+')
1241+        num_leases = self._read_num_leases(f)
1242+        self._write_lease_record(f, num_leases, lease_info)
1243+        self._write_num_leases(f, num_leases+1)
1244+        f.close()
1245+
1246+    def renew_lease(self, renew_secret, new_expire_time):
1247+        for i,lease in enumerate(self.get_leases()):
1248+            if constant_time_compare(lease.renew_secret, renew_secret):
1249+                # yup. See if we need to update the owner time.
1250+                if new_expire_time > lease.expiration_time:
1251+                    # yes
1252+                    lease.expiration_time = new_expire_time
1253+                    f = open(self.fname, 'rb+')
1254+                    self._write_lease_record(f, i, lease)
1255+                    f.close()
1256+                return
1257+        raise IndexError("unable to renew non-existent lease")
1258+
1259+    def add_or_renew_lease(self, lease_info):
1260+        try:
1261+            self.renew_lease(lease_info.renew_secret,
1262+                             lease_info.expiration_time)
1263+        except IndexError:
1264+            self.add_lease(lease_info)
1265+
1266+
1267+    def cancel_lease(self, cancel_secret):
1268+        """Remove a lease with the given cancel_secret. If the last lease is
1269+        cancelled, the file will be removed. Return the number of bytes that
1270+        were freed (by truncating the list of leases, and possibly by
1271+        deleting the file). Raise IndexError if there was no lease with the
1272+        given cancel_secret.
1273+        """
1274+
1275+        leases = list(self.get_leases())
1276+        num_leases_removed = 0
1277+        for i,lease in enumerate(leases):
1278+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1279+                leases[i] = None
1280+                num_leases_removed += 1
1281+        if not num_leases_removed:
1282+            raise IndexError("unable to find matching lease to cancel")
1283+        if num_leases_removed:
1284+            # pack and write out the remaining leases. We write these out in
1285+            # the same order as they were added, so that if we crash while
1286+            # doing this, we won't lose any non-cancelled leases.
1287+            leases = [l for l in leases if l] # remove the cancelled leases
1288+            f = open(self.fname, 'rb+')
1289+            for i,lease in enumerate(leases):
1290+                self._write_lease_record(f, i, lease)
1291+            self._write_num_leases(f, len(leases))
1292+            self._truncate_leases(f, len(leases))
1293+            f.close()
1294+        space_freed = self.LEASE_SIZE * num_leases_removed
1295+        if not len(leases):
1296+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1297+            self.unlink()
1298+        return space_freed
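
For orientation, here is a minimal standalone sketch (not part of the patch; helper names are illustrative) of the on-disk layout the ImmutableShare code above reads and writes: a 12-byte big-endian header holding the version, the saturated share-data length, and the lease count; the share data starting at offset 0xc; and 72-byte lease records appended at the end of the file.

    import struct

    LEASE_SIZE = struct.calcsize(">L32s32sL")  # 72 bytes: owner num, renew secret, cancel secret, expiration

    def pack_share_header(max_size, num_leases=0):
        # The length field saturates at 2**32-1, as the comment above explains.
        return struct.pack(">LLL", 1, min(2**32 - 1, max_size), num_leases)

    def unpack_share_header(twelve_bytes):
        # Returns (version, saturated_data_length, num_leases).
        return struct.unpack(">LLL", twelve_bytes)

    # e.g. pack_share_header(2**40) records the length as 2**32-1; the actual
    # share data simply continues past offset 0xc regardless.
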
1299hunk ./src/allmydata/storage/backends/das/expirer.py 2
1300 import time, os, pickle, struct
1301-from allmydata.storage.crawler import ShareCrawler
1302-from allmydata.storage.shares import get_share_file
1303+from allmydata.storage.crawler import FSShareCrawler
1304 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1305      UnknownImmutableContainerVersionError
1306 from twisted.python import log as twlog
1307hunk ./src/allmydata/storage/backends/das/expirer.py 7
1308 
1309-class LeaseCheckingCrawler(ShareCrawler):
1310+class FSLeaseCheckingCrawler(FSShareCrawler):
1311     """I examine the leases on all shares, determining which are still valid
1312     and which have expired. I can remove the expired leases (if so
1313     configured), and the share will be deleted when the last lease is
1314hunk ./src/allmydata/storage/backends/das/expirer.py 50
1315     slow_start = 360 # wait 6 minutes after startup
1316     minimum_cycle_time = 12*60*60 # not more than twice per day
1317 
1318-    def __init__(self, statefile, historyfile,
1319-                 expiration_enabled, mode,
1320-                 override_lease_duration, # used if expiration_mode=="age"
1321-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1322-                 sharetypes):
1323+    def __init__(self, statefile, historyfile, expiration_policy):
1324         self.historyfile = historyfile
1325hunk ./src/allmydata/storage/backends/das/expirer.py 52
1326-        self.expiration_enabled = expiration_enabled
1327-        self.mode = mode
1328+        self.expiration_enabled = expiration_policy['enabled']
1329+        self.mode = expiration_policy['mode']
1330         self.override_lease_duration = None
1331         self.cutoff_date = None
1332         if self.mode == "age":
1333hunk ./src/allmydata/storage/backends/das/expirer.py 57
1334-            assert isinstance(override_lease_duration, (int, type(None)))
1335-            self.override_lease_duration = override_lease_duration # seconds
1336+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1337+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1338         elif self.mode == "cutoff-date":
1339hunk ./src/allmydata/storage/backends/das/expirer.py 60
1340-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1341+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1342             assert expiration_policy['cutoff_date'] is not None
1343hunk ./src/allmydata/storage/backends/das/expirer.py 62
1344-            self.cutoff_date = cutoff_date
1345+            self.cutoff_date = expiration_policy['cutoff_date']
1346         else:
1347hunk ./src/allmydata/storage/backends/das/expirer.py 64
1348-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1349-        self.sharetypes_to_expire = sharetypes
1350-        ShareCrawler.__init__(self, statefile)
1351+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1352+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1353+        FSShareCrawler.__init__(self, statefile)
1354 
1355     def add_initial_state(self):
1356         # we fill ["cycle-to-date"] here (even though they will be reset in
1357hunk ./src/allmydata/storage/backends/das/expirer.py 156
1358 
1359     def process_share(self, sharefilename):
1360         # first, find out what kind of a share it is
1361-        sf = get_share_file(sharefilename)
1362+        f = open(sharefilename, "rb")
1363+        prefix = f.read(32)
1364+        f.close()
1365+        if prefix == MutableShareFile.MAGIC:
1366+            sf = MutableShareFile(sharefilename)
1367+        else:
1368+            # otherwise assume it's immutable
1369+            sf = FSBShare(sharefilename)
1370         sharetype = sf.sharetype
1371         now = time.time()
1372         s = self.stat(sharefilename)
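
The crawler above now takes its configuration as a single expiration_policy dict. A sketch of its expected shape, mirroring the test fixture that appears later in this patch (the state/history paths below are illustrative):

    from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler

    statefile = 'teststoredir/lease_checker.state'
    historyfile = 'teststoredir/lease_checker.history'
    expiration_policy = {'enabled' : False,
                         'mode' : 'age',                   # or 'cutoff-date'
                         'override_lease_duration' : None, # seconds; consulted when mode == 'age'
                         'cutoff_date' : None,             # seconds-since-epoch; required when mode == 'cutoff-date'
                         'sharetypes' : None}
    crawler = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
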
1373addfile ./src/allmydata/storage/backends/null/__init__.py
1374addfile ./src/allmydata/storage/backends/null/core.py
1375hunk ./src/allmydata/storage/backends/null/core.py 1
1376+from allmydata.storage.backends.base import Backend
1377+
1378+class NullCore(Backend):
1379+    def __init__(self):
1380+        Backend.__init__(self)
1381+
1382+    def get_available_space(self):
1383+        return None
1384+
1385+    def get_shares(self, storage_index):
1386+        return set()
1387+
1388+    def get_share(self, storage_index, sharenum):
1389+        return None
1390+
1391+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1392+        return NullBucketWriter()
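
The null backend stores nothing on disk and returns None from get_available_space, so no space limit is enforced. A minimal usage sketch, mirroring the test setup later in this patch:

    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.null.core import NullCore

    # StorageServer asserts that the node id is a 20-byte string; the null
    # backend itself never touches the filesystem.
    ss = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
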
1393hunk ./src/allmydata/storage/crawler.py 12
1394 class TimeSliceExceeded(Exception):
1395     pass
1396 
1397-class ShareCrawler(service.MultiService):
1398+class FSShareCrawler(service.MultiService):
1399     """A subcless of ShareCrawler is attached to a StorageServer, and
1400     periodically walks all of its shares, processing each one in some
1401     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1402hunk ./src/allmydata/storage/crawler.py 68
1403     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1404     minimum_cycle_time = 300 # don't run a cycle faster than this
1405 
1406-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1407+    def __init__(self, statefname, allowed_cpu_percentage=None):
1408         service.MultiService.__init__(self)
1409         if allowed_cpu_percentage is not None:
1410             self.allowed_cpu_percentage = allowed_cpu_percentage
1411hunk ./src/allmydata/storage/crawler.py 72
1412-        self.backend = backend
1413+        self.statefname = statefname
1414         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1415                          for i in range(2**10)]
1416         self.prefixes.sort()
1417hunk ./src/allmydata/storage/crawler.py 192
1418         #                            of the last bucket to be processed, or
1419         #                            None if we are sleeping between cycles
1420         try:
1421-            f = open(self.statefile, "rb")
1422+            f = open(self.statefname, "rb")
1423             state = pickle.load(f)
1424             f.close()
1425         except EnvironmentError:
1426hunk ./src/allmydata/storage/crawler.py 230
1427         else:
1428             last_complete_prefix = self.prefixes[lcpi]
1429         self.state["last-complete-prefix"] = last_complete_prefix
1430-        tmpfile = self.statefile + ".tmp"
1431+        tmpfile = self.statefname + ".tmp"
1432         f = open(tmpfile, "wb")
1433         pickle.dump(self.state, f)
1434         f.close()
1435hunk ./src/allmydata/storage/crawler.py 433
1436         pass
1437 
1438 
1439-class BucketCountingCrawler(ShareCrawler):
1440+class FSBucketCountingCrawler(FSShareCrawler):
1441     """I keep track of how many buckets are being managed by this server.
1442     This is equivalent to the number of distributed files and directories for
1443     which I am providing storage. The actual number of files+directories in
1444hunk ./src/allmydata/storage/crawler.py 446
1445 
1446     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1447 
1448-    def __init__(self, statefile, num_sample_prefixes=1):
1449-        ShareCrawler.__init__(self, statefile)
1450+    def __init__(self, statefname, num_sample_prefixes=1):
1451+        FSShareCrawler.__init__(self, statefname)
1452         self.num_sample_prefixes = num_sample_prefixes
1453 
1454     def add_initial_state(self):
1455hunk ./src/allmydata/storage/immutable.py 14
1456 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1457      DataTooLargeError
1458 
1459-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1460-# and share data. The share data is accessed by RIBucketWriter.write and
1461-# RIBucketReader.read . The lease information is not accessible through these
1462-# interfaces.
1463-
1464-# The share file has the following layout:
1465-#  0x00: share file version number, four bytes, current version is 1
1466-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1467-#  0x08: number of leases, four bytes big-endian
1468-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1469-#  A+0x0c = B: first lease. Lease format is:
1470-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1471-#   B+0x04: renew secret, 32 bytes (SHA256)
1472-#   B+0x24: cancel secret, 32 bytes (SHA256)
1473-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1474-#   B+0x48: next lease, or end of record
1475-
1476-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1477-# but it is still filled in by storage servers in case the storage server
1478-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1479-# share file is moved from one storage server to another. The value stored in
1480-# this field is truncated, so if the actual share data length is >= 2**32,
1481-# then the value stored in this field will be the actual share data length
1482-# modulo 2**32.
1483-
1484-class ShareFile:
1485-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1486-    sharetype = "immutable"
1487-
1488-    def __init__(self, filename, max_size=None, create=False):
1489-        """ If max_size is not None then I won't allow more than
1490-        max_size to be written to me. If create=True then max_size
1491-        must not be None. """
1492-        precondition((max_size is not None) or (not create), max_size, create)
1493-        self.home = filename
1494-        self._max_size = max_size
1495-        if create:
1496-            # touch the file, so later callers will see that we're working on
1497-            # it. Also construct the metadata.
1498-            assert not os.path.exists(self.home)
1499-            fileutil.make_dirs(os.path.dirname(self.home))
1500-            f = open(self.home, 'wb')
1501-            # The second field -- the four-byte share data length -- is no
1502-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1503-            # there in case someone downgrades a storage server from >=
1504-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1505-            # server to another, etc. We do saturation -- a share data length
1506-            # larger than 2**32-1 (what can fit into the field) is marked as
1507-            # the largest length that can fit into the field. That way, even
1508-            # if this does happen, the old < v1.3.0 server will still allow
1509-            # clients to read the first part of the share.
1510-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1511-            f.close()
1512-            self._lease_offset = max_size + 0x0c
1513-            self._num_leases = 0
1514-        else:
1515-            f = open(self.home, 'rb')
1516-            filesize = os.path.getsize(self.home)
1517-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1518-            f.close()
1519-            if version != 1:
1520-                msg = "sharefile %s had version %d but we wanted 1" % \
1521-                      (filename, version)
1522-                raise UnknownImmutableContainerVersionError(msg)
1523-            self._num_leases = num_leases
1524-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1525-        self._data_offset = 0xc
1526-
1527-    def unlink(self):
1528-        os.unlink(self.home)
1529-
1530-    def read_share_data(self, offset, length):
1531-        precondition(offset >= 0)
1532-        # Reads beyond the end of the data are truncated. Reads that start
1533-        # beyond the end of the data return an empty string.
1534-        seekpos = self._data_offset+offset
1535-        fsize = os.path.getsize(self.home)
1536-        actuallength = max(0, min(length, fsize-seekpos))
1537-        if actuallength == 0:
1538-            return ""
1539-        f = open(self.home, 'rb')
1540-        f.seek(seekpos)
1541-        return f.read(actuallength)
1542-
1543-    def write_share_data(self, offset, data):
1544-        length = len(data)
1545-        precondition(offset >= 0, offset)
1546-        if self._max_size is not None and offset+length > self._max_size:
1547-            raise DataTooLargeError(self._max_size, offset, length)
1548-        f = open(self.home, 'rb+')
1549-        real_offset = self._data_offset+offset
1550-        f.seek(real_offset)
1551-        assert f.tell() == real_offset
1552-        f.write(data)
1553-        f.close()
1554-
1555-    def _write_lease_record(self, f, lease_number, lease_info):
1556-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1557-        f.seek(offset)
1558-        assert f.tell() == offset
1559-        f.write(lease_info.to_immutable_data())
1560-
1561-    def _read_num_leases(self, f):
1562-        f.seek(0x08)
1563-        (num_leases,) = struct.unpack(">L", f.read(4))
1564-        return num_leases
1565-
1566-    def _write_num_leases(self, f, num_leases):
1567-        f.seek(0x08)
1568-        f.write(struct.pack(">L", num_leases))
1569-
1570-    def _truncate_leases(self, f, num_leases):
1571-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1572-
1573-    def get_leases(self):
1574-        """Yields a LeaseInfo instance for all leases."""
1575-        f = open(self.home, 'rb')
1576-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1577-        f.seek(self._lease_offset)
1578-        for i in range(num_leases):
1579-            data = f.read(self.LEASE_SIZE)
1580-            if data:
1581-                yield LeaseInfo().from_immutable_data(data)
1582-
1583-    def add_lease(self, lease_info):
1584-        f = open(self.home, 'rb+')
1585-        num_leases = self._read_num_leases(f)
1586-        self._write_lease_record(f, num_leases, lease_info)
1587-        self._write_num_leases(f, num_leases+1)
1588-        f.close()
1589-
1590-    def renew_lease(self, renew_secret, new_expire_time):
1591-        for i,lease in enumerate(self.get_leases()):
1592-            if constant_time_compare(lease.renew_secret, renew_secret):
1593-                # yup. See if we need to update the owner time.
1594-                if new_expire_time > lease.expiration_time:
1595-                    # yes
1596-                    lease.expiration_time = new_expire_time
1597-                    f = open(self.home, 'rb+')
1598-                    self._write_lease_record(f, i, lease)
1599-                    f.close()
1600-                return
1601-        raise IndexError("unable to renew non-existent lease")
1602-
1603-    def add_or_renew_lease(self, lease_info):
1604-        try:
1605-            self.renew_lease(lease_info.renew_secret,
1606-                             lease_info.expiration_time)
1607-        except IndexError:
1608-            self.add_lease(lease_info)
1609-
1610-
1611-    def cancel_lease(self, cancel_secret):
1612-        """Remove a lease with the given cancel_secret. If the last lease is
1613-        cancelled, the file will be removed. Return the number of bytes that
1614-        were freed (by truncating the list of leases, and possibly by
1615-        deleting the file. Raise IndexError if there was no lease with the
1616-        given cancel_secret.
1617-        """
1618-
1619-        leases = list(self.get_leases())
1620-        num_leases_removed = 0
1621-        for i,lease in enumerate(leases):
1622-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1623-                leases[i] = None
1624-                num_leases_removed += 1
1625-        if not num_leases_removed:
1626-            raise IndexError("unable to find matching lease to cancel")
1627-        if num_leases_removed:
1628-            # pack and write out the remaining leases. We write these out in
1629-            # the same order as they were added, so that if we crash while
1630-            # doing this, we won't lose any non-cancelled leases.
1631-            leases = [l for l in leases if l] # remove the cancelled leases
1632-            f = open(self.home, 'rb+')
1633-            for i,lease in enumerate(leases):
1634-                self._write_lease_record(f, i, lease)
1635-            self._write_num_leases(f, len(leases))
1636-            self._truncate_leases(f, len(leases))
1637-            f.close()
1638-        space_freed = self.LEASE_SIZE * num_leases_removed
1639-        if not len(leases):
1640-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1641-            self.unlink()
1642-        return space_freed
1643-class NullBucketWriter(Referenceable):
1644-    implements(RIBucketWriter)
1645-
1646-    def remote_write(self, offset, data):
1647-        return
1648-
1649 class BucketWriter(Referenceable):
1650     implements(RIBucketWriter)
1651 
1652hunk ./src/allmydata/storage/immutable.py 17
1653-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1654+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1655         self.ss = ss
1656hunk ./src/allmydata/storage/immutable.py 19
1657-        self.incominghome = incominghome
1658-        self.finalhome = finalhome
1659         self._max_size = max_size # don't allow the client to write more than this
1660         self._canary = canary
1661         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1662hunk ./src/allmydata/storage/immutable.py 24
1663         self.closed = False
1664         self.throw_out_all_data = False
1665-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1666+        self._sharefile = immutableshare
1667         # also, add our lease to the file now, so that other ones can be
1668         # added by simultaneous uploaders
1669         self._sharefile.add_lease(lease_info)
1670hunk ./src/allmydata/storage/server.py 16
1671 from allmydata.storage.lease import LeaseInfo
1672 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1673      create_mutable_sharefile
1674-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1675-from allmydata.storage.crawler import BucketCountingCrawler
1676-from allmydata.storage.expirer import LeaseCheckingCrawler
1677 
1678 from zope.interface import implements
1679 
1680hunk ./src/allmydata/storage/server.py 19
1681-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1682-# be started and stopped.
1683-class Backend(service.MultiService):
1684-    implements(IStatsProducer)
1685-    def __init__(self):
1686-        service.MultiService.__init__(self)
1687-
1688-    def get_bucket_shares(self):
1689-        """XXX"""
1690-        raise NotImplementedError
1691-
1692-    def get_share(self):
1693-        """XXX"""
1694-        raise NotImplementedError
1695-
1696-    def make_bucket_writer(self):
1697-        """XXX"""
1698-        raise NotImplementedError
1699-
1700-class NullBackend(Backend):
1701-    def __init__(self):
1702-        Backend.__init__(self)
1703-
1704-    def get_available_space(self):
1705-        return None
1706-
1707-    def get_bucket_shares(self, storage_index):
1708-        return set()
1709-
1710-    def get_share(self, storage_index, sharenum):
1711-        return None
1712-
1713-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1714-        return NullBucketWriter()
1715-
1716-class FSBackend(Backend):
1717-    def __init__(self, storedir, readonly=False, reserved_space=0):
1718-        Backend.__init__(self)
1719-
1720-        self._setup_storage(storedir, readonly, reserved_space)
1721-        self._setup_corruption_advisory()
1722-        self._setup_bucket_counter()
1723-        self._setup_lease_checkerf()
1724-
1725-    def _setup_storage(self, storedir, readonly, reserved_space):
1726-        self.storedir = storedir
1727-        self.readonly = readonly
1728-        self.reserved_space = int(reserved_space)
1729-        if self.reserved_space:
1730-            if self.get_available_space() is None:
1731-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1732-                        umid="0wZ27w", level=log.UNUSUAL)
1733-
1734-        self.sharedir = os.path.join(self.storedir, "shares")
1735-        fileutil.make_dirs(self.sharedir)
1736-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1737-        self._clean_incomplete()
1738-
1739-    def _clean_incomplete(self):
1740-        fileutil.rm_dir(self.incomingdir)
1741-        fileutil.make_dirs(self.incomingdir)
1742-
1743-    def _setup_corruption_advisory(self):
1744-        # we don't actually create the corruption-advisory dir until necessary
1745-        self.corruption_advisory_dir = os.path.join(self.storedir,
1746-                                                    "corruption-advisories")
1747-
1748-    def _setup_bucket_counter(self):
1749-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1750-        self.bucket_counter = BucketCountingCrawler(statefile)
1751-        self.bucket_counter.setServiceParent(self)
1752-
1753-    def _setup_lease_checkerf(self):
1754-        statefile = os.path.join(self.storedir, "lease_checker.state")
1755-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1756-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1757-                                   expiration_enabled, expiration_mode,
1758-                                   expiration_override_lease_duration,
1759-                                   expiration_cutoff_date,
1760-                                   expiration_sharetypes)
1761-        self.lease_checker.setServiceParent(self)
1762-
1763-    def get_available_space(self):
1764-        if self.readonly:
1765-            return 0
1766-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1767-
1768-    def get_bucket_shares(self, storage_index):
1769-        """Return a list of (shnum, pathname) tuples for files that hold
1770-        shares for this storage_index. In each tuple, 'shnum' will always be
1771-        the integer form of the last component of 'pathname'."""
1772-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1773-        try:
1774-            for f in os.listdir(storagedir):
1775-                if NUM_RE.match(f):
1776-                    filename = os.path.join(storagedir, f)
1777-                    yield (int(f), filename)
1778-        except OSError:
1779-            # Commonly caused by there being no buckets at all.
1780-            pass
1781-
1782 # storage/
1783 # storage/shares/incoming
1784 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1785hunk ./src/allmydata/storage/server.py 32
1786 # $SHARENUM matches this regex:
1787 NUM_RE=re.compile("^[0-9]+$")
1788 
1789-
1790-
1791 class StorageServer(service.MultiService, Referenceable):
1792     implements(RIStorageServer, IStatsProducer)
1793     name = 'storage'
1794hunk ./src/allmydata/storage/server.py 35
1795-    LeaseCheckerClass = LeaseCheckingCrawler
1796 
1797     def __init__(self, nodeid, backend, reserved_space=0,
1798                  readonly_storage=False,
1799hunk ./src/allmydata/storage/server.py 38
1800-                 stats_provider=None,
1801-                 expiration_enabled=False,
1802-                 expiration_mode="age",
1803-                 expiration_override_lease_duration=None,
1804-                 expiration_cutoff_date=None,
1805-                 expiration_sharetypes=("mutable", "immutable")):
1806+                 stats_provider=None ):
1807         service.MultiService.__init__(self)
1808         assert isinstance(nodeid, str)
1809         assert len(nodeid) == 20
1810hunk ./src/allmydata/storage/server.py 217
1811         # they asked about: this will save them a lot of work. Add or update
1812         # leases for all of them: if they want us to hold shares for this
1813         # file, they'll want us to hold leases for this file.
1814-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1815-            alreadygot.add(shnum)
1816-            sf = ShareFile(fn)
1817-            sf.add_or_renew_lease(lease_info)
1818-
1819-        for shnum in sharenums:
1820-            share = self.backend.get_share(storage_index, shnum)
1821+        for share in self.backend.get_shares(storage_index):
1822+            alreadygot.add(share.shnum)
1823+            share.add_or_renew_lease(lease_info)
1824 
1825hunk ./src/allmydata/storage/server.py 221
1826-            if not share:
1827-                if (not limited) or (remaining_space >= max_space_per_bucket):
1828-                    # ok! we need to create the new share file.
1829-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1830-                                      max_space_per_bucket, lease_info, canary)
1831-                    bucketwriters[shnum] = bw
1832-                    self._active_writers[bw] = 1
1833-                    if limited:
1834-                        remaining_space -= max_space_per_bucket
1835-                else:
1836-                    # bummer! not enough space to accept this bucket
1837-                    pass
1838+        for shnum in (sharenums - alreadygot):
1839+            if (not limited) or (remaining_space >= max_space_per_bucket):
1840+                #XXX or should the following line occur in the storage server constructor? ok! we need to create the new share file.
1841+                self.backend.set_storage_server(self)
1842+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1843+                                                     max_space_per_bucket, lease_info, canary)
1844+                bucketwriters[shnum] = bw
1845+                self._active_writers[bw] = 1
1846+                if limited:
1847+                    remaining_space -= max_space_per_bucket
1848 
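
A toy illustration of the space accounting in the loop above, with made-up numbers (not part of the patch):

    remaining_space = 100        # hypothetical bytes available
    max_space_per_bucket = 40
    limited = True
    accepted = []
    for shnum in [0, 1, 2]:      # shares requested but not yet present
        if (not limited) or (remaining_space >= max_space_per_bucket):
            accepted.append(shnum)
            if limited:
                remaining_space -= max_space_per_bucket
    # accepted == [0, 1]; share 2 is refused because only 20 bytes remain.
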
1849hunk ./src/allmydata/storage/server.py 232
1850-            elif share.is_complete():
1851-                # great! we already have it. easy.
1852-                pass
1853-            elif not share.is_complete():
1854-                # Note that we don't create BucketWriters for shnums that
1855-                # have a partial share (in incoming/), so if a second upload
1856-                # occurs while the first is still in progress, the second
1857-                # uploader will use different storage servers.
1858-                pass
1859+        #XXX We should document later how already-complete shares and partial (incoming/) shares are handled here.
1860 
1861         self.add_latency("allocate", time.time() - start)
1862         return alreadygot, bucketwriters
1863hunk ./src/allmydata/storage/server.py 238
1864 
1865     def _iter_share_files(self, storage_index):
1866-        for shnum, filename in self._get_bucket_shares(storage_index):
1867+        for shnum, filename in self._get_shares(storage_index):
1868             f = open(filename, 'rb')
1869             header = f.read(32)
1870             f.close()
1871hunk ./src/allmydata/storage/server.py 318
1872         si_s = si_b2a(storage_index)
1873         log.msg("storage: get_buckets %s" % si_s)
1874         bucketreaders = {} # k: sharenum, v: BucketReader
1875-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1876+        for shnum, filename in self.backend.get_shares(storage_index):
1877             bucketreaders[shnum] = BucketReader(self, filename,
1878                                                 storage_index, shnum)
1879         self.add_latency("get", time.time() - start)
1880hunk ./src/allmydata/storage/server.py 334
1881         # since all shares get the same lease data, we just grab the leases
1882         # from the first share
1883         try:
1884-            shnum, filename = self._get_bucket_shares(storage_index).next()
1885+            shnum, filename = self._get_shares(storage_index).next()
1886             sf = ShareFile(filename)
1887             return sf.get_leases()
1888         except StopIteration:
1889hunk ./src/allmydata/storage/shares.py 1
1890-#! /usr/bin/python
1891-
1892-from allmydata.storage.mutable import MutableShareFile
1893-from allmydata.storage.immutable import ShareFile
1894-
1895-def get_share_file(filename):
1896-    f = open(filename, "rb")
1897-    prefix = f.read(32)
1898-    f.close()
1899-    if prefix == MutableShareFile.MAGIC:
1900-        return MutableShareFile(filename)
1901-    # otherwise assume it's immutable
1902-    return ShareFile(filename)
1903-
1904rmfile ./src/allmydata/storage/shares.py
1905hunk ./src/allmydata/test/common_util.py 20
1906 
1907 def flip_one_bit(s, offset=0, size=None):
1908     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1909-    than offset+size. """
1910+    than offset+size. Return the new string. """
1911     if size is None:
1912         size=len(s)-offset
1913     i = randrange(offset, offset+size)
1914hunk ./src/allmydata/test/test_backends.py 7
1915 
1916 from allmydata.test.common_util import ReallyEqualMixin
1917 
1918-import mock
1919+import mock, os
1920 
1921 # This is the code that we're going to be testing.
1922hunk ./src/allmydata/test/test_backends.py 10
1923-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1924+from allmydata.storage.server import StorageServer
1925+
1926+from allmydata.storage.backends.das.core import DASCore
1927+from allmydata.storage.backends.null.core import NullCore
1928+
1929 
1930 # The following share file contents was generated with
1931 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1932hunk ./src/allmydata/test/test_backends.py 22
1933 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1934 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1935 
1936-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1937+tempdir = 'teststoredir'
1938+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1939+sharefname = os.path.join(sharedirname, '0')
1940 
1941 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1942     @mock.patch('time.time')
1943hunk ./src/allmydata/test/test_backends.py 58
1944         filesystem in only the prescribed ways. """
1945 
1946         def call_open(fname, mode):
1947-            if fname == 'testdir/bucket_counter.state':
1948-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1949-            elif fname == 'testdir/lease_checker.state':
1950-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1951-            elif fname == 'testdir/lease_checker.history':
1952+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1953+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1954+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1955+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1956+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1957                 return StringIO()
1958             else:
1959                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1960hunk ./src/allmydata/test/test_backends.py 124
1961     @mock.patch('__builtin__.open')
1962     def setUp(self, mockopen):
1963         def call_open(fname, mode):
1964-            if fname == 'testdir/bucket_counter.state':
1965-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1966-            elif fname == 'testdir/lease_checker.state':
1967-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1968-            elif fname == 'testdir/lease_checker.history':
1969+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1970+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1971+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1972+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1973+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1974                 return StringIO()
1975         mockopen.side_effect = call_open
1976hunk ./src/allmydata/test/test_backends.py 131
1977-
1978-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1979+        expiration_policy = {'enabled' : False,
1980+                             'mode' : 'age',
1981+                             'override_lease_duration' : None,
1982+                             'cutoff_date' : None,
1983+                             'sharetypes' : None}
1984+        testbackend = DASCore(tempdir, expiration_policy)
1985+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1986 
1987     @mock.patch('time.time')
1988     @mock.patch('os.mkdir')
1989hunk ./src/allmydata/test/test_backends.py 148
1990         """ Write a new share. """
1991 
1992         def call_listdir(dirname):
1993-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1994-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1995+            self.failUnlessReallyEqual(dirname, sharedirname)
1996+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1997 
1998         mocklistdir.side_effect = call_listdir
1999 
2000hunk ./src/allmydata/test/test_backends.py 178
2001 
2002         sharefile = MockFile()
2003         def call_open(fname, mode):
2004-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
2005+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
2006             return sharefile
2007 
2008         mockopen.side_effect = call_open
2009hunk ./src/allmydata/test/test_backends.py 200
2010         StorageServer object. """
2011 
2012         def call_listdir(dirname):
2013-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
2014+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2015             return ['0']
2016 
2017         mocklistdir.side_effect = call_listdir
2018}
2019[checkpoint patch
2020wilcoxjg@gmail.com**20110626165715
2021 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
2022] {
2023hunk ./src/allmydata/storage/backends/das/core.py 21
2024 from allmydata.storage.lease import LeaseInfo
2025 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2026      create_mutable_sharefile
2027-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2028+from allmydata.storage.immutable import BucketWriter, BucketReader
2029 from allmydata.storage.crawler import FSBucketCountingCrawler
2030 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2031 
2032hunk ./src/allmydata/storage/backends/das/core.py 27
2033 from zope.interface import implements
2034 
2035+# $SHARENUM matches this regex:
2036+NUM_RE=re.compile("^[0-9]+$")
2037+
2038 class DASCore(Backend):
2039     implements(IStorageBackend)
2040     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2041hunk ./src/allmydata/storage/backends/das/core.py 80
2042         return fileutil.get_available_space(self.storedir, self.reserved_space)
2043 
2044     def get_shares(self, storage_index):
2045-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2046+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
2047         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2048         try:
2049             for f in os.listdir(finalstoragedir):
2050hunk ./src/allmydata/storage/backends/das/core.py 86
2051                 if NUM_RE.match(f):
2052                     filename = os.path.join(finalstoragedir, f)
2053-                    yield FSBShare(filename, int(f))
2054+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2055         except OSError:
2056             # Commonly caused by there being no buckets at all.
2057             pass
2058hunk ./src/allmydata/storage/backends/das/core.py 95
2059         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2060         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2061         return bw
2062+
2063+    def set_storage_server(self, ss):
2064+        self.ss = ss
2065         
2066 
2067 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2068hunk ./src/allmydata/storage/server.py 29
2069 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2070 # base-32 chars).
2071 
2072-# $SHARENUM matches this regex:
2073-NUM_RE=re.compile("^[0-9]+$")
2074 
2075 class StorageServer(service.MultiService, Referenceable):
2076     implements(RIStorageServer, IStatsProducer)
2077}
2078[checkpoint4
2079wilcoxjg@gmail.com**20110628202202
2080 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2081] {
2082hunk ./src/allmydata/storage/backends/das/core.py 96
2083         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2084         return bw
2085 
2086+    def make_bucket_reader(self, share):
2087+        return BucketReader(self.ss, share)
2088+
2089     def set_storage_server(self, ss):
2090         self.ss = ss
2091         
2092hunk ./src/allmydata/storage/backends/das/core.py 138
2093         must not be None. """
2094         precondition((max_size is not None) or (not create), max_size, create)
2095         self.shnum = shnum
2096+        self.storage_index = storageindex
2097         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2098         self._max_size = max_size
2099         if create:
2100hunk ./src/allmydata/storage/backends/das/core.py 173
2101             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2102         self._data_offset = 0xc
2103 
2104+    def get_shnum(self):
2105+        return self.shnum
2106+
2107     def unlink(self):
2108         os.unlink(self.fname)
2109 
2110hunk ./src/allmydata/storage/backends/null/core.py 2
2111 from allmydata.storage.backends.base import Backend
2112+from allmydata.storage.immutable import BucketWriter, BucketReader
2113 
2114 class NullCore(Backend):
2115     def __init__(self):
2116hunk ./src/allmydata/storage/backends/null/core.py 17
2117     def get_share(self, storage_index, sharenum):
2118         return None
2119 
2120-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2121-        return NullBucketWriter()
2122+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2123+       
2124+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2125+
2126+    def set_storage_server(self, ss):
2127+        self.ss = ss
2128+
2129+class ImmutableShare:
2130+    sharetype = "immutable"
2131+
2132+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2133+        """ If max_size is not None then I won't allow more than
2134+        max_size to be written to me. If create=True then max_size
2135+        must not be None. """
2136+        precondition((max_size is not None) or (not create), max_size, create)
2137+        self.shnum = shnum
2138+        self.storage_index = storageindex
2139+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2140+        self._max_size = max_size
2141+        if create:
2142+            # touch the file, so later callers will see that we're working on
2143+            # it. Also construct the metadata.
2144+            assert not os.path.exists(self.fname)
2145+            fileutil.make_dirs(os.path.dirname(self.fname))
2146+            f = open(self.fname, 'wb')
2147+            # The second field -- the four-byte share data length -- is no
2148+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2149+            # there in case someone downgrades a storage server from >=
2150+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2151+            # server to another, etc. We do saturation -- a share data length
2152+            # larger than 2**32-1 (what can fit into the field) is marked as
2153+            # the largest length that can fit into the field. That way, even
2154+            # if this does happen, the old < v1.3.0 server will still allow
2155+            # clients to read the first part of the share.
2156+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2157+            f.close()
2158+            self._lease_offset = max_size + 0x0c
2159+            self._num_leases = 0
2160+        else:
2161+            f = open(self.fname, 'rb')
2162+            filesize = os.path.getsize(self.fname)
2163+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2164+            f.close()
2165+            if version != 1:
2166+                msg = "sharefile %s had version %d but we wanted 1" % \
2167+                      (self.fname, version)
2168+                raise UnknownImmutableContainerVersionError(msg)
2169+            self._num_leases = num_leases
2170+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2171+        self._data_offset = 0xc
2172+
2173+    def get_shnum(self):
2174+        return self.shnum
2175+
2176+    def unlink(self):
2177+        os.unlink(self.fname)
2178+
2179+    def read_share_data(self, offset, length):
2180+        precondition(offset >= 0)
2181+        # Reads beyond the end of the data are truncated. Reads that start
2182+        # beyond the end of the data return an empty string.
2183+        seekpos = self._data_offset+offset
2184+        fsize = os.path.getsize(self.fname)
2185+        actuallength = max(0, min(length, fsize-seekpos))
2186+        if actuallength == 0:
2187+            return ""
2188+        f = open(self.fname, 'rb')
2189+        f.seek(seekpos)
2190+        return f.read(actuallength)
2191+
2192+    def write_share_data(self, offset, data):
2193+        length = len(data)
2194+        precondition(offset >= 0, offset)
2195+        if self._max_size is not None and offset+length > self._max_size:
2196+            raise DataTooLargeError(self._max_size, offset, length)
2197+        f = open(self.fname, 'rb+')
2198+        real_offset = self._data_offset+offset
2199+        f.seek(real_offset)
2200+        assert f.tell() == real_offset
2201+        f.write(data)
2202+        f.close()
2203+
2204+    def _write_lease_record(self, f, lease_number, lease_info):
2205+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2206+        f.seek(offset)
2207+        assert f.tell() == offset
2208+        f.write(lease_info.to_immutable_data())
2209+
2210+    def _read_num_leases(self, f):
2211+        f.seek(0x08)
2212+        (num_leases,) = struct.unpack(">L", f.read(4))
2213+        return num_leases
2214+
2215+    def _write_num_leases(self, f, num_leases):
2216+        f.seek(0x08)
2217+        f.write(struct.pack(">L", num_leases))
2218+
2219+    def _truncate_leases(self, f, num_leases):
2220+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2221+
2222+    def get_leases(self):
2223+        """Yields a LeaseInfo instance for all leases."""
2224+        f = open(self.fname, 'rb')
2225+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2226+        f.seek(self._lease_offset)
2227+        for i in range(num_leases):
2228+            data = f.read(self.LEASE_SIZE)
2229+            if data:
2230+                yield LeaseInfo().from_immutable_data(data)
2231+
2232+    def add_lease(self, lease_info):
2233+        f = open(self.fname, 'rb+')
2234+        num_leases = self._read_num_leases(f)
2235+        self._write_lease_record(f, num_leases, lease_info)
2236+        self._write_num_leases(f, num_leases+1)
2237+        f.close()
2238+
2239+    def renew_lease(self, renew_secret, new_expire_time):
2240+        for i,lease in enumerate(self.get_leases()):
2241+            if constant_time_compare(lease.renew_secret, renew_secret):
2242+                # yup. See if we need to update the owner time.
2243+                if new_expire_time > lease.expiration_time:
2244+                    # yes
2245+                    lease.expiration_time = new_expire_time
2246+                    f = open(self.fname, 'rb+')
2247+                    self._write_lease_record(f, i, lease)
2248+                    f.close()
2249+                return
2250+        raise IndexError("unable to renew non-existent lease")
2251+
2252+    def add_or_renew_lease(self, lease_info):
2253+        try:
2254+            self.renew_lease(lease_info.renew_secret,
2255+                             lease_info.expiration_time)
2256+        except IndexError:
2257+            self.add_lease(lease_info)
2258+
2259+
2260+    def cancel_lease(self, cancel_secret):
2261+        """Remove a lease with the given cancel_secret. If the last lease is
2262+        cancelled, the file will be removed. Return the number of bytes that
2263+        were freed (by truncating the list of leases, and possibly by
2264+        deleting the file). Raise IndexError if there was no lease with the
2265+        given cancel_secret.
2266+        """
2267+
2268+        leases = list(self.get_leases())
2269+        num_leases_removed = 0
2270+        for i,lease in enumerate(leases):
2271+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2272+                leases[i] = None
2273+                num_leases_removed += 1
2274+        if not num_leases_removed:
2275+            raise IndexError("unable to find matching lease to cancel")
2276+        if num_leases_removed:
2277+            # pack and write out the remaining leases. We write these out in
2278+            # the same order as they were added, so that if we crash while
2279+            # doing this, we won't lose any non-cancelled leases.
2280+            leases = [l for l in leases if l] # remove the cancelled leases
2281+            f = open(self.fname, 'rb+')
2282+            for i,lease in enumerate(leases):
2283+                self._write_lease_record(f, i, lease)
2284+            self._write_num_leases(f, len(leases))
2285+            self._truncate_leases(f, len(leases))
2286+            f.close()
2287+        space_freed = self.LEASE_SIZE * num_leases_removed
2288+        if not len(leases):
2289+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2290+            self.unlink()
2291+        return space_freed
2292hunk ./src/allmydata/storage/immutable.py 114
2293 class BucketReader(Referenceable):
2294     implements(RIBucketReader)
2295 
2296-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2297+    def __init__(self, ss, share):
2298         self.ss = ss
2299hunk ./src/allmydata/storage/immutable.py 116
2300-        self._share_file = ShareFile(sharefname)
2301-        self.storage_index = storage_index
2302-        self.shnum = shnum
2303+        self._share_file = share
2304+        self.storage_index = share.storage_index
2305+        self.shnum = share.shnum
2306 
2307     def __repr__(self):
2308         return "<%s %s %s>" % (self.__class__.__name__,
2309hunk ./src/allmydata/storage/server.py 316
2310         si_s = si_b2a(storage_index)
2311         log.msg("storage: get_buckets %s" % si_s)
2312         bucketreaders = {} # k: sharenum, v: BucketReader
2313-        for shnum, filename in self.backend.get_shares(storage_index):
2314-            bucketreaders[shnum] = BucketReader(self, filename,
2315-                                                storage_index, shnum)
2316+        self.backend.set_storage_server(self)
2317+        for share in self.backend.get_shares(storage_index):
2318+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2319         self.add_latency("get", time.time() - start)
2320         return bucketreaders
2321 
2322hunk ./src/allmydata/test/test_backends.py 25
2323 tempdir = 'teststoredir'
2324 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2325 sharefname = os.path.join(sharedirname, '0')
2326+expiration_policy = {'enabled' : False,
2327+                     'mode' : 'age',
2328+                     'override_lease_duration' : None,
2329+                     'cutoff_date' : None,
2330+                     'sharetypes' : None}
2331 
2332 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2333     @mock.patch('time.time')
2334hunk ./src/allmydata/test/test_backends.py 43
2335         tries to read or write to the file system. """
2336 
2337         # Now begin the test.
2338-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2339+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2340 
2341         self.failIf(mockisdir.called)
2342         self.failIf(mocklistdir.called)
2343hunk ./src/allmydata/test/test_backends.py 74
2344         mockopen.side_effect = call_open
2345 
2346         # Now begin the test.
2347-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2348+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2349 
2350         self.failIf(mockisdir.called)
2351         self.failIf(mocklistdir.called)
2352hunk ./src/allmydata/test/test_backends.py 86
2353 
2354 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2355     def setUp(self):
2356-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2357+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2358 
2359     @mock.patch('os.mkdir')
2360     @mock.patch('__builtin__.open')
2361hunk ./src/allmydata/test/test_backends.py 136
2362             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2363                 return StringIO()
2364         mockopen.side_effect = call_open
2365-        expiration_policy = {'enabled' : False,
2366-                             'mode' : 'age',
2367-                             'override_lease_duration' : None,
2368-                             'cutoff_date' : None,
2369-                             'sharetypes' : None}
2370         testbackend = DASCore(tempdir, expiration_policy)
2371         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2372 
2373}
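
Taken together, the checkpoints above converge on a small backend surface that StorageServer calls into. A skeletal summary (the class name is illustrative; the method names are collected from the DASCore and NullCore code in this patch):

    class BackendSketch:
        """Not part of the patch: the methods a backend is expected to provide."""
        def set_storage_server(self, ss):
            self.ss = ss
        def get_available_space(self):
            raise NotImplementedError
        def get_shares(self, storage_index):
            raise NotImplementedError
        def get_share(self, storage_index, sharenum):
            raise NotImplementedError
        def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
            raise NotImplementedError
        def make_bucket_reader(self, share):
            raise NotImplementedError
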
2374[checkpoint5
2375wilcoxjg@gmail.com**20110705034626
2376 Ignore-this: 255780bd58299b0aa33c027e9d008262
2377] {
2378addfile ./src/allmydata/storage/backends/base.py
2379hunk ./src/allmydata/storage/backends/base.py 1
2380+from twisted.application import service
2381+
2382+class Backend(service.MultiService):
2383+    def __init__(self):
2384+        service.MultiService.__init__(self)
2385hunk ./src/allmydata/storage/backends/null/core.py 19
2386 
2387     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2388         
2389+        immutableshare = ImmutableShare()
2390         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2391 
2392     def set_storage_server(self, ss):
2393hunk ./src/allmydata/storage/backends/null/core.py 28
2394 class ImmutableShare:
2395     sharetype = "immutable"
2396 
2397-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2398+    def __init__(self):
2399         """ If max_size is not None then I won't allow more than
2400         max_size to be written to me. If create=True then max_size
2401         must not be None. """
2402hunk ./src/allmydata/storage/backends/null/core.py 32
2403-        precondition((max_size is not None) or (not create), max_size, create)
2404-        self.shnum = shnum
2405-        self.storage_index = storageindex
2406-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2407-        self._max_size = max_size
2408-        if create:
2409-            # touch the file, so later callers will see that we're working on
2410-            # it. Also construct the metadata.
2411-            assert not os.path.exists(self.fname)
2412-            fileutil.make_dirs(os.path.dirname(self.fname))
2413-            f = open(self.fname, 'wb')
2414-            # The second field -- the four-byte share data length -- is no
2415-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2416-            # there in case someone downgrades a storage server from >=
2417-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2418-            # server to another, etc. We do saturation -- a share data length
2419-            # larger than 2**32-1 (what can fit into the field) is marked as
2420-            # the largest length that can fit into the field. That way, even
2421-            # if this does happen, the old < v1.3.0 server will still allow
2422-            # clients to read the first part of the share.
2423-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2424-            f.close()
2425-            self._lease_offset = max_size + 0x0c
2426-            self._num_leases = 0
2427-        else:
2428-            f = open(self.fname, 'rb')
2429-            filesize = os.path.getsize(self.fname)
2430-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2431-            f.close()
2432-            if version != 1:
2433-                msg = "sharefile %s had version %d but we wanted 1" % \
2434-                      (self.fname, version)
2435-                raise UnknownImmutableContainerVersionError(msg)
2436-            self._num_leases = num_leases
2437-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2438-        self._data_offset = 0xc
2439+        pass
2440 
2441     def get_shnum(self):
2442         return self.shnum
2443hunk ./src/allmydata/storage/backends/null/core.py 54
2444         return f.read(actuallength)
2445 
2446     def write_share_data(self, offset, data):
2447-        length = len(data)
2448-        precondition(offset >= 0, offset)
2449-        if self._max_size is not None and offset+length > self._max_size:
2450-            raise DataTooLargeError(self._max_size, offset, length)
2451-        f = open(self.fname, 'rb+')
2452-        real_offset = self._data_offset+offset
2453-        f.seek(real_offset)
2454-        assert f.tell() == real_offset
2455-        f.write(data)
2456-        f.close()
2457+        pass
2458 
2459     def _write_lease_record(self, f, lease_number, lease_info):
2460         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2461hunk ./src/allmydata/storage/backends/null/core.py 84
2462             if data:
2463                 yield LeaseInfo().from_immutable_data(data)
2464 
2465-    def add_lease(self, lease_info):
2466-        f = open(self.fname, 'rb+')
2467-        num_leases = self._read_num_leases(f)
2468-        self._write_lease_record(f, num_leases, lease_info)
2469-        self._write_num_leases(f, num_leases+1)
2470-        f.close()
2471+    def add_lease(self, lease):
2472+        pass
2473 
2474     def renew_lease(self, renew_secret, new_expire_time):
2475         for i,lease in enumerate(self.get_leases()):
2476hunk ./src/allmydata/test/test_backends.py 32
2477                      'sharetypes' : None}
2478 
2479 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2480-    @mock.patch('time.time')
2481-    @mock.patch('os.mkdir')
2482-    @mock.patch('__builtin__.open')
2483-    @mock.patch('os.listdir')
2484-    @mock.patch('os.path.isdir')
2485-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2486-        """ This tests whether a server instance can be constructed
2487-        with a null backend. The server instance fails the test if it
2488-        tries to read or write to the file system. """
2489-
2490-        # Now begin the test.
2491-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2492-
2493-        self.failIf(mockisdir.called)
2494-        self.failIf(mocklistdir.called)
2495-        self.failIf(mockopen.called)
2496-        self.failIf(mockmkdir.called)
2497-
2498-        # You passed!
2499-
2500     @mock.patch('time.time')
2501     @mock.patch('os.mkdir')
2502     @mock.patch('__builtin__.open')
2503hunk ./src/allmydata/test/test_backends.py 53
2504                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2505         mockopen.side_effect = call_open
2506 
2507-        # Now begin the test.
2508-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2509-
2510-        self.failIf(mockisdir.called)
2511-        self.failIf(mocklistdir.called)
2512-        self.failIf(mockopen.called)
2513-        self.failIf(mockmkdir.called)
2514-        self.failIf(mocktime.called)
2515-
2516-        # You passed!
2517-
2518-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2519-    def setUp(self):
2520-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2521-
2522-    @mock.patch('os.mkdir')
2523-    @mock.patch('__builtin__.open')
2524-    @mock.patch('os.listdir')
2525-    @mock.patch('os.path.isdir')
2526-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2527-        """ Write a new share. """
2528-
2529-        # Now begin the test.
2530-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2531-        bs[0].remote_write(0, 'a')
2532-        self.failIf(mockisdir.called)
2533-        self.failIf(mocklistdir.called)
2534-        self.failIf(mockopen.called)
2535-        self.failIf(mockmkdir.called)
2536+        def call_isdir(fname):
2537+            if fname == os.path.join(tempdir,'shares'):
2538+                return True
2539+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2540+                return True
2541+            else:
2542+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2543+        mockisdir.side_effect = call_isdir
2544 
2545hunk ./src/allmydata/test/test_backends.py 62
2546-    @mock.patch('os.path.exists')
2547-    @mock.patch('os.path.getsize')
2548-    @mock.patch('__builtin__.open')
2549-    @mock.patch('os.listdir')
2550-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2551-        """ This tests whether the code correctly finds and reads
2552-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2553-        servers. There is a similar test in test_download, but that one
2554-        is from the perspective of the client and exercises a deeper
2555-        stack of code. This one is for exercising just the
2556-        StorageServer object. """
2557+        def call_mkdir(fname, mode):
2558+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2559+            self.failUnlessEqual(0777, mode)
2560+            if fname == tempdir:
2561+                return None
2562+            elif fname == os.path.join(tempdir,'shares'):
2563+                return None
2564+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2565+                return None
2566+            else:
2567+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2568+        mockmkdir.side_effect = call_mkdir
2569 
2570         # Now begin the test.
2571hunk ./src/allmydata/test/test_backends.py 76
2572-        bs = self.s.remote_get_buckets('teststorage_index')
2573+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2574 
2575hunk ./src/allmydata/test/test_backends.py 78
2576-        self.failUnlessEqual(len(bs), 0)
2577-        self.failIf(mocklistdir.called)
2578-        self.failIf(mockopen.called)
2579-        self.failIf(mockgetsize.called)
2580-        self.failIf(mockexists.called)
2581+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2582 
2583 
2584 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2585hunk ./src/allmydata/test/test_backends.py 193
2586         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2587 
2588 
2589+
2590+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2591+    @mock.patch('time.time')
2592+    @mock.patch('os.mkdir')
2593+    @mock.patch('__builtin__.open')
2594+    @mock.patch('os.listdir')
2595+    @mock.patch('os.path.isdir')
2596+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2597+        """ This tests whether a file system backend instance can be
2598+        constructed. To pass the test, it has to use the
2599+        filesystem in only the prescribed ways. """
2600+
2601+        def call_open(fname, mode):
2602+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2603+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2604+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2605+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2606+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2607+                return StringIO()
2608+            else:
2609+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2610+        mockopen.side_effect = call_open
2611+
2612+        def call_isdir(fname):
2613+            if fname == os.path.join(tempdir,'shares'):
2614+                return True
2615+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2616+                return True
2617+            else:
2618+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2619+        mockisdir.side_effect = call_isdir
2620+
2621+        def call_mkdir(fname, mode):
2622+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2623+            self.failUnlessEqual(0777, mode)
2624+            if fname == tempdir:
2625+                return None
2626+            elif fname == os.path.join(tempdir,'shares'):
2627+                return None
2628+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2629+                return None
2630+            else:
2631+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2632+        mockmkdir.side_effect = call_mkdir
2633+
2634+        # Now begin the test.
2635+        DASCore('teststoredir', expiration_policy)
2636+
2637+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2638}
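
All of the tests in this record follow the same mock.patch idiom: stack decorators over the filesystem entry points, fail loudly from a side_effect when an unexpected call arrives, and afterwards assert that the untouched mocks were never called. Note that the decorators apply bottom-up, which is why the test-method argument order is the reverse of the decorator order. A minimal sketch of the idiom, using a hypothetical test class that is not part of this patch:

    import mock, unittest

    class ExampleNoDiskTest(unittest.TestCase):
        # Decorators apply bottom-up, so the innermost patch (os.path.isdir)
        # arrives as the first mock argument after self.
        @mock.patch('__builtin__.open')
        @mock.patch('os.path.isdir')
        def test_touches_no_disk(self, mockisdir, mockopen):
            def call_open(fname, mode):
                self.fail("tried to open '%s' in mode '%s'" % (fname, mode))
            mockopen.side_effect = call_open   # any open() call fails the test

            # ... construct the object under test here ...

            self.failIf(mockisdir.called)      # prove isdir() was never consulted
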
2639[checkpoint 6
2640wilcoxjg@gmail.com**20110706190824
2641 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2642] {
2643hunk ./src/allmydata/interfaces.py 100
2644                          renew_secret=LeaseRenewSecret,
2645                          cancel_secret=LeaseCancelSecret,
2646                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2647-                         allocated_size=Offset, canary=Referenceable):
2648+                         allocated_size=Offset,
2649+                         canary=Referenceable):
2650         """
2651hunk ./src/allmydata/interfaces.py 103
2652-        @param storage_index: the index of the bucket to be created or
2653+        @param storage_index: the index of the shares to be created or
2654                               increfed.
2655hunk ./src/allmydata/interfaces.py 105
2656-        @param sharenums: these are the share numbers (probably between 0 and
2657-                          99) that the sender is proposing to store on this
2658-                          server.
2659-        @param renew_secret: This is the secret used to protect bucket refresh
2660+        @param renew_secret: This is the secret used to protect shares refresh
2661                              This secret is generated by the client and
2662                              stored for later comparison by the server. Each
2663                              server is given a different secret.
2664hunk ./src/allmydata/interfaces.py 109
2665-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2666-        @param canary: If the canary is lost before close(), the bucket is
2667+        @param cancel_secret: Like renew_secret, but protects shares decref.
2668+        @param sharenums: these are the share numbers (probably between 0 and
2669+                          99) that the sender is proposing to store on this
2670+                          server.
2671+        @param allocated_size: XXX The size of the shares the client wishes to store.
2672+        @param canary: If the canary is lost before close(), the shares are
2673                        deleted.
2674hunk ./src/allmydata/interfaces.py 116
2675+
2676         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2677                  already have and allocated is what we hereby agree to accept.
2678                  New leases are added for shares in both lists.
2679hunk ./src/allmydata/interfaces.py 128
2680                   renew_secret=LeaseRenewSecret,
2681                   cancel_secret=LeaseCancelSecret):
2682         """
2683-        Add a new lease on the given bucket. If the renew_secret matches an
2684+        Add a new lease on the given shares. If the renew_secret matches an
2685         existing lease, that lease will be renewed instead. If there is no
2686         bucket for the given storage_index, return silently. (note that in
2687         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2688hunk ./src/allmydata/storage/server.py 17
2689 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2690      create_mutable_sharefile
2691 
2692-from zope.interface import implements
2693-
2694 # storage/
2695 # storage/shares/incoming
2696 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2697hunk ./src/allmydata/test/test_backends.py 6
2698 from StringIO import StringIO
2699 
2700 from allmydata.test.common_util import ReallyEqualMixin
2701+from allmydata.util.assertutil import _assert
2702 
2703 import mock, os
2704 
2705hunk ./src/allmydata/test/test_backends.py 92
2706                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2707             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2708                 return StringIO()
2709+            else:
2710+                _assert(False, "The tester code doesn't recognize this case.") 
2711+
2712         mockopen.side_effect = call_open
2713         testbackend = DASCore(tempdir, expiration_policy)
2714         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2715hunk ./src/allmydata/test/test_backends.py 109
2716 
2717         def call_listdir(dirname):
2718             self.failUnlessReallyEqual(dirname, sharedirname)
2719-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2720+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2721 
2722         mocklistdir.side_effect = call_listdir
2723 
2724hunk ./src/allmydata/test/test_backends.py 113
2725+        def call_isdir(dirname):
2726+            self.failUnlessReallyEqual(dirname, sharedirname)
2727+            return True
2728+
2729+        mockisdir.side_effect = call_isdir
2730+
2731+        def call_mkdir(dirname, permissions):
2732+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2733+                self.Fail
2734+            else:
2735+                return True
2736+
2737+        mockmkdir.side_effect = call_mkdir
2738+
2739         class MockFile:
2740             def __init__(self):
2741                 self.buffer = ''
2742hunk ./src/allmydata/test/test_backends.py 156
2743             return sharefile
2744 
2745         mockopen.side_effect = call_open
2746+
2747         # Now begin the test.
2748         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2749         bs[0].remote_write(0, 'a')
2750hunk ./src/allmydata/test/test_backends.py 161
2751         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2752+       
2753+        # Now test the allocated_size method.
2754+        spaceint = self.s.allocated_size()
2755 
2756     @mock.patch('os.path.exists')
2757     @mock.patch('os.path.getsize')
2758}
2759[checkpoint 7
2760wilcoxjg@gmail.com**20110706200820
2761 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2762] hunk ./src/allmydata/test/test_backends.py 164
2763         
2764         # Now test the allocated_size method.
2765         spaceint = self.s.allocated_size()
2766+        self.failUnlessReallyEqual(spaceint, 1)
2767 
2768     @mock.patch('os.path.exists')
2769     @mock.patch('os.path.getsize')
2770[checkpoint8
2771wilcoxjg@gmail.com**20110706223126
2772 Ignore-this: 97336180883cb798b16f15411179f827
2773   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2774] hunk ./src/allmydata/test/test_backends.py 32
2775                      'cutoff_date' : None,
2776                      'sharetypes' : None}
2777 
2778+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2779+    def setUp(self):
2780+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2781+
2782+    @mock.patch('os.mkdir')
2783+    @mock.patch('__builtin__.open')
2784+    @mock.patch('os.listdir')
2785+    @mock.patch('os.path.isdir')
2786+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2787+        """ Write a new share. """
2788+
2789+        # Now begin the test.
2790+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2791+        bs[0].remote_write(0, 'a')
2792+        self.failIf(mockisdir.called)
2793+        self.failIf(mocklistdir.called)
2794+        self.failIf(mockopen.called)
2795+        self.failIf(mockmkdir.called)
2796+
2797 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2798     @mock.patch('time.time')
2799     @mock.patch('os.mkdir')
2800[checkpoint 9
2801wilcoxjg@gmail.com**20110707042942
2802 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2803] {
2804hunk ./src/allmydata/storage/backends/das/core.py 88
2805                     filename = os.path.join(finalstoragedir, f)
2806                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2807         except OSError:
2808-            # Commonly caused by there being no buckets at all.
2809+            # Commonly caused by there being no shares at all.
2810             pass
2811         
2812     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2813hunk ./src/allmydata/storage/backends/das/core.py 141
2814         self.storage_index = storageindex
2815         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2816         self._max_size = max_size
2817+        self.incomingdir = os.path.join(sharedir, 'incoming')
2818+        si_dir = storage_index_to_dir(storageindex)
2819+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2820+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2821         if create:
2822             # touch the file, so later callers will see that we're working on
2823             # it. Also construct the metadata.
2824hunk ./src/allmydata/storage/backends/das/core.py 177
2825             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2826         self._data_offset = 0xc
2827 
2828+    def close(self):
2829+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2830+        fileutil.rename(self.incominghome, self.finalhome)
2831+        try:
2832+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2833+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2834+            # these directories lying around forever, but the delete might
2835+            # fail if we're working on another share for the same storage
2836+            # index (like ab/abcde/5). The alternative approach would be to
2837+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2838+            # ShareWriter), each of which is responsible for a single
2839+            # directory on disk, and have them use reference counting of
2840+            # their children to know when they should do the rmdir. This
2841+            # approach is simpler, but relies on os.rmdir refusing to delete
2842+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2843+            os.rmdir(os.path.dirname(self.incominghome))
2844+            # we also delete the grandparent (prefix) directory, .../ab ,
2845+            # again to avoid leaving directories lying around. This might
2846+            # fail if there is another bucket open that shares a prefix (like
2847+            # ab/abfff).
2848+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2849+            # we leave the great-grandparent (incoming/) directory in place.
2850+        except EnvironmentError:
2851+            # ignore the "can't rmdir because the directory is not empty"
2852+            # exceptions, those are normal consequences of the
2853+            # above-mentioned conditions.
2854+            pass
2855+        pass
2856+       
2857+    def stat(self):
2858+        return os.stat(self.finalhome)[stat.ST_SIZE]
2859+
2860     def get_shnum(self):
2861         return self.shnum
2862 
2863hunk ./src/allmydata/storage/immutable.py 7
2864 
2865 from zope.interface import implements
2866 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2867-from allmydata.util import base32, fileutil, log
2868+from allmydata.util import base32, log
2869 from allmydata.util.assertutil import precondition
2870 from allmydata.util.hashutil import constant_time_compare
2871 from allmydata.storage.lease import LeaseInfo
2872hunk ./src/allmydata/storage/immutable.py 44
2873     def remote_close(self):
2874         precondition(not self.closed)
2875         start = time.time()
2876-
2877-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2878-        fileutil.rename(self.incominghome, self.finalhome)
2879-        try:
2880-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2881-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2882-            # these directories lying around forever, but the delete might
2883-            # fail if we're working on another share for the same storage
2884-            # index (like ab/abcde/5). The alternative approach would be to
2885-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2886-            # ShareWriter), each of which is responsible for a single
2887-            # directory on disk, and have them use reference counting of
2888-            # their children to know when they should do the rmdir. This
2889-            # approach is simpler, but relies on os.rmdir refusing to delete
2890-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2891-            os.rmdir(os.path.dirname(self.incominghome))
2892-            # we also delete the grandparent (prefix) directory, .../ab ,
2893-            # again to avoid leaving directories lying around. This might
2894-            # fail if there is another bucket open that shares a prefix (like
2895-            # ab/abfff).
2896-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2897-            # we leave the great-grandparent (incoming/) directory in place.
2898-        except EnvironmentError:
2899-            # ignore the "can't rmdir because the directory is not empty"
2900-            # exceptions, those are normal consequences of the
2901-            # above-mentioned conditions.
2902-            pass
2903+        self._sharefile.close()
2904         self._sharefile = None
2905         self.closed = True
2906         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2907hunk ./src/allmydata/storage/immutable.py 49
2908 
2909-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2910+        filelen = self._sharefile.stat()
2911         self.ss.bucket_writer_closed(self, filelen)
2912         self.ss.add_latency("close", time.time() - start)
2913         self.ss.count("close")
2914hunk ./src/allmydata/storage/server.py 45
2915         self._active_writers = weakref.WeakKeyDictionary()
2916         self.backend = backend
2917         self.backend.setServiceParent(self)
2918+        self.backend.set_storage_server(self)
2919         log.msg("StorageServer created", facility="tahoe.storage")
2920 
2921         self.latencies = {"allocate": [], # immutable
2922hunk ./src/allmydata/storage/server.py 220
2923 
2924         for shnum in (sharenums - alreadygot):
2925             if (not limited) or (remaining_space >= max_space_per_bucket):
2926-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2927-                self.backend.set_storage_server(self)
2928                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2929                                                      max_space_per_bucket, lease_info, canary)
2930                 bucketwriters[shnum] = bw
2931hunk ./src/allmydata/test/test_backends.py 117
2932         mockopen.side_effect = call_open
2933         testbackend = DASCore(tempdir, expiration_policy)
2934         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2935-
2936+   
2937+    @mock.patch('allmydata.util.fileutil.get_available_space')
2938     @mock.patch('time.time')
2939     @mock.patch('os.mkdir')
2940     @mock.patch('__builtin__.open')
2941hunk ./src/allmydata/test/test_backends.py 124
2942     @mock.patch('os.listdir')
2943     @mock.patch('os.path.isdir')
2944-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2945+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2946+                             mockget_available_space):
2947         """ Write a new share. """
2948 
2949         def call_listdir(dirname):
2950hunk ./src/allmydata/test/test_backends.py 148
2951 
2952         mockmkdir.side_effect = call_mkdir
2953 
2954+        def call_get_available_space(storedir, reserved_space):
2955+            self.failUnlessReallyEqual(storedir, tempdir)
2956+            return 1
2957+
2958+        mockget_available_space.side_effect = call_get_available_space
2959+
2960         class MockFile:
2961             def __init__(self):
2962                 self.buffer = ''
2963hunk ./src/allmydata/test/test_backends.py 188
2964         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2965         bs[0].remote_write(0, 'a')
2966         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2967-       
2968+
2969+        # What happens when there's not enough space for the client's request?
2970+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2971+
2972         # Now test the allocated_size method.
2973         spaceint = self.s.allocated_size()
2974         self.failUnlessReallyEqual(spaceint, 1)
2975}
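
The close() method added to ImmutableShare above (and now called from BucketWriter.remote_close) boils down to: rename the incoming share into its final home, then opportunistically prune the now-empty incoming directories, relying on os.rmdir to refuse when a sibling share is still in flight. Condensed into a free-standing sketch (finalize_share is a hypothetical helper name, not part of the patch):

    import os
    from allmydata.util import fileutil

    def finalize_share(incominghome, finalhome):
        # Move the completed share, e.g. .../incoming/ab/abcde/4 -> .../ab/abcde/4.
        fileutil.make_dirs(os.path.dirname(finalhome))
        fileutil.rename(incominghome, finalhome)
        try:
            # Prune the bucket dir (.../ab/abcde) and the prefix dir (.../ab).
            # os.rmdir refuses to remove a non-empty directory, which is exactly
            # what we want while another share under the same prefix is incoming.
            os.rmdir(os.path.dirname(incominghome))
            os.rmdir(os.path.dirname(os.path.dirname(incominghome)))
        except EnvironmentError:
            pass  # "directory not empty" is an expected, benign outcome
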
2976[checkpoint10
2977wilcoxjg@gmail.com**20110707172049
2978 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2979] {
2980hunk ./src/allmydata/test/test_backends.py 20
2981 # The following share file contents was generated with
2982 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2983 # with share data == 'a'.
2984-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2985+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2986+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2987+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2988 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2989 
2990hunk ./src/allmydata/test/test_backends.py 25
2991+testnodeid = 'testnodeidxxxxxxxxxx'
2992 tempdir = 'teststoredir'
2993 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2994 sharefname = os.path.join(sharedirname, '0')
2995hunk ./src/allmydata/test/test_backends.py 37
2996 
2997 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2998     def setUp(self):
2999-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
3000+        self.s = StorageServer(testnodeid, backend=NullCore())
3001 
3002     @mock.patch('os.mkdir')
3003     @mock.patch('__builtin__.open')
3004hunk ./src/allmydata/test/test_backends.py 99
3005         mockmkdir.side_effect = call_mkdir
3006 
3007         # Now begin the test.
3008-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
3009+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
3010 
3011         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
3012 
3013hunk ./src/allmydata/test/test_backends.py 119
3014 
3015         mockopen.side_effect = call_open
3016         testbackend = DASCore(tempdir, expiration_policy)
3017-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
3018-   
3019+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3020+       
3021+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3022     @mock.patch('allmydata.util.fileutil.get_available_space')
3023     @mock.patch('time.time')
3024     @mock.patch('os.mkdir')
3025hunk ./src/allmydata/test/test_backends.py 129
3026     @mock.patch('os.listdir')
3027     @mock.patch('os.path.isdir')
3028     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3029-                             mockget_available_space):
3030+                             mockget_available_space, mockget_shares):
3031         """ Write a new share. """
3032 
3033         def call_listdir(dirname):
3034hunk ./src/allmydata/test/test_backends.py 139
3035         mocklistdir.side_effect = call_listdir
3036 
3037         def call_isdir(dirname):
3038+            #XXX Should there be any other tests here?
3039             self.failUnlessReallyEqual(dirname, sharedirname)
3040             return True
3041 
3042hunk ./src/allmydata/test/test_backends.py 159
3043 
3044         mockget_available_space.side_effect = call_get_available_space
3045 
3046+        mocktime.return_value = 0
3047+        class MockShare:
3048+            def __init__(self):
3049+                self.shnum = 1
3050+               
3051+            def add_or_renew_lease(elf, lease_info):
3052+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3053+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3054+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3055+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3056+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3057+               
3058+
3059+        share = MockShare()
3060+        def call_get_shares(storageindex):
3061+            return [share]
3062+
3063+        mockget_shares.side_effect = call_get_shares
3064+
3065         class MockFile:
3066             def __init__(self):
3067                 self.buffer = ''
3068hunk ./src/allmydata/test/test_backends.py 199
3069             def tell(self):
3070                 return self.pos
3071 
3072-        mocktime.return_value = 0
3073 
3074         sharefile = MockFile()
3075         def call_open(fname, mode):
3076}
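
The lease assertions in MockShare above rely on time.time() being pinned to 0 by mocktime, so the expected expiration is a fixed 31-day offset. The arithmetic spelled out (a throwaway check, not part of the patch):

    # With time.time() pinned to 0, a freshly granted lease should expire
    # exactly 31 days later.
    now = 0
    expected_expiration = now + 31*24*60*60
    print expected_expiration   # 2678400 seconds
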
3077[jacp 11
3078wilcoxjg@gmail.com**20110708213919
3079 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3080] {
3081hunk ./src/allmydata/storage/backends/das/core.py 144
3082         self.incomingdir = os.path.join(sharedir, 'incoming')
3083         si_dir = storage_index_to_dir(storageindex)
3084         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3085+        #XXX  self.fname and self.finalhome need to be resolve/merged.
3086         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3087         if create:
3088             # touch the file, so later callers will see that we're working on
3089hunk ./src/allmydata/storage/backends/das/core.py 208
3090         pass
3091         
3092     def stat(self):
3093-        return os.stat(self.finalhome)[stat.ST_SIZE]
3094+        return os.stat(self.finalhome).st_size
3095 
3096     def get_shnum(self):
3097         return self.shnum
3098hunk ./src/allmydata/storage/immutable.py 44
3099     def remote_close(self):
3100         precondition(not self.closed)
3101         start = time.time()
3102+
3103         self._sharefile.close()
3104hunk ./src/allmydata/storage/immutable.py 46
3105+        filelen = self._sharefile.stat()
3106         self._sharefile = None
3107hunk ./src/allmydata/storage/immutable.py 48
3108+
3109         self.closed = True
3110         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3111 
3112hunk ./src/allmydata/storage/immutable.py 52
3113-        filelen = self._sharefile.stat()
3114         self.ss.bucket_writer_closed(self, filelen)
3115         self.ss.add_latency("close", time.time() - start)
3116         self.ss.count("close")
3117hunk ./src/allmydata/storage/server.py 220
3118 
3119         for shnum in (sharenums - alreadygot):
3120             if (not limited) or (remaining_space >= max_space_per_bucket):
3121-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3122-                                                     max_space_per_bucket, lease_info, canary)
3123+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3124                 bucketwriters[shnum] = bw
3125                 self._active_writers[bw] = 1
3126                 if limited:
3127hunk ./src/allmydata/test/test_backends.py 20
3128 # The following share file contents was generated with
3129 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3130 # with share data == 'a'.
3131-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3132-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3133+renew_secret  = 'x'*32
3134+cancel_secret = 'y'*32
3135 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3136 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3137 
3138hunk ./src/allmydata/test/test_backends.py 27
3139 testnodeid = 'testnodeidxxxxxxxxxx'
3140 tempdir = 'teststoredir'
3141-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3142-sharefname = os.path.join(sharedirname, '0')
3143+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3144+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3145+shareincomingname = os.path.join(sharedirincomingname, '0')
3146+sharefname = os.path.join(sharedirfinalname, '0')
3147+
3148 expiration_policy = {'enabled' : False,
3149                      'mode' : 'age',
3150                      'override_lease_duration' : None,
3151hunk ./src/allmydata/test/test_backends.py 123
3152         mockopen.side_effect = call_open
3153         testbackend = DASCore(tempdir, expiration_policy)
3154         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3155-       
3156+
3157+    @mock.patch('allmydata.util.fileutil.rename')
3158+    @mock.patch('allmydata.util.fileutil.make_dirs')
3159+    @mock.patch('os.path.exists')
3160+    @mock.patch('os.stat')
3161     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3162     @mock.patch('allmydata.util.fileutil.get_available_space')
3163     @mock.patch('time.time')
3164hunk ./src/allmydata/test/test_backends.py 136
3165     @mock.patch('os.listdir')
3166     @mock.patch('os.path.isdir')
3167     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3168-                             mockget_available_space, mockget_shares):
3169+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3170+                             mockmake_dirs, mockrename):
3171         """ Write a new share. """
3172 
3173         def call_listdir(dirname):
3174hunk ./src/allmydata/test/test_backends.py 141
3175-            self.failUnlessReallyEqual(dirname, sharedirname)
3176+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3177             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3178 
3179         mocklistdir.side_effect = call_listdir
3180hunk ./src/allmydata/test/test_backends.py 148
3181 
3182         def call_isdir(dirname):
3183             #XXX Should there be any other tests here?
3184-            self.failUnlessReallyEqual(dirname, sharedirname)
3185+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3186             return True
3187 
3188         mockisdir.side_effect = call_isdir
3189hunk ./src/allmydata/test/test_backends.py 154
3190 
3191         def call_mkdir(dirname, permissions):
3192-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3193+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3194                 self.Fail
3195             else:
3196                 return True
3197hunk ./src/allmydata/test/test_backends.py 208
3198                 return self.pos
3199 
3200 
3201-        sharefile = MockFile()
3202+        fobj = MockFile()
3203         def call_open(fname, mode):
3204             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3205hunk ./src/allmydata/test/test_backends.py 211
3206-            return sharefile
3207+            return fobj
3208 
3209         mockopen.side_effect = call_open
3210 
3211hunk ./src/allmydata/test/test_backends.py 215
3212+        def call_make_dirs(dname):
3213+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3214+           
3215+        mockmake_dirs.side_effect = call_make_dirs
3216+
3217+        def call_rename(src, dst):
3218+           self.failUnlessReallyEqual(src, shareincomingname)
3219+           self.failUnlessReallyEqual(dst, sharefname)
3220+           
3221+        mockrename.side_effect = call_rename
3222+
3223+        def call_exists(fname):
3224+            self.failUnlessReallyEqual(fname, sharefname)
3225+
3226+        mockexists.side_effect = call_exists
3227+
3228         # Now begin the test.
3229         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3230         bs[0].remote_write(0, 'a')
3231hunk ./src/allmydata/test/test_backends.py 234
3232-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3233+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3234+        spaceint = self.s.allocated_size()
3235+        self.failUnlessReallyEqual(spaceint, 1)
3236+
3237+        bs[0].remote_close()
3238 
3239         # What happens when there's not enough space for the client's request?
3240hunk ./src/allmydata/test/test_backends.py 241
3241-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3242+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3243 
3244         # Now test the allocated_size method.
3245hunk ./src/allmydata/test/test_backends.py 244
3246-        spaceint = self.s.allocated_size()
3247-        self.failUnlessReallyEqual(spaceint, 1)
3248+        #self.failIf(mockexists.called, mockexists.call_args_list)
3249+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3250+        #self.failIf(mockrename.called, mockrename.call_args_list)
3251+        #self.failIf(mockstat.called, mockstat.call_args_list)
3252 
3253     @mock.patch('os.path.exists')
3254     @mock.patch('os.path.getsize')
3255}
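
Stripped of the mocks, the client-visible sequence this test drives is allocate, write, close. A sketch of that flow against a real backend (the node id, paths and secrets are the test's dummy values, and the expiration_policy dict is copied from the test module):

    import mock
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.das.core import DASCore

    expiration_policy = {'enabled' : False,
                         'mode' : 'age',
                         'override_lease_duration' : None,
                         'cutoff_date' : None,
                         'sharetypes' : None}

    server = StorageServer('testnodeidxxxxxxxxxx',
                           backend=DASCore('teststoredir', expiration_policy))

    # storage index, renew secret, cancel secret, shnums, allocated_size, canary
    alreadygot, buckets = server.remote_allocate_buckets(
        'teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())

    buckets[0].remote_write(0, 'a')  # one byte of share data at offset 0
    buckets[0].remote_close()        # rename from incoming/ into the final share dir
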
3256[checkpoint12 testing correct behavior with regard to incoming and final
3257wilcoxjg@gmail.com**20110710191915
3258 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3259] {
3260hunk ./src/allmydata/storage/backends/das/core.py 74
3261         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3262         self.lease_checker.setServiceParent(self)
3263 
3264+    def get_incoming(self, storageindex):
3265+        return set((1,))
3266+
3267     def get_available_space(self):
3268         if self.readonly:
3269             return 0
3270hunk ./src/allmydata/storage/server.py 77
3271         """Return a dict, indexed by category, that contains a dict of
3272         latency numbers for each category. If there are sufficient samples
3273         for unambiguous interpretation, each dict will contain the
3274-        following keys: mean, 01_0_percentile, 10_0_percentile,
3275+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3276         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3277         99_0_percentile, 99_9_percentile.  If there are insufficient
3278         samples for a given percentile to be interpreted unambiguously
3279hunk ./src/allmydata/storage/server.py 120
3280 
3281     def get_stats(self):
3282         # remember: RIStatsProvider requires that our return dict
3283-        # contains numeric values.
3284+        # contains numeric, or None values.
3285         stats = { 'storage_server.allocated': self.allocated_size(), }
3286         stats['storage_server.reserved_space'] = self.reserved_space
3287         for category,ld in self.get_latencies().items():
3288hunk ./src/allmydata/storage/server.py 185
3289         start = time.time()
3290         self.count("allocate")
3291         alreadygot = set()
3292+        incoming = set()
3293         bucketwriters = {} # k: shnum, v: BucketWriter
3294 
3295         si_s = si_b2a(storage_index)
3296hunk ./src/allmydata/storage/server.py 219
3297             alreadygot.add(share.shnum)
3298             share.add_or_renew_lease(lease_info)
3299 
3300-        for shnum in (sharenums - alreadygot):
3301+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3302+        incoming = self.backend.get_incoming(storageindex)
3303+
3304+        for shnum in ((sharenums - alreadygot) - incoming):
3305             if (not limited) or (remaining_space >= max_space_per_bucket):
3306                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3307                 bucketwriters[shnum] = bw
3308hunk ./src/allmydata/storage/server.py 229
3309                 self._active_writers[bw] = 1
3310                 if limited:
3311                     remaining_space -= max_space_per_bucket
3312-
3313-        #XXX We SHOULD DOCUMENT LATER.
3314+            else:
3315+                # Bummer: not enough space to accept this share.
3316+                pass
3317 
3318         self.add_latency("allocate", time.time() - start)
3319         return alreadygot, bucketwriters
3320hunk ./src/allmydata/storage/server.py 323
3321         self.add_latency("get", time.time() - start)
3322         return bucketreaders
3323 
3324-    def get_leases(self, storage_index):
3325+    def remote_get_incoming(self, storageindex):
3326+        incoming_share_set = self.backend.get_incoming(storageindex)
3327+        return incoming_share_set
3328+
3329+    def get_leases(self, storageindex):
3330         """Provide an iterator that yields all of the leases attached to this
3331         bucket. Each lease is returned as a LeaseInfo instance.
3332 
3333hunk ./src/allmydata/storage/server.py 337
3334         # since all shares get the same lease data, we just grab the leases
3335         # from the first share
3336         try:
3337-            shnum, filename = self._get_shares(storage_index).next()
3338+            shnum, filename = self._get_shares(storageindex).next()
3339             sf = ShareFile(filename)
3340             return sf.get_leases()
3341         except StopIteration:
3342hunk ./src/allmydata/test/test_backends.py 182
3343 
3344         share = MockShare()
3345         def call_get_shares(storageindex):
3346-            return [share]
3347+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3348+            return []#share]
3349 
3350         mockget_shares.side_effect = call_get_shares
3351 
3352hunk ./src/allmydata/test/test_backends.py 222
3353         mockmake_dirs.side_effect = call_make_dirs
3354 
3355         def call_rename(src, dst):
3356-           self.failUnlessReallyEqual(src, shareincomingname)
3357-           self.failUnlessReallyEqual(dst, sharefname)
3358+            self.failUnlessReallyEqual(src, shareincomingname)
3359+            self.failUnlessReallyEqual(dst, sharefname)
3360             
3361         mockrename.side_effect = call_rename
3362 
3363hunk ./src/allmydata/test/test_backends.py 233
3364         mockexists.side_effect = call_exists
3365 
3366         # Now begin the test.
3367+
3368+        # XXX (0) ???  Fail unless something is not properly set-up?
3369         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3370hunk ./src/allmydata/test/test_backends.py 236
3371+
3372+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3373+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3374+
3375+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3376+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3377+        # with the same si, until BucketWriter.remote_close() has been called.
3378+        # self.failIf(bsa)
3379+
3380+        # XXX (3) Inspect final and fail unless there's nothing there.
3381         bs[0].remote_write(0, 'a')
3382hunk ./src/allmydata/test/test_backends.py 247
3383+        # XXX (4a) Inspect final and fail unless share 0 is there.
3384+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3385         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3386         spaceint = self.s.allocated_size()
3387         self.failUnlessReallyEqual(spaceint, 1)
3388hunk ./src/allmydata/test/test_backends.py 253
3389 
3390+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3391         bs[0].remote_close()
3392 
3393         # What happens when there's not enough space for the client's request?
3394hunk ./src/allmydata/test/test_backends.py 260
3395         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3396 
3397         # Now test the allocated_size method.
3398-        #self.failIf(mockexists.called, mockexists.call_args_list)
3399+        # self.failIf(mockexists.called, mockexists.call_args_list)
3400         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3401         #self.failIf(mockrename.called, mockrename.call_args_list)
3402         #self.failIf(mockstat.called, mockstat.call_args_list)
3403}
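
The substantive change in this record is that remote_allocate_buckets now skips shares that are already final or already incoming, using plain set arithmetic on share numbers. A tiny worked example of that expression:

    # The client asks for shares 0-3; the server already holds share 1 on disk
    # and share 2 is still sitting in incoming/ from an unfinished upload.
    sharenums  = set([0, 1, 2, 3])
    alreadygot = set([1])
    incoming   = set([2])

    to_allocate = (sharenums - alreadygot) - incoming
    print to_allocate   # set([0, 3]) -- only these get new BucketWriters
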
3404[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3405wilcoxjg@gmail.com**20110710195139
3406 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3407] {
3408hunk ./src/allmydata/storage/server.py 220
3409             share.add_or_renew_lease(lease_info)
3410 
3411         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3412-        incoming = self.backend.get_incoming(storageindex)
3413+        incoming = self.backend.get_incoming(storage_index)
3414 
3415         for shnum in ((sharenums - alreadygot) - incoming):
3416             if (not limited) or (remaining_space >= max_space_per_bucket):
3417hunk ./src/allmydata/storage/server.py 323
3418         self.add_latency("get", time.time() - start)
3419         return bucketreaders
3420 
3421-    def remote_get_incoming(self, storageindex):
3422-        incoming_share_set = self.backend.get_incoming(storageindex)
3423+    def remote_get_incoming(self, storage_index):
3424+        incoming_share_set = self.backend.get_incoming(storage_index)
3425         return incoming_share_set
3426 
3427hunk ./src/allmydata/storage/server.py 327
3428-    def get_leases(self, storageindex):
3429+    def get_leases(self, storage_index):
3430         """Provide an iterator that yields all of the leases attached to this
3431         bucket. Each lease is returned as a LeaseInfo instance.
3432 
3433hunk ./src/allmydata/storage/server.py 337
3434         # since all shares get the same lease data, we just grab the leases
3435         # from the first share
3436         try:
3437-            shnum, filename = self._get_shares(storageindex).next()
3438+            shnum, filename = self._get_shares(storage_index).next()
3439             sf = ShareFile(filename)
3440             return sf.get_leases()
3441         except StopIteration:
3442replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3443}
3444[adding comments to clarify what I'm about to do.
3445wilcoxjg@gmail.com**20110710220623
3446 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3447] {
3448hunk ./src/allmydata/storage/backends/das/core.py 8
3449 
3450 import os, re, weakref, struct, time
3451 
3452-from foolscap.api import Referenceable
3453+#from foolscap.api import Referenceable
3454 from twisted.application import service
3455 
3456 from zope.interface import implements
3457hunk ./src/allmydata/storage/backends/das/core.py 12
3458-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3459+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3460 from allmydata.util import fileutil, idlib, log, time_format
3461 import allmydata # for __full_version__
3462 
3463hunk ./src/allmydata/storage/server.py 219
3464             alreadygot.add(share.shnum)
3465             share.add_or_renew_lease(lease_info)
3466 
3467-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3468+        # fill incoming with all shares that are incoming; use a set operation
3469+        # since there's no need to operate on individual pieces
3470         incoming = self.backend.get_incoming(storageindex)
3471 
3472         for shnum in ((sharenums - alreadygot) - incoming):
3473hunk ./src/allmydata/test/test_backends.py 245
3474         # with the same si, until BucketWriter.remote_close() has been called.
3475         # self.failIf(bsa)
3476 
3477-        # XXX (3) Inspect final and fail unless there's nothing there.
3478         bs[0].remote_write(0, 'a')
3479hunk ./src/allmydata/test/test_backends.py 246
3480-        # XXX (4a) Inspect final and fail unless share 0 is there.
3481-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3482         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3483         spaceint = self.s.allocated_size()
3484         self.failUnlessReallyEqual(spaceint, 1)
3485hunk ./src/allmydata/test/test_backends.py 250
3486 
3487-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3488+        # XXX (3) Inspect final and fail unless there's nothing there.
3489         bs[0].remote_close()
3490hunk ./src/allmydata/test/test_backends.py 252
3491+        # XXX (4a) Inspect final and fail unless share 0 is there.
3492+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3493 
3494         # What happens when there's not enough space for the client's request?
3495         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3496}
3497[branching back, no longer attempting to mock inside TestServerFSBackend
3498wilcoxjg@gmail.com**20110711190849
3499 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3500] {
3501hunk ./src/allmydata/storage/backends/das/core.py 75
3502         self.lease_checker.setServiceParent(self)
3503 
3504     def get_incoming(self, storageindex):
3505-        return set((1,))
3506-
3507-    def get_available_space(self):
3508-        if self.readonly:
3509-            return 0
3510-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3511+        """Return the set of incoming shnums."""
3512+        return set(os.listdir(self.incomingdir))
3513 
3514     def get_shares(self, storage_index):
3515         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3516hunk ./src/allmydata/storage/backends/das/core.py 90
3517             # Commonly caused by there being no shares at all.
3518             pass
3519         
3520+    def get_available_space(self):
3521+        if self.readonly:
3522+            return 0
3523+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3524+
3525     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3526         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3527         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3528hunk ./src/allmydata/test/test_backends.py 27
3529 
3530 testnodeid = 'testnodeidxxxxxxxxxx'
3531 tempdir = 'teststoredir'
3532-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3533-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3534+basedir = os.path.join(tempdir, 'shares')
3535+baseincdir = os.path.join(basedir, 'incoming')
3536+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3537+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3538 shareincomingname = os.path.join(sharedirincomingname, '0')
3539 sharefname = os.path.join(sharedirfinalname, '0')
3540 
3541hunk ./src/allmydata/test/test_backends.py 142
3542                              mockmake_dirs, mockrename):
3543         """ Write a new share. """
3544 
3545-        def call_listdir(dirname):
3546-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3547-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3548-
3549-        mocklistdir.side_effect = call_listdir
3550-
3551-        def call_isdir(dirname):
3552-            #XXX Should there be any other tests here?
3553-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3554-            return True
3555-
3556-        mockisdir.side_effect = call_isdir
3557-
3558-        def call_mkdir(dirname, permissions):
3559-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3560-                self.Fail
3561-            else:
3562-                return True
3563-
3564-        mockmkdir.side_effect = call_mkdir
3565-
3566-        def call_get_available_space(storedir, reserved_space):
3567-            self.failUnlessReallyEqual(storedir, tempdir)
3568-            return 1
3569-
3570-        mockget_available_space.side_effect = call_get_available_space
3571-
3572-        mocktime.return_value = 0
3573         class MockShare:
3574             def __init__(self):
3575                 self.shnum = 1
3576hunk ./src/allmydata/test/test_backends.py 152
3577                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3578                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3579                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3580-               
3581 
3582         share = MockShare()
3583hunk ./src/allmydata/test/test_backends.py 154
3584-        def call_get_shares(storageindex):
3585-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3586-            return []#share]
3587-
3588-        mockget_shares.side_effect = call_get_shares
3589 
3590         class MockFile:
3591             def __init__(self):
3592hunk ./src/allmydata/test/test_backends.py 176
3593             def tell(self):
3594                 return self.pos
3595 
3596-
3597         fobj = MockFile()
3598hunk ./src/allmydata/test/test_backends.py 177
3599+
3600+        directories = {}
3601+        def call_listdir(dirname):
3602+            if dirname not in directories:
3603+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3604+            else:
3605+                return directories[dirname].get_contents()
3606+
3607+        mocklistdir.side_effect = call_listdir
3608+
3609+        class MockDir:
3610+            def __init__(self, dirname):
3611+                self.name = dirname
3612+                self.contents = []
3613+   
3614+            def get_contents(self):
3615+                return self.contents
3616+
3617+        def call_isdir(dirname):
3618+            #XXX Should there be any other tests here?
3619+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3620+            return True
3621+
3622+        mockisdir.side_effect = call_isdir
3623+
3624+        def call_mkdir(dirname, permissions):
3625+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3626+                self.Fail
3627+            if dirname in directories:
3628+                raise OSError(17, "File exists: '%s'" % dirname)
3629+                self.Fail
3630+            elif dirname not in directories:
3631+                directories[dirname] = MockDir(dirname)
3632+                return True
3633+
3634+        mockmkdir.side_effect = call_mkdir
3635+
3636+        def call_get_available_space(storedir, reserved_space):
3637+            self.failUnlessReallyEqual(storedir, tempdir)
3638+            return 1
3639+
3640+        mockget_available_space.side_effect = call_get_available_space
3641+
3642+        mocktime.return_value = 0
3643+        def call_get_shares(storageindex):
3644+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3645+            return []#share]
3646+
3647+        mockget_shares.side_effect = call_get_shares
3648+
3649         def call_open(fname, mode):
3650             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3651             return fobj
3652}
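
The hunks above replace a pile of independent mock return values with a single in-memory filesystem: a dict mapping directory names to MockDir objects, so that the mocked os.listdir and os.mkdir stay consistent with each other during the test. A minimal, self-contained sketch of that technique (hypothetical names FakeDir, fake_listdir and fake_mkdir; not part of the recorded patches):

    import os
    import mock

    class FakeDir(object):
        def __init__(self, name):
            self.name = name
            self.contents = []

    directories = {}

    def fake_listdir(dirname):
        # behave like os.listdir: ENOENT for unknown dirs, contents otherwise
        if dirname not in directories:
            raise OSError(2, "No such file or directory: '%s'" % dirname)
        return list(directories[dirname].contents)

    def fake_mkdir(dirname, mode=0777):
        # behave like os.mkdir: EEXIST if the dir was already created
        if dirname in directories:
            raise OSError(17, "File exists: '%s'" % dirname)
        directories[dirname] = FakeDir(dirname)

    with mock.patch('os.listdir', side_effect=fake_listdir):
        with mock.patch('os.mkdir', side_effect=fake_mkdir):
            os.mkdir('teststoredir/shares/or')
            assert os.listdir('teststoredir/shares/or') == []
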
3653[checkpoint12 TestServerFSBackend no longer mocks filesystem
3654wilcoxjg@gmail.com**20110711193357
3655 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3656] {
3657hunk ./src/allmydata/storage/backends/das/core.py 23
3658      create_mutable_sharefile
3659 from allmydata.storage.immutable import BucketWriter, BucketReader
3660 from allmydata.storage.crawler import FSBucketCountingCrawler
3661+from allmydata.util.hashutil import constant_time_compare
3662 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3663 
3664 from zope.interface import implements
3665hunk ./src/allmydata/storage/backends/das/core.py 28
3666 
3667+# storage/
3668+# storage/shares/incoming
3669+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3670+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3671+# storage/shares/$START/$STORAGEINDEX
3672+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3673+
3674+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3675+# base-32 chars).
3676 # $SHARENUM matches this regex:
3677 NUM_RE=re.compile("^[0-9]+$")
3678 
3679hunk ./src/allmydata/test/test_backends.py 126
3680         testbackend = DASCore(tempdir, expiration_policy)
3681         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3682 
3683-    @mock.patch('allmydata.util.fileutil.rename')
3684-    @mock.patch('allmydata.util.fileutil.make_dirs')
3685-    @mock.patch('os.path.exists')
3686-    @mock.patch('os.stat')
3687-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3688-    @mock.patch('allmydata.util.fileutil.get_available_space')
3689     @mock.patch('time.time')
3690hunk ./src/allmydata/test/test_backends.py 127
3691-    @mock.patch('os.mkdir')
3692-    @mock.patch('__builtin__.open')
3693-    @mock.patch('os.listdir')
3694-    @mock.patch('os.path.isdir')
3695-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3696-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3697-                             mockmake_dirs, mockrename):
3698+    def test_write_share(self, mocktime):
3699         """ Write a new share. """
3700 
3701         class MockShare:
3702hunk ./src/allmydata/test/test_backends.py 143
3703 
3704         share = MockShare()
3705 
3706-        class MockFile:
3707-            def __init__(self):
3708-                self.buffer = ''
3709-                self.pos = 0
3710-            def write(self, instring):
3711-                begin = self.pos
3712-                padlen = begin - len(self.buffer)
3713-                if padlen > 0:
3714-                    self.buffer += '\x00' * padlen
3715-                end = self.pos + len(instring)
3716-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3717-                self.pos = end
3718-            def close(self):
3719-                pass
3720-            def seek(self, pos):
3721-                self.pos = pos
3722-            def read(self, numberbytes):
3723-                return self.buffer[self.pos:self.pos+numberbytes]
3724-            def tell(self):
3725-                return self.pos
3726-
3727-        fobj = MockFile()
3728-
3729-        directories = {}
3730-        def call_listdir(dirname):
3731-            if dirname not in directories:
3732-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3733-            else:
3734-                return directories[dirname].get_contents()
3735-
3736-        mocklistdir.side_effect = call_listdir
3737-
3738-        class MockDir:
3739-            def __init__(self, dirname):
3740-                self.name = dirname
3741-                self.contents = []
3742-   
3743-            def get_contents(self):
3744-                return self.contents
3745-
3746-        def call_isdir(dirname):
3747-            #XXX Should there be any other tests here?
3748-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3749-            return True
3750-
3751-        mockisdir.side_effect = call_isdir
3752-
3753-        def call_mkdir(dirname, permissions):
3754-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3755-                self.Fail
3756-            if dirname in directories:
3757-                raise OSError(17, "File exists: '%s'" % dirname)
3758-                self.Fail
3759-            elif dirname not in directories:
3760-                directories[dirname] = MockDir(dirname)
3761-                return True
3762-
3763-        mockmkdir.side_effect = call_mkdir
3764-
3765-        def call_get_available_space(storedir, reserved_space):
3766-            self.failUnlessReallyEqual(storedir, tempdir)
3767-            return 1
3768-
3769-        mockget_available_space.side_effect = call_get_available_space
3770-
3771-        mocktime.return_value = 0
3772-        def call_get_shares(storageindex):
3773-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3774-            return []#share]
3775-
3776-        mockget_shares.side_effect = call_get_shares
3777-
3778-        def call_open(fname, mode):
3779-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3780-            return fobj
3781-
3782-        mockopen.side_effect = call_open
3783-
3784-        def call_make_dirs(dname):
3785-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3786-           
3787-        mockmake_dirs.side_effect = call_make_dirs
3788-
3789-        def call_rename(src, dst):
3790-            self.failUnlessReallyEqual(src, shareincomingname)
3791-            self.failUnlessReallyEqual(dst, sharefname)
3792-           
3793-        mockrename.side_effect = call_rename
3794-
3795-        def call_exists(fname):
3796-            self.failUnlessReallyEqual(fname, sharefname)
3797-
3798-        mockexists.side_effect = call_exists
3799-
3800         # Now begin the test.
3801 
3802         # XXX (0) ???  Fail unless something is not properly set-up?
3803}
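
The layout comment added to core.py above pins down where shares live on disk: incoming shares go under shares/incoming/$START/$STORAGEINDEX/$SHARENUM and are moved to shares/$START/$STORAGEINDEX/$SHARENUM on success, where $START is the first two base-32 characters (10 bits) of the storage index. As a worked illustration, and assuming storage_index_to_dir() behaves like the hypothetical si_to_dir() below, the path constants near the top of test_backends.py follow directly from that layout:

    import os
    from allmydata.util import base32

    def si_to_dir(storageindex):
        # assumed behaviour of storage_index_to_dir(): two-char prefix
        # (the first 10 bits of the SI) plus the full base-32 SI
        sia = base32.b2a(storageindex)
        return os.path.join(sia[:2], sia)

    storedir = 'teststoredir'
    si = 'teststorage_index'   # the storage index used throughout these tests
    # final:    teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0
    finalhome = os.path.join(storedir, 'shares', si_to_dir(si), '0')
    # incoming: teststoredir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0
    incominghome = os.path.join(storedir, 'shares', 'incoming', si_to_dir(si), '0')
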
3804[JACP
3805wilcoxjg@gmail.com**20110711194407
3806 Ignore-this: b54745de777c4bb58d68d708f010bbb
3807] {
3808hunk ./src/allmydata/storage/backends/das/core.py 86
3809 
3810     def get_incoming(self, storageindex):
3811         """Return the set of incoming shnums."""
3812-        return set(os.listdir(self.incomingdir))
3813+        try:
3814+            incominglist = os.listdir(self.incomingdir)
3815+            print "incominglist: ", incominglist
3816+            return set(incominglist)
3817+        except OSError:
3818+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3819+            pass
3820 
3821     def get_shares(self, storage_index):
3822         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3823hunk ./src/allmydata/storage/server.py 17
3824 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3825      create_mutable_sharefile
3826 
3827-# storage/
3828-# storage/shares/incoming
3829-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3830-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3831-# storage/shares/$START/$STORAGEINDEX
3832-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3833-
3834-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3835-# base-32 chars).
3836-
3837-
3838 class StorageServer(service.MultiService, Referenceable):
3839     implements(RIStorageServer, IStatsProducer)
3840     name = 'storage'
3841}
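
The XXX comment in the hunk above asks for a more specific check than a bare "except OSError" around the os.listdir() call in get_incoming(). One way to do that (a sketch with a hypothetical helper name, not part of the patch) is to swallow only ENOENT, since a missing directory just means there are no incoming shares, and re-raise everything else:

    import errno, os

    def list_incoming_shnums(incomingsharesdir):
        """Return the share-number entries under incomingsharesdir, treating
        a missing directory as 'no incoming shares yet'."""
        try:
            return set(os.listdir(incomingsharesdir))
        except OSError, e:
            if e.errno == errno.ENOENT:
                return set()
            raise
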
3842[testing get incoming
3843wilcoxjg@gmail.com**20110711210224
3844 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3845] {
3846hunk ./src/allmydata/storage/backends/das/core.py 87
3847     def get_incoming(self, storageindex):
3848         """Return the set of incoming shnums."""
3849         try:
3850-            incominglist = os.listdir(self.incomingdir)
3851+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3852+            incominglist = os.listdir(incomingsharesdir)
3853             print "incominglist: ", incominglist
3854             return set(incominglist)
3855         except OSError:
3856hunk ./src/allmydata/storage/backends/das/core.py 92
3857-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3858-            pass
3859-
3860+            # XXX I'd like to make this more specific. If there are no shares at all.
3861+            return set()
3862+           
3863     def get_shares(self, storage_index):
3864         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3865         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3866hunk ./src/allmydata/test/test_backends.py 149
3867         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3868 
3869         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3870+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3871         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3872 
3873hunk ./src/allmydata/test/test_backends.py 152
3874-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3875         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3876         # with the same si, until BucketWriter.remote_close() has been called.
3877         # self.failIf(bsa)
3878}
3879[ImmutableShareFile does not know its StorageIndex
3880wilcoxjg@gmail.com**20110711211424
3881 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3882] {
3883hunk ./src/allmydata/storage/backends/das/core.py 112
3884             return 0
3885         return fileutil.get_available_space(self.storedir, self.reserved_space)
3886 
3887-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3888-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3889+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3890+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3891+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3892+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3893         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3894         return bw
3895 
3896hunk ./src/allmydata/storage/backends/das/core.py 155
3897     LEASE_SIZE = struct.calcsize(">L32s32sL")
3898     sharetype = "immutable"
3899 
3900-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3901+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3902         """ If max_size is not None then I won't allow more than
3903         max_size to be written to me. If create=True then max_size
3904         must not be None. """
3905}
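
The patch above moves path construction out of ImmutableShare and into make_bucket_writer(): the share object now receives its final and incoming homes ready-made instead of deriving them from (sharedir, storageindex, shnum). A small sketch of the new calling convention (hypothetical helper name; the str(shnum) conversion shown here is only added by the next patch):

    import os

    def build_share_homes(sharedir, si_dir, shnum):
        # si_dir is the '$START/$STORAGEINDEX' fragment from storage_index_to_dir()
        finalhome = os.path.join(sharedir, si_dir, str(shnum))
        incominghome = os.path.join(sharedir, 'incoming', si_dir, str(shnum))
        return finalhome, incominghome

    # old: ImmutableShare(sharedir, storage_index, shnum, max_size=..., create=True)
    # new: ImmutableShare(finalhome, incominghome, max_size=..., create=True)
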
3906[get_incoming correctly reports the 0 share after it has arrived
3907wilcoxjg@gmail.com**20110712025157
3908 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3909] {
3910hunk ./src/allmydata/storage/backends/das/core.py 1
3911+import os, re, weakref, struct, time, stat
3912+
3913 from allmydata.interfaces import IStorageBackend
3914 from allmydata.storage.backends.base import Backend
3915 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3916hunk ./src/allmydata/storage/backends/das/core.py 8
3917 from allmydata.util.assertutil import precondition
3918 
3919-import os, re, weakref, struct, time
3920-
3921 #from foolscap.api import Referenceable
3922 from twisted.application import service
3923 
3924hunk ./src/allmydata/storage/backends/das/core.py 89
3925         try:
3926             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3927             incominglist = os.listdir(incomingsharesdir)
3928-            print "incominglist: ", incominglist
3929-            return set(incominglist)
3930+            incomingshnums = [int(x) for x in incominglist]
3931+            return set(incomingshnums)
3932         except OSError:
3933             # XXX I'd like to make this more specific. If there are no shares at all.
3934             return set()
3935hunk ./src/allmydata/storage/backends/das/core.py 113
3936         return fileutil.get_available_space(self.storedir, self.reserved_space)
3937 
3938     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3939-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3940-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3941-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3942+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3943+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3944+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3945         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3946         return bw
3947 
3948hunk ./src/allmydata/storage/backends/das/core.py 160
3949         max_size to be written to me. If create=True then max_size
3950         must not be None. """
3951         precondition((max_size is not None) or (not create), max_size, create)
3952-        self.shnum = shnum
3953-        self.storage_index = storageindex
3954-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3955         self._max_size = max_size
3956hunk ./src/allmydata/storage/backends/das/core.py 161
3957-        self.incomingdir = os.path.join(sharedir, 'incoming')
3958-        si_dir = storage_index_to_dir(storageindex)
3959-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3960-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3961-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3962+        self.incominghome = incominghome
3963+        self.finalhome = finalhome
3964         if create:
3965             # touch the file, so later callers will see that we're working on
3966             # it. Also construct the metadata.
3967hunk ./src/allmydata/storage/backends/das/core.py 166
3968-            assert not os.path.exists(self.fname)
3969-            fileutil.make_dirs(os.path.dirname(self.fname))
3970-            f = open(self.fname, 'wb')
3971+            assert not os.path.exists(self.finalhome)
3972+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3973+            f = open(self.incominghome, 'wb')
3974             # The second field -- the four-byte share data length -- is no
3975             # longer used as of Tahoe v1.3.0, but we continue to write it in
3976             # there in case someone downgrades a storage server from >=
3977hunk ./src/allmydata/storage/backends/das/core.py 183
3978             self._lease_offset = max_size + 0x0c
3979             self._num_leases = 0
3980         else:
3981-            f = open(self.fname, 'rb')
3982-            filesize = os.path.getsize(self.fname)
3983+            f = open(self.finalhome, 'rb')
3984+            filesize = os.path.getsize(self.finalhome)
3985             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3986             f.close()
3987             if version != 1:
3988hunk ./src/allmydata/storage/backends/das/core.py 189
3989                 msg = "sharefile %s had version %d but we wanted 1" % \
3990-                      (self.fname, version)
3991+                      (self.finalhome, version)
3992                 raise UnknownImmutableContainerVersionError(msg)
3993             self._num_leases = num_leases
3994             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3995hunk ./src/allmydata/storage/backends/das/core.py 225
3996         pass
3997         
3998     def stat(self):
3999-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
4000+        return os.stat(self.finalhome)[stat.ST_SIZE]
4001+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
4002 
4003     def get_shnum(self):
4004         return self.shnum
4005hunk ./src/allmydata/storage/backends/das/core.py 232
4006 
4007     def unlink(self):
4008-        os.unlink(self.fname)
4009+        os.unlink(self.finalhome)
4010 
4011     def read_share_data(self, offset, length):
4012         precondition(offset >= 0)
4013hunk ./src/allmydata/storage/backends/das/core.py 239
4014         # Reads beyond the end of the data are truncated. Reads that start
4015         # beyond the end of the data return an empty string.
4016         seekpos = self._data_offset+offset
4017-        fsize = os.path.getsize(self.fname)
4018+        fsize = os.path.getsize(self.finalhome)
4019         actuallength = max(0, min(length, fsize-seekpos))
4020         if actuallength == 0:
4021             return ""
4022hunk ./src/allmydata/storage/backends/das/core.py 243
4023-        f = open(self.fname, 'rb')
4024+        f = open(self.finalhome, 'rb')
4025         f.seek(seekpos)
4026         return f.read(actuallength)
4027 
4028hunk ./src/allmydata/storage/backends/das/core.py 252
4029         precondition(offset >= 0, offset)
4030         if self._max_size is not None and offset+length > self._max_size:
4031             raise DataTooLargeError(self._max_size, offset, length)
4032-        f = open(self.fname, 'rb+')
4033+        f = open(self.incominghome, 'rb+')
4034         real_offset = self._data_offset+offset
4035         f.seek(real_offset)
4036         assert f.tell() == real_offset
4037hunk ./src/allmydata/storage/backends/das/core.py 279
4038 
4039     def get_leases(self):
4040         """Yields a LeaseInfo instance for all leases."""
4041-        f = open(self.fname, 'rb')
4042+        f = open(self.finalhome, 'rb')
4043         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4044         f.seek(self._lease_offset)
4045         for i in range(num_leases):
4046hunk ./src/allmydata/storage/backends/das/core.py 288
4047                 yield LeaseInfo().from_immutable_data(data)
4048 
4049     def add_lease(self, lease_info):
4050-        f = open(self.fname, 'rb+')
4051+        f = open(self.incominghome, 'rb+')
4052         num_leases = self._read_num_leases(f)
4053         self._write_lease_record(f, num_leases, lease_info)
4054         self._write_num_leases(f, num_leases+1)
4055hunk ./src/allmydata/storage/backends/das/core.py 301
4056                 if new_expire_time > lease.expiration_time:
4057                     # yes
4058                     lease.expiration_time = new_expire_time
4059-                    f = open(self.fname, 'rb+')
4060+                    f = open(self.finalhome, 'rb+')
4061                     self._write_lease_record(f, i, lease)
4062                     f.close()
4063                 return
4064hunk ./src/allmydata/storage/backends/das/core.py 336
4065             # the same order as they were added, so that if we crash while
4066             # doing this, we won't lose any non-cancelled leases.
4067             leases = [l for l in leases if l] # remove the cancelled leases
4068-            f = open(self.fname, 'rb+')
4069+            f = open(self.finalhome, 'rb+')
4070             for i,lease in enumerate(leases):
4071                 self._write_lease_record(f, i, lease)
4072             self._write_num_leases(f, len(leases))
4073hunk ./src/allmydata/storage/backends/das/core.py 344
4074             f.close()
4075         space_freed = self.LEASE_SIZE * num_leases_removed
4076         if not len(leases):
4077-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4078+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4079             self.unlink()
4080         return space_freed
4081hunk ./src/allmydata/test/test_backends.py 129
4082     @mock.patch('time.time')
4083     def test_write_share(self, mocktime):
4084         """ Write a new share. """
4085-
4086-        class MockShare:
4087-            def __init__(self):
4088-                self.shnum = 1
4089-               
4090-            def add_or_renew_lease(elf, lease_info):
4091-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4092-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4093-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4094-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4095-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4096-
4097-        share = MockShare()
4098-
4099         # Now begin the test.
4100 
4101         # XXX (0) ???  Fail unless something is not properly set-up?
4102hunk ./src/allmydata/test/test_backends.py 143
4103         # self.failIf(bsa)
4104 
4105         bs[0].remote_write(0, 'a')
4106-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4107+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4108         spaceint = self.s.allocated_size()
4109         self.failUnlessReallyEqual(spaceint, 1)
4110 
4111hunk ./src/allmydata/test/test_backends.py 161
4112         #self.failIf(mockrename.called, mockrename.call_args_list)
4113         #self.failIf(mockstat.called, mockstat.call_args_list)
4114 
4115+    def test_handle_incoming(self):
4116+        incomingset = self.s.backend.get_incoming('teststorage_index')
4117+        self.failUnlessReallyEqual(incomingset, set())
4118+
4119+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4120+       
4121+        incomingset = self.s.backend.get_incoming('teststorage_index')
4122+        self.failUnlessReallyEqual(incomingset, set((0,)))
4123+
4124+        bs[0].remote_close()
4125+        self.failUnlessReallyEqual(incomingset, set())
4126+
4127     @mock.patch('os.path.exists')
4128     @mock.patch('os.path.getsize')
4129     @mock.patch('__builtin__.open')
4130hunk ./src/allmydata/test/test_backends.py 223
4131         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4132 
4133 
4134-
4135 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4136     @mock.patch('time.time')
4137     @mock.patch('os.mkdir')
4138hunk ./src/allmydata/test/test_backends.py 271
4139         DASCore('teststoredir', expiration_policy)
4140 
4141         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4142+
4143}
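
The ImmutableShare hunks above preserve the v1 immutable-container layout: a 12-byte header unpacked with ">LLL" (version, a four-byte data-length field unused since Tahoe v1.3.0, and the lease count), then the share data, then 72-byte lease records packed with ">L32s32sL". A sketch of the offset arithmetic implied by that code (illustration only; it assumes the data region starts right after the header, i.e. _data_offset == 0x0c):

    import struct

    HEADER = ">LLL"                           # version, legacy data length, num_leases
    HEADER_SIZE = struct.calcsize(HEADER)     # 0x0c == 12 bytes
    LEASE = ">L32s32sL"                       # owner num, renew secret, cancel secret, expiration
    LEASE_SIZE = struct.calcsize(LEASE)       # 4 + 32 + 32 + 4 == 72 bytes

    def share_file_offsets(max_size, num_leases):
        data_offset = HEADER_SIZE              # share data follows the header
        lease_offset = max_size + HEADER_SIZE  # leases follow the reserved data region
        total_size = lease_offset + num_leases * LEASE_SIZE
        return data_offset, lease_offset, total_size

    # e.g. a 1-byte share with one lease: offsets (12, 13), total 12 + 1 + 72 == 85 bytes
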
4144[jacp14
4145wilcoxjg@gmail.com**20110712061211
4146 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4147] {
4148hunk ./src/allmydata/storage/backends/das/core.py 95
4149             # XXX I'd like to make this more specific. If there are no shares at all.
4150             return set()
4151             
4152-    def get_shares(self, storage_index):
4153+    def get_shares(self, storageindex):
4154         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
4155hunk ./src/allmydata/storage/backends/das/core.py 97
4156-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4157+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4158         try:
4159             for f in os.listdir(finalstoragedir):
4160                 if NUM_RE.match(f):
4161hunk ./src/allmydata/storage/backends/das/core.py 102
4162                     filename = os.path.join(finalstoragedir, f)
4163-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4164+                    yield ImmutableShare(filename, storageindex, f)
4165         except OSError:
4166             # Commonly caused by there being no shares at all.
4167             pass
4168hunk ./src/allmydata/storage/backends/das/core.py 115
4169     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4170         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4171         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4172-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4173+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4174         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4175         return bw
4176 
4177hunk ./src/allmydata/storage/backends/das/core.py 155
4178     LEASE_SIZE = struct.calcsize(">L32s32sL")
4179     sharetype = "immutable"
4180 
4181-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4182+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4183         """ If max_size is not None then I won't allow more than
4184         max_size to be written to me. If create=True then max_size
4185         must not be None. """
4186hunk ./src/allmydata/storage/backends/das/core.py 160
4187         precondition((max_size is not None) or (not create), max_size, create)
4188+        self.storageindex = storageindex
4189         self._max_size = max_size
4190         self.incominghome = incominghome
4191         self.finalhome = finalhome
4192hunk ./src/allmydata/storage/backends/das/core.py 164
4193+        self.shnum = shnum
4194         if create:
4195             # touch the file, so later callers will see that we're working on
4196             # it. Also construct the metadata.
4197hunk ./src/allmydata/storage/backends/das/core.py 212
4198             # their children to know when they should do the rmdir. This
4199             # approach is simpler, but relies on os.rmdir refusing to delete
4200             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4201+            #print "os.path.dirname(self.incominghome): "
4202+            #print os.path.dirname(self.incominghome)
4203             os.rmdir(os.path.dirname(self.incominghome))
4204             # we also delete the grandparent (prefix) directory, .../ab ,
4205             # again to avoid leaving directories lying around. This might
4206hunk ./src/allmydata/storage/immutable.py 93
4207     def __init__(self, ss, share):
4208         self.ss = ss
4209         self._share_file = share
4210-        self.storage_index = share.storage_index
4211+        self.storageindex = share.storageindex
4212         self.shnum = share.shnum
4213 
4214     def __repr__(self):
4215hunk ./src/allmydata/storage/immutable.py 98
4216         return "<%s %s %s>" % (self.__class__.__name__,
4217-                               base32.b2a_l(self.storage_index[:8], 60),
4218+                               base32.b2a_l(self.storageindex[:8], 60),
4219                                self.shnum)
4220 
4221     def remote_read(self, offset, length):
4222hunk ./src/allmydata/storage/immutable.py 110
4223 
4224     def remote_advise_corrupt_share(self, reason):
4225         return self.ss.remote_advise_corrupt_share("immutable",
4226-                                                   self.storage_index,
4227+                                                   self.storageindex,
4228                                                    self.shnum,
4229                                                    reason)
4230hunk ./src/allmydata/test/test_backends.py 20
4231 # The following share file contents was generated with
4232 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4233 # with share data == 'a'.
4234-renew_secret  = 'x'*32
4235-cancel_secret = 'y'*32
4236-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4237-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4238+shareversionnumber = '\x00\x00\x00\x01'
4239+sharedatalength = '\x00\x00\x00\x01'
4240+numberofleases = '\x00\x00\x00\x01'
4241+shareinputdata = 'a'
4242+ownernumber = '\x00\x00\x00\x00'
4243+renewsecret  = 'x'*32
4244+cancelsecret = 'y'*32
4245+expirationtime = '\x00(\xde\x80'
4246+nextlease = ''
4247+containerdata = shareversionnumber + sharedatalength + numberofleases
4248+client_data = shareinputdata + ownernumber + renewsecret + \
4249+    cancelsecret + expirationtime + nextlease
4250+share_data = containerdata + client_data
4251+
4252 
4253 testnodeid = 'testnodeidxxxxxxxxxx'
4254 tempdir = 'teststoredir'
4255hunk ./src/allmydata/test/test_backends.py 52
4256 
4257 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4258     def setUp(self):
4259-        self.s = StorageServer(testnodeid, backend=NullCore())
4260+        self.ss = StorageServer(testnodeid, backend=NullCore())
4261 
4262     @mock.patch('os.mkdir')
4263     @mock.patch('__builtin__.open')
4264hunk ./src/allmydata/test/test_backends.py 62
4265         """ Write a new share. """
4266 
4267         # Now begin the test.
4268-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4269+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4270         bs[0].remote_write(0, 'a')
4271         self.failIf(mockisdir.called)
4272         self.failIf(mocklistdir.called)
4273hunk ./src/allmydata/test/test_backends.py 133
4274                 _assert(False, "The tester code doesn't recognize this case.") 
4275 
4276         mockopen.side_effect = call_open
4277-        testbackend = DASCore(tempdir, expiration_policy)
4278-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4279+        self.backend = DASCore(tempdir, expiration_policy)
4280+        self.ss = StorageServer(testnodeid, self.backend)
4281+        self.ssinf = StorageServer(testnodeid, self.backend)
4282 
4283     @mock.patch('time.time')
4284     def test_write_share(self, mocktime):
4285hunk ./src/allmydata/test/test_backends.py 142
4286         """ Write a new share. """
4287         # Now begin the test.
4288 
4289-        # XXX (0) ???  Fail unless something is not properly set-up?
4290-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4291+        mocktime.return_value = 0
4292+        # Inspect incoming and fail unless it's empty.
4293+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4294+        self.failUnlessReallyEqual(incomingset, set())
4295+       
4296+        # Among other things, populate incoming with the sharenum: 0.
4297+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4298 
4299hunk ./src/allmydata/test/test_backends.py 150
4300-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4301-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4302-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4303+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4304+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4305+       
4306+        # Attempt to create a second share writer with the same share.
4307+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4308 
4309hunk ./src/allmydata/test/test_backends.py 156
4310-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4311+        # Show that no sharewriter results from a remote_allocate_buckets
4312         # with the same si, until BucketWriter.remote_close() has been called.
4313hunk ./src/allmydata/test/test_backends.py 158
4314-        # self.failIf(bsa)
4315+        self.failIf(bsa)
4316 
4317hunk ./src/allmydata/test/test_backends.py 160
4318+        # Write 'a' to shnum 0. Only tested together with close and read.
4319         bs[0].remote_write(0, 'a')
4320hunk ./src/allmydata/test/test_backends.py 162
4321-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4322-        spaceint = self.s.allocated_size()
4323+
4324+        # Test allocated size.
4325+        spaceint = self.ss.allocated_size()
4326         self.failUnlessReallyEqual(spaceint, 1)
4327 
4328         # XXX (3) Inspect final and fail unless there's nothing there.
4329hunk ./src/allmydata/test/test_backends.py 168
4330+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4331         bs[0].remote_close()
4332         # XXX (4a) Inspect final and fail unless share 0 is there.
4333hunk ./src/allmydata/test/test_backends.py 171
4334+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4335+        #contents = sharesinfinal[0].read_share_data(0,999)
4336+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4337         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4338 
4339         # What happens when there's not enough space for the client's request?
4340hunk ./src/allmydata/test/test_backends.py 177
4341-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4342+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4343 
4344         # Now test the allocated_size method.
4345         # self.failIf(mockexists.called, mockexists.call_args_list)
4346hunk ./src/allmydata/test/test_backends.py 185
4347         #self.failIf(mockrename.called, mockrename.call_args_list)
4348         #self.failIf(mockstat.called, mockstat.call_args_list)
4349 
4350-    def test_handle_incoming(self):
4351-        incomingset = self.s.backend.get_incoming('teststorage_index')
4352-        self.failUnlessReallyEqual(incomingset, set())
4353-
4354-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4355-       
4356-        incomingset = self.s.backend.get_incoming('teststorage_index')
4357-        self.failUnlessReallyEqual(incomingset, set((0,)))
4358-
4359-        bs[0].remote_close()
4360-        self.failUnlessReallyEqual(incomingset, set())
4361-
4362     @mock.patch('os.path.exists')
4363     @mock.patch('os.path.getsize')
4364     @mock.patch('__builtin__.open')
4365hunk ./src/allmydata/test/test_backends.py 208
4366             self.failUnless('r' in mode, mode)
4367             self.failUnless('b' in mode, mode)
4368 
4369-            return StringIO(share_file_data)
4370+            return StringIO(share_data)
4371         mockopen.side_effect = call_open
4372 
4373hunk ./src/allmydata/test/test_backends.py 211
4374-        datalen = len(share_file_data)
4375+        datalen = len(share_data)
4376         def call_getsize(fname):
4377             self.failUnlessReallyEqual(fname, sharefname)
4378             return datalen
4379hunk ./src/allmydata/test/test_backends.py 223
4380         mockexists.side_effect = call_exists
4381 
4382         # Now begin the test.
4383-        bs = self.s.remote_get_buckets('teststorage_index')
4384+        bs = self.ss.remote_get_buckets('teststorage_index')
4385 
4386         self.failUnlessEqual(len(bs), 1)
4387hunk ./src/allmydata/test/test_backends.py 226
4388-        b = bs[0]
4389+        b = bs['0']
4390         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4391hunk ./src/allmydata/test/test_backends.py 228
4392-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4393+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4394         # If you try to read past the end you get the as much data as is there.
4395hunk ./src/allmydata/test/test_backends.py 230
4396-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4397+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4398         # If you start reading past the end of the file you get the empty string.
4399         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4400 
4401}
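
The named constants introduced in the test_backends.py hunk above spell the same container layout out field by field. A quick length check (illustration only) shows why a later test reads exactly 73 bytes and compares them against client_data:

    shareversionnumber = '\x00\x00\x00\x01'   # 4 bytes
    sharedatalength    = '\x00\x00\x00\x01'   # 4 bytes (legacy, unused since v1.3.0)
    numberofleases     = '\x00\x00\x00\x01'   # 4 bytes
    containerdata = shareversionnumber + sharedatalength + numberofleases
    assert len(containerdata) == 12           # the ">LLL" header

    shareinputdata = 'a'                      # 1 byte of share data
    ownernumber    = '\x00\x00\x00\x00'       # 4 bytes
    renewsecret    = 'x' * 32                 # 32 bytes
    cancelsecret   = 'y' * 32                 # 32 bytes
    expirationtime = '\x00(\xde\x80'          # 4 bytes
    client_data = shareinputdata + ownernumber + renewsecret + \
                  cancelsecret + expirationtime
    assert len(client_data) == 1 + 4 + 32 + 32 + 4 == 73
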
4402[jacp14 or so
4403wilcoxjg@gmail.com**20110713060346
4404 Ignore-this: 7026810f60879d65b525d450e43ff87a
4405] {
4406hunk ./src/allmydata/storage/backends/das/core.py 102
4407             for f in os.listdir(finalstoragedir):
4408                 if NUM_RE.match(f):
4409                     filename = os.path.join(finalstoragedir, f)
4410-                    yield ImmutableShare(filename, storageindex, f)
4411+                    yield ImmutableShare(filename, storageindex, int(f))
4412         except OSError:
4413             # Commonly caused by there being no shares at all.
4414             pass
4415hunk ./src/allmydata/storage/backends/null/core.py 25
4416     def set_storage_server(self, ss):
4417         self.ss = ss
4418 
4419+    def get_incoming(self, storageindex):
4420+        return set()
4421+
4422 class ImmutableShare:
4423     sharetype = "immutable"
4424 
4425hunk ./src/allmydata/storage/immutable.py 19
4426 
4427     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4428         self.ss = ss
4429-        self._max_size = max_size # don't allow the client to write more than this
4430+        self._max_size = max_size # don't allow the client to write more than this
4431+
4432         self._canary = canary
4433         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4434         self.closed = False
4435hunk ./src/allmydata/test/test_backends.py 135
4436         mockopen.side_effect = call_open
4437         self.backend = DASCore(tempdir, expiration_policy)
4438         self.ss = StorageServer(testnodeid, self.backend)
4439-        self.ssinf = StorageServer(testnodeid, self.backend)
4440+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4441+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4442 
4443     @mock.patch('time.time')
4444     def test_write_share(self, mocktime):
4445hunk ./src/allmydata/test/test_backends.py 161
4446         # with the same si, until BucketWriter.remote_close() has been called.
4447         self.failIf(bsa)
4448 
4449-        # Write 'a' to shnum 0. Only tested together with close and read.
4450-        bs[0].remote_write(0, 'a')
4451-
4452         # Test allocated size.
4453         spaceint = self.ss.allocated_size()
4454         self.failUnlessReallyEqual(spaceint, 1)
4455hunk ./src/allmydata/test/test_backends.py 165
4456 
4457-        # XXX (3) Inspect final and fail unless there's nothing there.
4458+        # Write 'a' to shnum 0. Only tested together with close and read.
4459+        bs[0].remote_write(0, 'a')
4460+       
4461+        # Preclose: Inspect final, failUnless nothing there.
4462         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4463         bs[0].remote_close()
4464hunk ./src/allmydata/test/test_backends.py 171
4465-        # XXX (4a) Inspect final and fail unless share 0 is there.
4466-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4467-        #contents = sharesinfinal[0].read_share_data(0,999)
4468-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4469-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4470 
4471hunk ./src/allmydata/test/test_backends.py 172
4472-        # What happens when there's not enough space for the client's request?
4473-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4474+        # Postclose: (Omnibus) failUnless written data is in final.
4475+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4476+        contents = sharesinfinal[0].read_share_data(0,73)
4477+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4478 
4479hunk ./src/allmydata/test/test_backends.py 177
4480-        # Now test the allocated_size method.
4481-        # self.failIf(mockexists.called, mockexists.call_args_list)
4482-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4483-        #self.failIf(mockrename.called, mockrename.call_args_list)
4484-        #self.failIf(mockstat.called, mockstat.call_args_list)
4485+        # Cover interior of for share in get_shares loop.
4486+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4487+       
4488+    @mock.patch('time.time')
4489+    @mock.patch('allmydata.util.fileutil.get_available_space')
4490+    def test_out_of_space(self, mockget_available_space, mocktime):
4491+        mocktime.return_value = 0
4492+       
4493+        def call_get_available_space(dir, reserve):
4494+            return 0
4495+
4496+        mockget_available_space.side_effect = call_get_available_space
4497+       
4498+       
4499+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4500 
4501     @mock.patch('os.path.exists')
4502     @mock.patch('os.path.getsize')
4503hunk ./src/allmydata/test/test_backends.py 234
4504         bs = self.ss.remote_get_buckets('teststorage_index')
4505 
4506         self.failUnlessEqual(len(bs), 1)
4507-        b = bs['0']
4508+        b = bs[0]
4509         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4510         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4511         # If you try to read past the end you get the as much data as is there.
4512}
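
test_out_of_space in the patch above simulates a full server by patching allmydata.util.fileutil.get_available_space so that it reports zero bytes free. The same mocking pattern, reduced to a self-contained sketch (free_bytes and demo are hypothetical names, not part of the patch):

    import mock

    def free_bytes():
        # stand-in for code under test that consults fileutil for free space
        from allmydata.util import fileutil
        return fileutil.get_available_space('teststoredir', 1)

    @mock.patch('allmydata.util.fileutil.get_available_space')
    def demo(mockget_available_space):
        mockget_available_space.return_value = 0   # pretend the disk is full
        assert free_bytes() == 0

    demo()
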
4513[temporary work-in-progress patch to be unrecorded
4514zooko@zooko.com**20110714003008
4515 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4516 tidy up a few tests, work done in pair-programming with Zancas
4517] {
4518hunk ./src/allmydata/storage/backends/das/core.py 65
4519         self._clean_incomplete()
4520 
4521     def _clean_incomplete(self):
4522-        fileutil.rm_dir(self.incomingdir)
4523+        fileutil.rmtree(self.incomingdir)
4524         fileutil.make_dirs(self.incomingdir)
4525 
4526     def _setup_corruption_advisory(self):
4527hunk ./src/allmydata/storage/immutable.py 1
4528-import os, stat, struct, time
4529+import os, time
4530 
4531 from foolscap.api import Referenceable
4532 
4533hunk ./src/allmydata/storage/server.py 1
4534-import os, re, weakref, struct, time
4535+import os, weakref, struct, time
4536 
4537 from foolscap.api import Referenceable
4538 from twisted.application import service
4539hunk ./src/allmydata/storage/server.py 7
4540 
4541 from zope.interface import implements
4542-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4543+from allmydata.interfaces import RIStorageServer, IStatsProducer
4544 from allmydata.util import fileutil, idlib, log, time_format
4545 import allmydata # for __full_version__
4546 
4547hunk ./src/allmydata/storage/server.py 313
4548         self.add_latency("get", time.time() - start)
4549         return bucketreaders
4550 
4551-    def remote_get_incoming(self, storageindex):
4552-        incoming_share_set = self.backend.get_incoming(storageindex)
4553-        return incoming_share_set
4554-
4555     def get_leases(self, storageindex):
4556         """Provide an iterator that yields all of the leases attached to this
4557         bucket. Each lease is returned as a LeaseInfo instance.
4558hunk ./src/allmydata/test/test_backends.py 3
4559 from twisted.trial import unittest
4560 
4561+from twisted.python.filepath import FilePath
4562+
4563 from StringIO import StringIO
4564 
4565 from allmydata.test.common_util import ReallyEqualMixin
4566hunk ./src/allmydata/test/test_backends.py 38
4567 
4568 
4569 testnodeid = 'testnodeidxxxxxxxxxx'
4570-tempdir = 'teststoredir'
4571-basedir = os.path.join(tempdir, 'shares')
4572+storedir = 'teststoredir'
4573+storedirfp = FilePath(storedir)
4574+basedir = os.path.join(storedir, 'shares')
4575 baseincdir = os.path.join(basedir, 'incoming')
4576 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4577 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4578hunk ./src/allmydata/test/test_backends.py 53
4579                      'cutoff_date' : None,
4580                      'sharetypes' : None}
4581 
4582-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4583+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4584+    """ NullBackend is just for testing and executable documentation, so
4585+    this test is actually a test of StorageServer in which we're using
4586+    NullBackend as helper code for the test, rather than a test of
4587+    NullBackend. """
4588     def setUp(self):
4589         self.ss = StorageServer(testnodeid, backend=NullCore())
4590 
4591hunk ./src/allmydata/test/test_backends.py 62
4592     @mock.patch('os.mkdir')
4593+
4594     @mock.patch('__builtin__.open')
4595     @mock.patch('os.listdir')
4596     @mock.patch('os.path.isdir')
4597hunk ./src/allmydata/test/test_backends.py 69
4598     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4599         """ Write a new share. """
4600 
4601-        # Now begin the test.
4602         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4603         bs[0].remote_write(0, 'a')
4604         self.failIf(mockisdir.called)
4605hunk ./src/allmydata/test/test_backends.py 83
4606     @mock.patch('os.listdir')
4607     @mock.patch('os.path.isdir')
4608     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4609-        """ This tests whether a server instance can be constructed
4610-        with a filesystem backend. To pass the test, it has to use the
4611-        filesystem in only the prescribed ways. """
4612+        """ This tests whether a server instance can be constructed with a
4613+        filesystem backend. To pass the test, it mustn't use the filesystem
4614+        outside of its configured storedir. """
4615 
4616         def call_open(fname, mode):
4617hunk ./src/allmydata/test/test_backends.py 88
4618-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4619-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4620-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4621-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4622-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4623+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4624+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4625+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4626+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4627+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4628                 return StringIO()
4629             else:
4630hunk ./src/allmydata/test/test_backends.py 95
4631-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4632+                fnamefp = FilePath(fname)
4633+                self.failUnless(storedirfp in fnamefp.parents(),
4634+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4635         mockopen.side_effect = call_open
4636 
4637         def call_isdir(fname):
4638hunk ./src/allmydata/test/test_backends.py 101
4639-            if fname == os.path.join(tempdir,'shares'):
4640+            if fname == os.path.join(storedir, 'shares'):
4641                 return True
4642hunk ./src/allmydata/test/test_backends.py 103
4643-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4644+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4645                 return True
4646             else:
4647                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4648hunk ./src/allmydata/test/test_backends.py 109
4649         mockisdir.side_effect = call_isdir
4650 
4651+        mocklistdir.return_value = []
4652+
4653         def call_mkdir(fname, mode):
4654hunk ./src/allmydata/test/test_backends.py 112
4655-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4656             self.failUnlessEqual(0777, mode)
4657hunk ./src/allmydata/test/test_backends.py 113
4658-            if fname == tempdir:
4659-                return None
4660-            elif fname == os.path.join(tempdir,'shares'):
4661-                return None
4662-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4663-                return None
4664-            else:
4665-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4666+            self.failUnlessIn(fname,
4667+                              [storedir,
4668+                               os.path.join(storedir, 'shares'),
4669+                               os.path.join(storedir, 'shares', 'incoming')],
4670+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4671         mockmkdir.side_effect = call_mkdir
4672 
4673         # Now begin the test.
4674hunk ./src/allmydata/test/test_backends.py 121
4675-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4676+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4677 
4678         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4679 
4680hunk ./src/allmydata/test/test_backends.py 126
4681 
4682-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4683+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4684+    """ This tests both the StorageServer xyz """
4685     @mock.patch('__builtin__.open')
4686     def setUp(self, mockopen):
4687         def call_open(fname, mode):
4688hunk ./src/allmydata/test/test_backends.py 131
4689-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4690-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4691-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4692-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4693-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4694+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4695+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4696+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4697+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4698+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4699                 return StringIO()
4700             else:
4701                 _assert(False, "The tester code doesn't recognize this case.") 
4702hunk ./src/allmydata/test/test_backends.py 141
4703 
4704         mockopen.side_effect = call_open
4705-        self.backend = DASCore(tempdir, expiration_policy)
4706+        self.backend = DASCore(storedir, expiration_policy)
4707         self.ss = StorageServer(testnodeid, self.backend)
4708hunk ./src/allmydata/test/test_backends.py 143
4709-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4710+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4711         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4712 
4713     @mock.patch('time.time')
4714hunk ./src/allmydata/test/test_backends.py 147
4715-    def test_write_share(self, mocktime):
4716-        """ Write a new share. """
4717-        # Now begin the test.
4718+    def test_write_and_read_share(self, mocktime):
4719+        """
4720+        Write a new share, read it, and test the server's (and FS backend's)
4721+        handling of simultaneous and successive attempts to write the same
4722+        share.
4723+        """
4724 
4725         mocktime.return_value = 0
4726         # Inspect incoming and fail unless it's empty.
4727hunk ./src/allmydata/test/test_backends.py 159
4728         incomingset = self.ss.backend.get_incoming('teststorage_index')
4729         self.failUnlessReallyEqual(incomingset, set())
4730         
4731-        # Among other things, populate incoming with the sharenum: 0.
4732+        # Populate incoming with the sharenum: 0.
4733         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4734 
4735         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4736hunk ./src/allmydata/test/test_backends.py 163
4737-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4738+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4739         
4740hunk ./src/allmydata/test/test_backends.py 165
4741-        # Attempt to create a second share writer with the same share.
4742+        # Attempt to create a second share writer with the same sharenum.
4743         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4744 
4745         # Show that no sharewriter results from a remote_allocate_buckets
4746hunk ./src/allmydata/test/test_backends.py 169
4747-        # with the same si, until BucketWriter.remote_close() has been called.
4748+        # with the same si and sharenum, until BucketWriter.remote_close()
4749+        # has been called.
4750         self.failIf(bsa)
4751 
4752         # Test allocated size.
4753hunk ./src/allmydata/test/test_backends.py 187
4754         # Postclose: (Omnibus) failUnless written data is in final.
4755         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4756         contents = sharesinfinal[0].read_share_data(0,73)
4757-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4758+        self.failUnlessReallyEqual(contents, client_data)
4759 
4760hunk ./src/allmydata/test/test_backends.py 189
4761-        # Cover interior of for share in get_shares loop.
4762-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4763+        # Exercise the case that the share we're asking to allocate is
4764+        # already (completely) uploaded.
4765+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4766         
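The steps above walk a share through its lifecycle: a share number sits in the backend's "incoming" set while its BucketWriter is open, and only shows up among the final shares after remote_close(); a repeated allocation for the same storage index and share number therefore yields no new writer. A minimal, self-contained sketch of that bookkeeping (hypothetical names, not the Tahoe-LAFS classes; Python 2 to match the rest of this patch):

    class ToyShareTracker(object):
        """Illustrative only: track which share numbers are incoming vs. final."""
        def __init__(self):
            self.incoming = set()   # uploads in progress
            self.final = set()      # uploads that have been closed

        def allocate(self, sharenums):
            """Return the share numbers that still need a new writer."""
            wanted = set(sharenums) - self.final - self.incoming
            self.incoming |= wanted
            return wanted

        def close(self, shnum):
            """Analogous to BucketWriter.remote_close(): move incoming -> final."""
            self.incoming.discard(shnum)
            self.final.add(shnum)

    tracker = ToyShareTracker()
    assert tracker.allocate([0]) == set([0])   # first request: a writer is needed
    assert tracker.allocate([0]) == set()      # same share while incoming: no writer
    tracker.close(0)
    assert tracker.allocate([0]) == set()      # already uploaded: still no writer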
4767     @mock.patch('time.time')
4768     @mock.patch('allmydata.util.fileutil.get_available_space')
4769hunk ./src/allmydata/test/test_backends.py 210
4770     @mock.patch('os.path.getsize')
4771     @mock.patch('__builtin__.open')
4772     @mock.patch('os.listdir')
4773-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4774+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4775         """ This tests whether the code correctly finds and reads
4776         shares written out by old (Tahoe-LAFS <= v1.8.2)
4777         servers. There is a similar test in test_download, but that one
4778hunk ./src/allmydata/test/test_backends.py 219
4779         StorageServer object. """
4780 
4781         def call_listdir(dirname):
4782-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4783+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4784             return ['0']
4785 
4786         mocklistdir.side_effect = call_listdir
4787hunk ./src/allmydata/test/test_backends.py 226
4788 
4789         def call_open(fname, mode):
4790             self.failUnlessReallyEqual(fname, sharefname)
4791-            self.failUnless('r' in mode, mode)
4792+            self.failUnlessEqual(mode[0], 'r', mode)
4793             self.failUnless('b' in mode, mode)
4794 
4795             return StringIO(share_data)
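The call_open stub above is the core of the technique: whenever the code under test opens the share file, it receives a StringIO of canned bytes instead of touching the real filesystem. A stripped-down, self-contained illustration of the same idea (Python 2 plus the mock library, as used throughout this patch; reader() is a hypothetical stand-in for the code under test):

    import mock
    from StringIO import StringIO

    canned_share = 'a' * 73   # pretend on-disk share contents

    def reader(path):
        """Hypothetical code under test: read a share file from disk."""
        f = open(path, 'rb')
        try:
            return f.read()
        finally:
            f.close()

    def fake_open(fname, mode='r'):
        assert 'r' in mode and 'b' in mode, mode
        return StringIO(canned_share)

    with mock.patch('__builtin__.open') as mockopen:
        mockopen.side_effect = fake_open
        assert reader('some/share/file') == canned_share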
4796hunk ./src/allmydata/test/test_backends.py 268
4797         filesystem in only the prescribed ways. """
4798 
4799         def call_open(fname, mode):
4800-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4801-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4802-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4803-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4804-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4805+            if fname == os.path.join(storedir,'bucket_counter.state'):
4806+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4807+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4808+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4809+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4810                 return StringIO()
4811             else:
4812                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4813hunk ./src/allmydata/test/test_backends.py 279
4814         mockopen.side_effect = call_open
4815 
4816         def call_isdir(fname):
4817-            if fname == os.path.join(tempdir,'shares'):
4818+            if fname == os.path.join(storedir,'shares'):
4819                 return True
4820hunk ./src/allmydata/test/test_backends.py 281
4821-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4822+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4823                 return True
4824             else:
4825                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4826hunk ./src/allmydata/test/test_backends.py 290
4827         def call_mkdir(fname, mode):
4828             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4829             self.failUnlessEqual(0777, mode)
4830-            if fname == tempdir:
4831+            if fname == storedir:
4832                 return None
4833hunk ./src/allmydata/test/test_backends.py 292
4834-            elif fname == os.path.join(tempdir,'shares'):
4835+            elif fname == os.path.join(storedir,'shares'):
4836                 return None
4837hunk ./src/allmydata/test/test_backends.py 294
4838-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4839+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4840                 return None
4841             else:
4842                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4843hunk ./src/allmydata/util/fileutil.py 5
4844 Futz with files like a pro.
4845 """
4846 
4847-import sys, exceptions, os, stat, tempfile, time, binascii
4848+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4849 
4850 from twisted.python import log
4851 
4852hunk ./src/allmydata/util/fileutil.py 186
4853             raise tx
4854         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4855 
4856-def rm_dir(dirname):
4857+def rmtree(dirname):
4858     """
4859     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4860     already gone, do nothing and return without raising an exception.  If this
4861hunk ./src/allmydata/util/fileutil.py 205
4862             else:
4863                 remove(fullname)
4864         os.rmdir(dirname)
4865-    except Exception, le:
4866-        # Ignore "No such file or directory"
4867-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4868+    except EnvironmentError, le:
4869+        # Ignore "No such file or directory", collect any other exception.
4870+        if (le.args[0] != 2 and le.args[0] != 3) or (le.args[0] != errno.ENOENT):
4871             excs.append(le)
4872hunk ./src/allmydata/util/fileutil.py 209
4873+    except Exception, le:
4874+        excs.append(le)
4875 
4876     # Okay, now we've recursively removed everything, ignoring any "No
4877     # such file or directory" errors, and collecting any other errors.
4878hunk ./src/allmydata/util/fileutil.py 222
4879             raise OSError, "Failed to remove dir for unknown reason."
4880         raise OSError, excs
4881 
4882+def rm_dir(dirname):
4883+    # Renamed to be like shutil.rmtree and unlike rmdir.
4884+    return rmtree(dirname)
4885 
4886 def remove_if_possible(f):
4887     try:
4888}
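The fileutil.py hunks above rename rm_dir() to rmtree() (keeping rm_dir() as a thin alias) and switch to catching EnvironmentError so that a missing directory is silently ignored. A compact, standard-library-only sketch of the same idempotent-removal idea (simplified: the real function also walks the tree itself and collects per-entry errors):

    import errno, shutil, tempfile

    def rmtree_if_present(dirname):
        """Remove dirname recursively; succeed silently if it is already gone."""
        try:
            shutil.rmtree(dirname)
        except EnvironmentError, e:
            if e.errno != errno.ENOENT:
                raise

    d = tempfile.mkdtemp()
    rmtree_if_present(d)   # removes it
    rmtree_if_present(d)   # already gone: no error, by design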
4889[work in progress intended to be unrecorded and never committed to trunk
4890zooko@zooko.com**20110714212139
4891 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4892 switch from os.path.join to filepath
4893 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4894 
4895] {
4896hunk ./src/allmydata/test/test_backends.py 3
4897 from twisted.trial import unittest
4898 
4899-from twisted.path.filepath import FilePath
4900+from twisted.python.filepath import FilePath
4901 
4902 from StringIO import StringIO
4903 
4904hunk ./src/allmydata/test/test_backends.py 10
4905 from allmydata.test.common_util import ReallyEqualMixin
4906 from allmydata.util.assertutil import _assert
4907 
4908-import mock, os
4909+import mock
4910 
4911 # This is the code that we're going to be testing.
4912 from allmydata.storage.server import StorageServer
4913hunk ./src/allmydata/test/test_backends.py 25
4914 shareversionnumber = '\x00\x00\x00\x01'
4915 sharedatalength = '\x00\x00\x00\x01'
4916 numberofleases = '\x00\x00\x00\x01'
4917+
4918 shareinputdata = 'a'
4919 ownernumber = '\x00\x00\x00\x00'
4920 renewsecret  = 'x'*32
4921hunk ./src/allmydata/test/test_backends.py 39
4922 
4923 
4924 testnodeid = 'testnodeidxxxxxxxxxx'
4925-storedir = 'teststoredir'
4926-storedirfp = FilePath(storedir)
4927-basedir = os.path.join(storedir, 'shares')
4928-baseincdir = os.path.join(basedir, 'incoming')
4929-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4930-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4931-shareincomingname = os.path.join(sharedirincomingname, '0')
4932-sharefname = os.path.join(sharedirfinalname, '0')
4933+
4934+class TestFilesMixin(unittest.TestCase):
4935+    def setUp(self):
4936+        self.storedir = FilePath('teststoredir')
4937+        self.basedir = self.storedir.child('shares')
4938+        self.baseincdir = self.basedir.child('incoming')
4939+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4940+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4941+        self.shareincomingname = self.sharedirincomingname.child('0')
4942+        self.sharefname = self.sharedirfinalname.child('0')
4943+
4944+    def call_open(self, fname, mode):
4945+        fnamefp = FilePath(fname)
4946+        if fnamefp == self.storedir.child('bucket_counter.state'):
4947+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4948+        elif fnamefp == self.storedir.child('lease_checker.state'):
4949+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4950+        elif fnamefp == self.storedir.child('lease_checker.history'):
4951+            return StringIO()
4952+        else:
4953+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4954+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4955+
4956+    def call_isdir(self, fname):
4957+        fnamefp = FilePath(fname)
4958+        if fnamefp == self.storedir.child('shares'):
4959+            return True
4960+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4961+            return True
4962+        else:
4963+            self.failUnless(self.storedir in fnamefp.parents(),
4964+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4965+
4966+    def call_mkdir(self, fname, mode):
4967+        self.failUnlessEqual(0777, mode)
4968+        fnamefp = FilePath(fname)
4969+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4970+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4971+
4972+
4973+    @mock.patch('os.mkdir')
4974+    @mock.patch('__builtin__.open')
4975+    @mock.patch('os.listdir')
4976+    @mock.patch('os.path.isdir')
4977+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4978+        mocklistdir.return_value = []
4979+        mockmkdir.side_effect = self.call_mkdir
4980+        mockisdir.side_effect = self.call_isdir
4981+        mockopen.side_effect = self.call_open
4982+        mocklistdir.return_value = []
4983+       
4984+        test_func()
4985+       
4986+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4987 
4988 expiration_policy = {'enabled' : False,
4989                      'mode' : 'age',
4990hunk ./src/allmydata/test/test_backends.py 123
4991         self.failIf(mockopen.called)
4992         self.failIf(mockmkdir.called)
4993 
4994-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4995-    @mock.patch('time.time')
4996-    @mock.patch('os.mkdir')
4997-    @mock.patch('__builtin__.open')
4998-    @mock.patch('os.listdir')
4999-    @mock.patch('os.path.isdir')
5000-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5001+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5002+    def test_create_server_fs_backend(self):
5003         """ This tests whether a server instance can be constructed with a
5004         filesystem backend. To pass the test, it mustn't use the filesystem
5005         outside of its configured storedir. """
5006hunk ./src/allmydata/test/test_backends.py 129
5007 
5008-        def call_open(fname, mode):
5009-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5010-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5011-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5012-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5013-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5014-                return StringIO()
5015-            else:
5016-                fnamefp = FilePath(fname)
5017-                self.failUnless(storedirfp in fnamefp.parents(),
5018-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5019-        mockopen.side_effect = call_open
5020+        def _f():
5021+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5022 
5023hunk ./src/allmydata/test/test_backends.py 132
5024-        def call_isdir(fname):
5025-            if fname == os.path.join(storedir, 'shares'):
5026-                return True
5027-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5028-                return True
5029-            else:
5030-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5031-        mockisdir.side_effect = call_isdir
5032-
5033-        mocklistdir.return_value = []
5034-
5035-        def call_mkdir(fname, mode):
5036-            self.failUnlessEqual(0777, mode)
5037-            self.failUnlessIn(fname,
5038-                              [storedir,
5039-                               os.path.join(storedir, 'shares'),
5040-                               os.path.join(storedir, 'shares', 'incoming')],
5041-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5042-        mockmkdir.side_effect = call_mkdir
5043-
5044-        # Now begin the test.
5045-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5046-
5047-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5048+        self._help_test_stay_in_your_subtree(_f)
5049 
5050 
5051 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5052}
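The refactoring recorded above pulls the mock-filesystem plumbing into a shared mixin so that each test only supplies the behaviour under test, while the mixin fails the test if any patched filesystem call strays outside the configured store directory. A self-contained sketch of that "stay in your subtree" guard (Python 2 plus mock; it checks containment with os.path rather than the patch's FilePath.parents(), patches only os.mkdir for brevity, and all names are illustrative):

    import os, mock, unittest

    class SubtreeGuardMixin(unittest.TestCase):
        """Illustrative only: flag any os.mkdir outside self.storedir."""
        storedir = 'teststoredir'

        def _check_inside(self, fname):
            root = os.path.abspath(self.storedir)
            target = os.path.abspath(fname)
            self.failUnless(target == root or target.startswith(root + os.sep),
                            "tried to touch '%s' outside '%s'" % (fname, self.storedir))

        def _call_mkdir(self, fname, mode):
            self.failUnlessEqual(0777, mode)
            self._check_inside(fname)

        def _help_test_stay_in_your_subtree(self, test_func):
            with mock.patch('os.mkdir') as mockmkdir:
                mockmkdir.side_effect = self._call_mkdir
                test_func()

    class ExampleTest(SubtreeGuardMixin):
        def test_mkdir_stays_inside(self):
            def _f():
                os.mkdir(os.path.join(self.storedir, 'shares'), 0777)
            self._help_test_stay_in_your_subtree(_f)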
5053[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5054zooko@zooko.com**20110715191500
5055 Ignore-this: af33336789041800761e80510ea2f583
5056 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
5057] {
5058hunk ./src/allmydata/storage/backends/das/core.py 59
5059                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5060                         umid="0wZ27w", level=log.UNUSUAL)
5061 
5062-        self.sharedir = os.path.join(self.storedir, "shares")
5063-        fileutil.make_dirs(self.sharedir)
5064-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5065+        self.sharedir = self.storedir.child("shares")
5066+        fileutil.fp_make_dirs(self.sharedir)
5067+        self.incomingdir = self.sharedir.child('incoming')
5068         self._clean_incomplete()
5069 
5070     def _clean_incomplete(self):
5071hunk ./src/allmydata/storage/backends/das/core.py 65
5072-        fileutil.rmtree(self.incomingdir)
5073-        fileutil.make_dirs(self.incomingdir)
5074+        fileutil.fp_remove(self.incomingdir)
5075+        fileutil.fp_make_dirs(self.incomingdir)
5076 
5077     def _setup_corruption_advisory(self):
5078         # we don't actually create the corruption-advisory dir until necessary
5079hunk ./src/allmydata/storage/backends/das/core.py 70
5080-        self.corruption_advisory_dir = os.path.join(self.storedir,
5081-                                                    "corruption-advisories")
5082+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5083 
5084     def _setup_bucket_counter(self):
5085hunk ./src/allmydata/storage/backends/das/core.py 73
5086-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5087+        statefname = self.storedir.child("bucket_counter.state")
5088         self.bucket_counter = FSBucketCountingCrawler(statefname)
5089         self.bucket_counter.setServiceParent(self)
5090 
5091hunk ./src/allmydata/storage/backends/das/core.py 78
5092     def _setup_lease_checkerf(self, expiration_policy):
5093-        statefile = os.path.join(self.storedir, "lease_checker.state")
5094-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5095+        statefile = self.storedir.child("lease_checker.state")
5096+        historyfile = self.storedir.child("lease_checker.history")
5097         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5098         self.lease_checker.setServiceParent(self)
5099 
5100hunk ./src/allmydata/storage/backends/das/core.py 83
5101-    def get_incoming(self, storageindex):
5102+    def get_incoming_shnums(self, storageindex):
5103         """Return the set of incoming shnums."""
5104         try:
5105hunk ./src/allmydata/storage/backends/das/core.py 86
5106-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5107-            incominglist = os.listdir(incomingsharesdir)
5108-            incomingshnums = [int(x) for x in incominglist]
5109-            return set(incomingshnums)
5110-        except OSError:
5111-            # XXX I'd like to make this more specific. If there are no shares at all.
5112-            return set()
5113+           
5114+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5115+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5116+            return frozenset(incomingshnums)
5117+        except UnlistableError:
5118+            # There is no shares directory at all.
5119+            return frozenset()
5120             
5121     def get_shares(self, storageindex):
5122         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5123hunk ./src/allmydata/storage/backends/das/core.py 96
5124-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5125+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5126         try:
5127hunk ./src/allmydata/storage/backends/das/core.py 98
5128-            for f in os.listdir(finalstoragedir):
5129-                if NUM_RE.match(f):
5130-                    filename = os.path.join(finalstoragedir, f)
5131-                    yield ImmutableShare(filename, storageindex, int(f))
5132-        except OSError:
5133-            # Commonly caused by there being no shares at all.
5134+            for f in finalstoragedir.listdir():
5135+                if NUM_RE.match(f.basename):
5136+                    yield ImmutableShare(f, storageindex, int(f))
5137+        except UnlistableError:
5138+            # There is no shares directory at all.
5139             pass
5140         
5141     def get_available_space(self):
5142hunk ./src/allmydata/storage/backends/das/core.py 149
5143 # then the value stored in this field will be the actual share data length
5144 # modulo 2**32.
5145 
5146-class ImmutableShare:
5147+class ImmutableShare(object):
5148     LEASE_SIZE = struct.calcsize(">L32s32sL")
5149     sharetype = "immutable"
5150 
5151hunk ./src/allmydata/storage/backends/das/core.py 166
5152         if create:
5153             # touch the file, so later callers will see that we're working on
5154             # it. Also construct the metadata.
5155-            assert not os.path.exists(self.finalhome)
5156-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5157+            assert not finalhome.exists()
5158+            fp_make_dirs(self.incominghome)
5159             f = open(self.incominghome, 'wb')
5160             # The second field -- the four-byte share data length -- is no
5161             # longer used as of Tahoe v1.3.0, but we continue to write it in
5162hunk ./src/allmydata/storage/backends/das/core.py 316
5163         except IndexError:
5164             self.add_lease(lease_info)
5165 
5166-
5167     def cancel_lease(self, cancel_secret):
5168         """Remove a lease with the given cancel_secret. If the last lease is
5169         cancelled, the file will be removed. Return the number of bytes that
5170hunk ./src/allmydata/storage/common.py 19
5171 def si_a2b(ascii_storageindex):
5172     return base32.a2b(ascii_storageindex)
5173 
5174-def storage_index_to_dir(storageindex):
5175+def storage_index_to_dir(startfp, storageindex):
5176     sia = si_b2a(storageindex)
5177     return os.path.join(sia[:2], sia)
5178hunk ./src/allmydata/storage/server.py 210
5179 
5180         # fill incoming with all shares that are incoming use a set operation
5181         # since there's no need to operate on individual pieces
5182-        incoming = self.backend.get_incoming(storageindex)
5183+        incoming = self.backend.get_incoming_shnums(storageindex)
5184 
5185         for shnum in ((sharenums - alreadygot) - incoming):
5186             if (not limited) or (remaining_space >= max_space_per_bucket):
5187hunk ./src/allmydata/test/test_backends.py 5
5188 
5189 from twisted.python.filepath import FilePath
5190 
5191+from allmydata.util.log import msg
5192+
5193 from StringIO import StringIO
5194 
5195 from allmydata.test.common_util import ReallyEqualMixin
5196hunk ./src/allmydata/test/test_backends.py 42
5197 
5198 testnodeid = 'testnodeidxxxxxxxxxx'
5199 
5200-class TestFilesMixin(unittest.TestCase):
5201-    def setUp(self):
5202-        self.storedir = FilePath('teststoredir')
5203-        self.basedir = self.storedir.child('shares')
5204-        self.baseincdir = self.basedir.child('incoming')
5205-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5206-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5207-        self.shareincomingname = self.sharedirincomingname.child('0')
5208-        self.sharefname = self.sharedirfinalname.child('0')
5209+class MockStat:
5210+    def __init__(self):
5211+        self.st_mode = None
5212 
5213hunk ./src/allmydata/test/test_backends.py 46
5214+class MockFiles(unittest.TestCase):
5215+    """ I simulate a filesystem that the code under test can use. I flag the
5216+    code under test if it reads or writes outside of its prescribed
5217+    subtree. I simulate just the parts of the filesystem that the current
5218+    implementation of DAS backend needs. """
5219     def call_open(self, fname, mode):
5220         fnamefp = FilePath(fname)
5221hunk ./src/allmydata/test/test_backends.py 53
5222+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5223+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5224+
5225         if fnamefp == self.storedir.child('bucket_counter.state'):
5226             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5227         elif fnamefp == self.storedir.child('lease_checker.state'):
5228hunk ./src/allmydata/test/test_backends.py 61
5229             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5230         elif fnamefp == self.storedir.child('lease_checker.history'):
5231+            # This is separated out from the else clause below just because
5232+            # we know this particular file is going to be used by the
5233+            # current implementation of DAS backend, and we might want to
5234+            # use this information in this test in the future...
5235             return StringIO()
5236         else:
5237hunk ./src/allmydata/test/test_backends.py 67
5238-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5239-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5240+            # Anything else you open inside your subtree appears to be an
5241+            # empty file.
5242+            return StringIO()
5243 
5244     def call_isdir(self, fname):
5245         fnamefp = FilePath(fname)
5246hunk ./src/allmydata/test/test_backends.py 73
5247-        if fnamefp == self.storedir.child('shares'):
5248+        return fnamefp.isdir()
5249+
5250+        self.failUnless(self.storedir == self or self.storedir in self.parents(),
5251+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
5252+
5253+        # The first two cases are separate from the else clause below just
5254+        # because we know that the current implementation of the DAS backend
5255+        # inspects these two directories and we might want to make use of
5256+        # that information in the tests in the future...
5257+        if self == self.storedir.child('shares'):
5258             return True
5259hunk ./src/allmydata/test/test_backends.py 84
5260-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5261+        elif self == self.storedir.child('shares').child('incoming'):
5262             return True
5263         else:
5264hunk ./src/allmydata/test/test_backends.py 87
5265-            self.failUnless(self.storedir in fnamefp.parents(),
5266-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5267+            # Anything else you open inside your subtree appears to be a
5268+            # directory.
5269+            return True
5270 
5271     def call_mkdir(self, fname, mode):
5272hunk ./src/allmydata/test/test_backends.py 92
5273-        self.failUnlessEqual(0777, mode)
5274         fnamefp = FilePath(fname)
5275         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5276                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5277hunk ./src/allmydata/test/test_backends.py 95
5278+        self.failUnlessEqual(0777, mode)
5279 
5280hunk ./src/allmydata/test/test_backends.py 97
5281+    def call_listdir(self, fname):
5282+        fnamefp = FilePath(fname)
5283+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5284+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5285 
5286hunk ./src/allmydata/test/test_backends.py 102
5287-    @mock.patch('os.mkdir')
5288-    @mock.patch('__builtin__.open')
5289-    @mock.patch('os.listdir')
5290-    @mock.patch('os.path.isdir')
5291-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5292-        mocklistdir.return_value = []
5293+    def call_stat(self, fname):
5294+        fnamefp = FilePath(fname)
5295+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5296+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5297+
5298+        msg("%s.call_stat(%s)" % (self, fname,))
5299+        mstat = MockStat()
5300+        mstat.st_mode = 16893 # a directory
5301+        return mstat
5302+
5303+    def setUp(self):
5304+        msg( "%s.setUp()" % (self,))
5305+        self.storedir = FilePath('teststoredir')
5306+        self.basedir = self.storedir.child('shares')
5307+        self.baseincdir = self.basedir.child('incoming')
5308+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5309+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5310+        self.shareincomingname = self.sharedirincomingname.child('0')
5311+        self.sharefname = self.sharedirfinalname.child('0')
5312+
5313+        self.mocklistdirp = mock.patch('os.listdir')
5314+        mocklistdir = self.mocklistdirp.__enter__()
5315+        mocklistdir.side_effect = self.call_listdir
5316+
5317+        self.mockmkdirp = mock.patch('os.mkdir')
5318+        mockmkdir = self.mockmkdirp.__enter__()
5319         mockmkdir.side_effect = self.call_mkdir
5320hunk ./src/allmydata/test/test_backends.py 129
5321+
5322+        self.mockisdirp = mock.patch('os.path.isdir')
5323+        mockisdir = self.mockisdirp.__enter__()
5324         mockisdir.side_effect = self.call_isdir
5325hunk ./src/allmydata/test/test_backends.py 133
5326+
5327+        self.mockopenp = mock.patch('__builtin__.open')
5328+        mockopen = self.mockopenp.__enter__()
5329         mockopen.side_effect = self.call_open
5330hunk ./src/allmydata/test/test_backends.py 137
5331-        mocklistdir.return_value = []
5332-       
5333-        test_func()
5334-       
5335-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5336+
5337+        self.mockstatp = mock.patch('os.stat')
5338+        mockstat = self.mockstatp.__enter__()
5339+        mockstat.side_effect = self.call_stat
5340+
5341+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5342+        mockfpstat = self.mockfpstatp.__enter__()
5343+        mockfpstat.side_effect = self.call_stat
5344+
5345+    def tearDown(self):
5346+        msg( "%s.tearDown()" % (self,))
5347+        self.mockfpstatp.__exit__()
5348+        self.mockstatp.__exit__()
5349+        self.mockopenp.__exit__()
5350+        self.mockisdirp.__exit__()
5351+        self.mockmkdirp.__exit__()
5352+        self.mocklistdirp.__exit__()
5353 
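setUp() above enters each mock.patch() by calling __enter__() directly and tearDown() unwinds them with __exit__(). An equivalent arrangement (a sketch, assuming the mock library's patcher start()/stop() methods and trial's addCleanup(); none of this is the patch's own code) registers each patcher for automatic cleanup, so patches installed before a failure still get removed:

    import os, mock
    from twisted.trial import unittest

    class PatcherCleanupExample(unittest.TestCase):
        """Sketch only: install mock.patch()es via start() and undo them with addCleanup()."""

        def _patch(self, target, side_effect=None):
            patcher = mock.patch(target)
            mocked = patcher.start()
            if side_effect is not None:
                mocked.side_effect = side_effect
            self.addCleanup(patcher.stop)   # undone automatically, even on failure
            return mocked

        def setUp(self):
            self.mockmkdir = self._patch('os.mkdir')
            self.mocklistdir = self._patch('os.listdir', side_effect=lambda d: [])

        def test_patched(self):
            os.mkdir('never-created', 0777)             # intercepted by the mock
            self.failUnlessEqual(os.listdir('x'), [])   # served by the side_effect
            self.failUnless(self.mockmkdir.called)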
5354 expiration_policy = {'enabled' : False,
5355                      'mode' : 'age',
5356hunk ./src/allmydata/test/test_backends.py 184
5357         self.failIf(mockopen.called)
5358         self.failIf(mockmkdir.called)
5359 
5360-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5361+class TestServerConstruction(MockFiles, ReallyEqualMixin):
5362     def test_create_server_fs_backend(self):
5363         """ This tests whether a server instance can be constructed with a
5364         filesystem backend. To pass the test, it mustn't use the filesystem
5365hunk ./src/allmydata/test/test_backends.py 190
5366         outside of its configured storedir. """
5367 
5368-        def _f():
5369-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5370+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5371 
5372hunk ./src/allmydata/test/test_backends.py 192
5373-        self._help_test_stay_in_your_subtree(_f)
5374-
5375-
5376-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5377-    """ This tests both the StorageServer xyz """
5378-    @mock.patch('__builtin__.open')
5379-    def setUp(self, mockopen):
5380-        def call_open(fname, mode):
5381-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5382-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5383-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5384-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5385-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5386-                return StringIO()
5387-            else:
5388-                _assert(False, "The tester code doesn't recognize this case.") 
5389-
5390-        mockopen.side_effect = call_open
5391-        self.backend = DASCore(storedir, expiration_policy)
5392-        self.ss = StorageServer(testnodeid, self.backend)
5393-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
5394-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5395+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
5396+    """ This tests both the StorageServer and the DAS backend together. """
5397+    def setUp(self):
5398+        MockFiles.setUp(self)
5399+        try:
5400+            self.backend = DASCore(self.storedir, expiration_policy)
5401+            self.ss = StorageServer(testnodeid, self.backend)
5402+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
5403+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5404+        except:
5405+            MockFiles.tearDown(self)
5406+            raise
5407 
5408     @mock.patch('time.time')
5409     def test_write_and_read_share(self, mocktime):
5410hunk ./src/allmydata/util/fileutil.py 8
5411 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5412 
5413 from twisted.python import log
5414+from twisted.python.filepath import UnlistableError
5415 
5416 from pycryptopp.cipher.aes import AES
5417 
5418hunk ./src/allmydata/util/fileutil.py 187
5419             raise tx
5420         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5421 
5422+def fp_make_dirs(dirfp):
5423+    """
5424+    An idempotent version of FilePath.makedirs().  If the dir already
5425+    exists, do nothing and return without raising an exception.  If this
5426+    call creates the dir, return without raising an exception.  If there is
5427+    an error that prevents creation or if the directory gets deleted after
5428+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5429+    exists, raise an exception.
5430+    """
5431+    log.msg( "xxx 0 %s" % (dirfp,))
5432+    tx = None
5433+    try:
5434+        dirfp.makedirs()
5435+    except OSError, x:
5436+        tx = x
5437+
5438+    if not dirfp.isdir():
5439+        if tx:
5440+            raise tx
5441+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5442+
5443 def rmtree(dirname):
5444     """
5445     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5446hunk ./src/allmydata/util/fileutil.py 244
5447             raise OSError, "Failed to remove dir for unknown reason."
5448         raise OSError, excs
5449 
5450+def fp_remove(dirfp):
5451+    try:
5452+        dirfp.remove()
5453+    except UnlistableError, e:
5454+        if e.originalException.errno != errno.ENOENT:
5455+            raise
5456+
5457 def rm_dir(dirname):
5458     # Renamed to be like shutil.rmtree and unlike rmdir.
5459     return rmtree(dirname)
5460}
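Several hunks in the patch above replace open()/pickle.dump()/close() sequences with FilePath's whole-file helpers. A small, self-contained sketch of that pattern (assuming twisted.python.filepath; note that getContent() returns the file's bytes, so it pairs with pickle.loads(), and setContent() writes via a temporary sibling file before moving it into place):

    import pickle, tempfile
    from twisted.python.filepath import FilePath

    def save_state(statefp, state):
        """Write the whole pickled state in one call."""
        statefp.setContent(pickle.dumps(state))

    def load_state(statefp, default):
        """Read the pickled state back; fall back to a default if the file is absent."""
        try:
            return pickle.loads(statefp.getContent())
        except EnvironmentError:
            return default

    statefp = FilePath(tempfile.mkdtemp()).child('bucket_counter.state')
    print load_state(statefp, {"version": 1})   # file absent: returns the default
    save_state(statefp, {"version": 1, "last-cycle-finished": None})
    print load_state(statefp, {})               # round-trips the saved dict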
5461[another temporary patch for sharing work-in-progress
5462zooko@zooko.com**20110720055918
5463 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
5464 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
5465 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
5466 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units insofar as possible...)
5467 
5468] {
5469hunk ./src/allmydata/storage/backends/das/core.py 5
5470 
5471 from allmydata.interfaces import IStorageBackend
5472 from allmydata.storage.backends.base import Backend
5473-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5474+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5475 from allmydata.util.assertutil import precondition
5476 
5477 #from foolscap.api import Referenceable
5478hunk ./src/allmydata/storage/backends/das/core.py 10
5479 from twisted.application import service
5480+from twisted.python.filepath import UnlistableError
5481 
5482 from zope.interface import implements
5483 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
5484hunk ./src/allmydata/storage/backends/das/core.py 17
5485 from allmydata.util import fileutil, idlib, log, time_format
5486 import allmydata # for __full_version__
5487 
5488-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5489-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5490+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5491+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5492 from allmydata.storage.lease import LeaseInfo
5493 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5494      create_mutable_sharefile
5495hunk ./src/allmydata/storage/backends/das/core.py 41
5496 # $SHARENUM matches this regex:
5497 NUM_RE=re.compile("^[0-9]+$")
5498 
5499+def is_num(fp):
5500+    return NUM_RE.match(fp.basename)
5501+
5502 class DASCore(Backend):
5503     implements(IStorageBackend)
5504     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5505hunk ./src/allmydata/storage/backends/das/core.py 58
5506         self.storedir = storedir
5507         self.readonly = readonly
5508         self.reserved_space = int(reserved_space)
5509-        if self.reserved_space:
5510-            if self.get_available_space() is None:
5511-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5512-                        umid="0wZ27w", level=log.UNUSUAL)
5513-
5514         self.sharedir = self.storedir.child("shares")
5515         fileutil.fp_make_dirs(self.sharedir)
5516         self.incomingdir = self.sharedir.child('incoming')
5517hunk ./src/allmydata/storage/backends/das/core.py 62
5518         self._clean_incomplete()
5519+        if self.reserved_space and (self.get_available_space() is None):
5520+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5521+                    umid="0wZ27w", level=log.UNUSUAL)
5522+
5523 
5524     def _clean_incomplete(self):
5525         fileutil.fp_remove(self.incomingdir)
5526hunk ./src/allmydata/storage/backends/das/core.py 87
5527         self.lease_checker.setServiceParent(self)
5528 
5529     def get_incoming_shnums(self, storageindex):
5530-        """Return the set of incoming shnums."""
5531+        """ Return a frozenset of the shnum (as ints) of incoming shares. """
5532+        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
5533         try:
5534hunk ./src/allmydata/storage/backends/das/core.py 90
5535-           
5536-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5537-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5538-            return frozenset(incomingshnums)
5539+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
5540+            shnums = [ int(fp.basename) for fp in childfps ]
5541+            return frozenset(shnums)
5542         except UnlistableError:
5543             # There is no shares directory at all.
5544             return frozenset()
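get_incoming_shnums() above derives the set of in-flight share numbers purely from the names of the children of the per-storage-index incoming directory, and treats a missing directory as "nothing incoming". The same idea with only the standard library (a simplified equivalent, not the patch's code, which filters with NUM_RE and uses FilePath.children()):

    import errno, os

    def incoming_shnums(incoming_si_dir):
        """Return a frozenset of share numbers taken from numeric child names."""
        try:
            names = os.listdir(incoming_si_dir)
        except OSError, e:
            if e.errno == errno.ENOENT:
                return frozenset()    # no incoming directory at all
            raise
        return frozenset(int(name) for name in names if name.isdigit())

    print incoming_shnums('no-such-directory')   # frozenset([])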
5545hunk ./src/allmydata/storage/backends/das/core.py 98
5546             
5547     def get_shares(self, storageindex):
5548-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5549+        """ Generate ImmutableShare objects for shares we have for this
5550+        storageindex. ("Shares we have" means completed ones, excluding
5551+        incoming ones.)"""
5552         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5553         try:
5554hunk ./src/allmydata/storage/backends/das/core.py 103
5555-            for f in finalstoragedir.listdir():
5556-                if NUM_RE.match(f.basename):
5557-                    yield ImmutableShare(f, storageindex, int(f))
5558+            for fp in finalstoragedir.children():
5559+                if is_num(fp):
5560+                    yield ImmutableShare(fp, storageindex)
5561         except UnlistableError:
5562             # There is no shares directory at all.
5563             pass
5564hunk ./src/allmydata/storage/backends/das/core.py 116
5565         return fileutil.get_available_space(self.storedir, self.reserved_space)
5566 
5567     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
5568-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
5569-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
5570+        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
5571+        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
5572         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
5573         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
5574         return bw
5575hunk ./src/allmydata/storage/backends/das/expirer.py 50
5576     slow_start = 360 # wait 6 minutes after startup
5577     minimum_cycle_time = 12*60*60 # not more than twice per day
5578 
5579-    def __init__(self, statefile, historyfile, expiration_policy):
5580-        self.historyfile = historyfile
5581+    def __init__(self, statefile, historyfp, expiration_policy):
5582+        self.historyfp = historyfp
5583         self.expiration_enabled = expiration_policy['enabled']
5584         self.mode = expiration_policy['mode']
5585         self.override_lease_duration = None
5586hunk ./src/allmydata/storage/backends/das/expirer.py 80
5587             self.state["cycle-to-date"].setdefault(k, so_far[k])
5588 
5589         # initialize history
5590-        if not os.path.exists(self.historyfile):
5591+        if not self.historyfp.exists():
5592             history = {} # cyclenum -> dict
5593hunk ./src/allmydata/storage/backends/das/expirer.py 82
5594-            f = open(self.historyfile, "wb")
5595-            pickle.dump(history, f)
5596-            f.close()
5597+            self.historyfp.setContent(pickle.dumps(history))
5598 
5599     def create_empty_cycle_dict(self):
5600         recovered = self.create_empty_recovered_dict()
5601hunk ./src/allmydata/storage/backends/das/expirer.py 305
5602         # copy() needs to become a deepcopy
5603         h["space-recovered"] = s["space-recovered"].copy()
5604 
5605-        history = pickle.load(open(self.historyfile, "rb"))
5606+        history = pickle.load(self.historyfp.getContent())
5607         history[cycle] = h
5608         while len(history) > 10:
5609             oldcycles = sorted(history.keys())
5610hunk ./src/allmydata/storage/backends/das/expirer.py 310
5611             del history[oldcycles[0]]
5612-        f = open(self.historyfile, "wb")
5613-        pickle.dump(history, f)
5614-        f.close()
5615+        self.historyfp.setContent(pickle.dumps(history))
5616 
5617     def get_state(self):
5618         """In addition to the crawler state described in
5619hunk ./src/allmydata/storage/backends/das/expirer.py 379
5620         progress = self.get_progress()
5621 
5622         state = ShareCrawler.get_state(self) # does a shallow copy
5623-        history = pickle.load(open(self.historyfile, "rb"))
5624+        history = pickle.load(self.historyfp.getContent())
5625         state["history"] = history
5626 
5627         if not progress["cycle-in-progress"]:
5628hunk ./src/allmydata/storage/common.py 19
5629 def si_a2b(ascii_storageindex):
5630     return base32.a2b(ascii_storageindex)
5631 
5632-def storage_index_to_dir(startfp, storageindex):
5633+def si_dir(startfp, storageindex):
5634     sia = si_b2a(storageindex)
5635hunk ./src/allmydata/storage/common.py 21
5636-    return os.path.join(sia[:2], sia)
5637+    return startfp.child(sia[:2]).child(sia)
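si_dir() above maps a storage index onto a two-level directory: a child named after the first two characters of the base32 encoding, then a child named after the full encoding; this is where directory names like 'orsxg5dtorxxeylhmvpws3temv4a' in the tests come from. A stand-alone illustration using the standard library's base32 in place of allmydata.util.base32 (lowercased and stripped of padding to match; toy_si_dir is a hypothetical name and returns a plain path string rather than a FilePath):

    import base64, os

    def toy_si_dir(startdir, storageindex):
        """Illustrative stand-in: <startdir>/<2-char prefix>/<full lowercase base32 si>."""
        sia = base64.b32encode(storageindex).rstrip('=').lower()
        return os.path.join(startdir, sia[:2], sia)

    print toy_si_dir('teststoredir/shares', 'teststorage_index')
    # -> teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a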
5638hunk ./src/allmydata/storage/crawler.py 68
5639     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
5640     minimum_cycle_time = 300 # don't run a cycle faster than this
5641 
5642-    def __init__(self, statefname, allowed_cpu_percentage=None):
5643+    def __init__(self, statefp, allowed_cpu_percentage=None):
5644         service.MultiService.__init__(self)
5645         if allowed_cpu_percentage is not None:
5646             self.allowed_cpu_percentage = allowed_cpu_percentage
5647hunk ./src/allmydata/storage/crawler.py 72
5648-        self.statefname = statefname
5649+        self.statefp = statefp
5650         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
5651                          for i in range(2**10)]
5652         self.prefixes.sort()
5653hunk ./src/allmydata/storage/crawler.py 192
5654         #                            of the last bucket to be processed, or
5655         #                            None if we are sleeping between cycles
5656         try:
5657-            f = open(self.statefname, "rb")
5658-            state = pickle.load(f)
5659-            f.close()
5660+            state = pickle.loads(self.statefp.getContent())
5661         except EnvironmentError:
5662             state = {"version": 1,
5663                      "last-cycle-finished": None,
5664hunk ./src/allmydata/storage/crawler.py 228
5665         else:
5666             last_complete_prefix = self.prefixes[lcpi]
5667         self.state["last-complete-prefix"] = last_complete_prefix
5668-        tmpfile = self.statefname + ".tmp"
5669-        f = open(tmpfile, "wb")
5670-        pickle.dump(self.state, f)
5671-        f.close()
5672-        fileutil.move_into_place(tmpfile, self.statefname)
5673+        self.statefp.setContent(pickle.dumps(self.state))
5674 
5675     def startService(self):
5676         # arrange things to look like we were just sleeping, so
5677hunk ./src/allmydata/storage/crawler.py 440
5678 
5679     minimum_cycle_time = 60*60 # we don't need this more than once an hour
5680 
5681-    def __init__(self, statefname, num_sample_prefixes=1):
5682-        FSShareCrawler.__init__(self, statefname)
5683+    def __init__(self, statefp, num_sample_prefixes=1):
5684+        FSShareCrawler.__init__(self, statefp)
5685         self.num_sample_prefixes = num_sample_prefixes
5686 
5687     def add_initial_state(self):
5688hunk ./src/allmydata/storage/server.py 11
5689 from allmydata.util import fileutil, idlib, log, time_format
5690 import allmydata # for __full_version__
5691 
5692-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5693-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5694+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5695+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5696 from allmydata.storage.lease import LeaseInfo
5697 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5698      create_mutable_sharefile
5699hunk ./src/allmydata/storage/server.py 173
5700         # to a particular owner.
5701         start = time.time()
5702         self.count("allocate")
5703-        alreadygot = set()
5704         incoming = set()
5705         bucketwriters = {} # k: shnum, v: BucketWriter
5706 
5707hunk ./src/allmydata/storage/server.py 199
5708             remaining_space -= self.allocated_size()
5709         # self.readonly_storage causes remaining_space <= 0
5710 
5711-        # fill alreadygot with all shares that we have, not just the ones
5712+        # Fill alreadygot with all shares that we have, not just the ones
5713         # they asked about: this will save them a lot of work. Add or update
5714         # leases for all of them: if they want us to hold shares for this
5715hunk ./src/allmydata/storage/server.py 202
5716-        # file, they'll want us to hold leases for this file.
5717+        # file, they'll want us to hold leases for all the shares of it.
5718+        alreadygot = set()
5719         for share in self.backend.get_shares(storageindex):
5720hunk ./src/allmydata/storage/server.py 205
5721-            alreadygot.add(share.shnum)
5722             share.add_or_renew_lease(lease_info)
5723hunk ./src/allmydata/storage/server.py 206
5724+            alreadygot.add(share.shnum)
5725 
5726hunk ./src/allmydata/storage/server.py 208
5727-        # fill incoming with all shares that are incoming use a set operation
5728-        # since there's no need to operate on individual pieces
5729+        # all share numbers that are incoming
5730         incoming = self.backend.get_incoming_shnums(storageindex)
5731 
5732         for shnum in ((sharenums - alreadygot) - incoming):
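The comments above spell out the allocation rule: create a writer only for share numbers the client asked for that are neither already held as complete shares nor already incoming. A tiny worked example of that set arithmetic (illustrative values):

    sharenums = frozenset([0, 1, 2, 3])   # what the client asked for
    alreadygot = frozenset([1])           # complete shares we already hold
    incoming = frozenset([2])             # shares some other upload is writing
    print (sharenums - alreadygot) - incoming   # frozenset([0, 3])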
5733hunk ./src/allmydata/storage/server.py 282
5734             total_space_freed += sf.cancel_lease(cancel_secret)
5735 
5736         if found_buckets:
5737-            storagedir = os.path.join(self.sharedir,
5738-                                      storage_index_to_dir(storageindex))
5739-            if not os.listdir(storagedir):
5740-                os.rmdir(storagedir)
5741+            storagedir = si_dir(self.sharedir, storageindex)
5742+            fp_rmdir_if_empty(storagedir)
5743 
5744         if self.stats_provider:
5745             self.stats_provider.count('storage_server.bytes_freed',
5746hunk ./src/allmydata/test/test_backends.py 52
5747     subtree. I simulate just the parts of the filesystem that the current
5748     implementation of DAS backend needs. """
5749     def call_open(self, fname, mode):
5750+        assert isinstance(fname, basestring), fname
5751         fnamefp = FilePath(fname)
5752         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5753                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5754hunk ./src/allmydata/test/test_backends.py 104
5755                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5756 
5757     def call_stat(self, fname):
5758+        assert isinstance(fname, basestring), fname
5759         fnamefp = FilePath(fname)
5760         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5761                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5762hunk ./src/allmydata/test/test_backends.py 217
5763 
5764         mocktime.return_value = 0
5765         # Inspect incoming and fail unless it's empty.
5766-        incomingset = self.ss.backend.get_incoming('teststorage_index')
5767-        self.failUnlessReallyEqual(incomingset, set())
5768+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
5769+        self.failUnlessReallyEqual(incomingset, frozenset())
5770         
5771         # Populate incoming with the sharenum: 0.
5772hunk ./src/allmydata/test/test_backends.py 221
5773-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5774+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5775 
5776         # Inspect incoming and fail unless the sharenum: 0 is listed there.
5777hunk ./src/allmydata/test/test_backends.py 224
5778-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
5779+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
5780         
5781         # Attempt to create a second share writer with the same sharenum.
5782hunk ./src/allmydata/test/test_backends.py 227
5783-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5784+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5785 
5786         # Show that no sharewriter results from a remote_allocate_buckets
5787         # with the same si and sharenum, until BucketWriter.remote_close()
5788hunk ./src/allmydata/test/test_backends.py 280
5789         StorageServer object. """
5790 
5791         def call_listdir(dirname):
5792+            precondition(isinstance(dirname, basestring), dirname)
5793             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
5794             return ['0']
5795 
5796hunk ./src/allmydata/test/test_backends.py 287
5797         mocklistdir.side_effect = call_listdir
5798 
5799         def call_open(fname, mode):
5800+            precondition(isinstance(fname, basestring), fname)
5801             self.failUnlessReallyEqual(fname, sharefname)
5802             self.failUnlessEqual(mode[0], 'r', mode)
5803             self.failUnless('b' in mode, mode)
5804hunk ./src/allmydata/test/test_backends.py 297
5805 
5806         datalen = len(share_data)
5807         def call_getsize(fname):
5808+            precondition(isinstance(fname, basestring), fname)
5809             self.failUnlessReallyEqual(fname, sharefname)
5810             return datalen
5811         mockgetsize.side_effect = call_getsize
5812hunk ./src/allmydata/test/test_backends.py 303
5813 
5814         def call_exists(fname):
5815+            precondition(isinstance(fname, basestring), fname)
5816             self.failUnlessReallyEqual(fname, sharefname)
5817             return True
5818         mockexists.side_effect = call_exists
5819hunk ./src/allmydata/test/test_backends.py 321
5820         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
5821 
5822 
5823-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
5824-    @mock.patch('time.time')
5825-    @mock.patch('os.mkdir')
5826-    @mock.patch('__builtin__.open')
5827-    @mock.patch('os.listdir')
5828-    @mock.patch('os.path.isdir')
5829-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5830+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
5831+    def test_create_fs_backend(self):
5832         """ This tests whether a file system backend instance can be
5833         constructed. To pass the test, it has to use the
5834         filesystem in only the prescribed ways. """
5835hunk ./src/allmydata/test/test_backends.py 327
5836 
5837-        def call_open(fname, mode):
5838-            if fname == os.path.join(storedir,'bucket_counter.state'):
5839-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5840-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5841-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5842-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5843-                return StringIO()
5844-            else:
5845-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5846-        mockopen.side_effect = call_open
5847-
5848-        def call_isdir(fname):
5849-            if fname == os.path.join(storedir,'shares'):
5850-                return True
5851-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5852-                return True
5853-            else:
5854-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5855-        mockisdir.side_effect = call_isdir
5856-
5857-        def call_mkdir(fname, mode):
5858-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
5859-            self.failUnlessEqual(0777, mode)
5860-            if fname == storedir:
5861-                return None
5862-            elif fname == os.path.join(storedir,'shares'):
5863-                return None
5864-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5865-                return None
5866-            else:
5867-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
5868-        mockmkdir.side_effect = call_mkdir
5869-
5870         # Now begin the test.
5871hunk ./src/allmydata/test/test_backends.py 328
5872-        DASCore('teststoredir', expiration_policy)
5873-
5874-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5875-
5876+        DASCore(self.storedir, expiration_policy)
5877hunk ./src/allmydata/util/fileutil.py 7
5878 
5879 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5880 
5881+from allmydata.util.assertutil import precondition
5882+
5883 from twisted.python import log
5884hunk ./src/allmydata/util/fileutil.py 10
5885-from twisted.python.filepath import UnlistableError
5886+from twisted.python.filepath import FilePath, UnlistableError
5887 
5888 from pycryptopp.cipher.aes import AES
5889 
5890hunk ./src/allmydata/util/fileutil.py 210
5891             raise tx
5892         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5893 
5894+def fp_rmdir_if_empty(dirfp):
5895+    """ Remove the directory if it is empty. """
5896+    try:
5897+        os.rmdir(dirfp.path)
5898+    except OSError, e:
5899+        if e.errno != errno.ENOTEMPTY:
5900+            raise
5901+    else:
5902+        dirfp.changed()
5903+
5904 def rmtree(dirname):
5905     """
5906     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5907hunk ./src/allmydata/util/fileutil.py 257
5908         raise OSError, excs
5909 
5910 def fp_remove(dirfp):
5911+    """
5912+    An idempotent version of shutil.rmtree().  If the dir is already gone,
5913+    do nothing and return without raising an exception.  If this call
5914+    removes the dir, return without raising an exception.  If there is an
5915+    error that prevents removal or if the directory gets created again by
5916+    someone else after this deletes it and before this checks that it is
5917+    gone, raise an exception.
5918+    """
5919     try:
5920         dirfp.remove()
5921     except UnlistableError, e:
5922hunk ./src/allmydata/util/fileutil.py 270
5923         if e.originalException.errno != errno.ENOENT:
5924             raise
5925+    except OSError, e:
5926+        if e.errno != errno.ENOENT:
5927+            raise
5928 
5929 def rm_dir(dirname):
5930     # Renamed to be like shutil.rmtree and unlike rmdir.
5931hunk ./src/allmydata/util/fileutil.py 387
5932         import traceback
5933         traceback.print_exc()
5934 
5935-def get_disk_stats(whichdir, reserved_space=0):
5936+def get_disk_stats(whichdirfp, reserved_space=0):
5937     """Return disk statistics for the storage disk, in the form of a dict
5938     with the following fields.
5939       total:            total bytes on disk
5940hunk ./src/allmydata/util/fileutil.py 408
5941     you can pass how many bytes you would like to leave unused on this
5942     filesystem as reserved_space.
5943     """
5944+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5945 
5946     if have_GetDiskFreeSpaceExW:
5947         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5948hunk ./src/allmydata/util/fileutil.py 419
5949         n_free_for_nonroot = c_ulonglong(0)
5950         n_total            = c_ulonglong(0)
5951         n_free_for_root    = c_ulonglong(0)
5952-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5953+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5954                                                byref(n_total),
5955                                                byref(n_free_for_root))
5956         if retval == 0:
5957hunk ./src/allmydata/util/fileutil.py 424
5958             raise OSError("Windows error %d attempting to get disk statistics for %r"
5959-                          % (GetLastError(), whichdir))
5960+                          % (GetLastError(), whichdirfp.path))
5961         free_for_nonroot = n_free_for_nonroot.value
5962         total            = n_total.value
5963         free_for_root    = n_free_for_root.value
5964hunk ./src/allmydata/util/fileutil.py 433
5965         # <http://docs.python.org/library/os.html#os.statvfs>
5966         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5967         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5968-        s = os.statvfs(whichdir)
5969+        s = os.statvfs(whichdirfp.path)
5970 
5971         # on my mac laptop:
5972         #  statvfs(2) is a wrapper around statfs(2).
5973hunk ./src/allmydata/util/fileutil.py 460
5974              'avail': avail,
5975            }
5976 
5977-def get_available_space(whichdir, reserved_space):
5978+def get_available_space(whichdirfp, reserved_space):
5979     """Returns available space for share storage in bytes, or None if no
5980     API to get this information is available.
5981 
5982hunk ./src/allmydata/util/fileutil.py 472
5983     you can pass how many bytes you would like to leave unused on this
5984     filesystem as reserved_space.
5985     """
5986+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5987     try:
5988hunk ./src/allmydata/util/fileutil.py 474
5989-        return get_disk_stats(whichdir, reserved_space)['avail']
5990+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5991     except AttributeError:
5992         return None
5993hunk ./src/allmydata/util/fileutil.py 477
5994-    except EnvironmentError:
5995-        log.msg("OS call to get disk statistics failed")
5996-        return 0
5997}
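The hunks above convert fileutil.get_disk_stats() and get_available_space() to take a twisted.python.filepath.FilePath instead of a path string, and add a precondition enforcing that. A minimal usage sketch of the new signature follows, assuming an allmydata tree with this change applied; the directory name and reserved-space figure are illustrative only.

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    storedirfp = FilePath('teststoredir')   # hypothetical storage directory

    # get_available_space() now requires a FilePath plus the number of bytes
    # to hold in reserve; it returns None when the platform offers no
    # disk-statistics API at all.
    avail = fileutil.get_available_space(storedirfp, reserved_space=2**20)
    if avail is not None:
        print "bytes available for new shares:", avail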
5998[jacp16 or so
5999wilcoxjg@gmail.com**20110722070036
6000 Ignore-this: 7548785cad146056eede9a16b93b569f
6001] {
6002merger 0.0 (
6003hunk ./src/allmydata/_auto_deps.py 21
6004-    "Twisted >= 2.4.0",
6005+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6006+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6007+    # support asynchronous close.
6008+    "Twisted >= 10.1.0",
6009hunk ./src/allmydata/_auto_deps.py 21
6010-    "Twisted >= 2.4.0",
6011+    "Twisted >= 11.0",
6012)
6013hunk ./src/allmydata/storage/backends/das/core.py 2
6014 import os, re, weakref, struct, time, stat
6015+from twisted.application import service
6016+from twisted.python.filepath import UnlistableError
6017+from twisted.python.filepath import FilePath
6018+from zope.interface import implements
6019 
6020hunk ./src/allmydata/storage/backends/das/core.py 7
6021+import allmydata # for __full_version__
6022 from allmydata.interfaces import IStorageBackend
6023 from allmydata.storage.backends.base import Backend
6024hunk ./src/allmydata/storage/backends/das/core.py 10
6025-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6026+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6027 from allmydata.util.assertutil import precondition
6028hunk ./src/allmydata/storage/backends/das/core.py 12
6029-
6030-#from foolscap.api import Referenceable
6031-from twisted.application import service
6032-from twisted.python.filepath import UnlistableError
6033-
6034-from zope.interface import implements
6035 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6036 from allmydata.util import fileutil, idlib, log, time_format
6037hunk ./src/allmydata/storage/backends/das/core.py 14
6038-import allmydata # for __full_version__
6039-
6040-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6041-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6042 from allmydata.storage.lease import LeaseInfo
6043 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6044      create_mutable_sharefile
6045hunk ./src/allmydata/storage/backends/das/core.py 21
6046 from allmydata.storage.crawler import FSBucketCountingCrawler
6047 from allmydata.util.hashutil import constant_time_compare
6048 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6049-
6050-from zope.interface import implements
6051+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6052 
6053 # storage/
6054 # storage/shares/incoming
6055hunk ./src/allmydata/storage/backends/das/core.py 49
6056         self._setup_lease_checkerf(expiration_policy)
6057 
6058     def _setup_storage(self, storedir, readonly, reserved_space):
6059+        precondition(isinstance(storedir, FilePath)) 
6060         self.storedir = storedir
6061         self.readonly = readonly
6062         self.reserved_space = int(reserved_space)
6063hunk ./src/allmydata/storage/backends/das/core.py 83
6064 
6065     def get_incoming_shnums(self, storageindex):
6066         """ Return a frozenset of the shnum (as ints) of incoming shares. """
6067-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
6068+        incomingdir = si_si2dir(self.incomingdir, storageindex)
6069         try:
6070             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
6071             shnums = [ int(fp.basename) for fp in childfps ]
6072hunk ./src/allmydata/storage/backends/das/core.py 96
6073         """ Generate ImmutableShare objects for shares we have for this
6074         storageindex. ("Shares we have" means completed ones, excluding
6075         incoming ones.)"""
6076-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
6077+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
6078         try:
6079             for fp in finalstoragedir.children():
6080                 if is_num(fp):
6081hunk ./src/allmydata/storage/backends/das/core.py 111
6082         return fileutil.get_available_space(self.storedir, self.reserved_space)
6083 
6084     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6085-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
6086-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6087+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
6088+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6089         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
6090         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
6091         return bw
6092hunk ./src/allmydata/storage/backends/null/core.py 18
6093         return None
6094 
6095     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6096-       
6097-        immutableshare = ImmutableShare()
6098+        immutableshare = ImmutableShare()
6099         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
6100 
6101     def set_storage_server(self, ss):
6102hunk ./src/allmydata/storage/backends/null/core.py 24
6103         self.ss = ss
6104 
6105-    def get_incoming(self, storageindex):
6106-        return set()
6107+    def get_incoming_shnums(self, storageindex):
6108+        return frozenset()
6109 
6110 class ImmutableShare:
6111     sharetype = "immutable"
6112hunk ./src/allmydata/storage/common.py 19
6113 def si_a2b(ascii_storageindex):
6114     return base32.a2b(ascii_storageindex)
6115 
6116-def si_dir(startfp, storageindex):
6117+def si_si2dir(startfp, storageindex):
6118     sia = si_b2a(storageindex)
6119     return startfp.child(sia[:2]).child(sia)
6120hunk ./src/allmydata/storage/immutable.py 20
6121     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
6122         self.ss = ss
6123         self._max_size = max_size # don't allow the client to write more than this
6124-
6125         self._canary = canary
6126         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
6127         self.closed = False
6128hunk ./src/allmydata/storage/lease.py 17
6129 
6130     def get_expiration_time(self):
6131         return self.expiration_time
6132+
6133     def get_grant_renew_time_time(self):
6134         # hack, based upon fixed 31day expiration period
6135         return self.expiration_time - 31*24*60*60
6136hunk ./src/allmydata/storage/lease.py 21
6137+
6138     def get_age(self):
6139         return time.time() - self.get_grant_renew_time_time()
6140 
6141hunk ./src/allmydata/storage/lease.py 32
6142          self.expiration_time) = struct.unpack(">L32s32sL", data)
6143         self.nodeid = None
6144         return self
6145+
6146     def to_immutable_data(self):
6147         return struct.pack(">L32s32sL",
6148                            self.owner_num,
6149hunk ./src/allmydata/storage/lease.py 45
6150                            int(self.expiration_time),
6151                            self.renew_secret, self.cancel_secret,
6152                            self.nodeid)
6153+
6154     def from_mutable_data(self, data):
6155         (self.owner_num,
6156          self.expiration_time,
6157hunk ./src/allmydata/storage/server.py 11
6158 from allmydata.util import fileutil, idlib, log, time_format
6159 import allmydata # for __full_version__
6160 
6161-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6162-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6163+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6164+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6165 from allmydata.storage.lease import LeaseInfo
6166 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6167      create_mutable_sharefile
6168hunk ./src/allmydata/storage/server.py 88
6169             else:
6170                 stats["mean"] = None
6171 
6172-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
6173-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
6174-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
6175+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
6176+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
6177+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
6178                              (0.999, "99_9_percentile", 1000)]
6179 
6180             for percentile, percentilestring, minnumtoobserve in orderstatlist:
6181hunk ./src/allmydata/storage/server.py 231
6182             header = f.read(32)
6183             f.close()
6184             if header[:32] == MutableShareFile.MAGIC:
6185+                # XXX  Can I exploit this code?
6186                 sf = MutableShareFile(filename, self)
6187                 # note: if the share has been migrated, the renew_lease()
6188                 # call will throw an exception, with information to help the
6189hunk ./src/allmydata/storage/server.py 237
6190                 # client update the lease.
6191             elif header[:4] == struct.pack(">L", 1):
6192+                # Check if version number is "1".
6193+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
6194                 sf = ShareFile(filename)
6195             else:
6196                 continue # non-sharefile
6197hunk ./src/allmydata/storage/server.py 285
6198             total_space_freed += sf.cancel_lease(cancel_secret)
6199 
6200         if found_buckets:
6201-            storagedir = si_dir(self.sharedir, storageindex)
6202+            # XXX  Yikes looks like code that shouldn't be in the server!
6203+            storagedir = si_si2dir(self.sharedir, storageindex)
6204             fp_rmdir_if_empty(storagedir)
6205 
6206         if self.stats_provider:
6207hunk ./src/allmydata/storage/server.py 301
6208             self.stats_provider.count('storage_server.bytes_added', consumed_size)
6209         del self._active_writers[bw]
6210 
6211-
6212     def remote_get_buckets(self, storageindex):
6213         start = time.time()
6214         self.count("get")
6215hunk ./src/allmydata/storage/server.py 329
6216         except StopIteration:
6217             return iter([])
6218 
6219+    #  XXX  As far as Zancas' grockery has gotten.
6220     def remote_slot_testv_and_readv_and_writev(self, storageindex,
6221                                                secrets,
6222                                                test_and_write_vectors,
6223hunk ./src/allmydata/storage/server.py 338
6224         self.count("writev")
6225         si_s = si_b2a(storageindex)
6226         log.msg("storage: slot_writev %s" % si_s)
6227-        si_dir = storage_index_to_dir(storageindex)
6228+       
6229         (write_enabler, renew_secret, cancel_secret) = secrets
6230         # shares exist if there is a file for them
6231hunk ./src/allmydata/storage/server.py 341
6232-        bucketdir = os.path.join(self.sharedir, si_dir)
6233+        bucketdir = si_si2dir(self.sharedir, storageindex)
6234         shares = {}
6235         if os.path.isdir(bucketdir):
6236             for sharenum_s in os.listdir(bucketdir):
6237hunk ./src/allmydata/storage/server.py 430
6238         si_s = si_b2a(storageindex)
6239         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
6240                      facility="tahoe.storage", level=log.OPERATIONAL)
6241-        si_dir = storage_index_to_dir(storageindex)
6242         # shares exist if there is a file for them
6243hunk ./src/allmydata/storage/server.py 431
6244-        bucketdir = os.path.join(self.sharedir, si_dir)
6245+        bucketdir = si_si2dir(self.sharedir, storageindex)
6246         if not os.path.isdir(bucketdir):
6247             self.add_latency("readv", time.time() - start)
6248             return {}
6249hunk ./src/allmydata/test/test_backends.py 2
6250 from twisted.trial import unittest
6251-
6252 from twisted.python.filepath import FilePath
6253hunk ./src/allmydata/test/test_backends.py 3
6254-
6255 from allmydata.util.log import msg
6256hunk ./src/allmydata/test/test_backends.py 4
6257-
6258 from StringIO import StringIO
6259hunk ./src/allmydata/test/test_backends.py 5
6260-
6261 from allmydata.test.common_util import ReallyEqualMixin
6262 from allmydata.util.assertutil import _assert
6263hunk ./src/allmydata/test/test_backends.py 7
6264-
6265 import mock
6266 
6267 # This is the code that we're going to be testing.
6268hunk ./src/allmydata/test/test_backends.py 11
6269 from allmydata.storage.server import StorageServer
6270-
6271 from allmydata.storage.backends.das.core import DASCore
6272 from allmydata.storage.backends.null.core import NullCore
6273 
6274hunk ./src/allmydata/test/test_backends.py 14
6275-
6276-# The following share file contents was generated with
6277+# The following share file content was generated with
6278 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
6279hunk ./src/allmydata/test/test_backends.py 16
6280-# with share data == 'a'.
6281+# with share data == 'a'. The total size of this input
6282+# is 85 bytes.
6283 shareversionnumber = '\x00\x00\x00\x01'
6284 sharedatalength = '\x00\x00\x00\x01'
6285 numberofleases = '\x00\x00\x00\x01'
6286hunk ./src/allmydata/test/test_backends.py 21
6287-
6288 shareinputdata = 'a'
6289 ownernumber = '\x00\x00\x00\x00'
6290 renewsecret  = 'x'*32
6291hunk ./src/allmydata/test/test_backends.py 31
6292 client_data = shareinputdata + ownernumber + renewsecret + \
6293     cancelsecret + expirationtime + nextlease
6294 share_data = containerdata + client_data
6295-
6296-
6297 testnodeid = 'testnodeidxxxxxxxxxx'
6298 
6299 class MockStat:
6300hunk ./src/allmydata/test/test_backends.py 105
6301         mstat.st_mode = 16893 # a directory
6302         return mstat
6303 
6304+    def call_get_available_space(self, storedir, reservedspace):
6305+        # The input vector has an input size of 85.
6306+        return 85 - reservedspace
6307+
6308+    def call_exists(self):
6309+        # I'm only called in the ImmutableShareFile constructor.
6310+        return False
6311+
6312     def setUp(self):
6313         msg( "%s.setUp()" % (self,))
6314         self.storedir = FilePath('teststoredir')
6315hunk ./src/allmydata/test/test_backends.py 147
6316         mockfpstat = self.mockfpstatp.__enter__()
6317         mockfpstat.side_effect = self.call_stat
6318 
6319+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6320+        mockget_available_space = self.mockget_available_space.__enter__()
6321+        mockget_available_space.side_effect = self.call_get_available_space
6322+
6323+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6324+        mockfpexists = self.mockfpexists.__enter__()
6325+        mockfpexists.side_effect = self.call_exists
6326+
6327     def tearDown(self):
6328         msg( "%s.tearDown()" % (self,))
6329hunk ./src/allmydata/test/test_backends.py 157
6330+        self.mockfpexists.__exit__()
6331+        self.mockget_available_space.__exit__()
6332         self.mockfpstatp.__exit__()
6333         self.mockstatp.__exit__()
6334         self.mockopenp.__exit__()
6335hunk ./src/allmydata/test/test_backends.py 166
6336         self.mockmkdirp.__exit__()
6337         self.mocklistdirp.__exit__()
6338 
6339+
6340 expiration_policy = {'enabled' : False,
6341                      'mode' : 'age',
6342                      'override_lease_duration' : None,
6343hunk ./src/allmydata/test/test_backends.py 182
6344         self.ss = StorageServer(testnodeid, backend=NullCore())
6345 
6346     @mock.patch('os.mkdir')
6347-
6348     @mock.patch('__builtin__.open')
6349     @mock.patch('os.listdir')
6350     @mock.patch('os.path.isdir')
6351hunk ./src/allmydata/test/test_backends.py 201
6352         filesystem backend. To pass the test, it mustn't use the filesystem
6353         outside of its configured storedir. """
6354 
6355-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
6356+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
6357 
6358 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
6359     """ This tests both the StorageServer and the DAS backend together. """
6360hunk ./src/allmydata/test/test_backends.py 205
6361+   
6362     def setUp(self):
6363         MockFiles.setUp(self)
6364         try:
6365hunk ./src/allmydata/test/test_backends.py 211
6366             self.backend = DASCore(self.storedir, expiration_policy)
6367             self.ss = StorageServer(testnodeid, self.backend)
6368-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6369-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
6370+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6371+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
6372         except:
6373             MockFiles.tearDown(self)
6374             raise
6375hunk ./src/allmydata/test/test_backends.py 233
6376         # Populate incoming with the sharenum: 0.
6377         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6378 
6379-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
6380-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6381+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
6382+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6383         
6384         # Attempt to create a second share writer with the same sharenum.
6385         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6386hunk ./src/allmydata/test/test_backends.py 257
6387 
6388         # Postclose: (Omnibus) failUnless written data is in final.
6389         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
6390-        contents = sharesinfinal[0].read_share_data(0,73)
6391+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
6392+        contents = sharesinfinal[0].read_share_data(0, 73)
6393         self.failUnlessReallyEqual(contents, client_data)
6394 
6395         # Exercise the case that the share we're asking to allocate is
6396hunk ./src/allmydata/test/test_backends.py 276
6397         mockget_available_space.side_effect = call_get_available_space
6398         
6399         
6400-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6401+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6402 
6403     @mock.patch('os.path.exists')
6404     @mock.patch('os.path.getsize')
6405}
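This patch renames si_dir() to si_si2dir() and routes both the incoming and final share paths through it. To make the resulting on-disk layout concrete, here is a standalone sketch (not part of the patch) that re-declares the two-line helper rather than importing it, using the same test storage index the test suite uses; si_b2a() in storage/common.py is a thin wrapper around base32.b2a().

    from twisted.python.filepath import FilePath
    from allmydata.util import base32

    def si_si2dir(startfp, storageindex):
        # same body as the renamed helper shown above
        sia = base32.b2a(storageindex)
        return startfp.child(sia[:2]).child(sia)

    sharedir = FilePath('teststoredir').child('shares')
    print si_si2dir(sharedir, 'teststorage_index').path
    # -> teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a (on POSIX); the
    #    two-character prefix keeps any single directory from growing huge.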
6406[jacp17
6407wilcoxjg@gmail.com**20110722203244
6408 Ignore-this: e79a5924fb2eb786ee4e9737a8228f87
6409] {
6410hunk ./src/allmydata/storage/backends/das/core.py 14
6411 from allmydata.util.assertutil import precondition
6412 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6413 from allmydata.util import fileutil, idlib, log, time_format
6414+from allmydata.util.fileutil import fp_make_dirs
6415 from allmydata.storage.lease import LeaseInfo
6416 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6417      create_mutable_sharefile
6418hunk ./src/allmydata/storage/backends/das/core.py 19
6419 from allmydata.storage.immutable import BucketWriter, BucketReader
6420-from allmydata.storage.crawler import FSBucketCountingCrawler
6421+from allmydata.storage.crawler import BucketCountingCrawler
6422 from allmydata.util.hashutil import constant_time_compare
6423hunk ./src/allmydata/storage/backends/das/core.py 21
6424-from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6425+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
6426 _pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6427 
6428 # storage/
6429hunk ./src/allmydata/storage/backends/das/core.py 43
6430     implements(IStorageBackend)
6431     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
6432         Backend.__init__(self)
6433-
6434         self._setup_storage(storedir, readonly, reserved_space)
6435         self._setup_corruption_advisory()
6436         self._setup_bucket_counter()
6437hunk ./src/allmydata/storage/backends/das/core.py 72
6438 
6439     def _setup_bucket_counter(self):
6440         statefname = self.storedir.child("bucket_counter.state")
6441-        self.bucket_counter = FSBucketCountingCrawler(statefname)
6442+        self.bucket_counter = BucketCountingCrawler(statefname)
6443         self.bucket_counter.setServiceParent(self)
6444 
6445     def _setup_lease_checkerf(self, expiration_policy):
6446hunk ./src/allmydata/storage/backends/das/core.py 78
6447         statefile = self.storedir.child("lease_checker.state")
6448         historyfile = self.storedir.child("lease_checker.history")
6449-        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6450+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6451         self.lease_checker.setServiceParent(self)
6452 
6453     def get_incoming_shnums(self, storageindex):
6454hunk ./src/allmydata/storage/backends/das/core.py 168
6455             # it. Also construct the metadata.
6456             assert not finalhome.exists()
6457             fp_make_dirs(self.incominghome)
6458-            f = open(self.incominghome, 'wb')
6459+            f = self.incominghome.child(str(self.shnum))
6460             # The second field -- the four-byte share data length -- is no
6461             # longer used as of Tahoe v1.3.0, but we continue to write it in
6462             # there in case someone downgrades a storage server from >=
6463hunk ./src/allmydata/storage/backends/das/core.py 178
6464             # the largest length that can fit into the field. That way, even
6465             # if this does happen, the old < v1.3.0 server will still allow
6466             # clients to read the first part of the share.
6467-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6468-            f.close()
6469+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6470+            #f.close()
6471             self._lease_offset = max_size + 0x0c
6472             self._num_leases = 0
6473         else:
6474hunk ./src/allmydata/storage/backends/das/core.py 261
6475         f.write(data)
6476         f.close()
6477 
6478-    def _write_lease_record(self, f, lease_number, lease_info):
6479+    def _write_lease_record(self, lease_number, lease_info):
6480         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6481         f.seek(offset)
6482         assert f.tell() == offset
6483hunk ./src/allmydata/storage/backends/das/core.py 290
6484                 yield LeaseInfo().from_immutable_data(data)
6485 
6486     def add_lease(self, lease_info):
6487-        f = open(self.incominghome, 'rb+')
6488+        self.incominghome, 'rb+')
6489         num_leases = self._read_num_leases(f)
6490         self._write_lease_record(f, num_leases, lease_info)
6491         self._write_num_leases(f, num_leases+1)
6492hunk ./src/allmydata/storage/backends/das/expirer.py 1
6493-import time, os, pickle, struct
6494-from allmydata.storage.crawler import FSShareCrawler
6495+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
6496+from allmydata.storage.crawler import ShareCrawler
6497 from allmydata.storage.common import UnknownMutableContainerVersionError, \
6498      UnknownImmutableContainerVersionError
6499 from twisted.python import log as twlog
6500hunk ./src/allmydata/storage/backends/das/expirer.py 7
6501 
6502-class FSLeaseCheckingCrawler(FSShareCrawler):
6503+class LeaseCheckingCrawler(ShareCrawler):
6504     """I examine the leases on all shares, determining which are still valid
6505     and which have expired. I can remove the expired leases (if so
6506     configured), and the share will be deleted when the last lease is
6507hunk ./src/allmydata/storage/backends/das/expirer.py 66
6508         else:
6509             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
6510         self.sharetypes_to_expire = expiration_policy['sharetypes']
6511-        FSShareCrawler.__init__(self, statefile)
6512+        ShareCrawler.__init__(self, statefile)
6513 
6514     def add_initial_state(self):
6515         # we fill ["cycle-to-date"] here (even though they will be reset in
6516hunk ./src/allmydata/storage/crawler.py 1
6517-
6518 import os, time, struct
6519 import cPickle as pickle
6520 from twisted.internet import reactor
6521hunk ./src/allmydata/storage/crawler.py 11
6522 class TimeSliceExceeded(Exception):
6523     pass
6524 
6525-class FSShareCrawler(service.MultiService):
6526-    """A subcless of ShareCrawler is attached to a StorageServer, and
6527+class ShareCrawler(service.MultiService):
6528+    """A subclass of ShareCrawler is attached to a StorageServer, and
6529     periodically walks all of its shares, processing each one in some
6530     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
6531     since large servers can easily have a terabyte of shares, in several
6532hunk ./src/allmydata/storage/crawler.py 426
6533         pass
6534 
6535 
6536-class FSBucketCountingCrawler(FSShareCrawler):
6537+class BucketCountingCrawler(ShareCrawler):
6538     """I keep track of how many buckets are being managed by this server.
6539     This is equivalent to the number of distributed files and directories for
6540     which I am providing storage. The actual number of files+directories in
6541hunk ./src/allmydata/storage/crawler.py 440
6542     minimum_cycle_time = 60*60 # we don't need this more than once an hour
6543 
6544     def __init__(self, statefp, num_sample_prefixes=1):
6545-        FSShareCrawler.__init__(self, statefp)
6546+        ShareCrawler.__init__(self, statefp)
6547         self.num_sample_prefixes = num_sample_prefixes
6548 
6549     def add_initial_state(self):
6550hunk ./src/allmydata/test/test_backends.py 113
6551         # I'm only called in the ImmutableShareFile constructor.
6552         return False
6553 
6554+    def call_setContent(self, inputstring):
6555+        # XXX Good enough for expirer, not sure about elsewhere...
6556+        return True
6557+
6558     def setUp(self):
6559         msg( "%s.setUp()" % (self,))
6560         self.storedir = FilePath('teststoredir')
6561hunk ./src/allmydata/test/test_backends.py 159
6562         mockfpexists = self.mockfpexists.__enter__()
6563         mockfpexists.side_effect = self.call_exists
6564 
6565+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6566+        mocksetContent = self.mocksetContent.__enter__()
6567+        mocksetContent.side_effect = self.call_setContent
6568+
6569     def tearDown(self):
6570         msg( "%s.tearDown()" % (self,))
6571hunk ./src/allmydata/test/test_backends.py 165
6572+        self.mocksetContent.__exit__()
6573         self.mockfpexists.__exit__()
6574         self.mockget_available_space.__exit__()
6575         self.mockfpstatp.__exit__()
6576}
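The MockFiles class above patches twisted.python.filepath.FilePath.setContent in setUp() and unpatches it in tearDown(), routing writes into a StringIO via side_effect. The sketch below shows that idiom in isolation, assuming only the mock library and Twisted; the class and test names are illustrative and not taken from the patch.

    import mock
    from StringIO import StringIO
    from twisted.trial import unittest
    from twisted.python.filepath import FilePath

    class ExampleSetContentMocking(unittest.TestCase):
        def setUp(self):
            self.written = {}   # captures what the code under test "writes"
            self.setcontentp = mock.patch('twisted.python.filepath.FilePath.setContent')
            mocksetContent = self.setcontentp.__enter__()
            mocksetContent.side_effect = self.call_setContent

        def call_setContent(self, contents):
            # The unbound method is replaced by a plain mock, so only the
            # contents string is passed through, not the FilePath instance.
            self.written['last'] = StringIO(contents)

        def tearDown(self):
            self.setcontentp.__exit__()

        def test_setContent_is_intercepted(self):
            FilePath('some-nonexistent-file').setContent('hello')
            self.failUnlessEqual(self.written['last'].getvalue(), 'hello')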
6577[jacp18
6578wilcoxjg@gmail.com**20110723031915
6579 Ignore-this: 21e7f22ac20e3f8af22ea2e9b755d6a5
6580] {
6581hunk ./src/allmydata/_auto_deps.py 21
6582     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6583     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6584 
6585-    "Twisted >= 2.4.0",
6586+v v v v v v v
6587+    "Twisted >= 11.0",
6588+*************
6589+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6590+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6591+    # support asynchronous close.
6592+    "Twisted >= 10.1.0",
6593+^ ^ ^ ^ ^ ^ ^
6594 
6595     # foolscap < 0.5.1 had a performance bug which spent
6596     # O(N**2) CPU for transferring large mutable files
6597hunk ./src/allmydata/storage/backends/das/core.py 168
6598             # it. Also construct the metadata.
6599             assert not finalhome.exists()
6600             fp_make_dirs(self.incominghome)
6601-            f = self.incominghome.child(str(self.shnum))
6602+            f = self.incominghome
6603             # The second field -- the four-byte share data length -- is no
6604             # longer used as of Tahoe v1.3.0, but we continue to write it in
6605             # there in case someone downgrades a storage server from >=
6606hunk ./src/allmydata/storage/backends/das/core.py 178
6607             # the largest length that can fit into the field. That way, even
6608             # if this does happen, the old < v1.3.0 server will still allow
6609             # clients to read the first part of the share.
6610-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6611-            #f.close()
6612+            print 'f: ',f
6613+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6614             self._lease_offset = max_size + 0x0c
6615             self._num_leases = 0
6616         else:
6617hunk ./src/allmydata/storage/backends/das/core.py 263
6618 
6619     def _write_lease_record(self, lease_number, lease_info):
6620         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6621-        f.seek(offset)
6622-        assert f.tell() == offset
6623-        f.write(lease_info.to_immutable_data())
6624+        fh = f.open()
6625+        try:
6626+            fh.seek(offset)
6627+            assert fh.tell() == offset
6628+            fh.write(lease_info.to_immutable_data())
6629+        finally:
6630+            fh.close()
6631 
6632     def _read_num_leases(self, f):
6633hunk ./src/allmydata/storage/backends/das/core.py 272
6634-        f.seek(0x08)
6635-        (num_leases,) = struct.unpack(">L", f.read(4))
6636+        fh = f.open()
6637+        try:
6638+            fh.seek(0x08)
6639+            ro = fh.read(4)
6640+            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6641+            (num_leases,) = struct.unpack(">L", ro)
6642+        finally:
6643+            fh.close()
6644         return num_leases
6645 
6646     def _write_num_leases(self, f, num_leases):
6647hunk ./src/allmydata/storage/backends/das/core.py 283
6648-        f.seek(0x08)
6649-        f.write(struct.pack(">L", num_leases))
6650+        fh = f.open()
6651+        try:
6652+            fh.seek(0x08)
6653+            fh.write(struct.pack(">L", num_leases))
6654+        finally:
6655+            fh.close()
6656 
6657     def _truncate_leases(self, f, num_leases):
6658         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
6659hunk ./src/allmydata/storage/backends/das/core.py 304
6660                 yield LeaseInfo().from_immutable_data(data)
6661 
6662     def add_lease(self, lease_info):
6663-        self.incominghome, 'rb+')
6664-        num_leases = self._read_num_leases(f)
6665+        f = self.incominghome
6666+        num_leases = self._read_num_leases(self.incominghome)
6667         self._write_lease_record(f, num_leases, lease_info)
6668         self._write_num_leases(f, num_leases+1)
6669hunk ./src/allmydata/storage/backends/das/core.py 308
6670-        f.close()
6671-
6672+       
6673     def renew_lease(self, renew_secret, new_expire_time):
6674         for i,lease in enumerate(self.get_leases()):
6675             if constant_time_compare(lease.renew_secret, renew_secret):
6676hunk ./src/allmydata/test/test_backends.py 33
6677 share_data = containerdata + client_data
6678 testnodeid = 'testnodeidxxxxxxxxxx'
6679 
6680+
6681 class MockStat:
6682     def __init__(self):
6683         self.st_mode = None
6684hunk ./src/allmydata/test/test_backends.py 43
6685     code under test if it reads or writes outside of its prescribed
6686     subtree. I simulate just the parts of the filesystem that the current
6687     implementation of DAS backend needs. """
6688+
6689+    def setUp(self):
6690+        msg( "%s.setUp()" % (self,))
6691+        self.storedir = FilePath('teststoredir')
6692+        self.basedir = self.storedir.child('shares')
6693+        self.baseincdir = self.basedir.child('incoming')
6694+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6695+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6696+        self.shareincomingname = self.sharedirincomingname.child('0')
6697+        self.sharefilename = self.sharedirfinalname.child('0')
6698+        self.sharefilecontents = StringIO(share_data)
6699+
6700+        self.mocklistdirp = mock.patch('os.listdir')
6701+        mocklistdir = self.mocklistdirp.__enter__()
6702+        mocklistdir.side_effect = self.call_listdir
6703+
6704+        self.mockmkdirp = mock.patch('os.mkdir')
6705+        mockmkdir = self.mockmkdirp.__enter__()
6706+        mockmkdir.side_effect = self.call_mkdir
6707+
6708+        self.mockisdirp = mock.patch('os.path.isdir')
6709+        mockisdir = self.mockisdirp.__enter__()
6710+        mockisdir.side_effect = self.call_isdir
6711+
6712+        self.mockopenp = mock.patch('__builtin__.open')
6713+        mockopen = self.mockopenp.__enter__()
6714+        mockopen.side_effect = self.call_open
6715+
6716+        self.mockstatp = mock.patch('os.stat')
6717+        mockstat = self.mockstatp.__enter__()
6718+        mockstat.side_effect = self.call_stat
6719+
6720+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6721+        mockfpstat = self.mockfpstatp.__enter__()
6722+        mockfpstat.side_effect = self.call_stat
6723+
6724+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6725+        mockget_available_space = self.mockget_available_space.__enter__()
6726+        mockget_available_space.side_effect = self.call_get_available_space
6727+
6728+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6729+        mockfpexists = self.mockfpexists.__enter__()
6730+        mockfpexists.side_effect = self.call_exists
6731+
6732+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6733+        mocksetContent = self.mocksetContent.__enter__()
6734+        mocksetContent.side_effect = self.call_setContent
6735+
6736     def call_open(self, fname, mode):
6737         assert isinstance(fname, basestring), fname
6738         fnamefp = FilePath(fname)
6739hunk ./src/allmydata/test/test_backends.py 107
6740             # current implementation of DAS backend, and we might want to
6741             # use this information in this test in the future...
6742             return StringIO()
6743+        elif fnamefp == self.shareincomingname:
6744+            print "repr(fnamefp): ", repr(fnamefp)
6745         else:
6746             # Anything else you open inside your subtree appears to be an
6747             # empty file.
6748hunk ./src/allmydata/test/test_backends.py 168
6749         # XXX Good enough for expirer, not sure about elsewhere...
6750         return True
6751 
6752-    def setUp(self):
6753-        msg( "%s.setUp()" % (self,))
6754-        self.storedir = FilePath('teststoredir')
6755-        self.basedir = self.storedir.child('shares')
6756-        self.baseincdir = self.basedir.child('incoming')
6757-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6758-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6759-        self.shareincomingname = self.sharedirincomingname.child('0')
6760-        self.sharefname = self.sharedirfinalname.child('0')
6761-
6762-        self.mocklistdirp = mock.patch('os.listdir')
6763-        mocklistdir = self.mocklistdirp.__enter__()
6764-        mocklistdir.side_effect = self.call_listdir
6765-
6766-        self.mockmkdirp = mock.patch('os.mkdir')
6767-        mockmkdir = self.mockmkdirp.__enter__()
6768-        mockmkdir.side_effect = self.call_mkdir
6769-
6770-        self.mockisdirp = mock.patch('os.path.isdir')
6771-        mockisdir = self.mockisdirp.__enter__()
6772-        mockisdir.side_effect = self.call_isdir
6773-
6774-        self.mockopenp = mock.patch('__builtin__.open')
6775-        mockopen = self.mockopenp.__enter__()
6776-        mockopen.side_effect = self.call_open
6777-
6778-        self.mockstatp = mock.patch('os.stat')
6779-        mockstat = self.mockstatp.__enter__()
6780-        mockstat.side_effect = self.call_stat
6781-
6782-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6783-        mockfpstat = self.mockfpstatp.__enter__()
6784-        mockfpstat.side_effect = self.call_stat
6785-
6786-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6787-        mockget_available_space = self.mockget_available_space.__enter__()
6788-        mockget_available_space.side_effect = self.call_get_available_space
6789-
6790-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6791-        mockfpexists = self.mockfpexists.__enter__()
6792-        mockfpexists.side_effect = self.call_exists
6793-
6794-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6795-        mocksetContent = self.mocksetContent.__enter__()
6796-        mocksetContent.side_effect = self.call_setContent
6797 
6798     def tearDown(self):
6799         msg( "%s.tearDown()" % (self,))
6800hunk ./src/allmydata/test/test_backends.py 239
6801         handling of simultaneous and successive attempts to write the same
6802         share.
6803         """
6804-
6805         mocktime.return_value = 0
6806         # Inspect incoming and fail unless it's empty.
6807         incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
6808}
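The constructor changes above write a fixed 12-byte header into the incoming share file and then compute where lease records will live. This short worked example (not part of the patch) redoes that arithmetic with the 85-byte test share used throughout this file, so the offsets that _write_lease_record() seeks to are concrete; every value follows from the struct formats shown above.

    import struct

    max_size = 85                     # share data size used by these tests
    HEADER = struct.pack(">LLL",
                         1,                          # container version
                         min(2**32 - 1, max_size),   # obsolete length field,
                                                     # kept for < v1.3.0 servers
                         0)                          # initial number of leases
    LEASE_SIZE = struct.calcsize(">L32s32sL")        # 72 bytes per lease record

    data_offset = 0x0c                # share data starts right after the header
    lease_offset = max_size + 0x0c    # lease records start after the share data
    assert len(HEADER) == data_offset

    def lease_record_offset(n):
        # the offset _write_lease_record() seeks to for lease number n
        return lease_offset + n * LEASE_SIZE

    print lease_record_offset(0), lease_record_offset(1)   # -> 97 169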
6809[jacp19orso
6810wilcoxjg@gmail.com**20110724034230
6811 Ignore-this: f001093c467225c289489636a61935fe
6812] {
6813hunk ./src/allmydata/_auto_deps.py 21
6814     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6815     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6816 
6817-v v v v v v v
6818-    "Twisted >= 11.0",
6819-*************
6820+
6821     # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6822     # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6823     # support asynchronous close.
6824hunk ./src/allmydata/_auto_deps.py 26
6825     "Twisted >= 10.1.0",
6826-^ ^ ^ ^ ^ ^ ^
6827+
6828 
6829     # foolscap < 0.5.1 had a performance bug which spent
6830     # O(N**2) CPU for transferring large mutable files
6831hunk ./src/allmydata/storage/backends/das/core.py 153
6832     LEASE_SIZE = struct.calcsize(">L32s32sL")
6833     sharetype = "immutable"
6834 
6835-    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
6836+    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
6837         """ If max_size is not None then I won't allow more than
6838         max_size to be written to me. If create=True then max_size
6839         must not be None. """
6840hunk ./src/allmydata/storage/backends/das/core.py 167
6841             # touch the file, so later callers will see that we're working on
6842             # it. Also construct the metadata.
6843             assert not finalhome.exists()
6844-            fp_make_dirs(self.incominghome)
6845-            f = self.incominghome
6846+            fp_make_dirs(self.incominghome.parent())
6847             # The second field -- the four-byte share data length -- is no
6848             # longer used as of Tahoe v1.3.0, but we continue to write it in
6849             # there in case someone downgrades a storage server from >=
6850hunk ./src/allmydata/storage/backends/das/core.py 177
6851             # the largest length that can fit into the field. That way, even
6852             # if this does happen, the old < v1.3.0 server will still allow
6853             # clients to read the first part of the share.
6854-            print 'f: ',f
6855-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6856+            self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6857             self._lease_offset = max_size + 0x0c
6858             self._num_leases = 0
6859         else:
6860hunk ./src/allmydata/storage/backends/das/core.py 182
6861             f = open(self.finalhome, 'rb')
6862-            filesize = os.path.getsize(self.finalhome)
6863             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
6864             f.close()
6865hunk ./src/allmydata/storage/backends/das/core.py 184
6866+            filesize = self.finalhome.getsize()
6867             if version != 1:
6868                 msg = "sharefile %s had version %d but we wanted 1" % \
6869                       (self.finalhome, version)
6870hunk ./src/allmydata/storage/backends/das/core.py 259
6871         f.write(data)
6872         f.close()
6873 
6874-    def _write_lease_record(self, lease_number, lease_info):
6875+    def _write_lease_record(self, f, lease_number, lease_info):
6876         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6877         fh = f.open()
6878hunk ./src/allmydata/storage/backends/das/core.py 262
6879+        print fh
6880         try:
6881             fh.seek(offset)
6882             assert fh.tell() == offset
6883hunk ./src/allmydata/storage/backends/das/core.py 271
6884             fh.close()
6885 
6886     def _read_num_leases(self, f):
6887-        fh = f.open()
6888+        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
6889         try:
6890             fh.seek(0x08)
6891             ro = fh.read(4)
6892hunk ./src/allmydata/storage/backends/das/core.py 275
6893-            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6894             (num_leases,) = struct.unpack(">L", ro)
6895         finally:
6896             fh.close()
6897hunk ./src/allmydata/storage/backends/das/core.py 302
6898                 yield LeaseInfo().from_immutable_data(data)
6899 
6900     def add_lease(self, lease_info):
6901-        f = self.incominghome
6902         num_leases = self._read_num_leases(self.incominghome)
6903hunk ./src/allmydata/storage/backends/das/core.py 303
6904-        self._write_lease_record(f, num_leases, lease_info)
6905-        self._write_num_leases(f, num_leases+1)
6906+        self._write_lease_record(self.incominghome, num_leases, lease_info)
6907+        self._write_num_leases(self.incominghome, num_leases+1)
6908         
6909     def renew_lease(self, renew_secret, new_expire_time):
6910         for i,lease in enumerate(self.get_leases()):
6911hunk ./src/allmydata/test/test_backends.py 52
6912         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6913         self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6914         self.shareincomingname = self.sharedirincomingname.child('0')
6915-        self.sharefilename = self.sharedirfinalname.child('0')
6916-        self.sharefilecontents = StringIO(share_data)
6917+        self.sharefinalname = self.sharedirfinalname.child('0')
6918 
6919hunk ./src/allmydata/test/test_backends.py 54
6920-        self.mocklistdirp = mock.patch('os.listdir')
6921-        mocklistdir = self.mocklistdirp.__enter__()
6922-        mocklistdir.side_effect = self.call_listdir
6923+        # Make patcher, patch, and make effects for fs using functions.
6924+        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
6925+        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
6926+        mocklistdir.side_effect = self.call_listdir  # When the replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir' is called instead.
6927 
6928hunk ./src/allmydata/test/test_backends.py 59
6929-        self.mockmkdirp = mock.patch('os.mkdir')
6930-        mockmkdir = self.mockmkdirp.__enter__()
6931-        mockmkdir.side_effect = self.call_mkdir
6932+        #self.mockmkdirp = mock.patch('os.mkdir')
6933+        #mockmkdir = self.mockmkdirp.__enter__()
6934+        #mockmkdir.side_effect = self.call_mkdir
6935 
6936hunk ./src/allmydata/test/test_backends.py 63
6937-        self.mockisdirp = mock.patch('os.path.isdir')
6938+        self.mockisdirp = mock.patch('FilePath.isdir')
6939         mockisdir = self.mockisdirp.__enter__()
6940         mockisdir.side_effect = self.call_isdir
6941 
6942hunk ./src/allmydata/test/test_backends.py 67
6943-        self.mockopenp = mock.patch('__builtin__.open')
6944+        self.mockopenp = mock.patch('FilePath.open')
6945         mockopen = self.mockopenp.__enter__()
6946         mockopen.side_effect = self.call_open
6947 
6948hunk ./src/allmydata/test/test_backends.py 71
6949-        self.mockstatp = mock.patch('os.stat')
6950+        self.mockstatp = mock.patch('filepath.stat')
6951         mockstat = self.mockstatp.__enter__()
6952         mockstat.side_effect = self.call_stat
6953 
6954hunk ./src/allmydata/test/test_backends.py 91
6955         mocksetContent = self.mocksetContent.__enter__()
6956         mocksetContent.side_effect = self.call_setContent
6957 
6958+    #  The behavior of mocked filesystem using functions
6959     def call_open(self, fname, mode):
6960         assert isinstance(fname, basestring), fname
6961         fnamefp = FilePath(fname)
6962hunk ./src/allmydata/test/test_backends.py 109
6963             # use this information in this test in the future...
6964             return StringIO()
6965         elif fnamefp == self.shareincomingname:
6966-            print "repr(fnamefp): ", repr(fnamefp)
6967+            self.incomingsharefilecontents.closed = False
6968+            return self.incomingsharefilecontents
6969         else:
6970             # Anything else you open inside your subtree appears to be an
6971             # empty file.
6972hunk ./src/allmydata/test/test_backends.py 152
6973         fnamefp = FilePath(fname)
6974         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
6975                         "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
6976-
6977         msg("%s.call_stat(%s)" % (self, fname,))
6978         mstat = MockStat()
6979         mstat.st_mode = 16893 # a directory
6980hunk ./src/allmydata/test/test_backends.py 166
6981         return False
6982 
6983     def call_setContent(self, inputstring):
6984-        # XXX Good enough for expirer, not sure about elsewhere...
6985-        return True
6986-
6987+        self.incomingsharefilecontents = StringIO(inputstring)
6988 
6989     def tearDown(self):
6990         msg( "%s.tearDown()" % (self,))
6991}
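This patch finishes converting ImmutableShare.close() to FilePath operations: make the final directory, move the incoming share into it, then prune the now-empty incoming directories. The sketch below restates that sequence as standalone code under stated assumptions: fp_make_dirs here is a minimal stand-in for the real fileutil helper, fp_rmdir_if_empty mirrors the helper added to fileutil.py earlier in this attachment (minus its dirfp.changed() cache invalidation), and the example paths are hypothetical.

    import errno, os
    from twisted.python.filepath import FilePath

    def fp_make_dirs(dirfp):
        # minimal stand-in for fileutil.fp_make_dirs(): create dirfp and parents
        try:
            os.makedirs(dirfp.path)
        except OSError, e:
            if e.errno != errno.EEXIST:
                raise

    def fp_rmdir_if_empty(dirfp):
        # same behaviour as the fileutil helper shown earlier, simplified
        try:
            os.rmdir(dirfp.path)
        except OSError, e:
            if e.errno != errno.ENOTEMPTY:
                raise

    def finalize_share(incominghome, finalhome):
        fp_make_dirs(finalhome.parent())
        incominghome.moveTo(finalhome)
        # remove .../incoming/ab/abcde and then .../incoming/ab if nothing
        # else is still using them; leave incoming/ itself in place
        fp_rmdir_if_empty(incominghome.parent())
        fp_rmdir_if_empty(incominghome.parent().parent())

    # e.g.:
    # finalize_share(FilePath('storage/shares/incoming/ab/abcde/0'),
    #                FilePath('storage/shares/ab/abcde/0'))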
6992[jacp19
6993wilcoxjg@gmail.com**20110727080553
6994 Ignore-this: 851b1ebdeeee712abfbda557af142726
6995] {
6996hunk ./src/allmydata/storage/backends/das/core.py 1
6997-import os, re, weakref, struct, time, stat
6998+import re, weakref, struct, time, stat
6999 from twisted.application import service
7000 from twisted.python.filepath import UnlistableError
7001hunk ./src/allmydata/storage/backends/das/core.py 4
7002+from twisted.python import filepath
7003 from twisted.python.filepath import FilePath
7004 from zope.interface import implements
7005 
7006hunk ./src/allmydata/storage/backends/das/core.py 50
7007         self._setup_lease_checkerf(expiration_policy)
7008 
7009     def _setup_storage(self, storedir, readonly, reserved_space):
7010-        precondition(isinstance(storedir, FilePath)) 
7011+        precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7012         self.storedir = storedir
7013         self.readonly = readonly
7014         self.reserved_space = int(reserved_space)
7015hunk ./src/allmydata/storage/backends/das/core.py 195
7016         self._data_offset = 0xc
7017 
7018     def close(self):
7019-        fileutil.make_dirs(os.path.dirname(self.finalhome))
7020-        fileutil.rename(self.incominghome, self.finalhome)
7021+        fileutil.fp_make_dirs(self.finalhome.parent())
7022+        self.incominghome.moveTo(self.finalhome)
7023         try:
7024             # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
7025             # We try to delete the parent (.../ab/abcde) to avoid leaving
7026hunk ./src/allmydata/storage/backends/das/core.py 209
7027             # their children to know when they should do the rmdir. This
7028             # approach is simpler, but relies on os.rmdir refusing to delete
7029             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
7030-            #print "os.path.dirname(self.incominghome): "
7031-            #print os.path.dirname(self.incominghome)
7032-            os.rmdir(os.path.dirname(self.incominghome))
7033+            fileutil.fp_rmdir_if_empty(self.incominghome.parent())
7034             # we also delete the grandparent (prefix) directory, .../ab ,
7035             # again to avoid leaving directories lying around. This might
7036             # fail if there is another bucket open that shares a prefix (like
7037hunk ./src/allmydata/storage/backends/das/core.py 214
7038             # ab/abfff).
7039-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
7040+            fileutil.fp_rmdir_if_empty(self.incominghome.parent().parent())
7041             # we leave the great-grandparent (incoming/) directory in place.
7042         except EnvironmentError:
7043             # ignore the "can't rmdir because the directory is not empty"
7044hunk ./src/allmydata/storage/backends/das/core.py 224
7045         pass
7046         
7047     def stat(self):
7048-        return os.stat(self.finalhome)[stat.ST_SIZE]
7049-        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
7050+        return filepath.stat(self.finalhome)[stat.ST_SIZE]
7051 
7052     def get_shnum(self):
7053         return self.shnum
7054hunk ./src/allmydata/storage/backends/das/core.py 230
7055 
7056     def unlink(self):
7057-        os.unlink(self.finalhome)
7058+        self.finalhome.remove()
7059 
7060     def read_share_data(self, offset, length):
7061         precondition(offset >= 0)
7062hunk ./src/allmydata/storage/backends/das/core.py 237
7063         # Reads beyond the end of the data are truncated. Reads that start
7064         # beyond the end of the data return an empty string.
7065         seekpos = self._data_offset+offset
7066-        fsize = os.path.getsize(self.finalhome)
7067+        fsize = self.finalhome.getsize()
7068         actuallength = max(0, min(length, fsize-seekpos))
7069         if actuallength == 0:
7070             return ""
7071hunk ./src/allmydata/storage/backends/das/core.py 241
7072-        f = open(self.finalhome, 'rb')
7073-        f.seek(seekpos)
7074-        return f.read(actuallength)
7075+        try:
7076+            fh = open(self.finalhome, 'rb')
7077+            fh.seek(seekpos)
7078+            sharedata = fh.read(actuallength)
7079+        finally:
7080+            fh.close()
7081+        return sharedata
7082 
7083     def write_share_data(self, offset, data):
7084         length = len(data)
7085hunk ./src/allmydata/storage/backends/das/core.py 264
7086     def _write_lease_record(self, f, lease_number, lease_info):
7087         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7088         fh = f.open()
7089-        print fh
7090         try:
7091             fh.seek(offset)
7092             assert fh.tell() == offset
7093hunk ./src/allmydata/storage/backends/das/core.py 269
7094             fh.write(lease_info.to_immutable_data())
7095         finally:
7096+            print dir(fh)
7097             fh.close()
7098 
7099     def _read_num_leases(self, f):
7100hunk ./src/allmydata/storage/backends/das/core.py 273
7101-        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
7102+        fh = f.open() #XXX  Should be mocking FilePath.open()
7103         try:
7104             fh.seek(0x08)
7105             ro = fh.read(4)
7106hunk ./src/allmydata/storage/backends/das/core.py 280
7107             (num_leases,) = struct.unpack(">L", ro)
7108         finally:
7109             fh.close()
7110+            print "end of _read_num_leases"
7111         return num_leases
7112 
7113     def _write_num_leases(self, f, num_leases):
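_read_num_leases() above seeks to offset 0x08 and unpacks a single big-endian word because the immutable share container begins with the 12-byte header written in __init__ as struct.pack(">LLL", 1, min(2**32-1, max_size), 0); the lease count is the third field and the share data starts at 0x0c. A small, self-contained sketch of that header (73 is an arbitrary size used only for illustration):

    import struct

    header = struct.pack(">LLL", 1, 73, 0)          # version, size field, number of leases
    assert len(header) == 0xc                       # share data begins right after this
    (version, size_field, num_leases) = struct.unpack(">LLL", header)
    assert (version, size_field, num_leases) == (1, 73, 0)
    (num_leases_again,) = struct.unpack(">L", header[0x08:0x0c])   # what _read_num_leases reads
    assert num_leases_again == 0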
7114hunk ./src/allmydata/storage/crawler.py 6
7115 from twisted.internet import reactor
7116 from twisted.application import service
7117 from allmydata.storage.common import si_b2a
7118-from allmydata.util import fileutil
7119 
7120 class TimeSliceExceeded(Exception):
7121     pass
7122hunk ./src/allmydata/storage/crawler.py 478
7123             old_cycle,buckets = self.state["storage-index-samples"][prefix]
7124             if old_cycle != cycle:
7125                 del self.state["storage-index-samples"][prefix]
7126-
7127hunk ./src/allmydata/test/test_backends.py 1
7128+import os
7129 from twisted.trial import unittest
7130 from twisted.python.filepath import FilePath
7131 from allmydata.util.log import msg
7132hunk ./src/allmydata/test/test_backends.py 9
7133 from allmydata.test.common_util import ReallyEqualMixin
7134 from allmydata.util.assertutil import _assert
7135 import mock
7136+from mock import Mock
7137 
7138 # This is the code that we're going to be testing.
7139 from allmydata.storage.server import StorageServer
7140hunk ./src/allmydata/test/test_backends.py 40
7141     def __init__(self):
7142         self.st_mode = None
7143 
7144+class MockFilePath:
7145+    def __init__(self, PathString):
7146+        self.PathName = PathString
7147+    def child(self, ChildString):
7148+        return MockFilePath(os.path.join(self.PathName, ChildString))
7149+    def parent(self):
7150+        return MockFilePath(os.path.dirname(self.PathName))
7151+    def makedirs(self):
7152+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7153+        pass
7154+    def isdir(self):
7155+        return True
7156+    def remove(self):
7157+        pass
7158+    def children(self):
7159+        return []
7160+    def exists(self):
7161+        return False
7162+    def setContent(self, ContentString):
7163+        self.File = MockFile(ContentString)
7164+    def open(self):
7165+        return self.File.open()
7166+
7167+class MockFile:
7168+    def __init__(self, ContentString):
7169+        self.Contents = ContentString
7170+    def open(self):
7171+        return self
7172+    def close(self):
7173+        pass
7174+    def seek(self, position):
7175+        pass
7176+    def read(self, amount):
7177+        pass
7178+
7179+
7180+class MockBCC:
7181+    def setServiceParent(self, Parent):
7182+        pass
7183+
7184+class MockLCC:
7185+    def setServiceParent(self, Parent):
7186+        pass
7187+
7188 class MockFiles(unittest.TestCase):
7189     """ I simulate a filesystem that the code under test can use. I flag the
7190     code under test if it reads or writes outside of its prescribed
7191hunk ./src/allmydata/test/test_backends.py 91
7192     implementation of DAS backend needs. """
7193 
7194     def setUp(self):
7195+        # Make patcher, patch, and make effects for fs using functions.
7196         msg( "%s.setUp()" % (self,))
7197hunk ./src/allmydata/test/test_backends.py 93
7198-        self.storedir = FilePath('teststoredir')
7199+        self.storedir = MockFilePath('teststoredir')
7200         self.basedir = self.storedir.child('shares')
7201         self.baseincdir = self.basedir.child('incoming')
7202         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
7203hunk ./src/allmydata/test/test_backends.py 101
7204         self.shareincomingname = self.sharedirincomingname.child('0')
7205         self.sharefinalname = self.sharedirfinalname.child('0')
7206 
7207-        # Make patcher, patch, and make effects for fs using functions.
7208-        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
7209-        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
7210-        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
7211-
7212-        #self.mockmkdirp = mock.patch('os.mkdir')
7213-        #mockmkdir = self.mockmkdirp.__enter__()
7214-        #mockmkdir.side_effect = self.call_mkdir
7215-
7216-        self.mockisdirp = mock.patch('FilePath.isdir')
7217-        mockisdir = self.mockisdirp.__enter__()
7218-        mockisdir.side_effect = self.call_isdir
7219+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
7220+        FakePath = self.FilePathFake.__enter__()
7221 
7222hunk ./src/allmydata/test/test_backends.py 104
7223-        self.mockopenp = mock.patch('FilePath.open')
7224-        mockopen = self.mockopenp.__enter__()
7225-        mockopen.side_effect = self.call_open
7226+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
7227+        FakeBCC = self.BCountingCrawler.__enter__()
7228+        FakeBCC.side_effect = self.call_FakeBCC
7229 
7230hunk ./src/allmydata/test/test_backends.py 108
7231-        self.mockstatp = mock.patch('filepath.stat')
7232-        mockstat = self.mockstatp.__enter__()
7233-        mockstat.side_effect = self.call_stat
7234+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
7235+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
7236+        FakeLCC.side_effect = self.call_FakeLCC
7237 
7238hunk ./src/allmydata/test/test_backends.py 112
7239-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
7240-        mockfpstat = self.mockfpstatp.__enter__()
7241-        mockfpstat.side_effect = self.call_stat
7242+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7243+        GetSpace = self.get_available_space.__enter__()
7244+        GetSpace.side_effect = self.call_get_available_space
7245 
7246hunk ./src/allmydata/test/test_backends.py 116
7247-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7248-        mockget_available_space = self.mockget_available_space.__enter__()
7249-        mockget_available_space.side_effect = self.call_get_available_space
7250+    def call_FakeBCC(self, StateFile):
7251+        return MockBCC()
7252 
7253hunk ./src/allmydata/test/test_backends.py 119
7254-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
7255-        mockfpexists = self.mockfpexists.__enter__()
7256-        mockfpexists.side_effect = self.call_exists
7257-
7258-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
7259-        mocksetContent = self.mocksetContent.__enter__()
7260-        mocksetContent.side_effect = self.call_setContent
7261-
7262-    #  The behavior of mocked filesystem using functions
7263-    def call_open(self, fname, mode):
7264-        assert isinstance(fname, basestring), fname
7265-        fnamefp = FilePath(fname)
7266-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7267-                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
7268-
7269-        if fnamefp == self.storedir.child('bucket_counter.state'):
7270-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
7271-        elif fnamefp == self.storedir.child('lease_checker.state'):
7272-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
7273-        elif fnamefp == self.storedir.child('lease_checker.history'):
7274-            # This is separated out from the else clause below just because
7275-            # we know this particular file is going to be used by the
7276-            # current implementation of DAS backend, and we might want to
7277-            # use this information in this test in the future...
7278-            return StringIO()
7279-        elif fnamefp == self.shareincomingname:
7280-            self.incomingsharefilecontents.closed = False
7281-            return self.incomingsharefilecontents
7282-        else:
7283-            # Anything else you open inside your subtree appears to be an
7284-            # empty file.
7285-            return StringIO()
7286-
7287-    def call_isdir(self, fname):
7288-        fnamefp = FilePath(fname)
7289-        return fnamefp.isdir()
7290-
7291-        self.failUnless(self.storedir == self or self.storedir in self.parents(),
7292-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
7293-
7294-        # The first two cases are separate from the else clause below just
7295-        # because we know that the current implementation of the DAS backend
7296-        # inspects these two directories and we might want to make use of
7297-        # that information in the tests in the future...
7298-        if self == self.storedir.child('shares'):
7299-            return True
7300-        elif self == self.storedir.child('shares').child('incoming'):
7301-            return True
7302-        else:
7303-            # Anything else you open inside your subtree appears to be a
7304-            # directory.
7305-            return True
7306-
7307-    def call_mkdir(self, fname, mode):
7308-        fnamefp = FilePath(fname)
7309-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7310-                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
7311-        self.failUnlessEqual(0777, mode)
7312+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
7313+        return MockLCC()
7314 
7315     def call_listdir(self, fname):
7316         fnamefp = FilePath(fname)
7317hunk ./src/allmydata/test/test_backends.py 150
7318 
7319     def tearDown(self):
7320         msg( "%s.tearDown()" % (self,))
7321-        self.mocksetContent.__exit__()
7322-        self.mockfpexists.__exit__()
7323-        self.mockget_available_space.__exit__()
7324-        self.mockfpstatp.__exit__()
7325-        self.mockstatp.__exit__()
7326-        self.mockopenp.__exit__()
7327-        self.mockisdirp.__exit__()
7328-        self.mockmkdirp.__exit__()
7329-        self.mocklistdirp.__exit__()
7330-
7331+        FakePath = self.FilePathFake.__exit__()       
7332+        FakeBCC = self.BCountingCrawler.__exit__()
7333 
7334 expiration_policy = {'enabled' : False,
7335                      'mode' : 'age',
7336hunk ./src/allmydata/test/test_backends.py 222
7337         # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7338         
7339         # Attempt to create a second share writer with the same sharenum.
7340-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7341+        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7342 
7343         # Show that no sharewriter results from a remote_allocate_buckets
7344         # with the same si and sharenum, until BucketWriter.remote_close()
7345hunk ./src/allmydata/test/test_backends.py 227
7346         # has been called.
7347-        self.failIf(bsa)
7348+        # self.failIf(bsa)
7349 
7350         # Test allocated size.
7351hunk ./src/allmydata/test/test_backends.py 230
7352-        spaceint = self.ss.allocated_size()
7353-        self.failUnlessReallyEqual(spaceint, 1)
7354+        # spaceint = self.ss.allocated_size()
7355+        # self.failUnlessReallyEqual(spaceint, 1)
7356 
7357         # Write 'a' to shnum 0. Only tested together with close and read.
7358hunk ./src/allmydata/test/test_backends.py 234
7359-        bs[0].remote_write(0, 'a')
7360+        # bs[0].remote_write(0, 'a')
7361         
7362         # Preclose: Inspect final, failUnless nothing there.
7363hunk ./src/allmydata/test/test_backends.py 237
7364-        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7365-        bs[0].remote_close()
7366+        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7367+        # bs[0].remote_close()
7368 
7369         # Postclose: (Omnibus) failUnless written data is in final.
7370hunk ./src/allmydata/test/test_backends.py 241
7371-        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7372-        self.failUnlessReallyEqual(len(sharesinfinal), 1)
7373-        contents = sharesinfinal[0].read_share_data(0, 73)
7374-        self.failUnlessReallyEqual(contents, client_data)
7375+        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7376+        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
7377+        # contents = sharesinfinal[0].read_share_data(0, 73)
7378+        # self.failUnlessReallyEqual(contents, client_data)
7379 
7380         # Exercise the case that the share we're asking to allocate is
7381         # already (completely) uploaded.
7382hunk ./src/allmydata/test/test_backends.py 248
7383-        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7384+        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7385         
7386     @mock.patch('time.time')
7387     @mock.patch('allmydata.util.fileutil.get_available_space')
7388}
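jacp19 above replaces the remaining os/os.path calls in das/core.py with twisted.python.filepath.FilePath methods (make_dirs becomes parent().makedirs()/fp_make_dirs, rename becomes moveTo, unlink becomes remove, getsize replaces os.path.getsize). A rough sketch of those translations, run inside a throwaway temp directory (the paths here are illustrative, not the real share-layout code):

    import tempfile
    from twisted.python.filepath import FilePath

    root = FilePath(tempfile.mkdtemp())
    incominghome = root.child('incoming').child('ab').child('abcde').child('4')
    finalhome = root.child('shares').child('ab').child('abcde').child('4')

    incominghome.parent().makedirs()        # was: fileutil.make_dirs(os.path.dirname(...))
    incominghome.setContent('share data')   # stand-in for writing the share container
    finalhome.parent().makedirs()
    incominghome.moveTo(finalhome)          # was: fileutil.rename(incominghome, finalhome)
    print finalhome.getsize()               # was: os.path.getsize(self.finalhome) -> 10
    finalhome.remove()                      # was: os.unlink(self.finalhome)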
7389[jacp20
7390wilcoxjg@gmail.com**20110728072514
7391 Ignore-this: 6a03289023c3c79b8d09e2711183ea82
7392] {
7393hunk ./src/allmydata/storage/backends/das/core.py 52
7394     def _setup_storage(self, storedir, readonly, reserved_space):
7395         precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7396         self.storedir = storedir
7397+        print "self.storedir: ", self.storedir
7398         self.readonly = readonly
7399         self.reserved_space = int(reserved_space)
7400         self.sharedir = self.storedir.child("shares")
7401hunk ./src/allmydata/storage/backends/das/core.py 85
7402 
7403     def get_incoming_shnums(self, storageindex):
7404         """ Return a frozenset of the shnum (as ints) of incoming shares. """
7405-        incomingdir = si_si2dir(self.incomingdir, storageindex)
7406+        print "self.incomingdir.children(): ", self.incomingdir.children()
7407+        print "self.incomingdir.pathname: ", self.incomingdir.pathname
7408+        incomingthissi = si_si2dir(self.incomingdir, storageindex)
7409+        print "incomingthissi.children(): ", incomingthissi.children()
7410         try:
7411hunk ./src/allmydata/storage/backends/das/core.py 90
7412-            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
7413+            childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
7414             shnums = [ int(fp.basename) for fp in childfps ]
7415             return frozenset(shnums)
7416         except UnlistableError:
7417hunk ./src/allmydata/storage/backends/das/core.py 117
7418 
7419     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
7420         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
7421-        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
7422+        incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
7423         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
7424         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
7425         return bw
7426hunk ./src/allmydata/storage/backends/das/core.py 183
7427             # if this does happen, the old < v1.3.0 server will still allow
7428             # clients to read the first part of the share.
7429             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
7430+            print "We got here right?"
7431             self._lease_offset = max_size + 0x0c
7432             self._num_leases = 0
7433         else:
7434hunk ./src/allmydata/storage/backends/das/core.py 274
7435             assert fh.tell() == offset
7436             fh.write(lease_info.to_immutable_data())
7437         finally:
7438-            print dir(fh)
7439             fh.close()
7440 
7441     def _read_num_leases(self, f):
7442hunk ./src/allmydata/storage/backends/das/core.py 284
7443             (num_leases,) = struct.unpack(">L", ro)
7444         finally:
7445             fh.close()
7446-            print "end of _read_num_leases"
7447         return num_leases
7448 
7449     def _write_num_leases(self, f, num_leases):
7450hunk ./src/allmydata/storage/common.py 21
7451 
7452 def si_si2dir(startfp, storageindex):
7453     sia = si_b2a(storageindex)
7454-    return startfp.child(sia[:2]).child(sia)
7455+    print "I got here right?  sia =", sia
7456+    print "What the fuck is startfp? ", startfp
7457+    print "What the fuck is startfp.pathname? ", startfp.pathname
7458+    newfp = startfp.child(sia[:2])
7459+    print "Did I get here?"
7460+    return newfp.child(sia)
7461hunk ./src/allmydata/test/test_backends.py 5
7462 from twisted.trial import unittest
7463 from twisted.python.filepath import FilePath
7464 from allmydata.util.log import msg
7465-from StringIO import StringIO
7466+from tempfile import TemporaryFile
7467 from allmydata.test.common_util import ReallyEqualMixin
7468 from allmydata.util.assertutil import _assert
7469 import mock
7470hunk ./src/allmydata/test/test_backends.py 34
7471     cancelsecret + expirationtime + nextlease
7472 share_data = containerdata + client_data
7473 testnodeid = 'testnodeidxxxxxxxxxx'
7474+fakefilepaths = {}
7475 
7476 
7477 class MockStat:
7478hunk ./src/allmydata/test/test_backends.py 41
7479     def __init__(self):
7480         self.st_mode = None
7481 
7482+
7483 class MockFilePath:
7484hunk ./src/allmydata/test/test_backends.py 43
7485-    def __init__(self, PathString):
7486-        self.PathName = PathString
7487-    def child(self, ChildString):
7488-        return MockFilePath(os.path.join(self.PathName, ChildString))
7489+    def __init__(self, pathstring):
7490+        self.pathname = pathstring
7491+        self.spawn = {}
7492+        self.antecedent = os.path.dirname(self.pathname)
7493+    def child(self, childstring):
7494+        arg2child = os.path.join(self.pathname, childstring)
7495+        print "arg2child: ", arg2child
7496+        if fakefilepaths.has_key(arg2child):
7497+            child = fakefilepaths[arg2child]
7498+            print "Should have gotten here."
7499+        else:
7500+            child = MockFilePath(arg2child)
7501+        return child
7502     def parent(self):
7503hunk ./src/allmydata/test/test_backends.py 57
7504-        return MockFilePath(os.path.dirname(self.PathName))
7505+        if fakefilepaths.has_key(self.antecedent):
7506+            parent = fakefilepaths[self.antecedent]
7507+        else:
7508+            parent = MockFilePath(self.antecedent)
7509+        return parent
7510+    def children(self):
7511+        childrenfromffs = frozenset(fakefilepaths.values())
7512+        return list(childrenfromffs | frozenset(self.spawn.values())) 
7513     def makedirs(self):
7514         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7515         pass
7516hunk ./src/allmydata/test/test_backends.py 72
7517         return True
7518     def remove(self):
7519         pass
7520-    def children(self):
7521-        return []
7522     def exists(self):
7523         return False
7524hunk ./src/allmydata/test/test_backends.py 74
7525-    def setContent(self, ContentString):
7526-        self.File = MockFile(ContentString)
7527     def open(self):
7528         return self.File.open()
7529hunk ./src/allmydata/test/test_backends.py 76
7530+    def setparents(self):
7531+        antecedents = []
7532+        def f(fps, antecedents):
7533+            newfps = os.path.split(fps)[0]
7534+            if newfps:
7535+                antecedents.append(newfps)
7536+                f(newfps, antecedents)
7537+        f(self.pathname, antecedents)
7538+        for fps in antecedents:
7539+            if not fakefilepaths.has_key(fps):
7540+                fakefilepaths[fps] = MockFilePath(fps)
7541+    def setContent(self, contentstring):
7542+        print "I am self.pathname: ", self.pathname
7543+        fakefilepaths[self.pathname] = self
7544+        self.File = MockFile(contentstring)
7545+        self.setparents()
7546+    def create(self):
7547+        fakefilepaths[self.pathname] = self
7548+        self.setparents()
7549+           
7550 
7551 class MockFile:
7552hunk ./src/allmydata/test/test_backends.py 98
7553-    def __init__(self, ContentString):
7554-        self.Contents = ContentString
7555+    def __init__(self, contentstring):
7556+        self.buffer = contentstring
7557+        self.pos = 0
7558     def open(self):
7559         return self
7560hunk ./src/allmydata/test/test_backends.py 103
7561+    def write(self, instring):
7562+        begin = self.pos
7563+        padlen = begin - len(self.buffer)
7564+        if padlen > 0:
7565+            self.buffer += '\x00' * padlen
7566+            end = self.pos + len(instring)
7567+            self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7568+            self.pos = end
7569     def close(self):
7570         pass
7571hunk ./src/allmydata/test/test_backends.py 113
7572-    def seek(self, position):
7573-        pass
7574-    def read(self, amount):
7575-        pass
7576+    def seek(self, pos):
7577+        self.pos = pos
7578+    def read(self, numberbytes):
7579+        return self.buffer[self.pos:self.pos+numberbytes]
7580+    def tell(self):
7581+        return self.pos
7582 
7583 
7584 class MockBCC:
7585hunk ./src/allmydata/test/test_backends.py 125
7586     def setServiceParent(self, Parent):
7587         pass
7588 
7589+
7590 class MockLCC:
7591     def setServiceParent(self, Parent):
7592         pass
7593hunk ./src/allmydata/test/test_backends.py 130
7594 
7595+
7596 class MockFiles(unittest.TestCase):
7597     """ I simulate a filesystem that the code under test can use. I flag the
7598     code under test if it reads or writes outside of its prescribed
7599hunk ./src/allmydata/test/test_backends.py 193
7600         return False
7601 
7602     def call_setContent(self, inputstring):
7603-        self.incomingsharefilecontents = StringIO(inputstring)
7604+        self.incomingsharefilecontents = TemporaryFile(inputstring)
7605 
7606     def tearDown(self):
7607         msg( "%s.tearDown()" % (self,))
7608hunk ./src/allmydata/test/test_backends.py 206
7609                      'cutoff_date' : None,
7610                      'sharetypes' : None}
7611 
7612+
7613 class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
7614     """ NullBackend is just for testing and executable documentation, so
7615     this test is actually a test of StorageServer in which we're using
7616hunk ./src/allmydata/test/test_backends.py 229
7617         self.failIf(mockopen.called)
7618         self.failIf(mockmkdir.called)
7619 
7620+
7621 class TestServerConstruction(MockFiles, ReallyEqualMixin):
7622     def test_create_server_fs_backend(self):
7623         """ This tests whether a server instance can be constructed with a
7624hunk ./src/allmydata/test/test_backends.py 238
7625 
7626         StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
7627 
7628+
7629 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
7630     """ This tests both the StorageServer and the DAS backend together. """
7631     
7632hunk ./src/allmydata/test/test_backends.py 262
7633         """
7634         mocktime.return_value = 0
7635         # Inspect incoming and fail unless it's empty.
7636-        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7637-        self.failUnlessReallyEqual(incomingset, frozenset())
7638+        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7639+        # self.failUnlessReallyEqual(incomingset, frozenset())
7640         
7641         # Populate incoming with the sharenum: 0.
7642         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7643hunk ./src/allmydata/test/test_backends.py 269
7644 
7645         # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
7646-        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7647+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7648         
7649         # Attempt to create a second share writer with the same sharenum.
7650         # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7651hunk ./src/allmydata/test/test_backends.py 274
7652 
7653+        # print bsa
7654         # Show that no sharewriter results from a remote_allocate_buckets
7655         # with the same si and sharenum, until BucketWriter.remote_close()
7656         # has been called.
7657hunk ./src/allmydata/test/test_backends.py 339
7658             self.failUnlessEqual(mode[0], 'r', mode)
7659             self.failUnless('b' in mode, mode)
7660 
7661-            return StringIO(share_data)
7662+            return TemporaryFile(share_data)
7663         mockopen.side_effect = call_open
7664 
7665         datalen = len(share_data)
7666}
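jacp20 above gives MockFile a real in-memory buffer with seek/read/write/tell; note that its write() still only performs the splice when padding was needed, which the next patch in this bundle corrects. A minimal sketch of the intended write semantics, with buffer_write as a hypothetical stand-alone helper:

    def buffer_write(buf, pos, instring):
        # Writes past the current end are zero-padded; in-range writes overwrite.
        padlen = pos - len(buf)
        if padlen > 0:
            buf += '\x00' * padlen
        return buf[:pos] + instring + buf[pos + len(instring):]

    assert buffer_write('abcdef', 2, 'XY') == 'abXYef'
    assert buffer_write('ab', 4, 'Z') == 'ab\x00\x00Z'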
7667[Completed FilePath based test_write_and_read_share
7668wilcoxjg@gmail.com**20110729043830
7669 Ignore-this: 2c32adb041f0344394927cd3ce8f3b36
7670] {
7671hunk ./src/allmydata/storage/backends/das/core.py 38
7672 NUM_RE=re.compile("^[0-9]+$")
7673 
7674 def is_num(fp):
7675-    return NUM_RE.match(fp.basename)
7676+    return NUM_RE.match(fp.basename())
7677 
7678 class DASCore(Backend):
7679     implements(IStorageBackend)
7680hunk ./src/allmydata/storage/backends/das/core.py 52
7681     def _setup_storage(self, storedir, readonly, reserved_space):
7682         precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7683         self.storedir = storedir
7684-        print "self.storedir: ", self.storedir
7685         self.readonly = readonly
7686         self.reserved_space = int(reserved_space)
7687         self.sharedir = self.storedir.child("shares")
7688hunk ./src/allmydata/storage/backends/das/core.py 84
7689 
7690     def get_incoming_shnums(self, storageindex):
7691         """ Return a frozenset of the shnum (as ints) of incoming shares. """
7692-        print "self.incomingdir.children(): ", self.incomingdir.children()
7693-        print "self.incomingdir.pathname: ", self.incomingdir.pathname
7694         incomingthissi = si_si2dir(self.incomingdir, storageindex)
7695hunk ./src/allmydata/storage/backends/das/core.py 85
7696-        print "incomingthissi.children(): ", incomingthissi.children()
7697         try:
7698             childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
7699hunk ./src/allmydata/storage/backends/das/core.py 87
7700-            shnums = [ int(fp.basename) for fp in childfps ]
7701+            shnums = [ int(fp.basename()) for fp in childfps ]
7702             return frozenset(shnums)
7703         except UnlistableError:
7704             # There is no shares directory at all.
7705hunk ./src/allmydata/storage/backends/das/core.py 101
7706         try:
7707             for fp in finalstoragedir.children():
7708                 if is_num(fp):
7709-                    yield ImmutableShare(fp, storageindex)
7710+                    finalhome = finalstoragedir.child(str(fp.basename()))
7711+                    yield ImmutableShare(storageindex, fp, finalhome)
7712         except UnlistableError:
7713             # There is no shares directory at all.
7714             pass
7715hunk ./src/allmydata/storage/backends/das/core.py 115
7716     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
7717         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
7718         incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
7719-        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
7720+        immsh = ImmutableShare(storageindex, shnum, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
7721         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
7722         return bw
7723 
7724hunk ./src/allmydata/storage/backends/das/core.py 155
7725     LEASE_SIZE = struct.calcsize(">L32s32sL")
7726     sharetype = "immutable"
7727 
7728-    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
7729+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
7730         """ If max_size is not None then I won't allow more than
7731         max_size to be written to me. If create=True then max_size
7732         must not be None. """
7733hunk ./src/allmydata/storage/backends/das/core.py 180
7734             # if this does happen, the old < v1.3.0 server will still allow
7735             # clients to read the first part of the share.
7736             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
7737-            print "We got here right?"
7738             self._lease_offset = max_size + 0x0c
7739             self._num_leases = 0
7740         else:
7741hunk ./src/allmydata/storage/backends/das/core.py 183
7742-            f = open(self.finalhome, 'rb')
7743-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
7744-            f.close()
7745+            fh = self.finalhome.open(mode='rb')
7746+            try:
7747+                (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
7748+            finally:
7749+                fh.close()
7750             filesize = self.finalhome.getsize()
7751             if version != 1:
7752                 msg = "sharefile %s had version %d but we wanted 1" % \
7753hunk ./src/allmydata/storage/backends/das/core.py 227
7754         pass
7755         
7756     def stat(self):
7757-        return filepath.stat(self.finalhome)[stat.ST_SIZE]
7758+        return filepath.stat(self.finalhome.path)[stat.ST_SIZE]
7759 
7760     def get_shnum(self):
7761         return self.shnum
7762hunk ./src/allmydata/storage/backends/das/core.py 244
7763         actuallength = max(0, min(length, fsize-seekpos))
7764         if actuallength == 0:
7765             return ""
7766+        fh = self.finalhome.open(mode='rb')
7767         try:
7768hunk ./src/allmydata/storage/backends/das/core.py 246
7769-            fh = open(self.finalhome, 'rb')
7770             fh.seek(seekpos)
7771             sharedata = fh.read(actuallength)
7772         finally:
7773hunk ./src/allmydata/storage/backends/das/core.py 257
7774         precondition(offset >= 0, offset)
7775         if self._max_size is not None and offset+length > self._max_size:
7776             raise DataTooLargeError(self._max_size, offset, length)
7777-        f = open(self.incominghome, 'rb+')
7778-        real_offset = self._data_offset+offset
7779-        f.seek(real_offset)
7780-        assert f.tell() == real_offset
7781-        f.write(data)
7782-        f.close()
7783+        fh = self.incominghome.open(mode='rb+')
7784+        try:
7785+            real_offset = self._data_offset+offset
7786+            fh.seek(real_offset)
7787+            assert fh.tell() == real_offset
7788+            fh.write(data)
7789+        finally:
7790+            fh.close()
7791 
7792     def _write_lease_record(self, f, lease_number, lease_info):
7793         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7794hunk ./src/allmydata/storage/backends/das/core.py 299
7795 
7796     def get_leases(self):
7797         """Yields a LeaseInfo instance for all leases."""
7798-        f = open(self.finalhome, 'rb')
7799-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
7800-        f.seek(self._lease_offset)
7801+        fh = self.finalhome.open(mode='rb')
7802+        (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
7803+        fh.seek(self._lease_offset)
7804         for i in range(num_leases):
7805hunk ./src/allmydata/storage/backends/das/core.py 303
7806-            data = f.read(self.LEASE_SIZE)
7807+            data = fh.read(self.LEASE_SIZE)
7808             if data:
7809                 yield LeaseInfo().from_immutable_data(data)
7810 
7811hunk ./src/allmydata/storage/common.py 21
7812 
7813 def si_si2dir(startfp, storageindex):
7814     sia = si_b2a(storageindex)
7815-    print "I got here right?  sia =", sia
7816-    print "What the fuck is startfp? ", startfp
7817-    print "What the fuck is startfp.pathname? ", startfp.pathname
7818     newfp = startfp.child(sia[:2])
7819hunk ./src/allmydata/storage/common.py 22
7820-    print "Did I get here?"
7821     return newfp.child(sia)
7822hunk ./src/allmydata/test/test_backends.py 1
7823-import os
7824+import os, stat
7825 from twisted.trial import unittest
7826 from twisted.python.filepath import FilePath
7827 from allmydata.util.log import msg
7828hunk ./src/allmydata/test/test_backends.py 44
7829 
7830 class MockFilePath:
7831     def __init__(self, pathstring):
7832-        self.pathname = pathstring
7833+        self.path = pathstring
7834         self.spawn = {}
7835hunk ./src/allmydata/test/test_backends.py 46
7836-        self.antecedent = os.path.dirname(self.pathname)
7837+        self.antecedent = os.path.dirname(self.path)
7838     def child(self, childstring):
7839hunk ./src/allmydata/test/test_backends.py 48
7840-        arg2child = os.path.join(self.pathname, childstring)
7841-        print "arg2child: ", arg2child
7842+        arg2child = os.path.join(self.path, childstring)
7843         if fakefilepaths.has_key(arg2child):
7844             child = fakefilepaths[arg2child]
7845hunk ./src/allmydata/test/test_backends.py 51
7846-            print "Should have gotten here."
7847         else:
7848             child = MockFilePath(arg2child)
7849         return child
7850hunk ./src/allmydata/test/test_backends.py 61
7851             parent = MockFilePath(self.antecedent)
7852         return parent
7853     def children(self):
7854-        childrenfromffs = frozenset(fakefilepaths.values())
7855+        childrenfromffs = [ffp for ffp in fakefilepaths.values() if ffp.path.startswith(self.path)]
7856+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
7857+        childrenfromffs = frozenset(childrenfromffs)
7858         return list(childrenfromffs | frozenset(self.spawn.values())) 
7859     def makedirs(self):
7860         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7861hunk ./src/allmydata/test/test_backends.py 74
7862         pass
7863     def exists(self):
7864         return False
7865-    def open(self):
7866-        return self.File.open()
7867+    def open(self, mode='r'):
7868+        return self.fileobject.open(mode)
7869     def setparents(self):
7870         antecedents = []
7871         def f(fps, antecedents):
7872hunk ./src/allmydata/test/test_backends.py 83
7873             if newfps:
7874                 antecedents.append(newfps)
7875                 f(newfps, antecedents)
7876-        f(self.pathname, antecedents)
7877+        f(self.path, antecedents)
7878         for fps in antecedents:
7879             if not fakefilepaths.has_key(fps):
7880                 fakefilepaths[fps] = MockFilePath(fps)
7881hunk ./src/allmydata/test/test_backends.py 88
7882     def setContent(self, contentstring):
7883-        print "I am self.pathname: ", self.pathname
7884-        fakefilepaths[self.pathname] = self
7885-        self.File = MockFile(contentstring)
7886+        fakefilepaths[self.path] = self
7887+        self.fileobject = MockFileObject(contentstring)
7888         self.setparents()
7889     def create(self):
7890hunk ./src/allmydata/test/test_backends.py 92
7891-        fakefilepaths[self.pathname] = self
7892+        fakefilepaths[self.path] = self
7893         self.setparents()
7894hunk ./src/allmydata/test/test_backends.py 94
7895-           
7896+    def basename(self):
7897+        return os.path.split(self.path)[1]
7898+    def moveTo(self, newffp):
7899+        #  XXX Makes no distinction between file and directory arguments, which is a deviation from filepath.moveTo
7900+        if fakefilepaths.has_key(newffp.path):
7901+            raise OSError
7902+        else:
7903+            fakefilepaths[newffp.path] = self
7904+            self.path = newffp.path
7905+    def getsize(self):
7906+        return self.fileobject.getsize()
7907 
7908hunk ./src/allmydata/test/test_backends.py 106
7909-class MockFile:
7910+class MockFileObject:
7911     def __init__(self, contentstring):
7912         self.buffer = contentstring
7913         self.pos = 0
7914hunk ./src/allmydata/test/test_backends.py 110
7915-    def open(self):
7916+    def open(self, mode='r'):
7917         return self
7918     def write(self, instring):
7919         begin = self.pos
7920hunk ./src/allmydata/test/test_backends.py 117
7921         padlen = begin - len(self.buffer)
7922         if padlen > 0:
7923             self.buffer += '\x00' * padlen
7924-            end = self.pos + len(instring)
7925-            self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7926-            self.pos = end
7927+        end = self.pos + len(instring)
7928+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7929+        self.pos = end
7930     def close(self):
7931hunk ./src/allmydata/test/test_backends.py 121
7932-        pass
7933+        self.pos = 0
7934     def seek(self, pos):
7935         self.pos = pos
7936     def read(self, numberbytes):
7937hunk ./src/allmydata/test/test_backends.py 128
7938         return self.buffer[self.pos:self.pos+numberbytes]
7939     def tell(self):
7940         return self.pos
7941-
7942+    def size(self):
7943+        # XXX This method (a) does not exist on a real file object and (b) is a rough stand-in for filepath.stat!
7944+        # XXX We hope to switch to a getsize method soon; that still needs discussion.
7945+        return {stat.ST_SIZE:len(self.buffer)}
7946+    def getsize(self):
7947+        return len(self.buffer)
7948 
7949 class MockBCC:
7950     def setServiceParent(self, Parent):
7951hunk ./src/allmydata/test/test_backends.py 177
7952         GetSpace = self.get_available_space.__enter__()
7953         GetSpace.side_effect = self.call_get_available_space
7954 
7955+        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
7956+        getsize = self.statforsize.__enter__()
7957+        getsize.side_effect = self.call_statforsize
7958+
7959+    def call_statforsize(self, fakefpname):
7960+        return fakefilepaths[fakefpname].fileobject.size()
7961+
7962     def call_FakeBCC(self, StateFile):
7963         return MockBCC()
7964 
7965hunk ./src/allmydata/test/test_backends.py 220
7966         msg( "%s.tearDown()" % (self,))
7967         FakePath = self.FilePathFake.__exit__()       
7968         FakeBCC = self.BCountingCrawler.__exit__()
7969+        getsize = self.statforsize.__exit__()
7970 
7971 expiration_policy = {'enabled' : False,
7972                      'mode' : 'age',
7973hunk ./src/allmydata/test/test_backends.py 284
7974         """
7975         mocktime.return_value = 0
7976         # Inspect incoming and fail unless it's empty.
7977-        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7978-        # self.failUnlessReallyEqual(incomingset, frozenset())
7979+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7980+        self.failUnlessReallyEqual(incomingset, frozenset())
7981         
7982         # Populate incoming with the sharenum: 0.
7983         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7984hunk ./src/allmydata/test/test_backends.py 294
7985         self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7986         
7987         # Attempt to create a second share writer with the same sharenum.
7988-        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7989+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7990 
7991hunk ./src/allmydata/test/test_backends.py 296
7992-        # print bsa
7993         # Show that no sharewriter results from a remote_allocate_buckets
7994         # with the same si and sharenum, until BucketWriter.remote_close()
7995         # has been called.
7996hunk ./src/allmydata/test/test_backends.py 299
7997-        # self.failIf(bsa)
7998+        self.failIf(bsa)
7999 
8000         # Test allocated size.
8001hunk ./src/allmydata/test/test_backends.py 302
8002-        # spaceint = self.ss.allocated_size()
8003-        # self.failUnlessReallyEqual(spaceint, 1)
8004+        spaceint = self.ss.allocated_size()
8005+        self.failUnlessReallyEqual(spaceint, 1)
8006 
8007         # Write 'a' to shnum 0. Only tested together with close and read.
8008hunk ./src/allmydata/test/test_backends.py 306
8009-        # bs[0].remote_write(0, 'a')
8010+        bs[0].remote_write(0, 'a')
8011         
8012         # Preclose: Inspect final, failUnless nothing there.
8013hunk ./src/allmydata/test/test_backends.py 309
8014-        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
8015-        # bs[0].remote_close()
8016+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
8017+        bs[0].remote_close()
8018 
8019         # Postclose: (Omnibus) failUnless written data is in final.
8020hunk ./src/allmydata/test/test_backends.py 313
8021-        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
8022-        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
8023-        # contents = sharesinfinal[0].read_share_data(0, 73)
8024-        # self.failUnlessReallyEqual(contents, client_data)
8025+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
8026+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
8027+        contents = sharesinfinal[0].read_share_data(0, 73)
8028+        self.failUnlessReallyEqual(contents, client_data)
8029 
8030         # Exercise the case that the share we're asking to allocate is
8031         # already (completely) uploaded.
8032hunk ./src/allmydata/test/test_backends.py 320
8033-        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8034+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8035         
8036     @mock.patch('time.time')
8037     @mock.patch('allmydata.util.fileutil.get_available_space')
8038}
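The patch above moves read_share_data() and write_share_data() onto FilePath.open() with try/finally so the handle is always closed. A small sketch of that read-at-offset pattern against a real FilePath (read_at is a hypothetical helper, written here only to show the shape of the code):

    import tempfile
    from twisted.python.filepath import FilePath

    def read_at(fp, offset, length):
        fh = fp.open(mode='rb')   # FilePath.open returns an ordinary file object
        try:
            fh.seek(offset)
            return fh.read(length)
        finally:
            fh.close()

    sharefp = FilePath(tempfile.mkdtemp()).child('0')
    sharefp.setContent('0123456789')
    assert read_at(sharefp, 3, 4) == '3456'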
8039[TestServerAndFSBackend.test_read_old_share passes
8040wilcoxjg@gmail.com**20110729235356
8041 Ignore-this: 574636c959ea58d4609bea2428ff51d3
8042] {
8043hunk ./src/allmydata/storage/backends/das/core.py 37
8044 # $SHARENUM matches this regex:
8045 NUM_RE=re.compile("^[0-9]+$")
8046 
8047-def is_num(fp):
8048-    return NUM_RE.match(fp.basename())
8049-
8050 class DASCore(Backend):
8051     implements(IStorageBackend)
8052     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
8053hunk ./src/allmydata/storage/backends/das/core.py 97
8054         finalstoragedir = si_si2dir(self.sharedir, storageindex)
8055         try:
8056             for fp in finalstoragedir.children():
8057-                if is_num(fp):
8058-                    finalhome = finalstoragedir.child(str(fp.basename()))
8059-                    yield ImmutableShare(storageindex, fp, finalhome)
8060+                fpshnumstr = fp.basename()
8061+                if NUM_RE.match(fpshnumstr):
8062+                    finalhome = finalstoragedir.child(fpshnumstr)
8063+                    yield ImmutableShare(storageindex, fpshnumstr, finalhome)
8064         except UnlistableError:
8065             # There is no shares directory at all.
8066             pass
8067hunk ./src/allmydata/test/test_backends.py 15
8068 from allmydata.storage.server import StorageServer
8069 from allmydata.storage.backends.das.core import DASCore
8070 from allmydata.storage.backends.null.core import NullCore
8071+from allmydata.storage.common import si_si2dir
8072 
8073 # The following share file content was generated with
8074 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
8075hunk ./src/allmydata/test/test_backends.py 155
8076     def setUp(self):
8077         # Make patcher, patch, and make effects for fs using functions.
8078         msg( "%s.setUp()" % (self,))
8079+        fakefilepaths = {}
8080         self.storedir = MockFilePath('teststoredir')
8081         self.basedir = self.storedir.child('shares')
8082         self.baseincdir = self.basedir.child('incoming')
8083hunk ./src/allmydata/test/test_backends.py 223
8084         FakePath = self.FilePathFake.__exit__()       
8085         FakeBCC = self.BCountingCrawler.__exit__()
8086         getsize = self.statforsize.__exit__()
8087+        fakefilepaths = {}
8088 
8089 expiration_policy = {'enabled' : False,
8090                      'mode' : 'age',
8091hunk ./src/allmydata/test/test_backends.py 334
8092             return 0
8093 
8094         mockget_available_space.side_effect = call_get_available_space
8095-       
8096-       
8097         alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8098 
8099hunk ./src/allmydata/test/test_backends.py 336
8100-    @mock.patch('os.path.exists')
8101-    @mock.patch('os.path.getsize')
8102-    @mock.patch('__builtin__.open')
8103-    @mock.patch('os.listdir')
8104-    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
8105+    def test_read_old_share(self):
8106         """ This tests whether the code correctly finds and reads
8107         shares written out by old (Tahoe-LAFS <= v1.8.2)
8108         servers. There is a similar test in test_download, but that one
8109hunk ./src/allmydata/test/test_backends.py 344
8110         stack of code. This one is for exercising just the
8111         StorageServer object. """
8112 
8113-        def call_listdir(dirname):
8114-            precondition(isinstance(dirname, basestring), dirname)
8115-            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
8116-            return ['0']
8117-
8118-        mocklistdir.side_effect = call_listdir
8119-
8120-        def call_open(fname, mode):
8121-            precondition(isinstance(fname, basestring), fname)
8122-            self.failUnlessReallyEqual(fname, sharefname)
8123-            self.failUnlessEqual(mode[0], 'r', mode)
8124-            self.failUnless('b' in mode, mode)
8125-
8126-            return TemporaryFile(share_data)
8127-        mockopen.side_effect = call_open
8128-
8129         datalen = len(share_data)
8130hunk ./src/allmydata/test/test_backends.py 345
8131-        def call_getsize(fname):
8132-            precondition(isinstance(fname, basestring), fname)
8133-            self.failUnlessReallyEqual(fname, sharefname)
8134-            return datalen
8135-        mockgetsize.side_effect = call_getsize
8136-
8137-        def call_exists(fname):
8138-            precondition(isinstance(fname, basestring), fname)
8139-            self.failUnlessReallyEqual(fname, sharefname)
8140-            return True
8141-        mockexists.side_effect = call_exists
8142+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(3))
8143+        finalhome.setContent(share_data)
8144 
8145         # Now begin the test.
8146         bs = self.ss.remote_get_buckets('teststorage_index')
8147hunk ./src/allmydata/test/test_backends.py 352
8148 
8149         self.failUnlessEqual(len(bs), 1)
8150-        b = bs[0]
8151+        b = bs['3']
8152         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
8153         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
8154         # If you try to read past the end you get the as much data as is there.
8155}
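test_read_old_share above now seeds the mock filesystem through si_si2dir, which lays shares out as <sharedir>/<first two characters of the base32 storage index>/<base32 storage index>/<shnum>. A tiny sketch of that layout (si_si2dir_sketch is a hypothetical stand-in for allmydata.storage.common.si_si2dir; 'orsxg5dtorxxeylhmvpws3temv4a' is the encoding these tests use for 'teststorage_index'):

    import os

    def si_si2dir_sketch(startdir, sia):
        # mirrors si_si2dir: startfp.child(sia[:2]).child(sia)
        return os.path.join(startdir, sia[:2], sia)

    sia = 'orsxg5dtorxxeylhmvpws3temv4a'
    sharedir = si_si2dir_sketch(os.path.join('teststoredir', 'shares'), sia)
    sharefile = os.path.join(sharedir, '3')
    print sharefile   # teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/3 (on POSIX)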
8156[TestServerAndFSBackend passes in total!
8157wilcoxjg@gmail.com**20110730010025
8158 Ignore-this: fdc92e08674af1da5708c30557ac5860
8159] {
8160hunk ./src/allmydata/storage/backends/das/core.py 83
8161         """ Return a frozenset of the shnum (as ints) of incoming shares. """
8162         incomingthissi = si_si2dir(self.incomingdir, storageindex)
8163         try:
8164-            childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
8165+            childfps = [ fp for fp in incomingthissi.children() if NUM_RE.match(fp.basename()) ]
8166             shnums = [ int(fp.basename()) for fp in childfps ]
8167             return frozenset(shnums)
8168         except UnlistableError:
8169hunk ./src/allmydata/test/test_backends.py 35
8170     cancelsecret + expirationtime + nextlease
8171 share_data = containerdata + client_data
8172 testnodeid = 'testnodeidxxxxxxxxxx'
8173-fakefilepaths = {}
8174 
8175 
8176hunk ./src/allmydata/test/test_backends.py 37
8177+class MockFiles(unittest.TestCase):
8178+    """ I simulate a filesystem that the code under test can use. I flag the
8179+    code under test if it reads or writes outside of its prescribed
8180+    subtree. I simulate just the parts of the filesystem that the current
8181+    implementation of DAS backend needs. """
8182+
8183+    def setUp(self):
8184+        # Make patcher, patch, and make effects for fs using functions.
8185+        msg( "%s.setUp()" % (self,))
8186+        self.fakefilepaths = {}
8187+        self.storedir = MockFilePath('teststoredir', self.fakefilepaths)
8188+        self.basedir = self.storedir.child('shares')
8189+        self.baseincdir = self.basedir.child('incoming')
8190+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8191+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8192+        self.shareincomingname = self.sharedirincomingname.child('0')
8193+        self.sharefinalname = self.sharedirfinalname.child('0')
8194+
8195+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
8196+        FakePath = self.FilePathFake.__enter__()
8197+
8198+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
8199+        FakeBCC = self.BCountingCrawler.__enter__()
8200+        FakeBCC.side_effect = self.call_FakeBCC
8201+
8202+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
8203+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
8204+        FakeLCC.side_effect = self.call_FakeLCC
8205+
8206+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
8207+        GetSpace = self.get_available_space.__enter__()
8208+        GetSpace.side_effect = self.call_get_available_space
8209+
8210+        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
8211+        getsize = self.statforsize.__enter__()
8212+        getsize.side_effect = self.call_statforsize
8213+
8214+    def call_statforsize(self, fakefpname):
8215+        return self.fakefilepaths[fakefpname].fileobject.size()
8216+
8217+    def call_FakeBCC(self, StateFile):
8218+        return MockBCC()
8219+
8220+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
8221+        return MockLCC()
8222+
8223+    def call_listdir(self, fname):
8224+        fnamefp = FilePath(fname)
8225+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8226+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
8227+
8228+    def call_stat(self, fname):
8229+        assert isinstance(fname, basestring), fname
8230+        fnamefp = FilePath(fname)
8231+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8232+                        "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
8233+        msg("%s.call_stat(%s)" % (self, fname,))
8234+        mstat = MockStat()
8235+        mstat.st_mode = 16893 # a directory
8236+        return mstat
8237+
8238+    def call_get_available_space(self, storedir, reservedspace):
8239+        # Pretend the backing store holds 85 bytes in total; available space is whatever remains after the reservation.
8240+        return 85 - reservedspace
8241+
8242+    def call_exists(self):
8243+        # I'm only called in the ImmutableShareFile constructor.
8244+        return False
8245+
8246+    def call_setContent(self, inputstring):
8247+        self.incomingsharefilecontents = TemporaryFile(inputstring)
8248+
8249+    def tearDown(self):
8250+        msg( "%s.tearDown()" % (self,))
8251+        for patcher in (self.FilePathFake, self.BCountingCrawler, self.LeaseCheckingCrawler,
8252+                        self.get_available_space, self.statforsize):
8253+            patcher.__exit__()
8254+        self.fakefilepaths = {}
8255+
8256 class MockStat:
8257     def __init__(self):
8258         self.st_mode = None
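
The MockFiles.setUp above applies each mock.patch by calling __enter__ directly and relies on tearDown to call the matching __exit__. An equivalent idiom, shown here only as a sketch (the class and method names below are illustrative, not part of the patch), is to use the patcher's start() method together with addCleanup(patcher.stop), which undoes every patch even if setUp or a test fails partway:

    import mock
    from twisted.trial import unittest

    class PatchingSketch(unittest.TestCase):
        def patch_target(self, target, **kwargs):
            patcher = mock.patch(target, **kwargs)
            mockobj = patcher.start()
            self.addCleanup(patcher.stop)   # guaranteed to run, pass or fail
            return mockobj

        def setUp(self):
            # Same target as in MockFiles.setUp; the 85-byte figure matches call_get_available_space.
            fake_space = self.patch_target('allmydata.util.fileutil.get_available_space')
            fake_space.side_effect = lambda storedir, reserved: 85 - reserved

        def test_reservation_is_subtracted(self):
            from allmydata.util import fileutil
            self.failUnlessEqual(fileutil.get_available_space('ignored', 10), 75)
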
8259hunk ./src/allmydata/test/test_backends.py 122
8260 
8261 
8262 class MockFilePath:
8263-    def __init__(self, pathstring):
8264+    def __init__(self, pathstring, ffpathsenvironment):
8265+        self.fakefilepaths = ffpathsenvironment
8266         self.path = pathstring
8267         self.spawn = {}
8268         self.antecedent = os.path.dirname(self.path)
8269hunk ./src/allmydata/test/test_backends.py 129
8270     def child(self, childstring):
8271         arg2child = os.path.join(self.path, childstring)
8272-        if fakefilepaths.has_key(arg2child):
8273-            child = fakefilepaths[arg2child]
8274+        if self.fakefilepaths.has_key(arg2child):
8275+            child = self.fakefilepaths[arg2child]
8276         else:
8277hunk ./src/allmydata/test/test_backends.py 132
8278-            child = MockFilePath(arg2child)
8279+            child = MockFilePath(arg2child, self.fakefilepaths)
8280         return child
8281     def parent(self):
8282hunk ./src/allmydata/test/test_backends.py 135
8283-        if fakefilepaths.has_key(self.antecedent):
8284-            parent = fakefilepaths[self.antecedent]
8285+        if self.fakefilepaths.has_key(self.antecedent):
8286+            parent = self.fakefilepaths[self.antecedent]
8287         else:
8288hunk ./src/allmydata/test/test_backends.py 138
8289-            parent = MockFilePath(self.antecedent)
8290+            parent = MockFilePath(self.antecedent, self.fakefilepaths)
8291         return parent
8292     def children(self):
8293hunk ./src/allmydata/test/test_backends.py 141
8294-        childrenfromffs = [ffp for ffp in fakefilepaths.values() if ffp.path.startswith(self.path)]
8295+        childrenfromffs = [ffp for ffp in self.fakefilepaths.values() if ffp.path.startswith(self.path)]
8296         childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
8297         childrenfromffs = frozenset(childrenfromffs)
8298         return list(childrenfromffs | frozenset(self.spawn.values())) 
8299hunk ./src/allmydata/test/test_backends.py 165
8300                 f(newfps, antecedents)
8301         f(self.path, antecedents)
8302         for fps in antecedents:
8303-            if not fakefilepaths.has_key(fps):
8304-                fakefilepaths[fps] = MockFilePath(fps)
8305+            if not self.fakefilepaths.has_key(fps):
8306+                self.fakefilepaths[fps] = MockFilePath(fps, self.fakefilepaths)
8307     def setContent(self, contentstring):
8308hunk ./src/allmydata/test/test_backends.py 168
8309-        fakefilepaths[self.path] = self
8310+        self.fakefilepaths[self.path] = self
8311         self.fileobject = MockFileObject(contentstring)
8312         self.setparents()
8313     def create(self):
8314hunk ./src/allmydata/test/test_backends.py 172
8315-        fakefilepaths[self.path] = self
8316+        self.fakefilepaths[self.path] = self
8317         self.setparents()
8318     def basename(self):
8319         return os.path.split(self.path)[1]
8320hunk ./src/allmydata/test/test_backends.py 178
8321     def moveTo(self, newffp):
8322     #  XXX Makes no distinction between file and directory arguments; this is a deviation from filepath.moveTo
8323-        if fakefilepaths.has_key(newffp.path):
8324+        if self.fakefilepaths.has_key(newffp.path):
8325             raise OSError
8326         else:
8327hunk ./src/allmydata/test/test_backends.py 181
8328-            fakefilepaths[newffp.path] = self
8329+            self.fakefilepaths[newffp.path] = self
8330             self.path = newffp.path
8331     def getsize(self):
8332         return self.fileobject.getsize()
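
Taken together, the MockFilePath hunks above thread a single fakefilepaths dict through every instance: the dict passed to the constructor is the whole fake filesystem, setContent()/create() register a path (and, via setparents, its ancestors) in it, and children()/parent() consult that same dict. A small usage sketch, assuming the MockFilePath class from this patch is in scope:

    import os

    env = {}                                    # the shared fake-filesystem namespace
    storedir = MockFilePath('teststoredir', env)
    sharefp = storedir.child('shares').child('incoming').child('0')
    sharefp.setContent('fake share bytes')      # registers the path in env

    assert sharefp.basename() == '0'
    assert os.path.join('teststoredir', 'shares', 'incoming', '0') in env
    # Note: children() matches by path prefix, so it appears to return descendants at
    # any depth, not just immediate children -- a simplification this mock accepts.
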
8333hunk ./src/allmydata/test/test_backends.py 225
8334         pass
8335 
8336 
8337-class MockFiles(unittest.TestCase):
8338-    """ I simulate a filesystem that the code under test can use. I flag the
8339-    code under test if it reads or writes outside of its prescribed
8340-    subtree. I simulate just the parts of the filesystem that the current
8341-    implementation of DAS backend needs. """
8342-
8343-    def setUp(self):
8344-        # Make patcher, patch, and make effects for fs using functions.
8345-        msg( "%s.setUp()" % (self,))
8346-        fakefilepaths = {}
8347-        self.storedir = MockFilePath('teststoredir')
8348-        self.basedir = self.storedir.child('shares')
8349-        self.baseincdir = self.basedir.child('incoming')
8350-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8351-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
8352-        self.shareincomingname = self.sharedirincomingname.child('0')
8353-        self.sharefinalname = self.sharedirfinalname.child('0')
8354-
8355-        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
8356-        FakePath = self.FilePathFake.__enter__()
8357-
8358-        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
8359-        FakeBCC = self.BCountingCrawler.__enter__()
8360-        FakeBCC.side_effect = self.call_FakeBCC
8361-
8362-        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
8363-        FakeLCC = self.LeaseCheckingCrawler.__enter__()
8364-        FakeLCC.side_effect = self.call_FakeLCC
8365-
8366-        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
8367-        GetSpace = self.get_available_space.__enter__()
8368-        GetSpace.side_effect = self.call_get_available_space
8369-
8370-        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
8371-        getsize = self.statforsize.__enter__()
8372-        getsize.side_effect = self.call_statforsize
8373-
8374-    def call_statforsize(self, fakefpname):
8375-        return fakefilepaths[fakefpname].fileobject.size()
8376-
8377-    def call_FakeBCC(self, StateFile):
8378-        return MockBCC()
8379-
8380-    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
8381-        return MockLCC()
8382 
8383hunk ./src/allmydata/test/test_backends.py 226
8384-    def call_listdir(self, fname):
8385-        fnamefp = FilePath(fname)
8386-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8387-                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8388-
8389-    def call_stat(self, fname):
8390-        assert isinstance(fname, basestring), fname
8391-        fnamefp = FilePath(fname)
8392-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
8393-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
8394-        msg("%s.call_stat(%s)" % (self, fname,))
8395-        mstat = MockStat()
8396-        mstat.st_mode = 16893 # a directory
8397-        return mstat
8398-
8399-    def call_get_available_space(self, storedir, reservedspace):
8400-        # The input vector has an input size of 85.
8401-        return 85 - reservedspace
8402-
8403-    def call_exists(self):
8404-        # I'm only called in the ImmutableShareFile constructor.
8405-        return False
8406-
8407-    def call_setContent(self, inputstring):
8408-        self.incomingsharefilecontents = TemporaryFile(inputstring)
8409-
8410-    def tearDown(self):
8411-        msg( "%s.tearDown()" % (self,))
8412-        FakePath = self.FilePathFake.__exit__()       
8413-        FakeBCC = self.BCountingCrawler.__exit__()
8414-        getsize = self.statforsize.__exit__()
8415-        fakefilepaths = {}
8416 
8417 expiration_policy = {'enabled' : False,
8418                      'mode' : 'age',
8419}
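
A note on the MockFiles.call_stat helper above (present in both the old and the new position of the class): the magic st_mode value 16893 is simply the decimal form of octal 040775, i.e. S_IFDIR plus permission bits 0775, which is why the inline comment calls it "a directory". This can be checked with the standard stat module:

    import stat

    assert stat.S_ISDIR(16893)                  # the S_IFDIR bit is set
    assert stat.S_IMODE(16893) == 0775          # permission bits 0775 (Python 2 octal literal)
    assert 16893 == (stat.S_IFDIR | 0775)
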
8420
8421Context:
8422
8423[src/allmydata/scripts/cli.py: fix pyflakes warning.
8424david-sarah@jacaranda.org**20110728021402
8425 Ignore-this: 94050140ddb99865295973f49927c509
8426]
8427[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
8428david-sarah@jacaranda.org**20110724225440
8429 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
8430]
8431[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
8432david-sarah@jacaranda.org**20110629185356
8433 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
8434]
8435[docs/man/tahoe.1: add man page. fixes #1420
8436david-sarah@jacaranda.org**20110724171728
8437 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
8438]
8439[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
8440david-sarah@jacaranda.org**20110721234941
8441 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
8442]
8443[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
8444david-sarah@jacaranda.org**20110722000320
8445 Ignore-this: 55cd558b791526113db3f83c00ec328a
8446]
8447[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
8448david-sarah@jacaranda.org**20110721233658
8449 Ignore-this: 81b41745477163c9b39c0b59db91cc62
8450]
8451[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
8452david-sarah@jacaranda.org**20110722035402
8453 Ignore-this: 5d03f544c4154f088e26c7107494bf39
8454]
8455[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
8456david-sarah@jacaranda.org**20110722024907
8457 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
8458]
8459[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
8460david-sarah@jacaranda.org**20110718005949
8461 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
8462]
8463[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
8464david-sarah@jacaranda.org**20110717194315
8465 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
8466]
8467[README.txt: say that quickstart.rst is in the docs directory.
8468david-sarah@jacaranda.org**20110717192400
8469 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
8470]
8471[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
8472zooko@zooko.com**20110717114226
8473 Ignore-this: df222120d41447ce4102616921626c82
8474 fixes #1383
8475]
8476[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
8477david-sarah@jacaranda.org**20110716181813
8478 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
8479]
8480[docs: add missing link in NEWS.rst
8481zooko@zooko.com**20110712153307
8482 Ignore-this: be7b7eb81c03700b739daa1027d72b35
8483]
8484[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
8485zooko@zooko.com**20110712153229
8486 Ignore-this: 723c4f9e2211027c79d711715d972c5
8487 Also remove a couple of vestigial references to figleaf, which is long gone.
8488 fixes #1409 (remove contrib/fuse)
8489]
8490[add Protovis.js-based download-status timeline visualization
8491Brian Warner <warner@lothar.com>**20110629222606
8492 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
8493 
8494 provide status overlap info on the webapi t=json output, add decode/decrypt
8495 rate tooltips, add zoomin/zoomout buttons
8496]
8497[add more download-status data, fix tests
8498Brian Warner <warner@lothar.com>**20110629222555
8499 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
8500]
8501[prepare for viz: improve DownloadStatus events
8502Brian Warner <warner@lothar.com>**20110629222542
8503 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
8504 
8505 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
8506]
8507[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
8508zooko@zooko.com**20110629185711
8509 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
8510]
8511[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
8512david-sarah@jacaranda.org**20110130235809
8513 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
8514]
8515[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
8516david-sarah@jacaranda.org**20110626054124
8517 Ignore-this: abb864427a1b91bd10d5132b4589fd90
8518]
8519[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
8520david-sarah@jacaranda.org**20110623205528
8521 Ignore-this: c63e23146c39195de52fb17c7c49b2da
8522]
8523[Rename test_package_initialization.py to (much shorter) test_import.py .
8524Brian Warner <warner@lothar.com>**20110611190234
8525 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
8526 
8527 The former name was making my 'ls' listings hard to read, by forcing them
8528 down to just two columns.
8529]
8530[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
8531zooko@zooko.com**20110611163741
8532 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
8533 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
8534 fixes #1412
8535]
8536[wui: right-align the size column in the WUI
8537zooko@zooko.com**20110611153758
8538 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
8539 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
8540 fixes #1412
8541]
8542[docs: three minor fixes
8543zooko@zooko.com**20110610121656
8544 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
8545 CREDITS for arc for stats tweak
8546 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
8547 English usage tweak
8548]
8549[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
8550david-sarah@jacaranda.org**20110609223719
8551 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
8552]
8553[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
8554wilcoxjg@gmail.com**20110527120135
8555 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
8556 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
8557 NEWS.rst, stats.py: documentation of change to get_latencies
8558 stats.rst: now documents percentile modification in get_latencies
8559 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
8560 fixes #1392
8561]
8562[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
8563david-sarah@jacaranda.org**20110517011214
8564 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
8565]
8566[docs: convert NEWS to NEWS.rst and change all references to it.
8567david-sarah@jacaranda.org**20110517010255
8568 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
8569]
8570[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
8571david-sarah@jacaranda.org**20110512140559
8572 Ignore-this: 784548fc5367fac5450df1c46890876d
8573]
8574[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
8575david-sarah@jacaranda.org**20110130164923
8576 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
8577]
8578[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
8579zooko@zooko.com**20110128142006
8580 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
8581 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
8582]
8583[M-x whitespace-cleanup
8584zooko@zooko.com**20110510193653
8585 Ignore-this: dea02f831298c0f65ad096960e7df5c7
8586]
8587[docs: fix typo in running.rst, thanks to arch_o_median
8588zooko@zooko.com**20110510193633
8589 Ignore-this: ca06de166a46abbc61140513918e79e8
8590]
8591[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
8592david-sarah@jacaranda.org**20110204204902
8593 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
8594]
8595[relnotes.txt: forseeable -> foreseeable. refs #1342
8596david-sarah@jacaranda.org**20110204204116
8597 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
8598]
8599[replace remaining .html docs with .rst docs
8600zooko@zooko.com**20110510191650
8601 Ignore-this: d557d960a986d4ac8216d1677d236399
8602 Remove install.html (long since deprecated).
8603 Also replace some obsolete references to install.html with references to quickstart.rst.
8604 Fix some broken internal references within docs/historical/historical_known_issues.txt.
8605 Thanks to Ravi Pinjala and Patrick McDonald.
8606 refs #1227
8607]
8608[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
8609zooko@zooko.com**20110428055232
8610 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
8611]
8612[munin tahoe_files plugin: fix incorrect file count
8613francois@ctrlaltdel.ch**20110428055312
8614 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
8615 fixes #1391
8616]
8617[corrected "k must never be smaller than N" to "k must never be greater than N"
8618secorp@allmydata.org**20110425010308
8619 Ignore-this: 233129505d6c70860087f22541805eac
8620]
8621[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
8622david-sarah@jacaranda.org**20110411190738
8623 Ignore-this: 7847d26bc117c328c679f08a7baee519
8624]
8625[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
8626david-sarah@jacaranda.org**20110410155844
8627 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
8628]
8629[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
8630david-sarah@jacaranda.org**20110410155705
8631 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
8632]
8633[remove unused variable detected by pyflakes
8634zooko@zooko.com**20110407172231
8635 Ignore-this: 7344652d5e0720af822070d91f03daf9
8636]
8637[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
8638david-sarah@jacaranda.org**20110401202750
8639 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
8640]
8641[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
8642Brian Warner <warner@lothar.com>**20110325232511
8643 Ignore-this: d5307faa6900f143193bfbe14e0f01a
8644]
8645[control.py: remove all uses of s.get_serverid()
8646warner@lothar.com**20110227011203
8647 Ignore-this: f80a787953bd7fa3d40e828bde00e855
8648]
8649[web: remove some uses of s.get_serverid(), not all
8650warner@lothar.com**20110227011159
8651 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
8652]
8653[immutable/downloader/fetcher.py: remove all get_serverid() calls
8654warner@lothar.com**20110227011156
8655 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
8656]
8657[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
8658warner@lothar.com**20110227011153
8659 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
8660 
8661 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
8662 _shares_from_server dict was being popped incorrectly (using shnum as the
8663 index instead of serverid). I'm still thinking through the consequences of
8664 this bug. It was probably benign and really hard to detect. I think it would
8665 cause us to incorrectly believe that we're pulling too many shares from a
8666 server, and thus prefer a different server rather than asking for a second
8667 share from the first server. The diversity code is intended to spread out the
8668 number of shares simultaneously being requested from each server, but with
8669 this bug, it might be spreading out the total number of shares requested at
8670 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
8671 segment, so the effect doesn't last very long).
8672]
8673[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
8674warner@lothar.com**20110227011150
8675 Ignore-this: d8d56dd8e7b280792b40105e13664554
8676 
8677 test_download.py: create+check MyShare instances better, make sure they share
8678 Server objects, now that finder.py cares
8679]
8680[immutable/downloader/finder.py: reduce use of get_serverid(), one left
8681warner@lothar.com**20110227011146
8682 Ignore-this: 5785be173b491ae8a78faf5142892020
8683]
8684[immutable/offloaded.py: reduce use of get_serverid() a bit more
8685warner@lothar.com**20110227011142
8686 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
8687]
8688[immutable/upload.py: reduce use of get_serverid()
8689warner@lothar.com**20110227011138
8690 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
8691]
8692[immutable/checker.py: remove some uses of s.get_serverid(), not all
8693warner@lothar.com**20110227011134
8694 Ignore-this: e480a37efa9e94e8016d826c492f626e
8695]
8696[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
8697warner@lothar.com**20110227011132
8698 Ignore-this: 6078279ddf42b179996a4b53bee8c421
8699 MockIServer stubs
8700]
8701[upload.py: rearrange _make_trackers a bit, no behavior changes
8702warner@lothar.com**20110227011128
8703 Ignore-this: 296d4819e2af452b107177aef6ebb40f
8704]
8705[happinessutil.py: finally rename merge_peers to merge_servers
8706warner@lothar.com**20110227011124
8707 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
8708]
8709[test_upload.py: factor out FakeServerTracker
8710warner@lothar.com**20110227011120
8711 Ignore-this: 6c182cba90e908221099472cc159325b
8712]
8713[test_upload.py: server-vs-tracker cleanup
8714warner@lothar.com**20110227011115
8715 Ignore-this: 2915133be1a3ba456e8603885437e03
8716]
8717[happinessutil.py: server-vs-tracker cleanup
8718warner@lothar.com**20110227011111
8719 Ignore-this: b856c84033562d7d718cae7cb01085a9
8720]
8721[upload.py: more tracker-vs-server cleanup
8722warner@lothar.com**20110227011107
8723 Ignore-this: bb75ed2afef55e47c085b35def2de315
8724]
8725[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
8726warner@lothar.com**20110227011103
8727 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
8728]
8729[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
8730warner@lothar.com**20110227011100
8731 Ignore-this: 7ea858755cbe5896ac212a925840fe68
8732 
8733 No behavioral changes, just updating variable/method names and log messages.
8734 The effects outside these three files should be minimal: some exception
8735 messages changed (to say "server" instead of "peer"), and some internal class
8736 names were changed. A few things still use "peer" to minimize external
8737 changes, like UploadResults.timings["peer_selection"] and
8738 happinessutil.merge_peers, which can be changed later.
8739]
8740[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
8741warner@lothar.com**20110227011056
8742 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
8743]
8744[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
8745warner@lothar.com**20110227011051
8746 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
8747]
8748[test: increase timeout on a network test because Francois's ARM machine hit that timeout
8749zooko@zooko.com**20110317165909
8750 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
8751 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
8752]
8753[docs/configuration.rst: add a "Frontend Configuration" section
8754Brian Warner <warner@lothar.com>**20110222014323
8755 Ignore-this: 657018aa501fe4f0efef9851628444ca
8756 
8757 this points to docs/frontends/*.rst, which were previously underlinked
8758]
8759[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
8760"Brian Warner <warner@lothar.com>"**20110221061544
8761 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
8762]
8763[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
8764david-sarah@jacaranda.org**20110221015817
8765 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
8766]
8767[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
8768david-sarah@jacaranda.org**20110221020125
8769 Ignore-this: b0744ed58f161bf188e037bad077fc48
8770]
8771[Refactor StorageFarmBroker handling of servers
8772Brian Warner <warner@lothar.com>**20110221015804
8773 Ignore-this: 842144ed92f5717699b8f580eab32a51
8774 
8775 Pass around IServer instance instead of (peerid, rref) tuple. Replace
8776 "descriptor" with "server". Other replacements:
8777 
8778  get_all_servers -> get_connected_servers/get_known_servers
8779  get_servers_for_index -> get_servers_for_psi (now returns IServers)
8780 
8781 This change still needs to be pushed further down: lots of code is now
8782 getting the IServer and then distributing (peerid, rref) internally.
8783 Instead, it ought to distribute the IServer internally and delay
8784 extracting a serverid or rref until the last moment.
8785 
8786 no_network.py was updated to retain parallelism.
8787]
8788[TAG allmydata-tahoe-1.8.2
8789warner@lothar.com**20110131020101]
8790Patch bundle hash:
8791c9fa129170e8e0f6889d6de5477967d0891785b7