Ticket #999: jacp18Zancas20110723.darcs.patch

File jacp18Zancas20110723.darcs.patch, 313.6 KB (added by arch_o_median, at 2011-07-23T03:19:05Z)
Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy, not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests, work done in pair-programming with Zancas

Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
  * work in progress intended to be unrecorded and never committed to trunk
  switch from os.path.join to filepath
  incomplete refactoring of common "stay in your subtree" tester code into a superclass
 

Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
  In this patch (very incomplete) we started two major changes: the first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. The second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
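The "filepathification" described above can be sketched roughly as follows. This is a hypothetical illustration, not code from this patch: `pathlib` is used as a self-contained stand-in, since `twisted.python.filepath.FilePath` exposes a similar child-per-segment API.

```python
from pathlib import PurePosixPath

# Instead of building path strings with os.path.join(storedir, "shares",
# "incoming"), a path object is refined one child at a time.
# twisted.python.filepath.FilePath offers the analogous child() method,
# e.g. FilePath(storedir).child("shares").child("incoming").
storedir = PurePosixPath("testdir")
sharedir = storedir / "shares"
incomingdir = sharedir / "incoming"

print(incomingdir)  # testdir/shares/incoming
```

The appeal noted in the log below is that path objects carry operations like create/remove/list with them, so helper code that juggles string joins can often be deleted outright.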

Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
  * another temporary patch for sharing work-in-progress
  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
 

Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
  * jacp16 or so

Fri Jul 22 14:32:44 MDT 2011  wilcoxjg@gmail.com
  * jacp17

Fri Jul 22 21:19:15 MDT 2011  wilcoxjg@gmail.com
  * jacp18

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """Write a new share."""
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end, you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
    def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
735hunk ./src/allmydata/test/test_backends.py 21
736 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
737 
738 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
739+    @mock.patch('time.time')
740+    @mock.patch('os.mkdir')
741+    @mock.patch('__builtin__.open')
742+    @mock.patch('os.listdir')
743+    @mock.patch('os.path.isdir')
744+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
745+        """ This tests whether a server instance can be constructed
746+        with a null backend. The server instance fails the test if it
747+        tries to read or write to the file system. """
748+
749+        # Now begin the test.
750+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
751+
752+        self.failIf(mockisdir.called)
753+        self.failIf(mocklistdir.called)
754+        self.failIf(mockopen.called)
755+        self.failIf(mockmkdir.called)
756+
757+        # You passed!
758+
759+    @mock.patch('time.time')
760+    @mock.patch('os.mkdir')
761     @mock.patch('__builtin__.open')
762hunk ./src/allmydata/test/test_backends.py 44
763-    def test_create_server(self, mockopen):
764-        """ This tests whether a server instance can be constructed. """
765+    @mock.patch('os.listdir')
766+    @mock.patch('os.path.isdir')
767+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
768+        """ This tests whether a server instance can be constructed
769+        with a filesystem backend. To pass the test, it has to use the
770+        filesystem in only the prescribed ways. """
771 
772         def call_open(fname, mode):
773             if fname == 'testdir/bucket_counter.state':
774hunk ./src/allmydata/test/test_backends.py 58
775                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
776             elif fname == 'testdir/lease_checker.history':
777                 return StringIO()
778+            else:
779+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
780         mockopen.side_effect = call_open
781 
782         # Now begin the test.
783hunk ./src/allmydata/test/test_backends.py 63
784-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
785+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
786+
787+        self.failIf(mockisdir.called)
788+        self.failIf(mocklistdir.called)
789+        self.failIf(mockopen.called)
790+        self.failIf(mockmkdir.called)
791+        self.failIf(mocktime.called)
792 
793         # You passed!
794 
795hunk ./src/allmydata/test/test_backends.py 73
796-class TestServer(unittest.TestCase, ReallyEqualMixin):
797+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
798+    def setUp(self):
799+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
800+
801+    @mock.patch('os.mkdir')
802+    @mock.patch('__builtin__.open')
803+    @mock.patch('os.listdir')
804+    @mock.patch('os.path.isdir')
805+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
806+        """ Write a new share. """
807+
808+        # Now begin the test.
809+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
810+        bs[0].remote_write(0, 'a')
811+        self.failIf(mockisdir.called)
812+        self.failIf(mocklistdir.called)
813+        self.failIf(mockopen.called)
814+        self.failIf(mockmkdir.called)
815+
816+    @mock.patch('os.path.exists')
817+    @mock.patch('os.path.getsize')
818+    @mock.patch('__builtin__.open')
819+    @mock.patch('os.listdir')
820+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
821+        """ This tests whether the code correctly finds and reads
822+        shares written out by old (Tahoe-LAFS <= v1.8.2)
823+        servers. There is a similar test in test_download, but that one
824+        is from the perspective of the client and exercises a deeper
825+        stack of code. This one is for exercising just the
826+        StorageServer object. """
827+
828+        # Now begin the test.
829+        bs = self.s.remote_get_buckets('teststorage_index')
830+
831+        self.failUnlessEqual(len(bs), 0)
832+        self.failIf(mocklistdir.called)
833+        self.failIf(mockopen.called)
834+        self.failIf(mockgetsize.called)
835+        self.failIf(mockexists.called)
836+
837+
838+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
839     @mock.patch('__builtin__.open')
840     def setUp(self, mockopen):
841         def call_open(fname, mode):
842hunk ./src/allmydata/test/test_backends.py 126
843                 return StringIO()
844         mockopen.side_effect = call_open
845 
846-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
847-
848+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
849 
850     @mock.patch('time.time')
851     @mock.patch('os.mkdir')
852hunk ./src/allmydata/test/test_backends.py 134
853     @mock.patch('os.listdir')
854     @mock.patch('os.path.isdir')
855     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
856-        """Handle a report of corruption."""
857+        """ Write a new share. """
858 
859         def call_listdir(dirname):
860             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
861hunk ./src/allmydata/test/test_backends.py 173
862         mockopen.side_effect = call_open
863         # Now begin the test.
864         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
865-        print bs
866         bs[0].remote_write(0, 'a')
867         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
868 
869hunk ./src/allmydata/test/test_backends.py 176
870-
871     @mock.patch('os.path.exists')
872     @mock.patch('os.path.getsize')
873     @mock.patch('__builtin__.open')
874hunk ./src/allmydata/test/test_backends.py 218
875 
876         self.failUnlessEqual(len(bs), 1)
877         b = bs[0]
878+        # These should match by definition; the next two cases cover reads whose behavior is less obvious.
879         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
880         # If you try to read past the end you get as much data as is there.
881         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
882hunk ./src/allmydata/test/test_backends.py 224
883         # If you start reading past the end of the file you get the empty string.
884         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
885+
886+
887}
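The patch above closes out the first pass at mocked backend tests. Read outside the diff, the pattern is simple enough to sketch standalone. Everything below (the `NullBackend` and `StorageServer` stubs) is illustrative stand-in code, not the real Tahoe-LAFS classes:

```python
# Illustrative sketch of the test pattern used above: patch out the
# filesystem entry points, construct a server with a backend that needs
# no disk, and assert the mocks were never touched. These classes are
# hypothetical stand-ins, not the real Tahoe-LAFS implementations.
from unittest import mock

class NullBackend(object):
    """Stores nothing; get_available_space() -> None means unlimited."""
    def get_available_space(self):
        return None

class StorageServer(object):
    """All I/O is delegated to the backend, so constructing a server
    with a null backend performs no filesystem calls at all."""
    def __init__(self, nodeid, backend):
        self.nodeid = nodeid
        self.backend = backend

def construct_without_touching_disk():
    with mock.patch('os.mkdir') as mockmkdir, \
         mock.patch('os.listdir') as mocklistdir:
        StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
        # the test passes only if no filesystem call was made
        return not mockmkdir.called and not mocklistdir.called
```

The real tests use the same idea but also patch `__builtin__.open`, `os.path.isdir`, and `time.time`.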
888[a temp patch used as a snapshot
889wilcoxjg@gmail.com**20110626052732
890 Ignore-this: 95f05e314eaec870afa04c76d979aa44
891] {
892hunk ./docs/configuration.rst 637
893   [storage]
894   enabled = True
895   readonly = True
896-  sizelimit = 10000000000
897 
898 
899   [helper]
900hunk ./docs/garbage-collection.rst 16
901 
902 When a file or directory in the virtual filesystem is no longer referenced,
903 the space that its shares occupied on each storage server can be freed,
904-making room for other shares. Tahoe currently uses a garbage collection
905+making room for other shares. Tahoe uses a garbage collection
906 ("GC") mechanism to implement this space-reclamation process. Each share has
907 one or more "leases", which are managed by clients who want the
908 file/directory to be retained. The storage server accepts each share for a
909hunk ./docs/garbage-collection.rst 34
910 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
911 If lease renewal occurs quickly and with 100% reliability, then any renewal
912 time that is shorter than the lease duration will suffice, but a larger ratio
913-of duration-over-renewal-time will be more robust in the face of occasional
914+of lease duration to renewal time will be more robust in the face of occasional
915 delays or failures.
916 
917 The current recommended values for a small Tahoe grid are to renew the leases
918replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
919hunk ./src/allmydata/client.py 260
920             sharetypes.append("mutable")
921         expiration_sharetypes = tuple(sharetypes)
922 
923+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
924+            xyz
925+        xyz
926         ss = StorageServer(storedir, self.nodeid,
927                            reserved_space=reserved,
928                            discard_storage=discard,
929hunk ./src/allmydata/storage/crawler.py 234
930         f = open(tmpfile, "wb")
931         pickle.dump(self.state, f)
932         f.close()
933-        fileutil.move_into_place(tmpfile, self.statefile)
934+        fileutil.move_into_place(tmpfile, self.statefname)
935 
936     def startService(self):
937         # arrange things to look like we were just sleeping, so
938}
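The statefile/statefname fix above touches the crawler's state persistence, which relies on a write-then-rename pattern. A minimal sketch of that pattern, using the stdlib `os.replace` in place of Tahoe's `fileutil.move_into_place` helper:

```python
# Sketch of the crawler's save_state idea: serialize to a sibling temp
# file, then rename over the real state file, so a crash mid-write can
# never leave a truncated state file behind. os.replace stands in for
# Tahoe's fileutil.move_into_place helper.
import os
import pickle

def save_state(state, statefname):
    tmpfile = statefname + ".tmp"
    with open(tmpfile, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmpfile, statefname)  # atomic rename on POSIX

def load_state(statefname):
    with open(statefname, "rb") as f:
        return pickle.load(f)
```

A reader always sees either the old complete state or the new complete state, never a partial write.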
939[snapshot of progress on backend implementation (not suitable for trunk)
940wilcoxjg@gmail.com**20110626053244
941 Ignore-this: 50c764af791c2b99ada8289546806a0a
942] {
943adddir ./src/allmydata/storage/backends
944adddir ./src/allmydata/storage/backends/das
945move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
946adddir ./src/allmydata/storage/backends/null
947hunk ./src/allmydata/interfaces.py 270
948         store that on disk.
949         """
950 
951+class IStorageBackend(Interface):
952+    """
953+    Objects of this kind live on the server side and are used by the
954+    storage server object.
955+    """
956+    def get_available_space(self, reserved_space):
957+        """ Returns available space for share storage in bytes, or
958+        None if this information is not available or if the available
959+        space is unlimited.
960+
961+        If the backend is configured for read-only mode then this will
962+        return 0.
963+
964+        reserved_space is the number of bytes to subtract from the answer:
965+        pass the amount of space you would like to leave unused on this
966+        filesystem as reserved_space. """
967+
968+    def get_bucket_shares(self):
969+        """XXX"""
970+
971+    def get_share(self):
972+        """XXX"""
973+
974+    def make_bucket_writer(self):
975+        """XXX"""
976+
977+class IStorageBackendShare(Interface):
978+    """
979+    This object provides access to up to all of the share data.  It is
980+    intended for lazy evaluation, so that in many use cases substantially
981+    less than all of the share data will actually be accessed.
982+    """
983+    def is_complete(self):
984+        """
985+        Returns the share state, or None if the share does not exist.
986+        """
987+
988 class IStorageBucketWriter(Interface):
989     """
990     Objects of this kind live on the client side.
991hunk ./src/allmydata/interfaces.py 2492
992 
993 class EmptyPathnameComponentError(Exception):
994     """The webapi disallows empty pathname components."""
995+
996+class IShareStore(Interface):
997+    pass
998+
999addfile ./src/allmydata/storage/backends/__init__.py
1000addfile ./src/allmydata/storage/backends/das/__init__.py
1001addfile ./src/allmydata/storage/backends/das/core.py
1002hunk ./src/allmydata/storage/backends/das/core.py 1
1003+from allmydata.interfaces import IStorageBackend
1004+from allmydata.storage.backends.base import Backend
1005+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1006+from allmydata.util.assertutil import precondition
1007+
1008+import os, re, weakref, struct, time
1009+
1010+from foolscap.api import Referenceable
1011+from twisted.application import service
1012+
1013+from zope.interface import implements
1014+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1015+from allmydata.util import fileutil, idlib, log, time_format
1016+import allmydata # for __full_version__
1017+
1018+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1019+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1020+from allmydata.storage.lease import LeaseInfo
1021+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1022+     create_mutable_sharefile
1023+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1024+from allmydata.storage.crawler import FSBucketCountingCrawler
1025+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1026+
1027+from zope.interface import implements
1028+
1029+class DASCore(Backend):
1030+    implements(IStorageBackend)
1031+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1032+        Backend.__init__(self)
1033+
1034+        self._setup_storage(storedir, readonly, reserved_space)
1035+        self._setup_corruption_advisory()
1036+        self._setup_bucket_counter()
1037+        self._setup_lease_checkerf(expiration_policy)
1038+
1039+    def _setup_storage(self, storedir, readonly, reserved_space):
1040+        self.storedir = storedir
1041+        self.readonly = readonly
1042+        self.reserved_space = int(reserved_space)
1043+        if self.reserved_space:
1044+            if self.get_available_space() is None:
1045+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1046+                        umid="0wZ27w", level=log.UNUSUAL)
1047+
1048+        self.sharedir = os.path.join(self.storedir, "shares")
1049+        fileutil.make_dirs(self.sharedir)
1050+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1051+        self._clean_incomplete()
1052+
1053+    def _clean_incomplete(self):
1054+        fileutil.rm_dir(self.incomingdir)
1055+        fileutil.make_dirs(self.incomingdir)
1056+
1057+    def _setup_corruption_advisory(self):
1058+        # we don't actually create the corruption-advisory dir until necessary
1059+        self.corruption_advisory_dir = os.path.join(self.storedir,
1060+                                                    "corruption-advisories")
1061+
1062+    def _setup_bucket_counter(self):
1063+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1064+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1065+        self.bucket_counter.setServiceParent(self)
1066+
1067+    def _setup_lease_checkerf(self, expiration_policy):
1068+        statefile = os.path.join(self.storedir, "lease_checker.state")
1069+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1070+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1071+        self.lease_checker.setServiceParent(self)
1072+
1073+    def get_available_space(self):
1074+        if self.readonly:
1075+            return 0
1076+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1077+
1078+    def get_shares(self, storage_index):
1079+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1080+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1081+        try:
1082+            for f in os.listdir(finalstoragedir):
1083+                if NUM_RE.match(f):
1084+                    filename = os.path.join(finalstoragedir, f)
1085+                    yield FSBShare(filename, int(f))
1086+        except OSError:
1087+            # Commonly caused by there being no buckets at all.
1088+            pass
1089+       
1090+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1091+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1092+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1093+        return bw
1094+       
1095+
1096+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1097+# and share data. The share data is accessed by RIBucketWriter.write and
1098+# RIBucketReader.read . The lease information is not accessible through these
1099+# interfaces.
1100+
1101+# The share file has the following layout:
1102+#  0x00: share file version number, four bytes, current version is 1
1103+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1104+#  0x08: number of leases, four bytes big-endian
1105+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1106+#  A+0x0c = B: first lease. Lease format is:
1107+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1108+#   B+0x04: renew secret, 32 bytes (SHA256)
1109+#   B+0x24: cancel secret, 32 bytes (SHA256)
1110+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1111+#   B+0x48: next lease, or end of record
1112+
1113+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1114+# but it is still filled in by storage servers in case the storage server
1115+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1116+# share file is moved from one storage server to another. The value stored in
1117+# this field is truncated, so if the actual share data length is >= 2**32,
1118+# then the value stored in this field will be the actual share data length
1119+# modulo 2**32.
1120+
1121+class ImmutableShare:
1122+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1123+    sharetype = "immutable"
1124+
1125+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1126+        """ If max_size is not None then I won't allow more than
1127+        max_size to be written to me. If create=True then max_size
1128+        must not be None. """
1129+        precondition((max_size is not None) or (not create), max_size, create)
1130+        self.shnum = shnum
1131+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1132+        self._max_size = max_size
1133+        if create:
1134+            # touch the file, so later callers will see that we're working on
1135+            # it. Also construct the metadata.
1136+            assert not os.path.exists(self.fname)
1137+            fileutil.make_dirs(os.path.dirname(self.fname))
1138+            f = open(self.fname, 'wb')
1139+            # The second field -- the four-byte share data length -- is no
1140+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1141+            # there in case someone downgrades a storage server from >=
1142+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1143+            # server to another, etc. We do saturation -- a share data length
1144+            # larger than 2**32-1 (what can fit into the field) is marked as
1145+            # the largest length that can fit into the field. That way, even
1146+            # if this does happen, the old < v1.3.0 server will still allow
1147+            # clients to read the first part of the share.
1148+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1149+            f.close()
1150+            self._lease_offset = max_size + 0x0c
1151+            self._num_leases = 0
1152+        else:
1153+            f = open(self.fname, 'rb')
1154+            filesize = os.path.getsize(self.fname)
1155+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1156+            f.close()
1157+            if version != 1:
1158+                msg = "sharefile %s had version %d but we wanted 1" % \
1159+                      (self.fname, version)
1160+                raise UnknownImmutableContainerVersionError(msg)
1161+            self._num_leases = num_leases
1162+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1163+        self._data_offset = 0xc
1164+
1165+    def unlink(self):
1166+        os.unlink(self.fname)
1167+
1168+    def read_share_data(self, offset, length):
1169+        precondition(offset >= 0)
1170+        # Reads beyond the end of the data are truncated. Reads that start
1171+        # beyond the end of the data return an empty string.
1172+        seekpos = self._data_offset+offset
1173+        fsize = os.path.getsize(self.fname)
1174+        actuallength = max(0, min(length, fsize-seekpos))
1175+        if actuallength == 0:
1176+            return ""
1177+        f = open(self.fname, 'rb')
1178+        f.seek(seekpos)
1179+        return f.read(actuallength)
1180+
1181+    def write_share_data(self, offset, data):
1182+        length = len(data)
1183+        precondition(offset >= 0, offset)
1184+        if self._max_size is not None and offset+length > self._max_size:
1185+            raise DataTooLargeError(self._max_size, offset, length)
1186+        f = open(self.fname, 'rb+')
1187+        real_offset = self._data_offset+offset
1188+        f.seek(real_offset)
1189+        assert f.tell() == real_offset
1190+        f.write(data)
1191+        f.close()
1192+
1193+    def _write_lease_record(self, f, lease_number, lease_info):
1194+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1195+        f.seek(offset)
1196+        assert f.tell() == offset
1197+        f.write(lease_info.to_immutable_data())
1198+
1199+    def _read_num_leases(self, f):
1200+        f.seek(0x08)
1201+        (num_leases,) = struct.unpack(">L", f.read(4))
1202+        return num_leases
1203+
1204+    def _write_num_leases(self, f, num_leases):
1205+        f.seek(0x08)
1206+        f.write(struct.pack(">L", num_leases))
1207+
1208+    def _truncate_leases(self, f, num_leases):
1209+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1210+
1211+    def get_leases(self):
1212+        """Yields a LeaseInfo instance for all leases."""
1213+        f = open(self.fname, 'rb')
1214+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1215+        f.seek(self._lease_offset)
1216+        for i in range(num_leases):
1217+            data = f.read(self.LEASE_SIZE)
1218+            if data:
1219+                yield LeaseInfo().from_immutable_data(data)
1220+
1221+    def add_lease(self, lease_info):
1222+        f = open(self.fname, 'rb+')
1223+        num_leases = self._read_num_leases(f)
1224+        self._write_lease_record(f, num_leases, lease_info)
1225+        self._write_num_leases(f, num_leases+1)
1226+        f.close()
1227+
1228+    def renew_lease(self, renew_secret, new_expire_time):
1229+        for i,lease in enumerate(self.get_leases()):
1230+            if constant_time_compare(lease.renew_secret, renew_secret):
1231+                # yup. See if we need to update the owner time.
1232+                if new_expire_time > lease.expiration_time:
1233+                    # yes
1234+                    lease.expiration_time = new_expire_time
1235+                    f = open(self.fname, 'rb+')
1236+                    self._write_lease_record(f, i, lease)
1237+                    f.close()
1238+                return
1239+        raise IndexError("unable to renew non-existent lease")
1240+
1241+    def add_or_renew_lease(self, lease_info):
1242+        try:
1243+            self.renew_lease(lease_info.renew_secret,
1244+                             lease_info.expiration_time)
1245+        except IndexError:
1246+            self.add_lease(lease_info)
1247+
1248+
1249+    def cancel_lease(self, cancel_secret):
1250+        """Remove a lease with the given cancel_secret. If the last lease is
1251+        cancelled, the file will be removed. Return the number of bytes that
1252+        were freed (by truncating the list of leases, and possibly by
1253+        deleting the file). Raise IndexError if there was no lease with the
1254+        given cancel_secret.
1255+        """
1256+
1257+        leases = list(self.get_leases())
1258+        num_leases_removed = 0
1259+        for i,lease in enumerate(leases):
1260+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1261+                leases[i] = None
1262+                num_leases_removed += 1
1263+        if not num_leases_removed:
1264+            raise IndexError("unable to find matching lease to cancel")
1265+        if num_leases_removed:
1266+            # pack and write out the remaining leases. We write these out in
1267+            # the same order as they were added, so that if we crash while
1268+            # doing this, we won't lose any non-cancelled leases.
1269+            leases = [l for l in leases if l] # remove the cancelled leases
1270+            f = open(self.fname, 'rb+')
1271+            for i,lease in enumerate(leases):
1272+                self._write_lease_record(f, i, lease)
1273+            self._write_num_leases(f, len(leases))
1274+            self._truncate_leases(f, len(leases))
1275+            f.close()
1276+        space_freed = self.LEASE_SIZE * num_leases_removed
1277+        if not len(leases):
1278+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1279+            self.unlink()
1280+        return space_freed
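The header layout documented in the comments above (version, saturated data length, lease count, each four bytes big-endian) can be exercised in isolation; `pack_header` and `unpack_header` are illustrative names, not Tahoe-LAFS functions:

```python
# Sketch of the immutable share-file header described above: three
# four-byte big-endian fields. Data lengths >= 2**32 are saturated so a
# downgraded (< v1.3.0) server can still serve the first part of the share.
import struct

HEADER = ">LLL"  # version, share data length (saturated), number of leases

def pack_header(max_size, num_leases=0, version=1):
    return struct.pack(HEADER, version, min(2**32 - 1, max_size), num_leases)

def unpack_header(header_bytes):
    # the header occupies the first 0xc bytes of the share file
    return struct.unpack(HEADER, header_bytes[:0xc])
```

This matches the `struct.pack(">LLL", 1, min(2**32-1, max_size), 0)` call in `ImmutableShare.__init__` above.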
1281hunk ./src/allmydata/storage/backends/das/expirer.py 2
1282 import time, os, pickle, struct
1283-from allmydata.storage.crawler import ShareCrawler
1284-from allmydata.storage.shares import get_share_file
1285+from allmydata.storage.crawler import FSShareCrawler
1286 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1287      UnknownImmutableContainerVersionError
1288 from twisted.python import log as twlog
1289hunk ./src/allmydata/storage/backends/das/expirer.py 7
1290 
1291-class LeaseCheckingCrawler(ShareCrawler):
1292+class FSLeaseCheckingCrawler(FSShareCrawler):
1293     """I examine the leases on all shares, determining which are still valid
1294     and which have expired. I can remove the expired leases (if so
1295     configured), and the share will be deleted when the last lease is
1296hunk ./src/allmydata/storage/backends/das/expirer.py 50
1297     slow_start = 360 # wait 6 minutes after startup
1298     minimum_cycle_time = 12*60*60 # not more than twice per day
1299 
1300-    def __init__(self, statefile, historyfile,
1301-                 expiration_enabled, mode,
1302-                 override_lease_duration, # used if expiration_mode=="age"
1303-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1304-                 sharetypes):
1305+    def __init__(self, statefile, historyfile, expiration_policy):
1306         self.historyfile = historyfile
1307hunk ./src/allmydata/storage/backends/das/expirer.py 52
1308-        self.expiration_enabled = expiration_enabled
1309-        self.mode = mode
1310+        self.expiration_enabled = expiration_policy['enabled']
1311+        self.mode = expiration_policy['mode']
1312         self.override_lease_duration = None
1313         self.cutoff_date = None
1314         if self.mode == "age":
1315hunk ./src/allmydata/storage/backends/das/expirer.py 57
1316-            assert isinstance(override_lease_duration, (int, type(None)))
1317-            self.override_lease_duration = override_lease_duration # seconds
1318+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1319+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1320         elif self.mode == "cutoff-date":
1321hunk ./src/allmydata/storage/backends/das/expirer.py 60
1322-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1323+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1324             assert cutoff_date is not None
1325hunk ./src/allmydata/storage/backends/das/expirer.py 62
1326-            self.cutoff_date = cutoff_date
1327+            self.cutoff_date = expiration_policy['cutoff_date']
1328         else:
1329hunk ./src/allmydata/storage/backends/das/expirer.py 64
1330-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1331-        self.sharetypes_to_expire = sharetypes
1332-        ShareCrawler.__init__(self, statefile)
1333+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1334+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1335+        FSShareCrawler.__init__(self, statefile)
1336 
1337     def add_initial_state(self):
1338         # we fill ["cycle-to-date"] here (even though they will be reset in
1339hunk ./src/allmydata/storage/backends/das/expirer.py 156
1340 
1341     def process_share(self, sharefilename):
1342         # first, find out what kind of a share it is
1343-        sf = get_share_file(sharefilename)
1344+        f = open(sharefilename, "rb")
1345+        prefix = f.read(32)
1346+        f.close()
1347+        if prefix == MutableShareFile.MAGIC:
1348+            sf = MutableShareFile(sharefilename)
1349+        else:
1350+            # otherwise assume it's immutable
1351+            sf = FSBShare(sharefilename)
1352         sharetype = sf.sharetype
1353         now = time.time()
1354         s = self.stat(sharefilename)
1355addfile ./src/allmydata/storage/backends/null/__init__.py
1356addfile ./src/allmydata/storage/backends/null/core.py
1357hunk ./src/allmydata/storage/backends/null/core.py 1
1358+from allmydata.storage.backends.base import Backend
1359+
1360+class NullCore(Backend):
1361+    def __init__(self):
1362+        Backend.__init__(self)
1363+
1364+    def get_available_space(self):
1365+        return None
1366+
1367+    def get_shares(self, storage_index):
1368+        return set()
1369+
1370+    def get_share(self, storage_index, sharenum):
1371+        return None
1372+
1373+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1374+        return NullBucketWriter()
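The null backend added above exists so tests can exercise a server with effectively unlimited space: `get_available_space` returning None means "no limit", and writes are simply discarded. A standalone sketch of its behavior (the `Backend` base class and `NullBucketWriter` are stubbed here for illustration):

```python
class Backend(object):
    """Stand-in for allmydata.storage.backends.base.Backend."""

class NullBucketWriter(object):
    """Accepts writes and throws the data away."""
    def remote_write(self, offset, data):
        return

class NullCore(Backend):
    def get_available_space(self):
        return None  # None signals "no limit", not zero bytes free

    def get_shares(self, storage_index):
        return set()  # the null backend never retains shares

    def get_share(self, storage_index, sharenum):
        return None

    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        return NullBucketWriter()
```
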
1375hunk ./src/allmydata/storage/crawler.py 12
1376 class TimeSliceExceeded(Exception):
1377     pass
1378 
1379-class ShareCrawler(service.MultiService):
1380+class FSShareCrawler(service.MultiService):
1381     """A subclass of ShareCrawler is attached to a StorageServer, and
1382     periodically walks all of its shares, processing each one in some
1383     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1384hunk ./src/allmydata/storage/crawler.py 68
1385     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1386     minimum_cycle_time = 300 # don't run a cycle faster than this
1387 
1388-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1389+    def __init__(self, statefname, allowed_cpu_percentage=None):
1390         service.MultiService.__init__(self)
1391         if allowed_cpu_percentage is not None:
1392             self.allowed_cpu_percentage = allowed_cpu_percentage
1393hunk ./src/allmydata/storage/crawler.py 72
1394-        self.backend = backend
1395+        self.statefname = statefname
1396         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1397                          for i in range(2**10)]
1398         self.prefixes.sort()
1399hunk ./src/allmydata/storage/crawler.py 192
1400         #                            of the last bucket to be processed, or
1401         #                            None if we are sleeping between cycles
1402         try:
1403-            f = open(self.statefile, "rb")
1404+            f = open(self.statefname, "rb")
1405             state = pickle.load(f)
1406             f.close()
1407         except EnvironmentError:
1408hunk ./src/allmydata/storage/crawler.py 230
1409         else:
1410             last_complete_prefix = self.prefixes[lcpi]
1411         self.state["last-complete-prefix"] = last_complete_prefix
1412-        tmpfile = self.statefile + ".tmp"
1413+        tmpfile = self.statefname + ".tmp"
1414         f = open(tmpfile, "wb")
1415         pickle.dump(self.state, f)
1416         f.close()
1417hunk ./src/allmydata/storage/crawler.py 433
1418         pass
1419 
1420 
1421-class BucketCountingCrawler(ShareCrawler):
1422+class FSBucketCountingCrawler(FSShareCrawler):
1423     """I keep track of how many buckets are being managed by this server.
1424     This is equivalent to the number of distributed files and directories for
1425     which I am providing storage. The actual number of files+directories in
1426hunk ./src/allmydata/storage/crawler.py 446
1427 
1428     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1429 
1430-    def __init__(self, statefile, num_sample_prefixes=1):
1431-        ShareCrawler.__init__(self, statefile)
1432+    def __init__(self, statefname, num_sample_prefixes=1):
1433+        FSShareCrawler.__init__(self, statefname)
1434         self.num_sample_prefixes = num_sample_prefixes
1435 
1436     def add_initial_state(self):
1437hunk ./src/allmydata/storage/immutable.py 14
1438 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1439      DataTooLargeError
1440 
1441-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1442-# and share data. The share data is accessed by RIBucketWriter.write and
1443-# RIBucketReader.read . The lease information is not accessible through these
1444-# interfaces.
1445-
1446-# The share file has the following layout:
1447-#  0x00: share file version number, four bytes, current version is 1
1448-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1449-#  0x08: number of leases, four bytes big-endian
1450-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1451-#  A+0x0c = B: first lease. Lease format is:
1452-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1453-#   B+0x04: renew secret, 32 bytes (SHA256)
1454-#   B+0x24: cancel secret, 32 bytes (SHA256)
1455-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1456-#   B+0x48: next lease, or end of record
1457-
1458-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1459-# but it is still filled in by storage servers in case the storage server
1460-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1461-# share file is moved from one storage server to another. The value stored in
1462-# this field is truncated, so if the actual share data length is >= 2**32,
1463-# then the value stored in this field will be the actual share data length
1464-# modulo 2**32.
1465-
1466-class ShareFile:
1467-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1468-    sharetype = "immutable"
1469-
1470-    def __init__(self, filename, max_size=None, create=False):
1471-        """ If max_size is not None then I won't allow more than
1472-        max_size to be written to me. If create=True then max_size
1473-        must not be None. """
1474-        precondition((max_size is not None) or (not create), max_size, create)
1475-        self.home = filename
1476-        self._max_size = max_size
1477-        if create:
1478-            # touch the file, so later callers will see that we're working on
1479-            # it. Also construct the metadata.
1480-            assert not os.path.exists(self.home)
1481-            fileutil.make_dirs(os.path.dirname(self.home))
1482-            f = open(self.home, 'wb')
1483-            # The second field -- the four-byte share data length -- is no
1484-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1485-            # there in case someone downgrades a storage server from >=
1486-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1487-            # server to another, etc. We do saturation -- a share data length
1488-            # larger than 2**32-1 (what can fit into the field) is marked as
1489-            # the largest length that can fit into the field. That way, even
1490-            # if this does happen, the old < v1.3.0 server will still allow
1491-            # clients to read the first part of the share.
1492-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1493-            f.close()
1494-            self._lease_offset = max_size + 0x0c
1495-            self._num_leases = 0
1496-        else:
1497-            f = open(self.home, 'rb')
1498-            filesize = os.path.getsize(self.home)
1499-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1500-            f.close()
1501-            if version != 1:
1502-                msg = "sharefile %s had version %d but we wanted 1" % \
1503-                      (filename, version)
1504-                raise UnknownImmutableContainerVersionError(msg)
1505-            self._num_leases = num_leases
1506-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1507-        self._data_offset = 0xc
1508-
1509-    def unlink(self):
1510-        os.unlink(self.home)
1511-
1512-    def read_share_data(self, offset, length):
1513-        precondition(offset >= 0)
1514-        # Reads beyond the end of the data are truncated. Reads that start
1515-        # beyond the end of the data return an empty string.
1516-        seekpos = self._data_offset+offset
1517-        fsize = os.path.getsize(self.home)
1518-        actuallength = max(0, min(length, fsize-seekpos))
1519-        if actuallength == 0:
1520-            return ""
1521-        f = open(self.home, 'rb')
1522-        f.seek(seekpos)
1523-        return f.read(actuallength)
1524-
1525-    def write_share_data(self, offset, data):
1526-        length = len(data)
1527-        precondition(offset >= 0, offset)
1528-        if self._max_size is not None and offset+length > self._max_size:
1529-            raise DataTooLargeError(self._max_size, offset, length)
1530-        f = open(self.home, 'rb+')
1531-        real_offset = self._data_offset+offset
1532-        f.seek(real_offset)
1533-        assert f.tell() == real_offset
1534-        f.write(data)
1535-        f.close()
1536-
1537-    def _write_lease_record(self, f, lease_number, lease_info):
1538-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1539-        f.seek(offset)
1540-        assert f.tell() == offset
1541-        f.write(lease_info.to_immutable_data())
1542-
1543-    def _read_num_leases(self, f):
1544-        f.seek(0x08)
1545-        (num_leases,) = struct.unpack(">L", f.read(4))
1546-        return num_leases
1547-
1548-    def _write_num_leases(self, f, num_leases):
1549-        f.seek(0x08)
1550-        f.write(struct.pack(">L", num_leases))
1551-
1552-    def _truncate_leases(self, f, num_leases):
1553-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1554-
1555-    def get_leases(self):
1556-        """Yields a LeaseInfo instance for all leases."""
1557-        f = open(self.home, 'rb')
1558-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1559-        f.seek(self._lease_offset)
1560-        for i in range(num_leases):
1561-            data = f.read(self.LEASE_SIZE)
1562-            if data:
1563-                yield LeaseInfo().from_immutable_data(data)
1564-
1565-    def add_lease(self, lease_info):
1566-        f = open(self.home, 'rb+')
1567-        num_leases = self._read_num_leases(f)
1568-        self._write_lease_record(f, num_leases, lease_info)
1569-        self._write_num_leases(f, num_leases+1)
1570-        f.close()
1571-
1572-    def renew_lease(self, renew_secret, new_expire_time):
1573-        for i,lease in enumerate(self.get_leases()):
1574-            if constant_time_compare(lease.renew_secret, renew_secret):
1575-                # yup. See if we need to update the owner time.
1576-                if new_expire_time > lease.expiration_time:
1577-                    # yes
1578-                    lease.expiration_time = new_expire_time
1579-                    f = open(self.home, 'rb+')
1580-                    self._write_lease_record(f, i, lease)
1581-                    f.close()
1582-                return
1583-        raise IndexError("unable to renew non-existent lease")
1584-
1585-    def add_or_renew_lease(self, lease_info):
1586-        try:
1587-            self.renew_lease(lease_info.renew_secret,
1588-                             lease_info.expiration_time)
1589-        except IndexError:
1590-            self.add_lease(lease_info)
1591-
1592-
1593-    def cancel_lease(self, cancel_secret):
1594-        """Remove a lease with the given cancel_secret. If the last lease is
1595-        cancelled, the file will be removed. Return the number of bytes that
1596-        were freed (by truncating the list of leases, and possibly by
1597-        deleting the file. Raise IndexError if there was no lease with the
1598-        given cancel_secret.
1599-        """
1600-
1601-        leases = list(self.get_leases())
1602-        num_leases_removed = 0
1603-        for i,lease in enumerate(leases):
1604-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1605-                leases[i] = None
1606-                num_leases_removed += 1
1607-        if not num_leases_removed:
1608-            raise IndexError("unable to find matching lease to cancel")
1609-        if num_leases_removed:
1610-            # pack and write out the remaining leases. We write these out in
1611-            # the same order as they were added, so that if we crash while
1612-            # doing this, we won't lose any non-cancelled leases.
1613-            leases = [l for l in leases if l] # remove the cancelled leases
1614-            f = open(self.home, 'rb+')
1615-            for i,lease in enumerate(leases):
1616-                self._write_lease_record(f, i, lease)
1617-            self._write_num_leases(f, len(leases))
1618-            self._truncate_leases(f, len(leases))
1619-            f.close()
1620-        space_freed = self.LEASE_SIZE * num_leases_removed
1621-        if not len(leases):
1622-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1623-            self.unlink()
1624-        return space_freed
1625-class NullBucketWriter(Referenceable):
1626-    implements(RIBucketWriter)
1627-
1628-    def remote_write(self, offset, data):
1629-        return
1630-
1631 class BucketWriter(Referenceable):
1632     implements(RIBucketWriter)
1633 
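The comments removed in the hunk above document the immutable share-file layout: a 12-byte header (version, saturated data length, lease count) followed by share data and then the leases. A sketch of just the header handling those comments describe:

```python
import struct

def pack_sharefile_header(max_size):
    """Build the 12-byte header: version 1, saturated length, zero leases.

    The length field saturates at 2**32-1 so a pre-v1.3.0 server can
    still serve the first part of an oversized share.
    """
    return struct.pack(">LLL", 1, min(2**32 - 1, max_size), 0)

def unpack_sharefile_header(header):
    version, length, num_leases = struct.unpack(">LLL", header[:0xc])
    if version != 1:
        raise ValueError("sharefile had version %d but we wanted 1" % version)
    return version, length, num_leases
```
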
1634hunk ./src/allmydata/storage/immutable.py 17
1635-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1636+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1637         self.ss = ss
1638hunk ./src/allmydata/storage/immutable.py 19
1639-        self.incominghome = incominghome
1640-        self.finalhome = finalhome
1641         self._max_size = max_size # don't allow the client to write more than this
1642         self._canary = canary
1643         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1644hunk ./src/allmydata/storage/immutable.py 24
1645         self.closed = False
1646         self.throw_out_all_data = False
1647-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1648+        self._sharefile = immutableshare
1649         # also, add our lease to the file now, so that other ones can be
1650         # added by simultaneous uploaders
1651         self._sharefile.add_lease(lease_info)
1652hunk ./src/allmydata/storage/server.py 16
1653 from allmydata.storage.lease import LeaseInfo
1654 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1655      create_mutable_sharefile
1656-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1657-from allmydata.storage.crawler import BucketCountingCrawler
1658-from allmydata.storage.expirer import LeaseCheckingCrawler
1659 
1660 from zope.interface import implements
1661 
1662hunk ./src/allmydata/storage/server.py 19
1663-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1664-# be started and stopped.
1665-class Backend(service.MultiService):
1666-    implements(IStatsProducer)
1667-    def __init__(self):
1668-        service.MultiService.__init__(self)
1669-
1670-    def get_bucket_shares(self):
1671-        """XXX"""
1672-        raise NotImplementedError
1673-
1674-    def get_share(self):
1675-        """XXX"""
1676-        raise NotImplementedError
1677-
1678-    def make_bucket_writer(self):
1679-        """XXX"""
1680-        raise NotImplementedError
1681-
1682-class NullBackend(Backend):
1683-    def __init__(self):
1684-        Backend.__init__(self)
1685-
1686-    def get_available_space(self):
1687-        return None
1688-
1689-    def get_bucket_shares(self, storage_index):
1690-        return set()
1691-
1692-    def get_share(self, storage_index, sharenum):
1693-        return None
1694-
1695-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1696-        return NullBucketWriter()
1697-
1698-class FSBackend(Backend):
1699-    def __init__(self, storedir, readonly=False, reserved_space=0):
1700-        Backend.__init__(self)
1701-
1702-        self._setup_storage(storedir, readonly, reserved_space)
1703-        self._setup_corruption_advisory()
1704-        self._setup_bucket_counter()
1705-        self._setup_lease_checkerf()
1706-
1707-    def _setup_storage(self, storedir, readonly, reserved_space):
1708-        self.storedir = storedir
1709-        self.readonly = readonly
1710-        self.reserved_space = int(reserved_space)
1711-        if self.reserved_space:
1712-            if self.get_available_space() is None:
1713-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1714-                        umid="0wZ27w", level=log.UNUSUAL)
1715-
1716-        self.sharedir = os.path.join(self.storedir, "shares")
1717-        fileutil.make_dirs(self.sharedir)
1718-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1719-        self._clean_incomplete()
1720-
1721-    def _clean_incomplete(self):
1722-        fileutil.rm_dir(self.incomingdir)
1723-        fileutil.make_dirs(self.incomingdir)
1724-
1725-    def _setup_corruption_advisory(self):
1726-        # we don't actually create the corruption-advisory dir until necessary
1727-        self.corruption_advisory_dir = os.path.join(self.storedir,
1728-                                                    "corruption-advisories")
1729-
1730-    def _setup_bucket_counter(self):
1731-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1732-        self.bucket_counter = BucketCountingCrawler(statefile)
1733-        self.bucket_counter.setServiceParent(self)
1734-
1735-    def _setup_lease_checkerf(self):
1736-        statefile = os.path.join(self.storedir, "lease_checker.state")
1737-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1738-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1739-                                   expiration_enabled, expiration_mode,
1740-                                   expiration_override_lease_duration,
1741-                                   expiration_cutoff_date,
1742-                                   expiration_sharetypes)
1743-        self.lease_checker.setServiceParent(self)
1744-
1745-    def get_available_space(self):
1746-        if self.readonly:
1747-            return 0
1748-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1749-
1750-    def get_bucket_shares(self, storage_index):
1751-        """Return a list of (shnum, pathname) tuples for files that hold
1752-        shares for this storage_index. In each tuple, 'shnum' will always be
1753-        the integer form of the last component of 'pathname'."""
1754-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1755-        try:
1756-            for f in os.listdir(storagedir):
1757-                if NUM_RE.match(f):
1758-                    filename = os.path.join(storagedir, f)
1759-                    yield (int(f), filename)
1760-        except OSError:
1761-            # Commonly caused by there being no buckets at all.
1762-            pass
1763-
1764 # storage/
1765 # storage/shares/incoming
1766 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1767hunk ./src/allmydata/storage/server.py 32
1768 # $SHARENUM matches this regex:
1769 NUM_RE=re.compile("^[0-9]+$")
1770 
1771-
1772-
1773 class StorageServer(service.MultiService, Referenceable):
1774     implements(RIStorageServer, IStatsProducer)
1775     name = 'storage'
1776hunk ./src/allmydata/storage/server.py 35
1777-    LeaseCheckerClass = LeaseCheckingCrawler
1778 
1779     def __init__(self, nodeid, backend, reserved_space=0,
1780                  readonly_storage=False,
1781hunk ./src/allmydata/storage/server.py 38
1782-                 stats_provider=None,
1783-                 expiration_enabled=False,
1784-                 expiration_mode="age",
1785-                 expiration_override_lease_duration=None,
1786-                 expiration_cutoff_date=None,
1787-                 expiration_sharetypes=("mutable", "immutable")):
1788+                 stats_provider=None ):
1789         service.MultiService.__init__(self)
1790         assert isinstance(nodeid, str)
1791         assert len(nodeid) == 20
1792hunk ./src/allmydata/storage/server.py 217
1793         # they asked about: this will save them a lot of work. Add or update
1794         # leases for all of them: if they want us to hold shares for this
1795         # file, they'll want us to hold leases for this file.
1796-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1797-            alreadygot.add(shnum)
1798-            sf = ShareFile(fn)
1799-            sf.add_or_renew_lease(lease_info)
1800-
1801-        for shnum in sharenums:
1802-            share = self.backend.get_share(storage_index, shnum)
1803+        for share in self.backend.get_shares(storage_index):
1804+            alreadygot.add(share.shnum)
1805+            share.add_or_renew_lease(lease_info)
1806 
1807hunk ./src/allmydata/storage/server.py 221
1808-            if not share:
1809-                if (not limited) or (remaining_space >= max_space_per_bucket):
1810-                    # ok! we need to create the new share file.
1811-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1812-                                      max_space_per_bucket, lease_info, canary)
1813-                    bucketwriters[shnum] = bw
1814-                    self._active_writers[bw] = 1
1815-                    if limited:
1816-                        remaining_space -= max_space_per_bucket
1817-                else:
1818-                    # bummer! not enough space to accept this bucket
1819-                    pass
1820+        for shnum in (sharenums - alreadygot):
1821+            if (not limited) or (remaining_space >= max_space_per_bucket):
1822+                # XXX Should the following line occur in the storage server constructor instead? We need to create the new share file here.
1823+                self.backend.set_storage_server(self)
1824+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1825+                                                     max_space_per_bucket, lease_info, canary)
1826+                bucketwriters[shnum] = bw
1827+                self._active_writers[bw] = 1
1828+                if limited:
1829+                    remaining_space -= max_space_per_bucket
1830 
1831hunk ./src/allmydata/storage/server.py 232
1832-            elif share.is_complete():
1833-                # great! we already have it. easy.
1834-                pass
1835-            elif not share.is_complete():
1836-                # Note that we don't create BucketWriters for shnums that
1837-                # have a partial share (in incoming/), so if a second upload
1838-                # occurs while the first is still in progress, the second
1839-                # uploader will use different storage servers.
1840-                pass
1841+        # XXX We should document this later.
1842 
1843         self.add_latency("allocate", time.time() - start)
1844         return alreadygot, bucketwriters
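The rewritten allocation loop above only creates writers for share numbers not already present, decrementing `remaining_space` as it goes. A simplified sketch of that accounting (`make_bucket_writer` stands in for `backend.make_bucket_writer`; the lease and canary plumbing is omitted):

```python
def allocate_buckets(alreadygot, sharenums, remaining_space,
                     max_space_per_bucket, make_bucket_writer):
    """Sketch of the space accounting in the allocation loop above.

    Pass remaining_space=None for an unlimited (e.g. null) backend.
    """
    limited = remaining_space is not None
    bucketwriters = {}
    for shnum in sorted(sharenums - alreadygot):
        if (not limited) or (remaining_space >= max_space_per_bucket):
            bucketwriters[shnum] = make_bucket_writer(shnum)
            if limited:
                remaining_space -= max_space_per_bucket
        # else: not enough space to accept this bucket
    return bucketwriters, remaining_space
```
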
1845hunk ./src/allmydata/storage/server.py 238
1846 
1847     def _iter_share_files(self, storage_index):
1848-        for shnum, filename in self._get_bucket_shares(storage_index):
1849+        for shnum, filename in self._get_shares(storage_index):
1850             f = open(filename, 'rb')
1851             header = f.read(32)
1852             f.close()
1853hunk ./src/allmydata/storage/server.py 318
1854         si_s = si_b2a(storage_index)
1855         log.msg("storage: get_buckets %s" % si_s)
1856         bucketreaders = {} # k: sharenum, v: BucketReader
1857-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1858+        for shnum, filename in self.backend.get_shares(storage_index):
1859             bucketreaders[shnum] = BucketReader(self, filename,
1860                                                 storage_index, shnum)
1861         self.add_latency("get", time.time() - start)
1862hunk ./src/allmydata/storage/server.py 334
1863         # since all shares get the same lease data, we just grab the leases
1864         # from the first share
1865         try:
1866-            shnum, filename = self._get_bucket_shares(storage_index).next()
1867+            shnum, filename = self._get_shares(storage_index).next()
1868             sf = ShareFile(filename)
1869             return sf.get_leases()
1870         except StopIteration:
1871hunk ./src/allmydata/storage/shares.py 1
1872-#! /usr/bin/python
1873-
1874-from allmydata.storage.mutable import MutableShareFile
1875-from allmydata.storage.immutable import ShareFile
1876-
1877-def get_share_file(filename):
1878-    f = open(filename, "rb")
1879-    prefix = f.read(32)
1880-    f.close()
1881-    if prefix == MutableShareFile.MAGIC:
1882-        return MutableShareFile(filename)
1883-    # otherwise assume it's immutable
1884-    return ShareFile(filename)
1885-
1886rmfile ./src/allmydata/storage/shares.py
1887hunk ./src/allmydata/test/common_util.py 20
1888 
1889 def flip_one_bit(s, offset=0, size=None):
1890     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1891-    than offset+size. """
1892+    than offset+size. Return the new string. """
1893     if size is None:
1894         size=len(s)-offset
1895     i = randrange(offset, offset+size)
1896hunk ./src/allmydata/test/test_backends.py 7
1897 
1898 from allmydata.test.common_util import ReallyEqualMixin
1899 
1900-import mock
1901+import mock, os
1902 
1903 # This is the code that we're going to be testing.
1904hunk ./src/allmydata/test/test_backends.py 10
1905-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1906+from allmydata.storage.server import StorageServer
1907+
1908+from allmydata.storage.backends.das.core import DASCore
1909+from allmydata.storage.backends.null.core import NullCore
1910+
1911 
1912 # The following share file contents was generated with
1913 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1914hunk ./src/allmydata/test/test_backends.py 22
1915 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1916 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1917 
1918-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1919+tempdir = 'teststoredir'
1920+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1921+sharefname = os.path.join(sharedirname, '0')
1922 
1923 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1924     @mock.patch('time.time')
1925hunk ./src/allmydata/test/test_backends.py 58
1926         filesystem in only the prescribed ways. """
1927 
1928         def call_open(fname, mode):
1929-            if fname == 'testdir/bucket_counter.state':
1930-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1931-            elif fname == 'testdir/lease_checker.state':
1932-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1933-            elif fname == 'testdir/lease_checker.history':
1934+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1935+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1936+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1937+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1938+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1939                 return StringIO()
1940             else:
1941                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1942hunk ./src/allmydata/test/test_backends.py 124
1943     @mock.patch('__builtin__.open')
1944     def setUp(self, mockopen):
1945         def call_open(fname, mode):
1946-            if fname == 'testdir/bucket_counter.state':
1947-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1948-            elif fname == 'testdir/lease_checker.state':
1949-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1950-            elif fname == 'testdir/lease_checker.history':
1951+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1952+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1953+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1954+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1955+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1956                 return StringIO()
1957         mockopen.side_effect = call_open
1958hunk ./src/allmydata/test/test_backends.py 131
1959-
1960-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1961+        expiration_policy = {'enabled' : False,
1962+                             'mode' : 'age',
1963+                             'override_lease_duration' : None,
1964+                             'cutoff_date' : None,
1965+                             'sharetypes' : None}
1966+        testbackend = DASCore(tempdir, expiration_policy)
1967+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
1968 
1969     @mock.patch('time.time')
1970     @mock.patch('os.mkdir')
1971hunk ./src/allmydata/test/test_backends.py 148
1972         """ Write a new share. """
1973 
1974         def call_listdir(dirname):
1975-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1976-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1977+            self.failUnlessReallyEqual(dirname, sharedirname)
1978+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1979 
1980         mocklistdir.side_effect = call_listdir
1981 
1982hunk ./src/allmydata/test/test_backends.py 178
1983 
1984         sharefile = MockFile()
1985         def call_open(fname, mode):
1986-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1987+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1988             return sharefile
1989 
1990         mockopen.side_effect = call_open
1991hunk ./src/allmydata/test/test_backends.py 200
1992         StorageServer object. """
1993 
1994         def call_listdir(dirname):
1995-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1996+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1997             return ['0']
1998 
1999         mocklistdir.side_effect = call_listdir
2000}
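The mocked tests in the patch above replace `__builtin__.open` with a `side_effect` function that permits only the crawler state files and fails the test on any other path. The dispatcher itself, sketched independently of the mock library (`TEMPDIR` is a stand-in for the `tempdir` used in test_backends.py):

```python
import io
import os

TEMPDIR = 'teststoredir'

def call_open(fname, mode):
    """Whitelist opens the way the mocked tests above do.

    Missing state files raise ENOENT, as on a freshly created store;
    any unexpected path is an error in the code under test.
    """
    if fname in (os.path.join(TEMPDIR, 'bucket_counter.state'),
                 os.path.join(TEMPDIR, 'lease_checker.state')):
        raise IOError(2, "No such file or directory: '%s'" % fname)
    elif fname == os.path.join(TEMPDIR, 'lease_checker.history'):
        return io.StringIO()
    raise AssertionError("tried to open '%s' in mode '%s'" % (fname, mode))
```

In the real tests this function is wired in with `mockopen.side_effect = call_open` under `@mock.patch('__builtin__.open')`.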
2001[checkpoint patch
2002wilcoxjg@gmail.com**20110626165715
2003 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
2004] {
2005hunk ./src/allmydata/storage/backends/das/core.py 21
2006 from allmydata.storage.lease import LeaseInfo
2007 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2008      create_mutable_sharefile
2009-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2010+from allmydata.storage.immutable import BucketWriter, BucketReader
2011 from allmydata.storage.crawler import FSBucketCountingCrawler
2012 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2013 
2014hunk ./src/allmydata/storage/backends/das/core.py 27
2015 from zope.interface import implements
2016 
2017+# $SHARENUM matches this regex:
2018+NUM_RE=re.compile("^[0-9]+$")
2019+
2020 class DASCore(Backend):
2021     implements(IStorageBackend)
2022     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2023hunk ./src/allmydata/storage/backends/das/core.py 80
2024         return fileutil.get_available_space(self.storedir, self.reserved_space)
2025 
2026     def get_shares(self, storage_index):
2027-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2028+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
2029         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2030         try:
2031             for f in os.listdir(finalstoragedir):
2032hunk ./src/allmydata/storage/backends/das/core.py 86
2033                 if NUM_RE.match(f):
2034                     filename = os.path.join(finalstoragedir, f)
2035-                    yield FSBShare(filename, int(f))
2036+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2037         except OSError:
2038             # Commonly caused by there being no buckets at all.
2039             pass
2040hunk ./src/allmydata/storage/backends/das/core.py 95
2041         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2042         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2043         return bw
2044+
2045+    def set_storage_server(self, ss):
2046+        self.ss = ss
2047         
2048 
2049 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2050hunk ./src/allmydata/storage/server.py 29
2051 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2052 # base-32 chars).
2053 
2054-# $SHARENUM matches this regex:
2055-NUM_RE=re.compile("^[0-9]+$")
2056 
2057 class StorageServer(service.MultiService, Referenceable):
2058     implements(RIStorageServer, IStatsProducer)
2059}
2060[checkpoint4
2061wilcoxjg@gmail.com**20110628202202
2062 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2063] {
2064hunk ./src/allmydata/storage/backends/das/core.py 96
2065         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2066         return bw
2067 
2068+    def make_bucket_reader(self, share):
2069+        return BucketReader(self.ss, share)
2070+
2071     def set_storage_server(self, ss):
2072         self.ss = ss
2073         
2074hunk ./src/allmydata/storage/backends/das/core.py 138
2075         must not be None. """
2076         precondition((max_size is not None) or (not create), max_size, create)
2077         self.shnum = shnum
2078+        self.storage_index = storageindex
2079         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2080         self._max_size = max_size
2081         if create:
2082hunk ./src/allmydata/storage/backends/das/core.py 173
2083             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2084         self._data_offset = 0xc
2085 
2086+    def get_shnum(self):
2087+        return self.shnum
2088+
2089     def unlink(self):
2090         os.unlink(self.fname)
2091 
2092hunk ./src/allmydata/storage/backends/null/core.py 2
2093 from allmydata.storage.backends.base import Backend
2094+from allmydata.storage.immutable import BucketWriter, BucketReader
2095 
2096 class NullCore(Backend):
2097     def __init__(self):
2098hunk ./src/allmydata/storage/backends/null/core.py 17
2099     def get_share(self, storage_index, sharenum):
2100         return None
2101 
2102-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2103-        return NullBucketWriter()
2104+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2105+       
2106+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2107+
2108+    def set_storage_server(self, ss):
2109+        self.ss = ss
2110+
2111+class ImmutableShare:
2112+    sharetype = "immutable"
2113+
2114+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2115+        """ If max_size is not None then I won't allow more than
2116+        max_size to be written to me. If create=True then max_size
2117+        must not be None. """
2118+        precondition((max_size is not None) or (not create), max_size, create)
2119+        self.shnum = shnum
2120+        self.storage_index = storageindex
2121+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2122+        self._max_size = max_size
2123+        if create:
2124+            # touch the file, so later callers will see that we're working on
2125+            # it. Also construct the metadata.
2126+            assert not os.path.exists(self.fname)
2127+            fileutil.make_dirs(os.path.dirname(self.fname))
2128+            f = open(self.fname, 'wb')
2129+            # The second field -- the four-byte share data length -- is no
2130+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2131+            # there in case someone downgrades a storage server from >=
2132+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2133+            # server to another, etc. We do saturation -- a share data length
2134+            # larger than 2**32-1 (what can fit into the field) is marked as
2135+            # the largest length that can fit into the field. That way, even
2136+            # if this does happen, the old < v1.3.0 server will still allow
2137+            # clients to read the first part of the share.
2138+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2139+            f.close()
2140+            self._lease_offset = max_size + 0x0c
2141+            self._num_leases = 0
2142+        else:
2143+            f = open(self.fname, 'rb')
2144+            filesize = os.path.getsize(self.fname)
2145+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2146+            f.close()
2147+            if version != 1:
2148+                msg = "sharefile %s had version %d but we wanted 1" % \
2149+                      (self.fname, version)
2150+                raise UnknownImmutableContainerVersionError(msg)
2151+            self._num_leases = num_leases
2152+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2153+        self._data_offset = 0xc
2154+
2155+    def get_shnum(self):
2156+        return self.shnum
2157+
2158+    def unlink(self):
2159+        os.unlink(self.fname)
2160+
2161+    def read_share_data(self, offset, length):
2162+        precondition(offset >= 0)
2163+        # Reads beyond the end of the data are truncated. Reads that start
2164+        # beyond the end of the data return an empty string.
2165+        seekpos = self._data_offset+offset
2166+        fsize = os.path.getsize(self.fname)
2167+        actuallength = max(0, min(length, fsize-seekpos))
2168+        if actuallength == 0:
2169+            return ""
2170+        f = open(self.fname, 'rb')
2171+        f.seek(seekpos)
2172+        return f.read(actuallength)
2173+
2174+    def write_share_data(self, offset, data):
2175+        length = len(data)
2176+        precondition(offset >= 0, offset)
2177+        if self._max_size is not None and offset+length > self._max_size:
2178+            raise DataTooLargeError(self._max_size, offset, length)
2179+        f = open(self.fname, 'rb+')
2180+        real_offset = self._data_offset+offset
2181+        f.seek(real_offset)
2182+        assert f.tell() == real_offset
2183+        f.write(data)
2184+        f.close()
2185+
2186+    def _write_lease_record(self, f, lease_number, lease_info):
2187+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2188+        f.seek(offset)
2189+        assert f.tell() == offset
2190+        f.write(lease_info.to_immutable_data())
2191+
2192+    def _read_num_leases(self, f):
2193+        f.seek(0x08)
2194+        (num_leases,) = struct.unpack(">L", f.read(4))
2195+        return num_leases
2196+
2197+    def _write_num_leases(self, f, num_leases):
2198+        f.seek(0x08)
2199+        f.write(struct.pack(">L", num_leases))
2200+
2201+    def _truncate_leases(self, f, num_leases):
2202+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2203+
2204+    def get_leases(self):
2205+        """Yields a LeaseInfo instance for all leases."""
2206+        f = open(self.fname, 'rb')
2207+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2208+        f.seek(self._lease_offset)
2209+        for i in range(num_leases):
2210+            data = f.read(self.LEASE_SIZE)
2211+            if data:
2212+                yield LeaseInfo().from_immutable_data(data)
2213+
2214+    def add_lease(self, lease_info):
2215+        f = open(self.fname, 'rb+')
2216+        num_leases = self._read_num_leases(f)
2217+        self._write_lease_record(f, num_leases, lease_info)
2218+        self._write_num_leases(f, num_leases+1)
2219+        f.close()
2220+
2221+    def renew_lease(self, renew_secret, new_expire_time):
2222+        for i,lease in enumerate(self.get_leases()):
2223+            if constant_time_compare(lease.renew_secret, renew_secret):
2224+                # yup. See if we need to update the owner time.
2225+                if new_expire_time > lease.expiration_time:
2226+                    # yes
2227+                    lease.expiration_time = new_expire_time
2228+                    f = open(self.fname, 'rb+')
2229+                    self._write_lease_record(f, i, lease)
2230+                    f.close()
2231+                return
2232+        raise IndexError("unable to renew non-existent lease")
2233+
2234+    def add_or_renew_lease(self, lease_info):
2235+        try:
2236+            self.renew_lease(lease_info.renew_secret,
2237+                             lease_info.expiration_time)
2238+        except IndexError:
2239+            self.add_lease(lease_info)
2240+
2241+
2242+    def cancel_lease(self, cancel_secret):
2243+        """Remove a lease with the given cancel_secret. If the last lease is
2244+        cancelled, the file will be removed. Return the number of bytes that
2245+        were freed (by truncating the list of leases, and possibly by
+        deleting the file). Raise IndexError if there was no lease with the
+        given cancel_secret.
+        """
+
+        leases = list(self.get_leases())
+        num_leases_removed = 0
+        for i,lease in enumerate(leases):
+            if constant_time_compare(lease.cancel_secret, cancel_secret):
+                leases[i] = None
+                num_leases_removed += 1
+        if not num_leases_removed:
+            raise IndexError("unable to find matching lease to cancel")
+        if num_leases_removed:
+            # pack and write out the remaining leases. We write these out in
+            # the same order as they were added, so that if we crash while
+            # doing this, we won't lose any non-cancelled leases.
+            leases = [l for l in leases if l] # remove the cancelled leases
+            f = open(self.fname, 'rb+')
+            for i,lease in enumerate(leases):
+                self._write_lease_record(f, i, lease)
+            self._write_num_leases(f, len(leases))
+            self._truncate_leases(f, len(leases))
+            f.close()
+        space_freed = self.LEASE_SIZE * num_leases_removed
+        if not len(leases):
+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
+            self.unlink()
+        return space_freed
hunk ./src/allmydata/storage/immutable.py 114
 class BucketReader(Referenceable):
     implements(RIBucketReader)
 
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
+    def __init__(self, ss, share):
         self.ss = ss
hunk ./src/allmydata/storage/immutable.py 116
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
+        self._share_file = share
+        self.storage_index = share.storage_index
+        self.shnum = share.shnum
 
     def __repr__(self):
         return "<%s %s %s>" % (self.__class__.__name__,
hunk ./src/allmydata/storage/server.py 316
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self.backend.get_shares(storage_index):
-            bucketreaders[shnum] = BucketReader(self, filename,
-                                                storage_index, shnum)
+        self.backend.set_storage_server(self)
+        for share in self.backend.get_shares(storage_index):
+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
         self.add_latency("get", time.time() - start)
         return bucketreaders
 
hunk ./src/allmydata/test/test_backends.py 25
 tempdir = 'teststoredir'
 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
 sharefname = os.path.join(sharedirname, '0')
+expiration_policy = {'enabled' : False,
+                     'mode' : 'age',
+                     'override_lease_duration' : None,
+                     'cutoff_date' : None,
+                     'sharetypes' : None}
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 43
         tries to read or write to the file system. """
 
         # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
 
         self.failIf(mockisdir.called)
         self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 74
         mockopen.side_effect = call_open
 
         # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
 
         self.failIf(mockisdir.called)
         self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 86
 
 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
     def setUp(self):
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
 
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 136
             elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
         mockopen.side_effect = call_open
-        expiration_policy = {'enabled' : False,
-                             'mode' : 'age',
-                             'override_lease_duration' : None,
-                             'cutoff_date' : None,
-                             'sharetypes' : None}
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
 
}
[checkpoint5
wilcoxjg@gmail.com**20110705034626
 Ignore-this: 255780bd58299b0aa33c027e9d008262
] {
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+from twisted.application import service
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
hunk ./src/allmydata/storage/backends/null/core.py 19
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
         
+        immutableshare = ImmutableShare()
         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
 
     def set_storage_server(self, ss):
hunk ./src/allmydata/storage/backends/null/core.py 28
 class ImmutableShare:
     sharetype = "immutable"
 
-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+    def __init__(self):
         """ If max_size is not None then I won't allow more than
         max_size to be written to me. If create=True then max_size
         must not be None. """
hunk ./src/allmydata/storage/backends/null/core.py 32
-        precondition((max_size is not None) or (not create), max_size, create)
-        self.shnum = shnum
-        self.storage_index = storageindex
-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
-        self._max_size = max_size
-        if create:
-            # touch the file, so later callers will see that we're working on
-            # it. Also construct the metadata.
-            assert not os.path.exists(self.fname)
-            fileutil.make_dirs(os.path.dirname(self.fname))
-            f = open(self.fname, 'wb')
-            # The second field -- the four-byte share data length -- is no
-            # longer used as of Tahoe v1.3.0, but we continue to write it in
-            # there in case someone downgrades a storage server from >=
-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
-            # server to another, etc. We do saturation -- a share data length
-            # larger than 2**32-1 (what can fit into the field) is marked as
-            # the largest length that can fit into the field. That way, even
-            # if this does happen, the old < v1.3.0 server will still allow
-            # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
-            self._lease_offset = max_size + 0x0c
-            self._num_leases = 0
-        else:
-            f = open(self.fname, 'rb')
-            filesize = os.path.getsize(self.fname)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
-            if version != 1:
-                msg = "sharefile %s had version %d but we wanted 1" % \
-                      (self.fname, version)
-                raise UnknownImmutableContainerVersionError(msg)
-            self._num_leases = num_leases
-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
-        self._data_offset = 0xc
+        pass
 
     def get_shnum(self):
         return self.shnum
hunk ./src/allmydata/storage/backends/null/core.py 54
         return f.read(actuallength)
 
     def write_share_data(self, offset, data):
-        length = len(data)
-        precondition(offset >= 0, offset)
-        if self._max_size is not None and offset+length > self._max_size:
-            raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.fname, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        pass
 
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/null/core.py 84
             if data:
                 yield LeaseInfo().from_immutable_data(data)
 
-    def add_lease(self, lease_info):
-        f = open(self.fname, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+    def add_lease(self, lease):
+        pass
 
     def renew_lease(self, renew_secret, new_expire_time):
         for i,lease in enumerate(self.get_leases()):
hunk ./src/allmydata/test/test_backends.py 32
                      'sharetypes' : None}
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
-    @mock.patch('time.time')
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
-        """ This tests whether a server instance can be constructed
-        with a null backend. The server instance fails the test if it
-        tries to read or write to the file system. """
-
-        # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
-
-        self.failIf(mockisdir.called)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockmkdir.called)
-
-        # You passed!
-
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 53
                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
-        # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
-
-        self.failIf(mockisdir.called)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockmkdir.called)
-        self.failIf(mocktime.called)
-
-        # You passed!
-
-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
-    def setUp(self):
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
-
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
-        """ Write a new share. """
-
-        # Now begin the test.
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
-        bs[0].remote_write(0, 'a')
-        self.failIf(mockisdir.called)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockmkdir.called)
+        def call_isdir(fname):
+            if fname == os.path.join(tempdir,'shares'):
+                return True
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return True
+            else:
+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
+        mockisdir.side_effect = call_isdir
 
hunk ./src/allmydata/test/test_backends.py 62
-    @mock.patch('os.path.exists')
-    @mock.patch('os.path.getsize')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
-        """ This tests whether the code correctly finds and reads
-        shares written out by old (Tahoe-LAFS <= v1.8.2)
-        servers. There is a similar test in test_download, but that one
-        is from the perspective of the client and exercises a deeper
-        stack of code. This one is for exercising just the
-        StorageServer object. """
+        def call_mkdir(fname, mode):
+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
+            self.failUnlessEqual(0777, mode)
+            if fname == tempdir:
+                return None
+            elif fname == os.path.join(tempdir,'shares'):
+                return None
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return None
+            else:
+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
+        mockmkdir.side_effect = call_mkdir
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 76
-        bs = self.s.remote_get_buckets('teststorage_index')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
 
hunk ./src/allmydata/test/test_backends.py 78
-        self.failUnlessEqual(len(bs), 0)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockgetsize.called)
-        self.failIf(mockexists.called)
+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
 
 
 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
hunk ./src/allmydata/test/test_backends.py 193
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
 
+
+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a file system backend instance can be
+        constructed. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
+
+        def call_open(fname, mode):
+            if fname == os.path.join(tempdir,'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
+                return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
+        mockopen.side_effect = call_open
+
+        def call_isdir(fname):
+            if fname == os.path.join(tempdir,'shares'):
+                return True
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return True
+            else:
+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
+        mockisdir.side_effect = call_isdir
+
+        def call_mkdir(fname, mode):
+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
+            self.failUnlessEqual(0777, mode)
+            if fname == tempdir:
+                return None
+            elif fname == os.path.join(tempdir,'shares'):
+                return None
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return None
+            else:
+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
+        mockmkdir.side_effect = call_mkdir
+
+        # Now begin the test.
+        DASCore('teststoredir', expiration_policy)
+
+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
}
[checkpoint 6
wilcoxjg@gmail.com**20110706190824
 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
] {
hunk ./src/allmydata/interfaces.py 100
                          renew_secret=LeaseRenewSecret,
                          cancel_secret=LeaseCancelSecret,
                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
-                         allocated_size=Offset, canary=Referenceable):
+                         allocated_size=Offset,
+                         canary=Referenceable):
         """
hunk ./src/allmydata/interfaces.py 103
-        @param storage_index: the index of the bucket to be created or
+        @param storage_index: the index of the shares to be created or
                               increfed.
hunk ./src/allmydata/interfaces.py 105
-        @param sharenums: these are the share numbers (probably between 0 and
-                          99) that the sender is proposing to store on this
-                          server.
-        @param renew_secret: This is the secret used to protect bucket refresh
+        @param renew_secret: This is the secret used to protect shares refresh
                              This secret is generated by the client and
                              stored for later comparison by the server. Each
                              server is given a different secret.
hunk ./src/allmydata/interfaces.py 109
-        @param cancel_secret: Like renew_secret, but protects bucket decref.
-        @param canary: If the canary is lost before close(), the bucket is
+        @param cancel_secret: Like renew_secret, but protects shares decref.
+        @param sharenums: these are the share numbers (probably between 0 and
+                          99) that the sender is proposing to store on this
+                          server.
+        @param allocated_size: XXX The size of the shares the client wishes to store.
+        @param canary: If the canary is lost before close(), the shares are
                       deleted.
hunk ./src/allmydata/interfaces.py 116
+
         @return: tuple of (alreadygot, allocated), where alreadygot is what we
                  already have and allocated is what we hereby agree to accept.
                  New leases are added for shares in both lists.
hunk ./src/allmydata/interfaces.py 128
                   renew_secret=LeaseRenewSecret,
                   cancel_secret=LeaseCancelSecret):
         """
-        Add a new lease on the given bucket. If the renew_secret matches an
+        Add a new lease on the given shares. If the renew_secret matches an
         existing lease, that lease will be renewed instead. If there is no
         bucket for the given storage_index, return silently. (note that in
         tahoe-1.3.0 and earlier, IndexError was raised if there was no
hunk ./src/allmydata/storage/server.py 17
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
 
-from zope.interface import implements
-
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/test/test_backends.py 6
 from StringIO import StringIO
 
 from allmydata.test.common_util import ReallyEqualMixin
+from allmydata.util.assertutil import _assert
 
 import mock, os
 
hunk ./src/allmydata/test/test_backends.py 92
                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
             elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
+            else:
+                _assert(False, "The tester code doesn't recognize this case.") 
+
         mockopen.side_effect = call_open
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
hunk ./src/allmydata/test/test_backends.py 109
 
         def call_listdir(dirname):
             self.failUnlessReallyEqual(dirname, sharedirname)
-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
 
         mocklistdir.side_effect = call_listdir
 
hunk ./src/allmydata/test/test_backends.py 113
+        def call_isdir(dirname):
+            self.failUnlessReallyEqual(dirname, sharedirname)
+            return True
+
+        mockisdir.side_effect = call_isdir
+
+        def call_mkdir(dirname, permissions):
+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2715+                self.fail()
2716+            else:
2717+                return True
2718+
2719+        mockmkdir.side_effect = call_mkdir
2720+
2721         class MockFile:
2722             def __init__(self):
2723                 self.buffer = ''
2724hunk ./src/allmydata/test/test_backends.py 156
2725             return sharefile
2726 
2727         mockopen.side_effect = call_open
2728+
2729         # Now begin the test.
2730         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2731         bs[0].remote_write(0, 'a')
2732hunk ./src/allmydata/test/test_backends.py 161
2733         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2734+       
2735+        # Now test the allocated_size method.
2736+        spaceint = self.s.allocated_size()
2737 
2738     @mock.patch('os.path.exists')
2739     @mock.patch('os.path.getsize')
2740}
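The checkpoints above hard-code the share directory `teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a`. That path follows Tahoe's two-level share layout: the base32-encoded storage index nested under its own two-character prefix directory. A rough stand-in for `storage_index_to_dir` (assuming Tahoe's `base32.b2a` agrees with the stdlib encoder once lowercased and unpadded):

```python
import base64
import os

def si_b2a(storage_index):
    # Stand-in for allmydata.util.base32.b2a: RFC 3548 base32, lowercased,
    # with the '=' padding stripped. (Assumption: Tahoe's own encoder and
    # the stdlib agree for whole-byte inputs.)
    return base64.b32encode(storage_index).decode('ascii').lower().rstrip('=')

def storage_index_to_dir(storage_index):
    # Shares live under a two-character prefix directory followed by the
    # full encoded storage index, e.g. shares/or/orsxg5dt...
    sia = si_b2a(storage_index)
    return os.path.join(sia[:2], sia)

# The storage index used throughout these tests:
assert si_b2a(b'teststorage_index') == 'orsxg5dtorxxeylhmvpws3temv4a'
```

This is why `call_open` in the tests expects `os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0')` when share 0 of `'teststorage_index'` is written.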
2741[checkpoint 7
2742wilcoxjg@gmail.com**20110706200820
2743 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2744] hunk ./src/allmydata/test/test_backends.py 164
2745         
2746         # Now test the allocated_size method.
2747         spaceint = self.s.allocated_size()
2748+        self.failUnlessReallyEqual(spaceint, 1)
2749 
2750     @mock.patch('os.path.exists')
2751     @mock.patch('os.path.getsize')
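checkpoint 7 asserts `allocated_size() == 1` after allocating a single one-byte bucket. The server-side accounting is just a sum over the open bucket writers; a minimal sketch (class names here are hypothetical, assumed from the test rather than taken from server.py):

```python
class MockBucketWriter:
    # Hypothetical stand-in for a BucketWriter: it only remembers how much
    # space was reserved for it at allocation time.
    def __init__(self, max_space):
        self._max_space = max_space

    def allocated_size(self):
        return self._max_space

class AllocationTracker:
    # Sketch of the server-side accounting; the real StorageServer keeps
    # its open writers in a weakref.WeakKeyDictionary named _active_writers.
    def __init__(self):
        self._active_writers = {}

    def allocated_size(self):
        return sum(bw.allocated_size() for bw in self._active_writers)

tracker = AllocationTracker()
tracker._active_writers[MockBucketWriter(1)] = 1  # one 1-byte bucket, as in the test
assert tracker.allocated_size() == 1
```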
2752[checkpoint8
2753wilcoxjg@gmail.com**20110706223126
2754 Ignore-this: 97336180883cb798b16f15411179f827
2755   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2756] hunk ./src/allmydata/test/test_backends.py 32
2757                      'cutoff_date' : None,
2758                      'sharetypes' : None}
2759 
2760+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2761+    def setUp(self):
2762+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2763+
2764+    @mock.patch('os.mkdir')
2765+    @mock.patch('__builtin__.open')
2766+    @mock.patch('os.listdir')
2767+    @mock.patch('os.path.isdir')
2768+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2769+        """ Write a new share. """
2770+
2771+        # Now begin the test.
2772+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2773+        bs[0].remote_write(0, 'a')
2774+        self.failIf(mockisdir.called)
2775+        self.failIf(mocklistdir.called)
2776+        self.failIf(mockopen.called)
2777+        self.failIf(mockmkdir.called)
2778+
2779 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2780     @mock.patch('time.time')
2781     @mock.patch('os.mkdir')
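The `TestServerNullBackend` case added here relies on a backend that swallows writes without touching the OS, so the `failIf(...called)` checks can prove the server core itself makes no filesystem calls. A sketch of the same pattern with a hypothetical no-op backend (the real `NullCore` interface is assumed from this test, not from the backend module):

```python
from unittest import mock

class NullBucketWriter:
    def remote_write(self, offset, data):
        pass  # discard the data; "unlimited space"

class NullBackend:
    # A do-nothing backend in the spirit of NullCore: it accepts writes and
    # drops them, so allocation paths can be exercised with no filesystem.
    def make_bucket_writer(self, storage_index, shnum, max_space, lease_info, canary):
        return NullBucketWriter()

with mock.patch('os.mkdir') as mockmkdir, mock.patch('os.listdir') as mocklistdir:
    bw = NullBackend().make_bucket_writer('si', 0, 1, None, None)
    bw.remote_write(0, 'a')
    # The null backend never hits the OS, mirroring the failIf() checks in
    # TestServerNullBackend.test_write_share.
    assert not mockmkdir.called
    assert not mocklistdir.called
```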
2782[checkpoint 9
2783wilcoxjg@gmail.com**20110707042942
2784 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2785] {
2786hunk ./src/allmydata/storage/backends/das/core.py 88
2787                     filename = os.path.join(finalstoragedir, f)
2788                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2789         except OSError:
2790-            # Commonly caused by there being no buckets at all.
2791+            # Commonly caused by there being no shares at all.
2792             pass
2793         
2794     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2795hunk ./src/allmydata/storage/backends/das/core.py 141
2796         self.storage_index = storageindex
2797         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2798         self._max_size = max_size
2799+        self.incomingdir = os.path.join(sharedir, 'incoming')
2800+        si_dir = storage_index_to_dir(storageindex)
2801+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2802+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2803         if create:
2804             # touch the file, so later callers will see that we're working on
2805             # it. Also construct the metadata.
2806hunk ./src/allmydata/storage/backends/das/core.py 177
2807             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2808         self._data_offset = 0xc
2809 
2810+    def close(self):
2811+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2812+        fileutil.rename(self.incominghome, self.finalhome)
2813+        try:
2814+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2815+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2816+            # these directories lying around forever, but the delete might
2817+            # fail if we're working on another share for the same storage
2818+            # index (like ab/abcde/5). The alternative approach would be to
2819+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2820+            # ShareWriter), each of which is responsible for a single
2821+            # directory on disk, and have them use reference counting of
2822+            # their children to know when they should do the rmdir. This
2823+            # approach is simpler, but relies on os.rmdir refusing to delete
2824+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2825+            os.rmdir(os.path.dirname(self.incominghome))
2826+            # we also delete the grandparent (prefix) directory, .../ab ,
2827+            # again to avoid leaving directories lying around. This might
2828+            # fail if there is another bucket open that shares a prefix (like
2829+            # ab/abfff).
2830+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2831+            # we leave the great-grandparent (incoming/) directory in place.
2832+        except EnvironmentError:
2833+            # ignore the "can't rmdir because the directory is not empty"
2834+            # exceptions, those are normal consequences of the
2835+            # above-mentioned conditions.
2836+            pass
2838+       
2839+    def stat(self):
2840+        return os.stat(self.finalhome)[stat.ST_SIZE]
2841+
2842     def get_shnum(self):
2843         return self.shnum
2844 
2845hunk ./src/allmydata/storage/immutable.py 7
2846 
2847 from zope.interface import implements
2848 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2849-from allmydata.util import base32, fileutil, log
2850+from allmydata.util import base32, log
2851 from allmydata.util.assertutil import precondition
2852 from allmydata.util.hashutil import constant_time_compare
2853 from allmydata.storage.lease import LeaseInfo
2854hunk ./src/allmydata/storage/immutable.py 44
2855     def remote_close(self):
2856         precondition(not self.closed)
2857         start = time.time()
2858-
2859-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2860-        fileutil.rename(self.incominghome, self.finalhome)
2861-        try:
2862-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2863-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2864-            # these directories lying around forever, but the delete might
2865-            # fail if we're working on another share for the same storage
2866-            # index (like ab/abcde/5). The alternative approach would be to
2867-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2868-            # ShareWriter), each of which is responsible for a single
2869-            # directory on disk, and have them use reference counting of
2870-            # their children to know when they should do the rmdir. This
2871-            # approach is simpler, but relies on os.rmdir refusing to delete
2872-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2873-            os.rmdir(os.path.dirname(self.incominghome))
2874-            # we also delete the grandparent (prefix) directory, .../ab ,
2875-            # again to avoid leaving directories lying around. This might
2876-            # fail if there is another bucket open that shares a prefix (like
2877-            # ab/abfff).
2878-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2879-            # we leave the great-grandparent (incoming/) directory in place.
2880-        except EnvironmentError:
2881-            # ignore the "can't rmdir because the directory is not empty"
2882-            # exceptions, those are normal consequences of the
2883-            # above-mentioned conditions.
2884-            pass
2885+        self._sharefile.close()
2886         self._sharefile = None
2887         self.closed = True
2888         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2889hunk ./src/allmydata/storage/immutable.py 49
2890 
2891-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2892+        filelen = self._sharefile.stat()
2893         self.ss.bucket_writer_closed(self, filelen)
2894         self.ss.add_latency("close", time.time() - start)
2895         self.ss.count("close")
2896hunk ./src/allmydata/storage/server.py 45
2897         self._active_writers = weakref.WeakKeyDictionary()
2898         self.backend = backend
2899         self.backend.setServiceParent(self)
2900+        self.backend.set_storage_server(self)
2901         log.msg("StorageServer created", facility="tahoe.storage")
2902 
2903         self.latencies = {"allocate": [], # immutable
2904hunk ./src/allmydata/storage/server.py 220
2905 
2906         for shnum in (sharenums - alreadygot):
2907             if (not limited) or (remaining_space >= max_space_per_bucket):
2908-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2909-                self.backend.set_storage_server(self)
2910                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2911                                                      max_space_per_bucket, lease_info, canary)
2912                 bucketwriters[shnum] = bw
2913hunk ./src/allmydata/test/test_backends.py 117
2914         mockopen.side_effect = call_open
2915         testbackend = DASCore(tempdir, expiration_policy)
2916         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2917-
2918+   
2919+    @mock.patch('allmydata.util.fileutil.get_available_space')
2920     @mock.patch('time.time')
2921     @mock.patch('os.mkdir')
2922     @mock.patch('__builtin__.open')
2923hunk ./src/allmydata/test/test_backends.py 124
2924     @mock.patch('os.listdir')
2925     @mock.patch('os.path.isdir')
2926-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2927+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2928+                             mockget_available_space):
2929         """ Write a new share. """
2930 
2931         def call_listdir(dirname):
2932hunk ./src/allmydata/test/test_backends.py 148
2933 
2934         mockmkdir.side_effect = call_mkdir
2935 
2936+        def call_get_available_space(storedir, reserved_space):
2937+            self.failUnlessReallyEqual(storedir, tempdir)
2938+            return 1
2939+
2940+        mockget_available_space.side_effect = call_get_available_space
2941+
2942         class MockFile:
2943             def __init__(self):
2944                 self.buffer = ''
2945hunk ./src/allmydata/test/test_backends.py 188
2946         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2947         bs[0].remote_write(0, 'a')
2948         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2949-       
2950+
2951+        # What happens when there's not enough space for the client's request?
2952+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2953+
2954         # Now test the allocated_size method.
2955         spaceint = self.s.allocated_size()
2956         self.failUnlessReallyEqual(spaceint, 1)
2957}
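checkpoint 9 moves the incoming-to-final promotion out of `BucketWriter.remote_close()` into the backend's `ImmutableShare.close()`. The mechanism is sketched standalone below in modern Python (`close_share` is a hypothetical helper, not Tahoe's API; the real code uses `fileutil.make_dirs`/`fileutil.rename` and catches `EnvironmentError`):

```python
import os
import shutil
import tempfile

def close_share(incominghome, finalhome):
    # Move a completed share from incoming/ to its final home, then prune
    # the now-empty parent (.../ab/abcde) and prefix (.../ab) directories.
    # os.rmdir refuses to delete a non-empty directory, which is exactly
    # what makes this best-effort cleanup safe; a recursive delete would not be.
    os.makedirs(os.path.dirname(finalhome), exist_ok=True)
    os.rename(incominghome, finalhome)
    try:
        os.rmdir(os.path.dirname(incominghome))                   # .../ab/abcde
        os.rmdir(os.path.dirname(os.path.dirname(incominghome)))  # .../ab
    except OSError:
        # Another share with the same prefix is still in flight; leaving
        # the directory behind is the expected outcome.
        pass

# Demo: a lone incoming share gets promoted and its directories pruned.
root = tempfile.mkdtemp()
incoming = os.path.join(root, 'incoming', 'ab', 'abcde', '4')
final = os.path.join(root, 'shares', 'ab', 'abcde', '4')
os.makedirs(os.path.dirname(incoming))
open(incoming, 'wb').close()
close_share(incoming, final)
assert os.path.exists(final)
assert not os.path.exists(os.path.dirname(incoming))  # .../ab/abcde was pruned
shutil.rmtree(root)
```

When a sibling share (say `ab/abcde/5`) is still being uploaded, the first `os.rmdir` raises and both prunes are skipped, leaving the in-flight share untouched.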
2958[checkpoint10
2959wilcoxjg@gmail.com**20110707172049
2960 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2961] {
2962hunk ./src/allmydata/test/test_backends.py 20
2963 # The following share file contents were generated with
2964 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2965 # with share data == 'a'.
2966-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2967+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2968+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2969+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2970 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2971 
2972hunk ./src/allmydata/test/test_backends.py 25
2973+testnodeid = 'testnodeidxxxxxxxxxx'
2974 tempdir = 'teststoredir'
2975 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2976 sharefname = os.path.join(sharedirname, '0')
2977hunk ./src/allmydata/test/test_backends.py 37
2978 
2979 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2980     def setUp(self):
2981-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2982+        self.s = StorageServer(testnodeid, backend=NullCore())
2983 
2984     @mock.patch('os.mkdir')
2985     @mock.patch('__builtin__.open')
2986hunk ./src/allmydata/test/test_backends.py 99
2987         mockmkdir.side_effect = call_mkdir
2988 
2989         # Now begin the test.
2990-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2991+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2992 
2993         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2994 
2995hunk ./src/allmydata/test/test_backends.py 119
2996 
2997         mockopen.side_effect = call_open
2998         testbackend = DASCore(tempdir, expiration_policy)
2999-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
3000-   
3001+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3002+       
3003+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3004     @mock.patch('allmydata.util.fileutil.get_available_space')
3005     @mock.patch('time.time')
3006     @mock.patch('os.mkdir')
3007hunk ./src/allmydata/test/test_backends.py 129
3008     @mock.patch('os.listdir')
3009     @mock.patch('os.path.isdir')
3010     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3011-                             mockget_available_space):
3012+                             mockget_available_space, mockget_shares):
3013         """ Write a new share. """
3014 
3015         def call_listdir(dirname):
3016hunk ./src/allmydata/test/test_backends.py 139
3017         mocklistdir.side_effect = call_listdir
3018 
3019         def call_isdir(dirname):
3020+            #XXX Should there be any other tests here?
3021             self.failUnlessReallyEqual(dirname, sharedirname)
3022             return True
3023 
3024hunk ./src/allmydata/test/test_backends.py 159
3025 
3026         mockget_available_space.side_effect = call_get_available_space
3027 
3028+        mocktime.return_value = 0
3029+        class MockShare:
3030+            def __init__(self):
3031+                self.shnum = 1
3032+               
3033+            def add_or_renew_lease(elf, lease_info):
3034+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3035+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3036+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3037+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3038+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3039+               
3040+
3041+        share = MockShare()
3042+        def call_get_shares(storageindex):
3043+            return [share]
3044+
3045+        mockget_shares.side_effect = call_get_shares
3046+
3047         class MockFile:
3048             def __init__(self):
3049                 self.buffer = ''
3050hunk ./src/allmydata/test/test_backends.py 199
3051             def tell(self):
3052                 return self.pos
3053 
3054-        mocktime.return_value = 0
3055 
3056         sharefile = MockFile()
3057         def call_open(fname, mode):
3058}
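checkpoint10 rewrites the fixture so the renew and cancel secrets are named instead of inlined in `share_data`. The fixture bytes decode as a 12-byte header of three big-endian u32s, followed by one byte of share data and a single lease record whose expiration is 31 days past the mocked `time.time() == 0`. The field meanings below are a reading of the fixture, not taken from `ShareFile` itself:

```python
import struct

renew_secret = b'x' * 32
cancel_secret = b'y' * 32

# Header: three big-endian u32s. Reading them as (version=1, data_size=1,
# num_leases=1) fits a 1-byte share with one lease, but that labeling is
# an assumption about the v1 ShareFile layout.
header = struct.pack('>LLL', 1, 1, 1)

# Lease record pieces: owner number, the two 32-byte secrets, and the
# expiration time (31 days, since the test mocks time.time() to 0).
owner_num = struct.pack('>L', 0)
expiration = struct.pack('>L', 31 * 24 * 60 * 60)

share_data = b'a' + owner_num + renew_secret + cancel_secret + expiration
share_file_data = header + share_data

assert expiration == b'\x00(\xde\x80'  # the trailing bytes in the fixture
assert share_file_data == (b'\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01'
                           b'a\x00\x00\x00\x00' + b'x' * 32 + b'y' * 32
                           + b'\x00(\xde\x80')
```

This also explains the `MockShare.add_or_renew_lease` assertions added in this checkpoint: they check exactly these lease fields against `mocktime() + 31*24*60*60`.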
3059[jacp 11
3060wilcoxjg@gmail.com**20110708213919
3061 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3062] {
3063hunk ./src/allmydata/storage/backends/das/core.py 144
3064         self.incomingdir = os.path.join(sharedir, 'incoming')
3065         si_dir = storage_index_to_dir(storageindex)
3066         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3067+        #XXX  self.fname and self.finalhome need to be resolved/merged.
3068         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3069         if create:
3070             # touch the file, so later callers will see that we're working on
3071hunk ./src/allmydata/storage/backends/das/core.py 208
3072         pass
3073         
3074     def stat(self):
3075-        return os.stat(self.finalhome)[stat.ST_SIZE]
3076+        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3077 
3078     def get_shnum(self):
3079         return self.shnum
3080hunk ./src/allmydata/storage/immutable.py 44
3081     def remote_close(self):
3082         precondition(not self.closed)
3083         start = time.time()
3084+
3085         self._sharefile.close()
3086hunk ./src/allmydata/storage/immutable.py 46
3087+        filelen = self._sharefile.stat()
3088         self._sharefile = None
3089hunk ./src/allmydata/storage/immutable.py 48
3090+
3091         self.closed = True
3092         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3093 
3094hunk ./src/allmydata/storage/immutable.py 52
3095-        filelen = self._sharefile.stat()
3096         self.ss.bucket_writer_closed(self, filelen)
3097         self.ss.add_latency("close", time.time() - start)
3098         self.ss.count("close")
3099hunk ./src/allmydata/storage/server.py 220
3100 
3101         for shnum in (sharenums - alreadygot):
3102             if (not limited) or (remaining_space >= max_space_per_bucket):
3103-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3104-                                                     max_space_per_bucket, lease_info, canary)
3105+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3106                 bucketwriters[shnum] = bw
3107                 self._active_writers[bw] = 1
3108                 if limited:
3109hunk ./src/allmydata/test/test_backends.py 20
3110 # The following share file contents were generated with
3111 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3112 # with share data == 'a'.
3113-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3114-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3115+renew_secret  = 'x'*32
3116+cancel_secret = 'y'*32
3117 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3118 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3119 
3120hunk ./src/allmydata/test/test_backends.py 27
3121 testnodeid = 'testnodeidxxxxxxxxxx'
3122 tempdir = 'teststoredir'
3123-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3124-sharefname = os.path.join(sharedirname, '0')
3125+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3126+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3127+shareincomingname = os.path.join(sharedirincomingname, '0')
3128+sharefname = os.path.join(sharedirfinalname, '0')
3129+
3130 expiration_policy = {'enabled' : False,
3131                      'mode' : 'age',
3132                      'override_lease_duration' : None,
3133hunk ./src/allmydata/test/test_backends.py 123
3134         mockopen.side_effect = call_open
3135         testbackend = DASCore(tempdir, expiration_policy)
3136         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3137-       
3138+
3139+    @mock.patch('allmydata.util.fileutil.rename')
3140+    @mock.patch('allmydata.util.fileutil.make_dirs')
3141+    @mock.patch('os.path.exists')
3142+    @mock.patch('os.stat')
3143     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3144     @mock.patch('allmydata.util.fileutil.get_available_space')
3145     @mock.patch('time.time')
3146hunk ./src/allmydata/test/test_backends.py 136
3147     @mock.patch('os.listdir')
3148     @mock.patch('os.path.isdir')
3149     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3150-                             mockget_available_space, mockget_shares):
3151+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3152+                             mockmake_dirs, mockrename):
3153         """ Write a new share. """
3154 
3155         def call_listdir(dirname):
3156hunk ./src/allmydata/test/test_backends.py 141
3157-            self.failUnlessReallyEqual(dirname, sharedirname)
3158+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3159             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3160 
3161         mocklistdir.side_effect = call_listdir
3162hunk ./src/allmydata/test/test_backends.py 148
3163 
3164         def call_isdir(dirname):
3165             #XXX Should there be any other tests here?
3166-            self.failUnlessReallyEqual(dirname, sharedirname)
3167+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3168             return True
3169 
3170         mockisdir.side_effect = call_isdir
3171hunk ./src/allmydata/test/test_backends.py 154
3172 
3173         def call_mkdir(dirname, permissions):
3174-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3175+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3176                 self.fail()
3177             else:
3178                 return True
3179hunk ./src/allmydata/test/test_backends.py 208
3180                 return self.pos
3181 
3182 
3183-        sharefile = MockFile()
3184+        fobj = MockFile()
3185         def call_open(fname, mode):
3186             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3187hunk ./src/allmydata/test/test_backends.py 211
3188-            return sharefile
3189+            return fobj
3190 
3191         mockopen.side_effect = call_open
3192 
3193hunk ./src/allmydata/test/test_backends.py 215
3194+        def call_make_dirs(dname):
3195+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3196+           
3197+        mockmake_dirs.side_effect = call_make_dirs
3198+
3199+        def call_rename(src, dst):
3200+           self.failUnlessReallyEqual(src, shareincomingname)
3201+           self.failUnlessReallyEqual(dst, sharefname)
3202+           
3203+        mockrename.side_effect = call_rename
3204+
3205+        def call_exists(fname):
3206+            self.failUnlessReallyEqual(fname, sharefname)
3207+
3208+        mockexists.side_effect = call_exists
3209+
3210         # Now begin the test.
3211         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3212         bs[0].remote_write(0, 'a')
3213hunk ./src/allmydata/test/test_backends.py 234
3214-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3215+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3216+        spaceint = self.s.allocated_size()
3217+        self.failUnlessReallyEqual(spaceint, 1)
3218+
3219+        bs[0].remote_close()
3220 
3221         # What happens when there's not enough space for the client's request?
3222hunk ./src/allmydata/test/test_backends.py 241
3223-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3224+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3225 
3226         # Now test the allocated_size method.
3227hunk ./src/allmydata/test/test_backends.py 244
3228-        spaceint = self.s.allocated_size()
3229-        self.failUnlessReallyEqual(spaceint, 1)
3230+        #self.failIf(mockexists.called, mockexists.call_args_list)
3231+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3232+        #self.failIf(mockrename.called, mockrename.call_args_list)
3233+        #self.failIf(mockstat.called, mockstat.call_args_list)
3234 
3235     @mock.patch('os.path.exists')
3236     @mock.patch('os.path.getsize')
3237}
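jacp 11 keeps growing the stack of `@mock.patch` decorators on `test_write_share`, and the argument list must stay in the reverse order of the decorators: stacked patches are applied bottom-up, so the decorator closest to the `def` supplies the first mock argument. A self-contained illustration (using the stdlib `unittest.mock`; the patch itself targets the standalone `mock` package on Python 2):

```python
from unittest import mock
import os.path

@mock.patch('os.path.getsize')   # listed first, applied last
@mock.patch('os.path.exists')    # closest to the def, applied first
def demo(mockexists, mockgetsize):
    # Stacked patches hand mocks to the function bottom-up, which is why
    # test_write_share's parameter list reads in reverse decorator order.
    mockexists.return_value = True
    mockgetsize.return_value = 85
    return os.path.exists('/no/such/share'), os.path.getsize('/no/such/share')

assert demo() == (True, 85)
```

Getting the order wrong does not raise; it silently hands each side effect to the wrong mock, which is a common source of confusing test failures with stacks this deep.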
3238[checkpoint12 testing correct behavior with regard to incoming and final
3239wilcoxjg@gmail.com**20110710191915
3240 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3241] {
3242hunk ./src/allmydata/storage/backends/das/core.py 74
3243         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3244         self.lease_checker.setServiceParent(self)
3245 
3246+    def get_incoming(self, storageindex):
3247+        return set((1,))
3248+
3249     def get_available_space(self):
3250         if self.readonly:
3251             return 0
3252hunk ./src/allmydata/storage/server.py 77
3253         """Return a dict, indexed by category, that contains a dict of
3254         latency numbers for each category. If there are sufficient samples
3255         for unambiguous interpretation, each dict will contain the
3256-        following keys: mean, 01_0_percentile, 10_0_percentile,
3257+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3258         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3259         99_0_percentile, 99_9_percentile.  If there are insufficient
3260         samples for a given percentile to be interpreted unambiguously
3261hunk ./src/allmydata/storage/server.py 120
3262 
3263     def get_stats(self):
3264         # remember: RIStatsProvider requires that our return dict
3265-        # contains numeric values.
3266+        # contains numeric or None values.
3267         stats = { 'storage_server.allocated': self.allocated_size(), }
3268         stats['storage_server.reserved_space'] = self.reserved_space
3269         for category,ld in self.get_latencies().items():
3270hunk ./src/allmydata/storage/server.py 185
3271         start = time.time()
3272         self.count("allocate")
3273         alreadygot = set()
3274+        incoming = set()
3275         bucketwriters = {} # k: shnum, v: BucketWriter
3276 
3277         si_s = si_b2a(storage_index)
3278hunk ./src/allmydata/storage/server.py 219
3279             alreadygot.add(share.shnum)
3280             share.add_or_renew_lease(lease_info)
3281 
3282-        for shnum in (sharenums - alreadygot):
3283+        # Fill 'incoming' with every share that is already incoming; a set operation suffices, since there's no need to operate on individual shares.
3284+        incoming = self.backend.get_incoming(storageindex)
3285+
3286+        for shnum in ((sharenums - alreadygot) - incoming):
3287             if (not limited) or (remaining_space >= max_space_per_bucket):
3288                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3289                 bucketwriters[shnum] = bw
3290hunk ./src/allmydata/storage/server.py 229
3291                 self._active_writers[bw] = 1
3292                 if limited:
3293                     remaining_space -= max_space_per_bucket
3294-
3295-        #XXX We SHOULD DOCUMENT LATER.
3296+            else:
3297+                # Bummer: not enough space to accept this share.
3298+                pass
3299 
3300         self.add_latency("allocate", time.time() - start)
3301         return alreadygot, bucketwriters
3302hunk ./src/allmydata/storage/server.py 323
3303         self.add_latency("get", time.time() - start)
3304         return bucketreaders
3305 
3306-    def get_leases(self, storage_index):
3307+    def remote_get_incoming(self, storageindex):
3308+        incoming_share_set = self.backend.get_incoming(storageindex)
3309+        return incoming_share_set
3310+
3311+    def get_leases(self, storageindex):
3312         """Provide an iterator that yields all of the leases attached to this
3313         bucket. Each lease is returned as a LeaseInfo instance.
3314 
3315hunk ./src/allmydata/storage/server.py 337
3316         # since all shares get the same lease data, we just grab the leases
3317         # from the first share
3318         try:
3319-            shnum, filename = self._get_shares(storage_index).next()
3320+            shnum, filename = self._get_shares(storageindex).next()
3321             sf = ShareFile(filename)
3322             return sf.get_leases()
3323         except StopIteration:
3324hunk ./src/allmydata/test/test_backends.py 182
3325 
3326         share = MockShare()
3327         def call_get_shares(storageindex):
3328-            return [share]
3329+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3330+            return []#share]
3331 
3332         mockget_shares.side_effect = call_get_shares
3333 
3334hunk ./src/allmydata/test/test_backends.py 222
3335         mockmake_dirs.side_effect = call_make_dirs
3336 
3337         def call_rename(src, dst):
3338-           self.failUnlessReallyEqual(src, shareincomingname)
3339-           self.failUnlessReallyEqual(dst, sharefname)
3340+            self.failUnlessReallyEqual(src, shareincomingname)
3341+            self.failUnlessReallyEqual(dst, sharefname)
3342             
3343         mockrename.side_effect = call_rename
3344 
3345hunk ./src/allmydata/test/test_backends.py 233
3346         mockexists.side_effect = call_exists
3347 
3348         # Now begin the test.
3349+
3350+        # XXX (0) ???  Fail unless something is not properly set-up?
3351         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3352hunk ./src/allmydata/test/test_backends.py 236
3353+
3354+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3355+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3356+
3357+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3358+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3359+        # with the same si, until BucketWriter.remote_close() has been called.
3360+        # self.failIf(bsa)
3361+
3362+        # XXX (3) Inspect final and fail unless there's nothing there.
3363         bs[0].remote_write(0, 'a')
3364hunk ./src/allmydata/test/test_backends.py 247
3365+        # XXX (4a) Inspect final and fail unless share 0 is there.
3366+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3367         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3368         spaceint = self.s.allocated_size()
3369         self.failUnlessReallyEqual(spaceint, 1)
3370hunk ./src/allmydata/test/test_backends.py 253
3371 
3372+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3373         bs[0].remote_close()
3374 
3375         # What happens when there's not enough space for the client's request?
3376hunk ./src/allmydata/test/test_backends.py 260
3377         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3378 
3379         # Now test the allocated_size method.
3380-        #self.failIf(mockexists.called, mockexists.call_args_list)
3381+        # self.failIf(mockexists.called, mockexists.call_args_list)
3382         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3383         #self.failIf(mockrename.called, mockrename.call_args_list)
3384         #self.failIf(mockstat.called, mockstat.call_args_list)
3385}
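The tests in the record above stub each filesystem call with a `side_effect` function on a `mock.patch`-ed name. A minimal sketch of that pattern in modern Python 3 syntax, using a hypothetical `count_shares` helper (not from the patch) in place of the server code:

```python
import os
from unittest import mock

def count_shares(sharedir):
    """Count share files in a directory, treating a missing directory as empty."""
    try:
        return len(os.listdir(sharedir))
    except OSError:
        return 0

# Stub os.listdir with a side_effect function, mirroring the
# mocklistdir.side_effect style used in test_backends.py above.
with mock.patch('os.listdir') as mocklistdir:
    def call_listdir(dirname):
        if dirname == 'shares/or':
            return ['0', '1']
        raise OSError(2, "No such file or directory: '%s'" % dirname)
    mocklistdir.side_effect = call_listdir

    assert count_shares('shares/or') == 2
    assert count_shares('shares/zz') == 0   # OSError path -> treated as empty
```

Because the stub raises `OSError` for unknown paths, the test exercises both the success and the missing-directory branches without touching the real filesystem.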
3386[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3387wilcoxjg@gmail.com**20110710195139
3388 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3389] {
3390hunk ./src/allmydata/storage/server.py 220
3391             share.add_or_renew_lease(lease_info)
3392 
3393         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3394-        incoming = self.backend.get_incoming(storageindex)
3395+        incoming = self.backend.get_incoming(storage_index)
3396 
3397         for shnum in ((sharenums - alreadygot) - incoming):
3398             if (not limited) or (remaining_space >= max_space_per_bucket):
3399hunk ./src/allmydata/storage/server.py 323
3400         self.add_latency("get", time.time() - start)
3401         return bucketreaders
3402 
3403-    def remote_get_incoming(self, storageindex):
3404-        incoming_share_set = self.backend.get_incoming(storageindex)
3405+    def remote_get_incoming(self, storage_index):
3406+        incoming_share_set = self.backend.get_incoming(storage_index)
3407         return incoming_share_set
3408 
3409hunk ./src/allmydata/storage/server.py 327
3410-    def get_leases(self, storageindex):
3411+    def get_leases(self, storage_index):
3412         """Provide an iterator that yields all of the leases attached to this
3413         bucket. Each lease is returned as a LeaseInfo instance.
3414 
3415hunk ./src/allmydata/storage/server.py 337
3416         # since all shares get the same lease data, we just grab the leases
3417         # from the first share
3418         try:
3419-            shnum, filename = self._get_shares(storageindex).next()
3420+            shnum, filename = self._get_shares(storage_index).next()
3421             sf = ShareFile(filename)
3422             return sf.get_leases()
3423         except StopIteration:
3424replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3425}
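The rename above matters beyond style: a caller that passes the parameter by keyword breaks the moment the spelling drifts. A toy illustration (the function body is a stand-in, not the server's):

```python
def get_leases(storage_index):
    """Illustrative stand-in for the renamed server method."""
    return set()

# Positional callers never notice parameter-name drift; keyword callers do.
assert get_leases(b'si') == set()
assert get_leases(storage_index=b'si') == set()
try:
    get_leases(storageindex=b'si')   # the old spelling
except TypeError:
    pass
else:
    raise AssertionError("expected a TypeError for the stale keyword")
```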
3426[adding comments to clarify what I'm about to do.
3427wilcoxjg@gmail.com**20110710220623
3428 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3429] {
3430hunk ./src/allmydata/storage/backends/das/core.py 8
3431 
3432 import os, re, weakref, struct, time
3433 
3434-from foolscap.api import Referenceable
3435+#from foolscap.api import Referenceable
3436 from twisted.application import service
3437 
3438 from zope.interface import implements
3439hunk ./src/allmydata/storage/backends/das/core.py 12
3440-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3441+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3442 from allmydata.util import fileutil, idlib, log, time_format
3443 import allmydata # for __full_version__
3444 
3445hunk ./src/allmydata/storage/server.py 219
3446             alreadygot.add(share.shnum)
3447             share.add_or_renew_lease(lease_info)
3448 
3449-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3450+        # fill incoming with all shares that are incoming use a set operation
3451+        # since there's no need to operate on individual pieces
3452         incoming = self.backend.get_incoming(storageindex)
3453 
3454         for shnum in ((sharenums - alreadygot) - incoming):
3455hunk ./src/allmydata/test/test_backends.py 245
3456         # with the same si, until BucketWriter.remote_close() has been called.
3457         # self.failIf(bsa)
3458 
3459-        # XXX (3) Inspect final and fail unless there's nothing there.
3460         bs[0].remote_write(0, 'a')
3461hunk ./src/allmydata/test/test_backends.py 246
3462-        # XXX (4a) Inspect final and fail unless share 0 is there.
3463-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3464         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3465         spaceint = self.s.allocated_size()
3466         self.failUnlessReallyEqual(spaceint, 1)
3467hunk ./src/allmydata/test/test_backends.py 250
3468 
3469-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3470+        # XXX (3) Inspect final and fail unless there's nothing there.
3471         bs[0].remote_close()
3472hunk ./src/allmydata/test/test_backends.py 252
3473+        # XXX (4a) Inspect final and fail unless share 0 is there.
3474+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3475 
3476         # What happens when there's not enough space for the client's request?
3477         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3478}
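The reordered XXX steps above pin down the intended lifecycle: a share sits in `incoming` from allocation until `remote_close()`, and only then appears in `final`. A toy model of that state machine (these classes are illustrative, not the real `BucketWriter` API):

```python
class ToyStore:
    """Toy stand-in for the backend's incoming/final share sets."""
    def __init__(self):
        self.incoming, self.final, self.data = set(), set(), {}

class ToyBucketWriter:
    """Toy model of the incoming -> final lifecycle the XXX steps check."""
    def __init__(self, store, shnum):
        self.store, self.shnum = store, shnum
        store.incoming.add(shnum)                # (1) allocation lists it in incoming
    def remote_write(self, offset, data):
        self.store.data[self.shnum] = data
    def remote_close(self):
        self.store.incoming.remove(self.shnum)   # (4b) gone from incoming
        self.store.final.add(self.shnum)         # (4a) present in final

s = ToyStore()
bw = ToyBucketWriter(s, 0)
bw.remote_write(0, 'a')
assert s.incoming == set((0,)) and s.final == set()   # steps (1) and (3)
bw.remote_close()
assert s.final == set((0,)) and s.incoming == set()   # steps (4a) and (4b)
```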
3479[branching back, no longer attempting to mock inside TestServerFSBackend
3480wilcoxjg@gmail.com**20110711190849
3481 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3482] {
3483hunk ./src/allmydata/storage/backends/das/core.py 75
3484         self.lease_checker.setServiceParent(self)
3485 
3486     def get_incoming(self, storageindex):
3487-        return set((1,))
3488-
3489-    def get_available_space(self):
3490-        if self.readonly:
3491-            return 0
3492-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3493+        """Return the set of incoming shnums."""
3494+        return set(os.listdir(self.incomingdir))
3495 
3496     def get_shares(self, storage_index):
3497         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3498hunk ./src/allmydata/storage/backends/das/core.py 90
3499             # Commonly caused by there being no shares at all.
3500             pass
3501         
3502+    def get_available_space(self):
3503+        if self.readonly:
3504+            return 0
3505+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3506+
3507     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3508         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3509         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3510hunk ./src/allmydata/test/test_backends.py 27
3511 
3512 testnodeid = 'testnodeidxxxxxxxxxx'
3513 tempdir = 'teststoredir'
3514-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3515-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3516+basedir = os.path.join(tempdir, 'shares')
3517+baseincdir = os.path.join(basedir, 'incoming')
3518+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3519+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3520 shareincomingname = os.path.join(sharedirincomingname, '0')
3521 sharefname = os.path.join(sharedirfinalname, '0')
3522 
3523hunk ./src/allmydata/test/test_backends.py 142
3524                              mockmake_dirs, mockrename):
3525         """ Write a new share. """
3526 
3527-        def call_listdir(dirname):
3528-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3529-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3530-
3531-        mocklistdir.side_effect = call_listdir
3532-
3533-        def call_isdir(dirname):
3534-            #XXX Should there be any other tests here?
3535-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3536-            return True
3537-
3538-        mockisdir.side_effect = call_isdir
3539-
3540-        def call_mkdir(dirname, permissions):
3541-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3542-                self.Fail
3543-            else:
3544-                return True
3545-
3546-        mockmkdir.side_effect = call_mkdir
3547-
3548-        def call_get_available_space(storedir, reserved_space):
3549-            self.failUnlessReallyEqual(storedir, tempdir)
3550-            return 1
3551-
3552-        mockget_available_space.side_effect = call_get_available_space
3553-
3554-        mocktime.return_value = 0
3555         class MockShare:
3556             def __init__(self):
3557                 self.shnum = 1
3558hunk ./src/allmydata/test/test_backends.py 152
3559                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3560                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3561                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3562-               
3563 
3564         share = MockShare()
3565hunk ./src/allmydata/test/test_backends.py 154
3566-        def call_get_shares(storageindex):
3567-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3568-            return []#share]
3569-
3570-        mockget_shares.side_effect = call_get_shares
3571 
3572         class MockFile:
3573             def __init__(self):
3574hunk ./src/allmydata/test/test_backends.py 176
3575             def tell(self):
3576                 return self.pos
3577 
3578-
3579         fobj = MockFile()
3580hunk ./src/allmydata/test/test_backends.py 177
3581+
3582+        directories = {}
3583+        def call_listdir(dirname):
3584+            if dirname not in directories:
3585+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3586+            else:
3587+                return directories[dirname].get_contents()
3588+
3589+        mocklistdir.side_effect = call_listdir
3590+
3591+        class MockDir:
3592+            def __init__(self, dirname):
3593+                self.name = dirname
3594+                self.contents = []
3595+   
3596+            def get_contents(self):
3597+                return self.contents
3598+
3599+        def call_isdir(dirname):
3600+            #XXX Should there be any other tests here?
3601+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3602+            return True
3603+
3604+        mockisdir.side_effect = call_isdir
3605+
3606+        def call_mkdir(dirname, permissions):
3607+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3608+                self.Fail
3609+            if dirname in directories:
3610+                raise OSError(17, "File exists: '%s'" % dirname)
3611+                self.Fail
3612+            elif dirname not in directories:
3613+                directories[dirname] = MockDir(dirname)
3614+                return True
3615+
3616+        mockmkdir.side_effect = call_mkdir
3617+
3618+        def call_get_available_space(storedir, reserved_space):
3619+            self.failUnlessReallyEqual(storedir, tempdir)
3620+            return 1
3621+
3622+        mockget_available_space.side_effect = call_get_available_space
3623+
3624+        mocktime.return_value = 0
3625+        def call_get_shares(storageindex):
3626+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3627+            return []#share]
3628+
3629+        mockget_shares.side_effect = call_get_shares
3630+
3631         def call_open(fname, mode):
3632             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3633             return fobj
3634}
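The record above grows a small in-memory filesystem: a `directories` dict backs the mocked `os.listdir`/`os.mkdir`, so a second `mkdir` of the same path can raise `EEXIST` the way a real filesystem would. A compact sketch of that scheme in Python 3 syntax (names here are illustrative):

```python
import errno
import os
from unittest import mock

# A dict standing in for the filesystem, like the directories/MockDir scheme above.
fs = {}

def fake_mkdir(dirname, mode=0o777):
    if dirname in fs:
        raise OSError(errno.EEXIST, "File exists: '%s'" % dirname)
    fs[dirname] = []

def fake_listdir(dirname):
    if dirname not in fs:
        raise OSError(errno.ENOENT, "No such file or directory: '%s'" % dirname)
    return list(fs[dirname])

with mock.patch('os.mkdir', side_effect=fake_mkdir), \
     mock.patch('os.listdir', side_effect=fake_listdir):
    os.mkdir('shares')
    fs['shares'].append('0')
    assert os.listdir('shares') == ['0']
    try:
        os.mkdir('shares')               # a second mkdir must raise EEXIST
    except OSError as e:
        assert e.errno == errno.EEXIST
    else:
        raise AssertionError("expected OSError for an existing directory")
```

Keeping the state in one dict lets each test assert on exactly which directories the code under test created.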
3635[checkpoint12 TestServerFSBackend no longer mocks filesystem
3636wilcoxjg@gmail.com**20110711193357
3637 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3638] {
3639hunk ./src/allmydata/storage/backends/das/core.py 23
3640      create_mutable_sharefile
3641 from allmydata.storage.immutable import BucketWriter, BucketReader
3642 from allmydata.storage.crawler import FSBucketCountingCrawler
3643+from allmydata.util.hashutil import constant_time_compare
3644 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3645 
3646 from zope.interface import implements
3647hunk ./src/allmydata/storage/backends/das/core.py 28
3648 
3649+# storage/
3650+# storage/shares/incoming
3651+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3652+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3653+# storage/shares/$START/$STORAGEINDEX
3654+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3655+
3656+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3657+# base-32 chars).
3658 # $SHARENUM matches this regex:
3659 NUM_RE=re.compile("^[0-9]+$")
3660 
3661hunk ./src/allmydata/test/test_backends.py 126
3662         testbackend = DASCore(tempdir, expiration_policy)
3663         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3664 
3665-    @mock.patch('allmydata.util.fileutil.rename')
3666-    @mock.patch('allmydata.util.fileutil.make_dirs')
3667-    @mock.patch('os.path.exists')
3668-    @mock.patch('os.stat')
3669-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3670-    @mock.patch('allmydata.util.fileutil.get_available_space')
3671     @mock.patch('time.time')
3672hunk ./src/allmydata/test/test_backends.py 127
3673-    @mock.patch('os.mkdir')
3674-    @mock.patch('__builtin__.open')
3675-    @mock.patch('os.listdir')
3676-    @mock.patch('os.path.isdir')
3677-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3678-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3679-                             mockmake_dirs, mockrename):
3680+    def test_write_share(self, mocktime):
3681         """ Write a new share. """
3682 
3683         class MockShare:
3684hunk ./src/allmydata/test/test_backends.py 143
3685 
3686         share = MockShare()
3687 
3688-        class MockFile:
3689-            def __init__(self):
3690-                self.buffer = ''
3691-                self.pos = 0
3692-            def write(self, instring):
3693-                begin = self.pos
3694-                padlen = begin - len(self.buffer)
3695-                if padlen > 0:
3696-                    self.buffer += '\x00' * padlen
3697-                end = self.pos + len(instring)
3698-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3699-                self.pos = end
3700-            def close(self):
3701-                pass
3702-            def seek(self, pos):
3703-                self.pos = pos
3704-            def read(self, numberbytes):
3705-                return self.buffer[self.pos:self.pos+numberbytes]
3706-            def tell(self):
3707-                return self.pos
3708-
3709-        fobj = MockFile()
3710-
3711-        directories = {}
3712-        def call_listdir(dirname):
3713-            if dirname not in directories:
3714-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3715-            else:
3716-                return directories[dirname].get_contents()
3717-
3718-        mocklistdir.side_effect = call_listdir
3719-
3720-        class MockDir:
3721-            def __init__(self, dirname):
3722-                self.name = dirname
3723-                self.contents = []
3724-   
3725-            def get_contents(self):
3726-                return self.contents
3727-
3728-        def call_isdir(dirname):
3729-            #XXX Should there be any other tests here?
3730-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3731-            return True
3732-
3733-        mockisdir.side_effect = call_isdir
3734-
3735-        def call_mkdir(dirname, permissions):
3736-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3737-                self.Fail
3738-            if dirname in directories:
3739-                raise OSError(17, "File exists: '%s'" % dirname)
3740-                self.Fail
3741-            elif dirname not in directories:
3742-                directories[dirname] = MockDir(dirname)
3743-                return True
3744-
3745-        mockmkdir.side_effect = call_mkdir
3746-
3747-        def call_get_available_space(storedir, reserved_space):
3748-            self.failUnlessReallyEqual(storedir, tempdir)
3749-            return 1
3750-
3751-        mockget_available_space.side_effect = call_get_available_space
3752-
3753-        mocktime.return_value = 0
3754-        def call_get_shares(storageindex):
3755-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3756-            return []#share]
3757-
3758-        mockget_shares.side_effect = call_get_shares
3759-
3760-        def call_open(fname, mode):
3761-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3762-            return fobj
3763-
3764-        mockopen.side_effect = call_open
3765-
3766-        def call_make_dirs(dname):
3767-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3768-           
3769-        mockmake_dirs.side_effect = call_make_dirs
3770-
3771-        def call_rename(src, dst):
3772-            self.failUnlessReallyEqual(src, shareincomingname)
3773-            self.failUnlessReallyEqual(dst, sharefname)
3774-           
3775-        mockrename.side_effect = call_rename
3776-
3777-        def call_exists(fname):
3778-            self.failUnlessReallyEqual(fname, sharefname)
3779-
3780-        mockexists.side_effect = call_exists
3781-
3782         # Now begin the test.
3783 
3784         # XXX (0) ???  Fail unless something is not properly set-up?
3785}
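After this record only `time.time` remains mocked; the filesystem runs for real. A sketch of that narrower style, pinning the clock with `return_value` (the `lease_expiration_time` helper is hypothetical, modeled on the 31-day expiration assertion above):

```python
import time
from unittest import mock

def lease_expiration_time():
    """Hypothetical helper: a 31-day lease, as the earlier assertion expects."""
    return time.time() + 31*24*60*60

# Pin the clock, but let everything else run unmocked.
with mock.patch('time.time', return_value=0):
    assert lease_expiration_time() == 31*24*60*60
```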
3786[JACP
3787wilcoxjg@gmail.com**20110711194407
3788 Ignore-this: b54745de777c4bb58d68d708f010bbb
3789] {
3790hunk ./src/allmydata/storage/backends/das/core.py 86
3791 
3792     def get_incoming(self, storageindex):
3793         """Return the set of incoming shnums."""
3794-        return set(os.listdir(self.incomingdir))
3795+        try:
3796+            incominglist = os.listdir(self.incomingdir)
3797+            print "incominglist: ", incominglist
3798+            return set(incominglist)
3799+        except OSError:
3800+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3801+            pass
3802 
3803     def get_shares(self, storage_index):
3804         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3805hunk ./src/allmydata/storage/server.py 17
3806 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3807      create_mutable_sharefile
3808 
3809-# storage/
3810-# storage/shares/incoming
3811-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3812-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3813-# storage/shares/$START/$STORAGEINDEX
3814-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3815-
3816-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3817-# base-32 chars).
3818-
3819-
3820 class StorageServer(service.MultiService, Referenceable):
3821     implements(RIStorageServer, IStatsProducer)
3822     name = 'storage'
3823}
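The layout comment moved above says `$START` is the first 10 bits (two base-32 characters) of the storage index. A sketch of that path computation, approximating `storage_index_to_dir` with the standard library's base-32 codec (lowercased, padding stripped):

```python
import base64
import os

def si_to_dir(storage_index):
    """Sketch of the layout above: the first two base-32 characters (10 bits)
    of the encoded storage index name the bucket directory."""
    sia = base64.b32encode(storage_index).decode('ascii').lower().rstrip('=')
    return os.path.join(sia[:2], sia)

# The test fixtures above use 'teststorage_index' for exactly this path.
assert si_to_dir(b'teststorage_index') == os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a')
```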
3824[testing get incoming
3825wilcoxjg@gmail.com**20110711210224
3826 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3827] {
3828hunk ./src/allmydata/storage/backends/das/core.py 87
3829     def get_incoming(self, storageindex):
3830         """Return the set of incoming shnums."""
3831         try:
3832-            incominglist = os.listdir(self.incomingdir)
3833+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3834+            incominglist = os.listdir(incomingsharesdir)
3835             print "incominglist: ", incominglist
3836             return set(incominglist)
3837         except OSError:
3838hunk ./src/allmydata/storage/backends/das/core.py 92
3839-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3840-            pass
3841-
3842+            # XXX I'd like to make this more specific. If there are no shares at all.
3843+            return set()
3844+           
3845     def get_shares(self, storage_index):
3846         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3847         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3848hunk ./src/allmydata/test/test_backends.py 149
3849         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3850 
3851         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3852+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3853         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3854 
3855hunk ./src/allmydata/test/test_backends.py 152
3856-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3857         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3858         # with the same si, until BucketWriter.remote_close() has been called.
3859         # self.failIf(bsa)
3860}
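The `get_incoming` logic above lists the per-storage-index incoming directory and treats a missing directory as "no incoming shares". A standalone sketch against a real temp directory, folding in the `int` conversion that a later record in this bundle adds:

```python
import os
import tempfile

def get_incoming(incomingdir, si_dir):
    """Sketch of get_incoming: return the set of incoming share numbers,
    with a missing directory simply meaning there are none."""
    try:
        return set(int(x) for x in os.listdir(os.path.join(incomingdir, si_dir)))
    except OSError:
        return set()

root = tempfile.mkdtemp()
sidir = os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a')
assert get_incoming(root, sidir) == set()          # no directory yet -> empty set
os.makedirs(os.path.join(root, sidir))
open(os.path.join(root, sidir, '0'), 'wb').close()
assert get_incoming(root, sidir) == set((0,))      # share 0 now incoming
```

Returning `set()` instead of `None` on the error path is what lets the caller's set arithmetic (`(sharenums - alreadygot) - incoming`) work unconditionally.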
3861[ImmutableShareFile does not know its StorageIndex
3862wilcoxjg@gmail.com**20110711211424
3863 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3864] {
3865hunk ./src/allmydata/storage/backends/das/core.py 112
3866             return 0
3867         return fileutil.get_available_space(self.storedir, self.reserved_space)
3868 
3869-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3870-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3871+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3872+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3873+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3874+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3875         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3876         return bw
3877 
3878hunk ./src/allmydata/storage/backends/das/core.py 155
3879     LEASE_SIZE = struct.calcsize(">L32s32sL")
3880     sharetype = "immutable"
3881 
3882-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3883+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3884         """ If max_size is not None then I won't allow more than
3885         max_size to be written to me. If create=True then max_size
3886         must not be None. """
3887}
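With this change `ImmutableShare` takes `finalhome`/`incominghome` directly instead of deriving them from a storage index. On create, the next record shows it touching the file under incoming/ and writing the v1 container header. A minimal sketch of that create path, with the 12-byte `>LLL` header (version, the obsolete four-byte data-length field capped at `2**32-1` as the patch notes, and zero leases):

```python
import os
import struct
import tempfile

def create_share(incominghome, max_size):
    """Sketch: touch the incoming file and write the v1 immutable header."""
    os.makedirs(os.path.dirname(incominghome), exist_ok=True)
    with open(incominghome, 'wb') as f:
        f.write(struct.pack(">LLL", 1, min(max_size, 2**32 - 1), 0))

home = os.path.join(tempfile.mkdtemp(), 'incoming', 'or', 'si', '0')
create_share(home, 1000)
version, _, num_leases = struct.unpack(">LLL", open(home, 'rb').read(0xc))
assert (version, num_leases) == (1, 0)
```

Passing the two paths in keeps the share object ignorant of the storage-index-to-directory mapping, which stays the backend's concern.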
3888[get_incoming correctly reports the 0 share after it has arrived
3889wilcoxjg@gmail.com**20110712025157
3890 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3891] {
3892hunk ./src/allmydata/storage/backends/das/core.py 1
3893+import os, re, weakref, struct, time, stat
3894+
3895 from allmydata.interfaces import IStorageBackend
3896 from allmydata.storage.backends.base import Backend
3897 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3898hunk ./src/allmydata/storage/backends/das/core.py 8
3899 from allmydata.util.assertutil import precondition
3900 
3901-import os, re, weakref, struct, time
3902-
3903 #from foolscap.api import Referenceable
3904 from twisted.application import service
3905 
3906hunk ./src/allmydata/storage/backends/das/core.py 89
3907         try:
3908             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3909             incominglist = os.listdir(incomingsharesdir)
3910-            print "incominglist: ", incominglist
3911-            return set(incominglist)
3912+            incomingshnums = [int(x) for x in incominglist]
3913+            return set(incomingshnums)
3914         except OSError:
3915             # XXX I'd like to make this more specific. If there are no shares at all.
3916             return set()
3917hunk ./src/allmydata/storage/backends/das/core.py 113
3918         return fileutil.get_available_space(self.storedir, self.reserved_space)
3919 
3920     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3921-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3922-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3923-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3924+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3925+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3926+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3927         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3928         return bw
3929 
3930hunk ./src/allmydata/storage/backends/das/core.py 160
3931         max_size to be written to me. If create=True then max_size
3932         must not be None. """
3933         precondition((max_size is not None) or (not create), max_size, create)
3934-        self.shnum = shnum
3935-        self.storage_index = storageindex
3936-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3937         self._max_size = max_size
3938hunk ./src/allmydata/storage/backends/das/core.py 161
3939-        self.incomingdir = os.path.join(sharedir, 'incoming')
3940-        si_dir = storage_index_to_dir(storageindex)
3941-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3942-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3943-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3944+        self.incominghome = incominghome
3945+        self.finalhome = finalhome
3946         if create:
3947             # touch the file, so later callers will see that we're working on
3948             # it. Also construct the metadata.
3949hunk ./src/allmydata/storage/backends/das/core.py 166
3950-            assert not os.path.exists(self.fname)
3951-            fileutil.make_dirs(os.path.dirname(self.fname))
3952-            f = open(self.fname, 'wb')
3953+            assert not os.path.exists(self.finalhome)
3954+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3955+            f = open(self.incominghome, 'wb')
3956             # The second field -- the four-byte share data length -- is no
3957             # longer used as of Tahoe v1.3.0, but we continue to write it in
3958             # there in case someone downgrades a storage server from >=
3959hunk ./src/allmydata/storage/backends/das/core.py 183
3960             self._lease_offset = max_size + 0x0c
3961             self._num_leases = 0
3962         else:
3963-            f = open(self.fname, 'rb')
3964-            filesize = os.path.getsize(self.fname)
3965+            f = open(self.finalhome, 'rb')
3966+            filesize = os.path.getsize(self.finalhome)
3967             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3968             f.close()
3969             if version != 1:
3970hunk ./src/allmydata/storage/backends/das/core.py 189
3971                 msg = "sharefile %s had version %d but we wanted 1" % \
3972-                      (self.fname, version)
3973+                      (self.finalhome, version)
3974                 raise UnknownImmutableContainerVersionError(msg)
3975             self._num_leases = num_leases
3976             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3977hunk ./src/allmydata/storage/backends/das/core.py 225
3978         pass
3979         
3980     def stat(self):
3981-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3982+        return os.stat(self.finalhome)[stat.ST_SIZE]
3983+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3984 
3985     def get_shnum(self):
3986         return self.shnum
3987hunk ./src/allmydata/storage/backends/das/core.py 232
3988 
3989     def unlink(self):
3990-        os.unlink(self.fname)
3991+        os.unlink(self.finalhome)
3992 
3993     def read_share_data(self, offset, length):
3994         precondition(offset >= 0)
3995hunk ./src/allmydata/storage/backends/das/core.py 239
3996         # Reads beyond the end of the data are truncated. Reads that start
3997         # beyond the end of the data return an empty string.
3998         seekpos = self._data_offset+offset
3999-        fsize = os.path.getsize(self.fname)
4000+        fsize = os.path.getsize(self.finalhome)
4001         actuallength = max(0, min(length, fsize-seekpos))
4002         if actuallength == 0:
4003             return ""
4004hunk ./src/allmydata/storage/backends/das/core.py 243
4005-        f = open(self.fname, 'rb')
4006+        f = open(self.finalhome, 'rb')
4007         f.seek(seekpos)
4008         return f.read(actuallength)
4009 
4010hunk ./src/allmydata/storage/backends/das/core.py 252
4011         precondition(offset >= 0, offset)
4012         if self._max_size is not None and offset+length > self._max_size:
4013             raise DataTooLargeError(self._max_size, offset, length)
4014-        f = open(self.fname, 'rb+')
4015+        f = open(self.incominghome, 'rb+')
4016         real_offset = self._data_offset+offset
4017         f.seek(real_offset)
4018         assert f.tell() == real_offset
4019hunk ./src/allmydata/storage/backends/das/core.py 279
4020 
4021     def get_leases(self):
4022         """Yields a LeaseInfo instance for all leases."""
4023-        f = open(self.fname, 'rb')
4024+        f = open(self.finalhome, 'rb')
4025         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4026         f.seek(self._lease_offset)
4027         for i in range(num_leases):
4028hunk ./src/allmydata/storage/backends/das/core.py 288
4029                 yield LeaseInfo().from_immutable_data(data)
4030 
4031     def add_lease(self, lease_info):
4032-        f = open(self.fname, 'rb+')
4033+        f = open(self.incominghome, 'rb+')
4034         num_leases = self._read_num_leases(f)
4035         self._write_lease_record(f, num_leases, lease_info)
4036         self._write_num_leases(f, num_leases+1)
4037hunk ./src/allmydata/storage/backends/das/core.py 301
4038                 if new_expire_time > lease.expiration_time:
4039                     # yes
4040                     lease.expiration_time = new_expire_time
4041-                    f = open(self.fname, 'rb+')
4042+                    f = open(self.finalhome, 'rb+')
4043                     self._write_lease_record(f, i, lease)
4044                     f.close()
4045                 return
4046hunk ./src/allmydata/storage/backends/das/core.py 336
4047             # the same order as they were added, so that if we crash while
4048             # doing this, we won't lose any non-cancelled leases.
4049             leases = [l for l in leases if l] # remove the cancelled leases
4050-            f = open(self.fname, 'rb+')
4051+            f = open(self.finalhome, 'rb+')
4052             for i,lease in enumerate(leases):
4053                 self._write_lease_record(f, i, lease)
4054             self._write_num_leases(f, len(leases))
4055hunk ./src/allmydata/storage/backends/das/core.py 344
4056             f.close()
4057         space_freed = self.LEASE_SIZE * num_leases_removed
4058         if not len(leases):
4059-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4060+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4061             self.unlink()
4062         return space_freed
4063hunk ./src/allmydata/test/test_backends.py 129
4064     @mock.patch('time.time')
4065     def test_write_share(self, mocktime):
4066         """ Write a new share. """
4067-
4068-        class MockShare:
4069-            def __init__(self):
4070-                self.shnum = 1
4071-               
4072-            def add_or_renew_lease(elf, lease_info):
4073-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4074-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4075-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4076-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4077-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4078-
4079-        share = MockShare()
4080-
4081         # Now begin the test.
4082 
4083         # XXX (0) ???  Fail unless something is not properly set-up?
4084hunk ./src/allmydata/test/test_backends.py 143
4085         # self.failIf(bsa)
4086 
4087         bs[0].remote_write(0, 'a')
4088-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4089+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4090         spaceint = self.s.allocated_size()
4091         self.failUnlessReallyEqual(spaceint, 1)
4092 
4093hunk ./src/allmydata/test/test_backends.py 161
4094         #self.failIf(mockrename.called, mockrename.call_args_list)
4095         #self.failIf(mockstat.called, mockstat.call_args_list)
4096 
4097+    def test_handle_incoming(self):
4098+        incomingset = self.s.backend.get_incoming('teststorage_index')
4099+        self.failUnlessReallyEqual(incomingset, set())
4100+
4101+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4102+       
4103+        incomingset = self.s.backend.get_incoming('teststorage_index')
4104+        self.failUnlessReallyEqual(incomingset, set((0,)))
4105+
4106+        bs[0].remote_close()
4107+        self.failUnlessReallyEqual(incomingset, set())
4108+
4109     @mock.patch('os.path.exists')
4110     @mock.patch('os.path.getsize')
4111     @mock.patch('__builtin__.open')
4112hunk ./src/allmydata/test/test_backends.py 223
4113         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4114 
4115 
4116-
4117 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4118     @mock.patch('time.time')
4119     @mock.patch('os.mkdir')
4120hunk ./src/allmydata/test/test_backends.py 271
4121         DASCore('teststoredir', expiration_policy)
4122 
4123         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4124+
4125}
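The get_leases hunk above reads a twelve-byte container header with struct.unpack(">LLL", f.read(0xc)). A minimal standalone sketch of that header follows (three big-endian unsigned 32-bit words; the field meanings here follow the surrounding code and test constants, not an authoritative on-disk spec):

```python
import struct

# Pack a v1 immutable-share container header: version, share data size,
# and number of leases, each a big-endian unsigned 32-bit word.
header = struct.pack(">LLL", 1, 1, 1)

# get_leases() reads this back with f.read(0xc) -- twelve bytes.
version, datasize, num_leases = struct.unpack(">LLL", header)
```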
4126[jacp14
4127wilcoxjg@gmail.com**20110712061211
4128 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4129] {
4130hunk ./src/allmydata/storage/backends/das/core.py 95
4131             # XXX I'd like to make this more specific. If there are no shares at all.
4132             return set()
4133             
4134-    def get_shares(self, storage_index):
4135+    def get_shares(self, storageindex):
4136         """Generate ImmutableShare objects corresponding to the passed storageindex."""
4137hunk ./src/allmydata/storage/backends/das/core.py 97
4138-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4139+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4140         try:
4141             for f in os.listdir(finalstoragedir):
4142                 if NUM_RE.match(f):
4143hunk ./src/allmydata/storage/backends/das/core.py 102
4144                     filename = os.path.join(finalstoragedir, f)
4145-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4146+                    yield ImmutableShare(filename, storageindex, f)
4147         except OSError:
4148             # Commonly caused by there being no shares at all.
4149             pass
4150hunk ./src/allmydata/storage/backends/das/core.py 115
4151     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4152         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4153         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4154-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4155+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4156         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4157         return bw
4158 
4159hunk ./src/allmydata/storage/backends/das/core.py 155
4160     LEASE_SIZE = struct.calcsize(">L32s32sL")
4161     sharetype = "immutable"
4162 
4163-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4164+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4165         """ If max_size is not None then I won't allow more than
4166         max_size to be written to me. If create=True then max_size
4167         must not be None. """
4168hunk ./src/allmydata/storage/backends/das/core.py 160
4169         precondition((max_size is not None) or (not create), max_size, create)
4170+        self.storageindex = storageindex
4171         self._max_size = max_size
4172         self.incominghome = incominghome
4173         self.finalhome = finalhome
4174hunk ./src/allmydata/storage/backends/das/core.py 164
4175+        self.shnum = shnum
4176         if create:
4177             # touch the file, so later callers will see that we're working on
4178             # it. Also construct the metadata.
4179hunk ./src/allmydata/storage/backends/das/core.py 212
4180             # their children to know when they should do the rmdir. This
4181             # approach is simpler, but relies on os.rmdir refusing to delete
4182             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4185             os.rmdir(os.path.dirname(self.incominghome))
4186             # we also delete the grandparent (prefix) directory, .../ab ,
4187             # again to avoid leaving directories lying around. This might
4188hunk ./src/allmydata/storage/immutable.py 93
4189     def __init__(self, ss, share):
4190         self.ss = ss
4191         self._share_file = share
4192-        self.storage_index = share.storage_index
4193+        self.storageindex = share.storageindex
4194         self.shnum = share.shnum
4195 
4196     def __repr__(self):
4197hunk ./src/allmydata/storage/immutable.py 98
4198         return "<%s %s %s>" % (self.__class__.__name__,
4199-                               base32.b2a_l(self.storage_index[:8], 60),
4200+                               base32.b2a_l(self.storageindex[:8], 60),
4201                                self.shnum)
4202 
4203     def remote_read(self, offset, length):
4204hunk ./src/allmydata/storage/immutable.py 110
4205 
4206     def remote_advise_corrupt_share(self, reason):
4207         return self.ss.remote_advise_corrupt_share("immutable",
4208-                                                   self.storage_index,
4209+                                                   self.storageindex,
4210                                                    self.shnum,
4211                                                    reason)
4212hunk ./src/allmydata/test/test_backends.py 20
4213 # The following share file contents was generated with
4214 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4215 # with share data == 'a'.
4216-renew_secret  = 'x'*32
4217-cancel_secret = 'y'*32
4218-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4219-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4220+shareversionnumber = '\x00\x00\x00\x01'
4221+sharedatalength = '\x00\x00\x00\x01'
4222+numberofleases = '\x00\x00\x00\x01'
4223+shareinputdata = 'a'
4224+ownernumber = '\x00\x00\x00\x00'
4225+renewsecret  = 'x'*32
4226+cancelsecret = 'y'*32
4227+expirationtime = '\x00(\xde\x80'
4228+nextlease = ''
4229+containerdata = shareversionnumber + sharedatalength + numberofleases
4230+client_data = shareinputdata + ownernumber + renewsecret + \
4231+    cancelsecret + expirationtime + nextlease
4232+share_data = containerdata + client_data
4233+
4234 
4235 testnodeid = 'testnodeidxxxxxxxxxx'
4236 tempdir = 'teststoredir'
4237hunk ./src/allmydata/test/test_backends.py 52
4238 
4239 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4240     def setUp(self):
4241-        self.s = StorageServer(testnodeid, backend=NullCore())
4242+        self.ss = StorageServer(testnodeid, backend=NullCore())
4243 
4244     @mock.patch('os.mkdir')
4245     @mock.patch('__builtin__.open')
4246hunk ./src/allmydata/test/test_backends.py 62
4247         """ Write a new share. """
4248 
4249         # Now begin the test.
4250-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4251+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4252         bs[0].remote_write(0, 'a')
4253         self.failIf(mockisdir.called)
4254         self.failIf(mocklistdir.called)
4255hunk ./src/allmydata/test/test_backends.py 133
4256                 _assert(False, "The tester code doesn't recognize this case.") 
4257 
4258         mockopen.side_effect = call_open
4259-        testbackend = DASCore(tempdir, expiration_policy)
4260-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4261+        self.backend = DASCore(tempdir, expiration_policy)
4262+        self.ss = StorageServer(testnodeid, self.backend)
4263+        self.ssinf = StorageServer(testnodeid, self.backend)
4264 
4265     @mock.patch('time.time')
4266     def test_write_share(self, mocktime):
4267hunk ./src/allmydata/test/test_backends.py 142
4268         """ Write a new share. """
4269         # Now begin the test.
4270 
4271-        # XXX (0) ???  Fail unless something is not properly set-up?
4272-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4273+        mocktime.return_value = 0
4274+        # Inspect incoming and fail unless it's empty.
4275+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4276+        self.failUnlessReallyEqual(incomingset, set())
4277+       
4278+        # Among other things, populate incoming with the sharenum: 0.
4279+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4280 
4281hunk ./src/allmydata/test/test_backends.py 150
4282-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4283-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4284-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4285+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4286+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4287+       
4288+        # Attempt to create a second share writer with the same share.
4289+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4290 
4291hunk ./src/allmydata/test/test_backends.py 156
4292-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4293+        # Show that no sharewriter results from a remote_allocate_buckets
4294         # with the same si, until BucketWriter.remote_close() has been called.
4295hunk ./src/allmydata/test/test_backends.py 158
4296-        # self.failIf(bsa)
4297+        self.failIf(bsa)
4298 
4299hunk ./src/allmydata/test/test_backends.py 160
4300+        # Write 'a' to shnum 0. Only tested together with close and read.
4301         bs[0].remote_write(0, 'a')
4302hunk ./src/allmydata/test/test_backends.py 162
4303-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4304-        spaceint = self.s.allocated_size()
4305+
4306+        # Test allocated size.
4307+        spaceint = self.ss.allocated_size()
4308         self.failUnlessReallyEqual(spaceint, 1)
4309 
4310         # XXX (3) Inspect final and fail unless there's nothing there.
4311hunk ./src/allmydata/test/test_backends.py 168
4312+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4313         bs[0].remote_close()
4314         # XXX (4a) Inspect final and fail unless share 0 is there.
4315hunk ./src/allmydata/test/test_backends.py 171
4316+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4317+        #contents = sharesinfinal[0].read_share_data(0,999)
4318+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4319         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4320 
4321         # What happens when there's not enough space for the client's request?
4322hunk ./src/allmydata/test/test_backends.py 177
4323-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4324+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4325 
4326         # Now test the allocated_size method.
4327         # self.failIf(mockexists.called, mockexists.call_args_list)
4328hunk ./src/allmydata/test/test_backends.py 185
4329         #self.failIf(mockrename.called, mockrename.call_args_list)
4330         #self.failIf(mockstat.called, mockstat.call_args_list)
4331 
4332-    def test_handle_incoming(self):
4333-        incomingset = self.s.backend.get_incoming('teststorage_index')
4334-        self.failUnlessReallyEqual(incomingset, set())
4335-
4336-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4337-       
4338-        incomingset = self.s.backend.get_incoming('teststorage_index')
4339-        self.failUnlessReallyEqual(incomingset, set((0,)))
4340-
4341-        bs[0].remote_close()
4342-        self.failUnlessReallyEqual(incomingset, set())
4343-
4344     @mock.patch('os.path.exists')
4345     @mock.patch('os.path.getsize')
4346     @mock.patch('__builtin__.open')
4347hunk ./src/allmydata/test/test_backends.py 208
4348             self.failUnless('r' in mode, mode)
4349             self.failUnless('b' in mode, mode)
4350 
4351-            return StringIO(share_file_data)
4352+            return StringIO(share_data)
4353         mockopen.side_effect = call_open
4354 
4355hunk ./src/allmydata/test/test_backends.py 211
4356-        datalen = len(share_file_data)
4357+        datalen = len(share_data)
4358         def call_getsize(fname):
4359             self.failUnlessReallyEqual(fname, sharefname)
4360             return datalen
4361hunk ./src/allmydata/test/test_backends.py 223
4362         mockexists.side_effect = call_exists
4363 
4364         # Now begin the test.
4365-        bs = self.s.remote_get_buckets('teststorage_index')
4366+        bs = self.ss.remote_get_buckets('teststorage_index')
4367 
4368         self.failUnlessEqual(len(bs), 1)
4369hunk ./src/allmydata/test/test_backends.py 226
4370-        b = bs[0]
4371+        b = bs['0']
4372         # These should match by definition; the next two cases cover reads whose behavior is less obvious.
4373hunk ./src/allmydata/test/test_backends.py 228
4374-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4375+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4376         # If you try to read past the end you get as much data as is there.
4377hunk ./src/allmydata/test/test_backends.py 230
4378-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4379+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4380         # If you start reading past the end of the file you get the empty string.
4381         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4382 
4383}
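The jacp14 patch above rewrites the test constants to spell out the v1 share layout field by field. As a hedged sketch mirroring those constants (not an authoritative spec): with time.time() mocked to 0, a new lease expires 31 days later, and packing that expiry big-endian reproduces the expirationtime constant '\x00(\xde\x80'; the per-lease record size comes from the same struct format core.py uses.

```python
import struct

# With the mocked clock at 0, a fresh lease expires 31 days out.
mocktime = 0
expiration = mocktime + 31 * 24 * 60 * 60   # 2678400 seconds

# Packed big-endian, this is the expirationtime constant in the test.
expirationtime = struct.pack(">L", expiration)

# Lease record: owner number, renew secret, cancel secret, expiration --
# the same format string as LEASE_SIZE = struct.calcsize(">L32s32sL").
LEASE_SIZE = struct.calcsize(">L32s32sL")
```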
4384[jacp14 or so
4385wilcoxjg@gmail.com**20110713060346
4386 Ignore-this: 7026810f60879d65b525d450e43ff87a
4387] {
4388hunk ./src/allmydata/storage/backends/das/core.py 102
4389             for f in os.listdir(finalstoragedir):
4390                 if NUM_RE.match(f):
4391                     filename = os.path.join(finalstoragedir, f)
4392-                    yield ImmutableShare(filename, storageindex, f)
4393+                    yield ImmutableShare(filename, storageindex, int(f))
4394         except OSError:
4395             # Commonly caused by there being no shares at all.
4396             pass
4397hunk ./src/allmydata/storage/backends/null/core.py 25
4398     def set_storage_server(self, ss):
4399         self.ss = ss
4400 
4401+    def get_incoming(self, storageindex):
4402+        return set()
4403+
4404 class ImmutableShare:
4405     sharetype = "immutable"
4406 
4407hunk ./src/allmydata/storage/immutable.py 19
4408 
4409     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4410         self.ss = ss
4411-        self._max_size = max_size # don't allow the client to write more than this
4412+        self._max_size = max_size # don't allow the client to write more than this
4413+
4414         self._canary = canary
4415         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4416         self.closed = False
4417hunk ./src/allmydata/test/test_backends.py 135
4418         mockopen.side_effect = call_open
4419         self.backend = DASCore(tempdir, expiration_policy)
4420         self.ss = StorageServer(testnodeid, self.backend)
4421-        self.ssinf = StorageServer(testnodeid, self.backend)
4422+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4423+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4424 
4425     @mock.patch('time.time')
4426     def test_write_share(self, mocktime):
4427hunk ./src/allmydata/test/test_backends.py 161
4428         # with the same si, until BucketWriter.remote_close() has been called.
4429         self.failIf(bsa)
4430 
4431-        # Write 'a' to shnum 0. Only tested together with close and read.
4432-        bs[0].remote_write(0, 'a')
4433-
4434         # Test allocated size.
4435         spaceint = self.ss.allocated_size()
4436         self.failUnlessReallyEqual(spaceint, 1)
4437hunk ./src/allmydata/test/test_backends.py 165
4438 
4439-        # XXX (3) Inspect final and fail unless there's nothing there.
4440+        # Write 'a' to shnum 0. Only tested together with close and read.
4441+        bs[0].remote_write(0, 'a')
4442+       
4443+        # Preclose: Inspect final, failUnless nothing there.
4444         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4445         bs[0].remote_close()
4446hunk ./src/allmydata/test/test_backends.py 171
4447-        # XXX (4a) Inspect final and fail unless share 0 is there.
4448-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4449-        #contents = sharesinfinal[0].read_share_data(0,999)
4450-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4451-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4452 
4453hunk ./src/allmydata/test/test_backends.py 172
4454-        # What happens when there's not enough space for the client's request?
4455-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4456+        # Postclose: (Omnibus) failUnless written data is in final.
4457+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4458+        contents = sharesinfinal[0].read_share_data(0,73)
4459+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4460 
4461hunk ./src/allmydata/test/test_backends.py 177
4462-        # Now test the allocated_size method.
4463-        # self.failIf(mockexists.called, mockexists.call_args_list)
4464-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4465-        #self.failIf(mockrename.called, mockrename.call_args_list)
4466-        #self.failIf(mockstat.called, mockstat.call_args_list)
4467+        # Cover interior of for share in get_shares loop.
4468+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4469+       
4470+    @mock.patch('time.time')
4471+    @mock.patch('allmydata.util.fileutil.get_available_space')
4472+    def test_out_of_space(self, mockget_available_space, mocktime):
4473+        mocktime.return_value = 0
4474+       
4475+        def call_get_available_space(dir, reserve):
4476+            return 0
4477+
4478+        mockget_available_space.side_effect = call_get_available_space
4479+       
4480+       
4481+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4482 
4483     @mock.patch('os.path.exists')
4484     @mock.patch('os.path.getsize')
4485hunk ./src/allmydata/test/test_backends.py 234
4486         bs = self.ss.remote_get_buckets('teststorage_index')
4487 
4488         self.failUnlessEqual(len(bs), 1)
4489-        b = bs['0']
4490+        b = bs[0]
4491         # These should match by definition; the next two cases cover reads whose behavior is less obvious.
4492         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4493         # If you try to read past the end you get as much data as is there.
4494}
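Both setUp variants in these patches route '__builtin__.open' through a side_effect callable so the server under test never touches the real filesystem. A minimal sketch of that pattern, written here against Python 3's unittest.mock and builtins (the patches themselves use the standalone mock package under Python 2); the file names mirror the state files handled in setUp:

```python
from io import StringIO
from unittest import mock

def call_open(fname, mode='r'):
    # Serve the known state file from memory; reject anything else,
    # mirroring the IOError branches in setUp.
    if fname.endswith('lease_checker.history'):
        return StringIO()
    raise IOError(2, "No such file or directory: '%s'" % fname)

with mock.patch('builtins.open') as mockopen:
    mockopen.side_effect = call_open
    contents = open('teststoredir/lease_checker.history', 'rb').read()
    try:
        open('teststoredir/unexpected.file', 'rb')
        raised = False
    except IOError:
        raised = True
```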
4495[temporary work-in-progress patch to be unrecorded
4496zooko@zooko.com**20110714003008
4497 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4498 tidy up a few tests, work done in pair-programming with Zancas
4499] {
4500hunk ./src/allmydata/storage/backends/das/core.py 65
4501         self._clean_incomplete()
4502 
4503     def _clean_incomplete(self):
4504-        fileutil.rm_dir(self.incomingdir)
4505+        fileutil.rmtree(self.incomingdir)
4506         fileutil.make_dirs(self.incomingdir)
4507 
4508     def _setup_corruption_advisory(self):
4509hunk ./src/allmydata/storage/immutable.py 1
4510-import os, stat, struct, time
4511+import os, time
4512 
4513 from foolscap.api import Referenceable
4514 
4515hunk ./src/allmydata/storage/server.py 1
4516-import os, re, weakref, struct, time
4517+import os, weakref, struct, time
4518 
4519 from foolscap.api import Referenceable
4520 from twisted.application import service
4521hunk ./src/allmydata/storage/server.py 7
4522 
4523 from zope.interface import implements
4524-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4525+from allmydata.interfaces import RIStorageServer, IStatsProducer
4526 from allmydata.util import fileutil, idlib, log, time_format
4527 import allmydata # for __full_version__
4528 
4529hunk ./src/allmydata/storage/server.py 313
4530         self.add_latency("get", time.time() - start)
4531         return bucketreaders
4532 
4533-    def remote_get_incoming(self, storageindex):
4534-        incoming_share_set = self.backend.get_incoming(storageindex)
4535-        return incoming_share_set
4536-
4537     def get_leases(self, storageindex):
4538         """Provide an iterator that yields all of the leases attached to this
4539         bucket. Each lease is returned as a LeaseInfo instance.
4540hunk ./src/allmydata/test/test_backends.py 3
4541 from twisted.trial import unittest
4542 
4543+from twisted.python.filepath import FilePath
4544+
4545 from StringIO import StringIO
4546 
4547 from allmydata.test.common_util import ReallyEqualMixin
4548hunk ./src/allmydata/test/test_backends.py 38
4549 
4550 
4551 testnodeid = 'testnodeidxxxxxxxxxx'
4552-tempdir = 'teststoredir'
4553-basedir = os.path.join(tempdir, 'shares')
4554+storedir = 'teststoredir'
4555+storedirfp = FilePath(storedir)
4556+basedir = os.path.join(storedir, 'shares')
4557 baseincdir = os.path.join(basedir, 'incoming')
4558 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4559 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4560hunk ./src/allmydata/test/test_backends.py 53
4561                      'cutoff_date' : None,
4562                      'sharetypes' : None}
4563 
4564-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4565+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4566+    """ NullBackend is just for testing and executable documentation, so
4567+    this test is actually a test of StorageServer in which we're using
4568+    NullBackend as helper code for the test, rather than a test of
4569+    NullBackend. """
4570     def setUp(self):
4571         self.ss = StorageServer(testnodeid, backend=NullCore())
4572 
4573hunk ./src/allmydata/test/test_backends.py 62
4574     @mock.patch('os.mkdir')
4576     @mock.patch('__builtin__.open')
4577     @mock.patch('os.listdir')
4578     @mock.patch('os.path.isdir')
4579hunk ./src/allmydata/test/test_backends.py 69
4580     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4581         """ Write a new share. """
4582 
4583-        # Now begin the test.
4584         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4585         bs[0].remote_write(0, 'a')
4586         self.failIf(mockisdir.called)
4587hunk ./src/allmydata/test/test_backends.py 83
4588     @mock.patch('os.listdir')
4589     @mock.patch('os.path.isdir')
4590     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4591-        """ This tests whether a server instance can be constructed
4592-        with a filesystem backend. To pass the test, it has to use the
4593-        filesystem in only the prescribed ways. """
4594+        """ This tests whether a server instance can be constructed with a
4595+        filesystem backend. To pass the test, it mustn't use the filesystem
4596+        outside of its configured storedir. """
4597 
4598         def call_open(fname, mode):
4599hunk ./src/allmydata/test/test_backends.py 88
4600-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4601-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4602-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4603-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4604-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4605+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4606+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4607+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4608+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4609+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4610                 return StringIO()
4611             else:
4612hunk ./src/allmydata/test/test_backends.py 95
4613-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4614+                fnamefp = FilePath(fname)
4615+                self.failUnless(storedirfp in fnamefp.parents(),
4616+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4617         mockopen.side_effect = call_open
4618 
4619         def call_isdir(fname):
4620hunk ./src/allmydata/test/test_backends.py 101
4621-            if fname == os.path.join(tempdir,'shares'):
4622+            if fname == os.path.join(storedir, 'shares'):
4623                 return True
4624hunk ./src/allmydata/test/test_backends.py 103
4625-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4626+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4627                 return True
4628             else:
4629                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4630hunk ./src/allmydata/test/test_backends.py 109
4631         mockisdir.side_effect = call_isdir
4632 
4633+        mocklistdir.return_value = []
4634+
4635         def call_mkdir(fname, mode):
4636hunk ./src/allmydata/test/test_backends.py 112
4637-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4638             self.failUnlessEqual(0777, mode)
4639hunk ./src/allmydata/test/test_backends.py 113
4640-            if fname == tempdir:
4641-                return None
4642-            elif fname == os.path.join(tempdir,'shares'):
4643-                return None
4644-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4645-                return None
4646-            else:
4647-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4648+            self.failUnlessIn(fname,
4649+                              [storedir,
4650+                               os.path.join(storedir, 'shares'),
4651+                               os.path.join(storedir, 'shares', 'incoming')],
4652+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4653         mockmkdir.side_effect = call_mkdir
4654 
4655         # Now begin the test.
4656hunk ./src/allmydata/test/test_backends.py 121
4657-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4658+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4659 
4660         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4661 
4662hunk ./src/allmydata/test/test_backends.py 126
4663 
4664-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4665+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4666+    """ This tests both the StorageServer and the FS backend. """
4667     @mock.patch('__builtin__.open')
4668     def setUp(self, mockopen):
4669         def call_open(fname, mode):
4670hunk ./src/allmydata/test/test_backends.py 131
4671-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4672-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4673-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4674-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4675-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4676+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4677+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4678+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4679+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4680+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4681                 return StringIO()
4682             else:
4683                 _assert(False, "The tester code doesn't recognize this case.") 
4684hunk ./src/allmydata/test/test_backends.py 141
4685 
4686         mockopen.side_effect = call_open
4687-        self.backend = DASCore(tempdir, expiration_policy)
4688+        self.backend = DASCore(storedir, expiration_policy)
4689         self.ss = StorageServer(testnodeid, self.backend)
4690hunk ./src/allmydata/test/test_backends.py 143
4691-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4692+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4693         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4694 
4695     @mock.patch('time.time')
4696hunk ./src/allmydata/test/test_backends.py 147
4697-    def test_write_share(self, mocktime):
4698-        """ Write a new share. """
4699-        # Now begin the test.
4700+    def test_write_and_read_share(self, mocktime):
4701+        """
4702+        Write a new share, read it, and test the server's (and FS backend's)
4703+        handling of simultaneous and successive attempts to write the same
4704+        share.
4705+        """
4706 
4707         mocktime.return_value = 0
4708         # Inspect incoming and fail unless it's empty.
4709hunk ./src/allmydata/test/test_backends.py 159
4710         incomingset = self.ss.backend.get_incoming('teststorage_index')
4711         self.failUnlessReallyEqual(incomingset, set())
4712         
4713-        # Among other things, populate incoming with the sharenum: 0.
4714+        # Populate incoming with the sharenum: 0.
4715         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4716 
4717         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4718hunk ./src/allmydata/test/test_backends.py 163
4719-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4720+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4721         
4722hunk ./src/allmydata/test/test_backends.py 165
4723-        # Attempt to create a second share writer with the same share.
4724+        # Attempt to create a second share writer with the same sharenum.
4725         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4726 
4727         # Show that no sharewriter results from a remote_allocate_buckets
4728hunk ./src/allmydata/test/test_backends.py 169
4729-        # with the same si, until BucketWriter.remote_close() has been called.
4730+        # with the same si and sharenum, until BucketWriter.remote_close()
4731+        # has been called.
4732         self.failIf(bsa)
4733 
4734         # Test allocated size.
4735hunk ./src/allmydata/test/test_backends.py 187
4736         # Postclose: (Omnibus) failUnless written data is in final.
4737         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4738         contents = sharesinfinal[0].read_share_data(0,73)
4739-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4740+        self.failUnlessReallyEqual(contents, client_data)
4741 
4742hunk ./src/allmydata/test/test_backends.py 189
4743-        # Cover interior of for share in get_shares loop.
4744-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4745+        # Exercise the case that the share we're asking to allocate is
4746+        # already (completely) uploaded.
4747+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4748         
4749     @mock.patch('time.time')
4750     @mock.patch('allmydata.util.fileutil.get_available_space')
4751hunk ./src/allmydata/test/test_backends.py 210
4752     @mock.patch('os.path.getsize')
4753     @mock.patch('__builtin__.open')
4754     @mock.patch('os.listdir')
4755-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4756+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4757         """ This tests whether the code correctly finds and reads
4758         shares written out by old (Tahoe-LAFS <= v1.8.2)
4759         servers. There is a similar test in test_download, but that one
4760hunk ./src/allmydata/test/test_backends.py 219
4761         StorageServer object. """
4762 
4763         def call_listdir(dirname):
4764-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4765+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4766             return ['0']
4767 
4768         mocklistdir.side_effect = call_listdir
4769hunk ./src/allmydata/test/test_backends.py 226
4770 
4771         def call_open(fname, mode):
4772             self.failUnlessReallyEqual(fname, sharefname)
4773-            self.failUnless('r' in mode, mode)
4774+            self.failUnlessEqual(mode[0], 'r', mode)
4775             self.failUnless('b' in mode, mode)
4776 
4777             return StringIO(share_data)
4778hunk ./src/allmydata/test/test_backends.py 268
4779         filesystem in only the prescribed ways. """
4780 
4781         def call_open(fname, mode):
4782-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4783-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4784-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4785-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4786-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4787+            if fname == os.path.join(storedir,'bucket_counter.state'):
4788+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4789+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4790+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4791+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4792                 return StringIO()
4793             else:
4794                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4795hunk ./src/allmydata/test/test_backends.py 279
4796         mockopen.side_effect = call_open
4797 
4798         def call_isdir(fname):
4799-            if fname == os.path.join(tempdir,'shares'):
4800+            if fname == os.path.join(storedir,'shares'):
4801                 return True
4802hunk ./src/allmydata/test/test_backends.py 281
4803-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4804+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4805                 return True
4806             else:
4807                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4808hunk ./src/allmydata/test/test_backends.py 290
4809         def call_mkdir(fname, mode):
4810             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4811             self.failUnlessEqual(0777, mode)
4812-            if fname == tempdir:
4813+            if fname == storedir:
4814                 return None
4815hunk ./src/allmydata/test/test_backends.py 292
4816-            elif fname == os.path.join(tempdir,'shares'):
4817+            elif fname == os.path.join(storedir,'shares'):
4818                 return None
4819hunk ./src/allmydata/test/test_backends.py 294
4820-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4821+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4822                 return None
4823             else:
4824                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4825hunk ./src/allmydata/util/fileutil.py 5
4826 Futz with files like a pro.
4827 """
4828 
4829-import sys, exceptions, os, stat, tempfile, time, binascii
4830+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4831 
4832 from twisted.python import log
4833 
4834hunk ./src/allmydata/util/fileutil.py 186
4835             raise tx
4836         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4837 
4838-def rm_dir(dirname):
4839+def rmtree(dirname):
4840     """
4841     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4842     already gone, do nothing and return without raising an exception.  If this
4843hunk ./src/allmydata/util/fileutil.py 205
4844             else:
4845                 remove(fullname)
4846         os.rmdir(dirname)
4847-    except Exception, le:
4848-        # Ignore "No such file or directory"
4849-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4850+    except EnvironmentError, le:
4851+        # Ignore "No such file or directory", collect any other exception.
4852+        if le.args[0] != errno.ENOENT:
4853             excs.append(le)
4854hunk ./src/allmydata/util/fileutil.py 209
4855+    except Exception, le:
4856+        excs.append(le)
4857 
4858     # Okay, now we've recursively removed everything, ignoring any "No
4859     # such file or directory" errors, and collecting any other errors.
4860hunk ./src/allmydata/util/fileutil.py 222
4861             raise OSError, "Failed to remove dir for unknown reason."
4862         raise OSError, excs
4863 
4864+def rm_dir(dirname):
4865+    # Renamed to be like shutil.rmtree and unlike rmdir.
4866+    return rmtree(dirname)
4867 
4868 def remove_if_possible(f):
4869     try:
4870}
4871[work in progress intended to be unrecorded and never committed to trunk
4872zooko@zooko.com**20110714212139
4873 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4874 switch from os.path.join to filepath
4875 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4876 
4877] {
4878hunk ./src/allmydata/test/test_backends.py 3
4879 from twisted.trial import unittest
4880 
4881-from twisted.path.filepath import FilePath
4882+from twisted.python.filepath import FilePath
4883 
4884 from StringIO import StringIO
4885 
4886hunk ./src/allmydata/test/test_backends.py 10
4887 from allmydata.test.common_util import ReallyEqualMixin
4888 from allmydata.util.assertutil import _assert
4889 
4890-import mock, os
4891+import mock
4892 
4893 # This is the code that we're going to be testing.
4894 from allmydata.storage.server import StorageServer
4895hunk ./src/allmydata/test/test_backends.py 25
4896 shareversionnumber = '\x00\x00\x00\x01'
4897 sharedatalength = '\x00\x00\x00\x01'
4898 numberofleases = '\x00\x00\x00\x01'
4899+
4900 shareinputdata = 'a'
4901 ownernumber = '\x00\x00\x00\x00'
4902 renewsecret  = 'x'*32
4903hunk ./src/allmydata/test/test_backends.py 39
4904 
4905 
4906 testnodeid = 'testnodeidxxxxxxxxxx'
4907-storedir = 'teststoredir'
4908-storedirfp = FilePath(storedir)
4909-basedir = os.path.join(storedir, 'shares')
4910-baseincdir = os.path.join(basedir, 'incoming')
4911-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4912-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4913-shareincomingname = os.path.join(sharedirincomingname, '0')
4914-sharefname = os.path.join(sharedirfinalname, '0')
4915+
4916+class TestFilesMixin(unittest.TestCase):
4917+    def setUp(self):
4918+        self.storedir = FilePath('teststoredir')
4919+        self.basedir = self.storedir.child('shares')
4920+        self.baseincdir = self.basedir.child('incoming')
4921+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4922+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4923+        self.shareincomingname = self.sharedirincomingname.child('0')
4924+        self.sharefname = self.sharedirfinalname.child('0')
4925+
4926+    def call_open(self, fname, mode):
4927+        fnamefp = FilePath(fname)
4928+        if fnamefp == self.storedir.child('bucket_counter.state'):
4929+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4930+        elif fnamefp == self.storedir.child('lease_checker.state'):
4931+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4932+        elif fnamefp == self.storedir.child('lease_checker.history'):
4933+            return StringIO()
4934+        else:
4935+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4936+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4937+
4938+    def call_isdir(self, fname):
4939+        fnamefp = FilePath(fname)
4940+        if fnamefp == self.storedir.child('shares'):
4941+            return True
4942+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4943+            return True
4944+        else:
4945+            self.failUnless(self.storedir in fnamefp.parents(),
4946+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4947+
4948+    def call_mkdir(self, fname, mode):
4949+        self.failUnlessEqual(0777, mode)
4950+        fnamefp = FilePath(fname)
4951+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4952+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4953+
4954+
4955+    @mock.patch('os.mkdir')
4956+    @mock.patch('__builtin__.open')
4957+    @mock.patch('os.listdir')
4958+    @mock.patch('os.path.isdir')
4959+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4960+        mocklistdir.return_value = []
4961+        mockmkdir.side_effect = self.call_mkdir
4962+        mockisdir.side_effect = self.call_isdir
4963+        mockopen.side_effect = self.call_open
4964+        mocklistdir.return_value = []
4965+       
4966+        test_func()
4967+       
4968+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4969 
4970 expiration_policy = {'enabled' : False,
4971                      'mode' : 'age',
4972hunk ./src/allmydata/test/test_backends.py 123
4973         self.failIf(mockopen.called)
4974         self.failIf(mockmkdir.called)
4975 
4976-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4977-    @mock.patch('time.time')
4978-    @mock.patch('os.mkdir')
4979-    @mock.patch('__builtin__.open')
4980-    @mock.patch('os.listdir')
4981-    @mock.patch('os.path.isdir')
4982-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4983+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
4984+    def test_create_server_fs_backend(self):
4985         """ This tests whether a server instance can be constructed with a
4986         filesystem backend. To pass the test, it mustn't use the filesystem
4987         outside of its configured storedir. """
4988hunk ./src/allmydata/test/test_backends.py 129
4989 
4990-        def call_open(fname, mode):
4991-            if fname == os.path.join(storedir, 'bucket_counter.state'):
4992-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4993-            elif fname == os.path.join(storedir, 'lease_checker.state'):
4994-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4995-            elif fname == os.path.join(storedir, 'lease_checker.history'):
4996-                return StringIO()
4997-            else:
4998-                fnamefp = FilePath(fname)
4999-                self.failUnless(storedirfp in fnamefp.parents(),
5000-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5001-        mockopen.side_effect = call_open
5002+        def _f():
5003+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5004 
5005hunk ./src/allmydata/test/test_backends.py 132
5006-        def call_isdir(fname):
5007-            if fname == os.path.join(storedir, 'shares'):
5008-                return True
5009-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5010-                return True
5011-            else:
5012-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5013-        mockisdir.side_effect = call_isdir
5014-
5015-        mocklistdir.return_value = []
5016-
5017-        def call_mkdir(fname, mode):
5018-            self.failUnlessEqual(0777, mode)
5019-            self.failUnlessIn(fname,
5020-                              [storedir,
5021-                               os.path.join(storedir, 'shares'),
5022-                               os.path.join(storedir, 'shares', 'incoming')],
5023-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5024-        mockmkdir.side_effect = call_mkdir
5025-
5026-        # Now begin the test.
5027-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5028-
5029-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5030+        self._help_test_stay_in_your_subtree(_f)
5031 
5032 
5033 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5034}
5035[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5036zooko@zooko.com**20110715191500
5037 Ignore-this: af33336789041800761e80510ea2f583
 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter has still a lot of work to go.
5039] {
5040hunk ./src/allmydata/storage/backends/das/core.py 59
5041                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5042                         umid="0wZ27w", level=log.UNUSUAL)
5043 
5044-        self.sharedir = os.path.join(self.storedir, "shares")
5045-        fileutil.make_dirs(self.sharedir)
5046-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5047+        self.sharedir = self.storedir.child("shares")
5048+        fileutil.fp_make_dirs(self.sharedir)
5049+        self.incomingdir = self.sharedir.child('incoming')
5050         self._clean_incomplete()
5051 
5052     def _clean_incomplete(self):
5053hunk ./src/allmydata/storage/backends/das/core.py 65
5054-        fileutil.rmtree(self.incomingdir)
5055-        fileutil.make_dirs(self.incomingdir)
5056+        fileutil.fp_remove(self.incomingdir)
5057+        fileutil.fp_make_dirs(self.incomingdir)
5058 
5059     def _setup_corruption_advisory(self):
5060         # we don't actually create the corruption-advisory dir until necessary
5061hunk ./src/allmydata/storage/backends/das/core.py 70
5062-        self.corruption_advisory_dir = os.path.join(self.storedir,
5063-                                                    "corruption-advisories")
5064+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5065 
5066     def _setup_bucket_counter(self):
5067hunk ./src/allmydata/storage/backends/das/core.py 73
5068-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5069+        statefname = self.storedir.child("bucket_counter.state")
5070         self.bucket_counter = FSBucketCountingCrawler(statefname)
5071         self.bucket_counter.setServiceParent(self)
5072 
5073hunk ./src/allmydata/storage/backends/das/core.py 78
5074     def _setup_lease_checkerf(self, expiration_policy):
5075-        statefile = os.path.join(self.storedir, "lease_checker.state")
5076-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5077+        statefile = self.storedir.child("lease_checker.state")
5078+        historyfile = self.storedir.child("lease_checker.history")
5079         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5080         self.lease_checker.setServiceParent(self)
5081 
5082hunk ./src/allmydata/storage/backends/das/core.py 83
5083-    def get_incoming(self, storageindex):
5084+    def get_incoming_shnums(self, storageindex):
5085         """Return the set of incoming shnums."""
5086         try:
5087hunk ./src/allmydata/storage/backends/das/core.py 86
5088-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5089-            incominglist = os.listdir(incomingsharesdir)
5090-            incomingshnums = [int(x) for x in incominglist]
5091-            return set(incomingshnums)
5092-        except OSError:
5093-            # XXX I'd like to make this more specific. If there are no shares at all.
5094-            return set()
5095+           
5096+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5097+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5098+            return frozenset(incomingshnums)
5099+        except UnlistableError:
5100+            # There is no shares directory at all.
5101+            return frozenset()
5102             
5103     def get_shares(self, storageindex):
5104         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5105hunk ./src/allmydata/storage/backends/das/core.py 96
5106-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5107+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5108         try:
5109hunk ./src/allmydata/storage/backends/das/core.py 98
5110-            for f in os.listdir(finalstoragedir):
5111-                if NUM_RE.match(f):
5112-                    filename = os.path.join(finalstoragedir, f)
5113-                    yield ImmutableShare(filename, storageindex, int(f))
5114-        except OSError:
5115-            # Commonly caused by there being no shares at all.
5116+            for f in finalstoragedir.listdir():
5117+                if NUM_RE.match(f.basename()):
5118+                    yield ImmutableShare(f, storageindex, int(f.basename()))
5119+        except UnlistableError:
5120+            # There is no shares directory at all.
5121             pass
5122         
5123     def get_available_space(self):
5124hunk ./src/allmydata/storage/backends/das/core.py 149
5125 # then the value stored in this field will be the actual share data length
5126 # modulo 2**32.
5127 
5128-class ImmutableShare:
5129+class ImmutableShare(object):
5130     LEASE_SIZE = struct.calcsize(">L32s32sL")
5131     sharetype = "immutable"
5132 
5133hunk ./src/allmydata/storage/backends/das/core.py 166
5134         if create:
5135             # touch the file, so later callers will see that we're working on
5136             # it. Also construct the metadata.
5137-            assert not os.path.exists(self.finalhome)
5138-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5139+            assert not self.finalhome.exists()
5140+            fileutil.fp_make_dirs(self.incominghome.parent())
5141             f = open(self.incominghome, 'wb')
5142             # The second field -- the four-byte share data length -- is no
5143             # longer used as of Tahoe v1.3.0, but we continue to write it in
5144hunk ./src/allmydata/storage/backends/das/core.py 316
5145         except IndexError:
5146             self.add_lease(lease_info)
5147 
5148-
5149     def cancel_lease(self, cancel_secret):
5150         """Remove a lease with the given cancel_secret. If the last lease is
5151         cancelled, the file will be removed. Return the number of bytes that
5152hunk ./src/allmydata/storage/common.py 19
5153 def si_a2b(ascii_storageindex):
5154     return base32.a2b(ascii_storageindex)
5155 
5156-def storage_index_to_dir(storageindex):
5157+def storage_index_to_dir(startfp, storageindex):
5158     sia = si_b2a(storageindex)
5159     return os.path.join(sia[:2], sia)
5160hunk ./src/allmydata/storage/server.py 210
5161 
5162         # fill incoming with all shares that are incoming use a set operation
5163         # since there's no need to operate on individual pieces
5164-        incoming = self.backend.get_incoming(storageindex)
5165+        incoming = self.backend.get_incoming_shnums(storageindex)
5166 
5167         for shnum in ((sharenums - alreadygot) - incoming):
5168             if (not limited) or (remaining_space >= max_space_per_bucket):
5169hunk ./src/allmydata/test/test_backends.py 5
5170 
5171 from twisted.python.filepath import FilePath
5172 
5173+from allmydata.util.log import msg
5174+
5175 from StringIO import StringIO
5176 
5177 from allmydata.test.common_util import ReallyEqualMixin
5178hunk ./src/allmydata/test/test_backends.py 42
5179 
5180 testnodeid = 'testnodeidxxxxxxxxxx'
5181 
5182-class TestFilesMixin(unittest.TestCase):
5183-    def setUp(self):
5184-        self.storedir = FilePath('teststoredir')
5185-        self.basedir = self.storedir.child('shares')
5186-        self.baseincdir = self.basedir.child('incoming')
5187-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5188-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5189-        self.shareincomingname = self.sharedirincomingname.child('0')
5190-        self.sharefname = self.sharedirfinalname.child('0')
5191+class MockStat:
5192+    def __init__(self):
5193+        self.st_mode = None
5194 
5195hunk ./src/allmydata/test/test_backends.py 46
5196+class MockFiles(unittest.TestCase):
5197+    """ I simulate a filesystem that the code under test can use. I flag the
5198+    code under test if it reads or writes outside of its prescribed
5199+    subtree. I simulate just the parts of the filesystem that the current
5200+    implementation of DAS backend needs. """
5201     def call_open(self, fname, mode):
5202         fnamefp = FilePath(fname)
5203hunk ./src/allmydata/test/test_backends.py 53
5204+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5205+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5206+
5207         if fnamefp == self.storedir.child('bucket_counter.state'):
5208             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5209         elif fnamefp == self.storedir.child('lease_checker.state'):
5210hunk ./src/allmydata/test/test_backends.py 61
5211             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5212         elif fnamefp == self.storedir.child('lease_checker.history'):
5213+            # This is separated out from the else clause below just because
5214+            # we know this particular file is going to be used by the
5215+            # current implementation of DAS backend, and we might want to
5216+            # use this information in this test in the future...
5217             return StringIO()
5218         else:
5219hunk ./src/allmydata/test/test_backends.py 67
5220-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5221-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5222+            # Anything else you open inside your subtree appears to be an
5223+            # empty file.
5224+            return StringIO()
5225 
5226     def call_isdir(self, fname):
5227         fnamefp = FilePath(fname)
5228hunk ./src/allmydata/test/test_backends.py 73
5229-        if fnamefp == self.storedir.child('shares'):
4230+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4231+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4232+
5234+
5235+        # The first two cases are separate from the else clause below just
5236+        # because we know that the current implementation of the DAS backend
5237+        # inspects these two directories and we might want to make use of
5238+        # that information in the tests in the future...
4239+        if fnamefp == self.storedir.child('shares'):
5240             return True
5241hunk ./src/allmydata/test/test_backends.py 84
5242-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5243+        elif self == self.storedir.child('shares').child('incoming'):
5244             return True
5245         else:
5246hunk ./src/allmydata/test/test_backends.py 87
5247-            self.failUnless(self.storedir in fnamefp.parents(),
5248-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5249+            # Anything else you open inside your subtree appears to be a
5250+            # directory.
5251+            return True
5252 
5253     def call_mkdir(self, fname, mode):
5254hunk ./src/allmydata/test/test_backends.py 92
5255-        self.failUnlessEqual(0777, mode)
5256         fnamefp = FilePath(fname)
5257         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5258                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5259hunk ./src/allmydata/test/test_backends.py 95
5260+        self.failUnlessEqual(0777, mode)
5261 
5262hunk ./src/allmydata/test/test_backends.py 97
5263+    def call_listdir(self, fname):
5264+        fnamefp = FilePath(fname)
5265+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5266+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5267 
5268hunk ./src/allmydata/test/test_backends.py 102
5269-    @mock.patch('os.mkdir')
5270-    @mock.patch('__builtin__.open')
5271-    @mock.patch('os.listdir')
5272-    @mock.patch('os.path.isdir')
5273-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5274-        mocklistdir.return_value = []
5275+    def call_stat(self, fname):
5276+        fnamefp = FilePath(fname)
5277+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5278+                        "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5279+
5280+        msg("%s.call_stat(%s)" % (self, fname,))
5281+        mstat = MockStat()
5282+        mstat.st_mode = 16893 # a directory
5283+        return mstat
5284+
5285+    def setUp(self):
5286+        msg( "%s.setUp()" % (self,))
5287+        self.storedir = FilePath('teststoredir')
5288+        self.basedir = self.storedir.child('shares')
5289+        self.baseincdir = self.basedir.child('incoming')
5290+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5291+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5292+        self.shareincomingname = self.sharedirincomingname.child('0')
5293+        self.sharefname = self.sharedirfinalname.child('0')
5294+
5295+        self.mocklistdirp = mock.patch('os.listdir')
5296+        mocklistdir = self.mocklistdirp.__enter__()
5297+        mocklistdir.side_effect = self.call_listdir
5298+
5299+        self.mockmkdirp = mock.patch('os.mkdir')
5300+        mockmkdir = self.mockmkdirp.__enter__()
5301         mockmkdir.side_effect = self.call_mkdir
5302hunk ./src/allmydata/test/test_backends.py 129
5303+
5304+        self.mockisdirp = mock.patch('os.path.isdir')
5305+        mockisdir = self.mockisdirp.__enter__()
5306         mockisdir.side_effect = self.call_isdir
5307hunk ./src/allmydata/test/test_backends.py 133
5308+
5309+        self.mockopenp = mock.patch('__builtin__.open')
5310+        mockopen = self.mockopenp.__enter__()
5311         mockopen.side_effect = self.call_open
5312hunk ./src/allmydata/test/test_backends.py 137
5313-        mocklistdir.return_value = []
5314-       
5315-        test_func()
5316-       
5317-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5318+
5319+        self.mockstatp = mock.patch('os.stat')
5320+        mockstat = self.mockstatp.__enter__()
5321+        mockstat.side_effect = self.call_stat
5322+
5323+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5324+        mockfpstat = self.mockfpstatp.__enter__()
5325+        mockfpstat.side_effect = self.call_stat
5326+
5327+    def tearDown(self):
5328+        msg( "%s.tearDown()" % (self,))
5329+        self.mockfpstatp.__exit__()
5330+        self.mockstatp.__exit__()
5331+        self.mockopenp.__exit__()
5332+        self.mockisdirp.__exit__()
5333+        self.mockmkdirp.__exit__()
5334+        self.mocklistdirp.__exit__()
5335 
5336 expiration_policy = {'enabled' : False,
5337                      'mode' : 'age',
5338hunk ./src/allmydata/test/test_backends.py 184
5339         self.failIf(mockopen.called)
5340         self.failIf(mockmkdir.called)
5341 
5342-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5343+class TestServerConstruction(MockFiles, ReallyEqualMixin):
5344     def test_create_server_fs_backend(self):
5345         """ This tests whether a server instance can be constructed with a
5346         filesystem backend. To pass the test, it mustn't use the filesystem
5347hunk ./src/allmydata/test/test_backends.py 190
5348         outside of its configured storedir. """
5349 
5350-        def _f():
5351-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5352+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5353 
5354hunk ./src/allmydata/test/test_backends.py 192
5355-        self._help_test_stay_in_your_subtree(_f)
5356-
5357-
5358-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5359-    """ This tests both the StorageServer xyz """
5360-    @mock.patch('__builtin__.open')
5361-    def setUp(self, mockopen):
5362-        def call_open(fname, mode):
5363-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5364-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5365-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5366-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5367-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5368-                return StringIO()
5369-            else:
5370-                _assert(False, "The tester code doesn't recognize this case.") 
5371-
5372-        mockopen.side_effect = call_open
5373-        self.backend = DASCore(storedir, expiration_policy)
5374-        self.ss = StorageServer(testnodeid, self.backend)
5375-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
5376-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5377+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
5378+    """ This tests both the StorageServer and the DAS backend together. """
5379+    def setUp(self):
5380+        MockFiles.setUp(self)
5381+        try:
5382+            self.backend = DASCore(self.storedir, expiration_policy)
5383+            self.ss = StorageServer(testnodeid, self.backend)
5384+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
5385+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5386+        except:
5387+            MockFiles.tearDown(self)
5388+            raise
5389 
5390     @mock.patch('time.time')
5391     def test_write_and_read_share(self, mocktime):
5392hunk ./src/allmydata/util/fileutil.py 8
5393 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5394 
5395 from twisted.python import log
5396+from twisted.python.filepath import UnlistableError
5397 
5398 from pycryptopp.cipher.aes import AES
5399 
5400hunk ./src/allmydata/util/fileutil.py 187
5401             raise tx
5402         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5403 
5404+def fp_make_dirs(dirfp):
5405+    """
5406+    An idempotent version of FilePath.makedirs().  If the dir already
5407+    exists, do nothing and return without raising an exception.  If this
5408+    call creates the dir, return without raising an exception.  If there is
5409+    an error that prevents creation or if the directory gets deleted after
5410+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5411+    exists, raise an exception.
5412+    """
5413+    log.msg( "xxx 0 %s" % (dirfp,))
5414+    tx = None
5415+    try:
5416+        dirfp.makedirs()
5417+    except OSError, x:
5418+        tx = x
5419+
5420+    if not dirfp.isdir():
5421+        if tx:
5422+            raise tx
5423+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5424+
5425 def rmtree(dirname):
5426     """
5427     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5428hunk ./src/allmydata/util/fileutil.py 244
5429             raise OSError, "Failed to remove dir for unknown reason."
5430         raise OSError, excs
5431 
5432+def fp_remove(dirfp):
5433+    try:
5434+        dirfp.remove()
5435+    except UnlistableError, e:
5436+        if e.originalException.errno != errno.ENOENT:
5437+            raise
5438+
5439 def rm_dir(dirname):
5440     # Renamed to be like shutil.rmtree and unlike rmdir.
5441     return rmtree(dirname)
5442}
5443[another temporary patch for sharing work-in-progress
5444zooko@zooko.com**20110720055918
5445 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
5446 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
5447 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
5448 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
5449 
5450] {
5451hunk ./src/allmydata/storage/backends/das/core.py 5
5452 
5453 from allmydata.interfaces import IStorageBackend
5454 from allmydata.storage.backends.base import Backend
5455-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5456+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5457 from allmydata.util.assertutil import precondition
5458 
5459 #from foolscap.api import Referenceable
5460hunk ./src/allmydata/storage/backends/das/core.py 10
5461 from twisted.application import service
5462+from twisted.python.filepath import UnlistableError
5463 
5464 from zope.interface import implements
5465 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
5466hunk ./src/allmydata/storage/backends/das/core.py 17
5467 from allmydata.util import fileutil, idlib, log, time_format
5468 import allmydata # for __full_version__
5469 
5470-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5471-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5472+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5473+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5474 from allmydata.storage.lease import LeaseInfo
5475 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5476      create_mutable_sharefile
5477hunk ./src/allmydata/storage/backends/das/core.py 41
5478 # $SHARENUM matches this regex:
5479 NUM_RE=re.compile("^[0-9]+$")
5480 
5481+def is_num(fp):
5482+    return NUM_RE.match(fp.basename())
5483+
5484 class DASCore(Backend):
5485     implements(IStorageBackend)
5486     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5487hunk ./src/allmydata/storage/backends/das/core.py 58
5488         self.storedir = storedir
5489         self.readonly = readonly
5490         self.reserved_space = int(reserved_space)
5491-        if self.reserved_space:
5492-            if self.get_available_space() is None:
5493-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5494-                        umid="0wZ27w", level=log.UNUSUAL)
5495-
5496         self.sharedir = self.storedir.child("shares")
5497         fileutil.fp_make_dirs(self.sharedir)
5498         self.incomingdir = self.sharedir.child('incoming')
5499hunk ./src/allmydata/storage/backends/das/core.py 62
5500         self._clean_incomplete()
5501+        if self.reserved_space and (self.get_available_space() is None):
5502+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5503+                    umid="0wZ27w", level=log.UNUSUAL)
5504+
5505 
5506     def _clean_incomplete(self):
5507         fileutil.fp_remove(self.incomingdir)
5508hunk ./src/allmydata/storage/backends/das/core.py 87
5509         self.lease_checker.setServiceParent(self)
5510 
5511     def get_incoming_shnums(self, storageindex):
5512-        """Return the set of incoming shnums."""
5513+        """ Return a frozenset of the shnums (as ints) of incoming shares. """
5514+        incomingdir = si_dir(self.incomingdir, storageindex)
5515         try:
5516hunk ./src/allmydata/storage/backends/das/core.py 90
5517-           
5518-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5519-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5520-            return frozenset(incomingshnums)
5521+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
5522+            shnums = [ int(fp.basename()) for fp in childfps ]
5523+            return frozenset(shnums)
5524         except UnlistableError:
5525             # There is no shares directory at all.
5526             return frozenset()
5527hunk ./src/allmydata/storage/backends/das/core.py 98
5528             
5529     def get_shares(self, storageindex):
5530-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5531+        """ Generate ImmutableShare objects for shares we have for this
5532+        storageindex. ("Shares we have" means completed ones, excluding
5533+        incoming ones.)"""
5534         finalstoragedir = si_dir(self.sharedir, storageindex)
5535         try:
5536hunk ./src/allmydata/storage/backends/das/core.py 103
5537-            for f in finalstoragedir.listdir():
5538-                if NUM_RE.match(f.basename):
5539-                    yield ImmutableShare(f, storageindex, int(f))
5540+            for fp in finalstoragedir.children():
5541+                if is_num(fp):
5542+                    yield ImmutableShare(fp, storageindex)
5543         except UnlistableError:
5544             # There is no shares directory at all.
5545             pass
5546hunk ./src/allmydata/storage/backends/das/core.py 116
5547         return fileutil.get_available_space(self.storedir, self.reserved_space)
5548 
5549     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
5550-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
5551-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
5552+        finalhome = si_dir(self.sharedir, storageindex).child(str(shnum))
5553+        incominghome = si_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
5554         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
5555         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
5556         return bw
5557hunk ./src/allmydata/storage/backends/das/expirer.py 50
5558     slow_start = 360 # wait 6 minutes after startup
5559     minimum_cycle_time = 12*60*60 # not more than twice per day
5560 
5561-    def __init__(self, statefile, historyfile, expiration_policy):
5562-        self.historyfile = historyfile
5563+    def __init__(self, statefile, historyfp, expiration_policy):
5564+        self.historyfp = historyfp
5565         self.expiration_enabled = expiration_policy['enabled']
5566         self.mode = expiration_policy['mode']
5567         self.override_lease_duration = None
5568hunk ./src/allmydata/storage/backends/das/expirer.py 80
5569             self.state["cycle-to-date"].setdefault(k, so_far[k])
5570 
5571         # initialize history
5572-        if not os.path.exists(self.historyfile):
5573+        if not self.historyfp.exists():
5574             history = {} # cyclenum -> dict
5575hunk ./src/allmydata/storage/backends/das/expirer.py 82
5576-            f = open(self.historyfile, "wb")
5577-            pickle.dump(history, f)
5578-            f.close()
5579+            self.historyfp.setContent(pickle.dumps(history))
5580 
5581     def create_empty_cycle_dict(self):
5582         recovered = self.create_empty_recovered_dict()
5583hunk ./src/allmydata/storage/backends/das/expirer.py 305
5584         # copy() needs to become a deepcopy
5585         h["space-recovered"] = s["space-recovered"].copy()
5586 
5587-        history = pickle.load(open(self.historyfile, "rb"))
5588+        history = pickle.loads(self.historyfp.getContent())
5589         history[cycle] = h
5590         while len(history) > 10:
5591             oldcycles = sorted(history.keys())
5592hunk ./src/allmydata/storage/backends/das/expirer.py 310
5593             del history[oldcycles[0]]
5594-        f = open(self.historyfile, "wb")
5595-        pickle.dump(history, f)
5596-        f.close()
5597+        self.historyfp.setContent(pickle.dumps(history))
5598 
5599     def get_state(self):
5600         """In addition to the crawler state described in
5601hunk ./src/allmydata/storage/backends/das/expirer.py 379
5602         progress = self.get_progress()
5603 
5604         state = ShareCrawler.get_state(self) # does a shallow copy
5605-        history = pickle.load(open(self.historyfile, "rb"))
5606+        history = pickle.loads(self.historyfp.getContent())
5607         state["history"] = history
5608 
5609         if not progress["cycle-in-progress"]:
5610hunk ./src/allmydata/storage/common.py 19
5611 def si_a2b(ascii_storageindex):
5612     return base32.a2b(ascii_storageindex)
5613 
5614-def storage_index_to_dir(startfp, storageindex):
5615+def si_dir(startfp, storageindex):
5616     sia = si_b2a(storageindex)
5617hunk ./src/allmydata/storage/common.py 21
5618-    return os.path.join(sia[:2], sia)
5619+    return startfp.child(sia[:2]).child(sia)
5620hunk ./src/allmydata/storage/crawler.py 68
5621     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
5622     minimum_cycle_time = 300 # don't run a cycle faster than this
5623 
5624-    def __init__(self, statefname, allowed_cpu_percentage=None):
5625+    def __init__(self, statefp, allowed_cpu_percentage=None):
5626         service.MultiService.__init__(self)
5627         if allowed_cpu_percentage is not None:
5628             self.allowed_cpu_percentage = allowed_cpu_percentage
5629hunk ./src/allmydata/storage/crawler.py 72
5630-        self.statefname = statefname
5631+        self.statefp = statefp
5632         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
5633                          for i in range(2**10)]
5634         self.prefixes.sort()
5635hunk ./src/allmydata/storage/crawler.py 192
5636         #                            of the last bucket to be processed, or
5637         #                            None if we are sleeping between cycles
5638         try:
5639-            f = open(self.statefname, "rb")
5640-            state = pickle.load(f)
5641-            f.close()
5642+            state = pickle.loads(self.statefp.getContent())
5643         except EnvironmentError:
5644             state = {"version": 1,
5645                      "last-cycle-finished": None,
5646hunk ./src/allmydata/storage/crawler.py 228
5647         else:
5648             last_complete_prefix = self.prefixes[lcpi]
5649         self.state["last-complete-prefix"] = last_complete_prefix
5650-        tmpfile = self.statefname + ".tmp"
5651-        f = open(tmpfile, "wb")
5652-        pickle.dump(self.state, f)
5653-        f.close()
5654-        fileutil.move_into_place(tmpfile, self.statefname)
5655+        self.statefp.setContent(pickle.dumps(self.state))
5656 
5657     def startService(self):
5658         # arrange things to look like we were just sleeping, so
5659hunk ./src/allmydata/storage/crawler.py 440
5660 
5661     minimum_cycle_time = 60*60 # we don't need this more than once an hour
5662 
5663-    def __init__(self, statefname, num_sample_prefixes=1):
5664-        FSShareCrawler.__init__(self, statefname)
5665+    def __init__(self, statefp, num_sample_prefixes=1):
5666+        FSShareCrawler.__init__(self, statefp)
5667         self.num_sample_prefixes = num_sample_prefixes
5668 
5669     def add_initial_state(self):
5670hunk ./src/allmydata/storage/server.py 11
5671 from allmydata.util import fileutil, idlib, log, time_format
5672 import allmydata # for __full_version__
5673 
5674-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5675-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5676+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5677+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5678 from allmydata.storage.lease import LeaseInfo
5679 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5680      create_mutable_sharefile
5681hunk ./src/allmydata/storage/server.py 173
5682         # to a particular owner.
5683         start = time.time()
5684         self.count("allocate")
5685-        alreadygot = set()
5686         incoming = set()
5687         bucketwriters = {} # k: shnum, v: BucketWriter
5688 
5689hunk ./src/allmydata/storage/server.py 199
5690             remaining_space -= self.allocated_size()
5691         # self.readonly_storage causes remaining_space <= 0
5692 
5693-        # fill alreadygot with all shares that we have, not just the ones
5694+        # Fill alreadygot with all shares that we have, not just the ones
5695         # they asked about: this will save them a lot of work. Add or update
5696         # leases for all of them: if they want us to hold shares for this
5697hunk ./src/allmydata/storage/server.py 202
5698-        # file, they'll want us to hold leases for this file.
5699+        # file, they'll want us to hold leases for all the shares of it.
5700+        alreadygot = set()
5701         for share in self.backend.get_shares(storageindex):
5702hunk ./src/allmydata/storage/server.py 205
5703-            alreadygot.add(share.shnum)
5704             share.add_or_renew_lease(lease_info)
5705hunk ./src/allmydata/storage/server.py 206
5706+            alreadygot.add(share.shnum)
5707 
5708hunk ./src/allmydata/storage/server.py 208
5709-        # fill incoming with all shares that are incoming use a set operation
5710-        # since there's no need to operate on individual pieces
5711+        # all share numbers that are incoming
5712         incoming = self.backend.get_incoming_shnums(storageindex)
5713 
5714         for shnum in ((sharenums - alreadygot) - incoming):
5715hunk ./src/allmydata/storage/server.py 282
5716             total_space_freed += sf.cancel_lease(cancel_secret)
5717 
5718         if found_buckets:
5719-            storagedir = os.path.join(self.sharedir,
5720-                                      storage_index_to_dir(storageindex))
5721-            if not os.listdir(storagedir):
5722-                os.rmdir(storagedir)
5723+            storagedir = si_dir(self.sharedir, storageindex)
5724+            fp_rmdir_if_empty(storagedir)
5725 
5726         if self.stats_provider:
5727             self.stats_provider.count('storage_server.bytes_freed',
5728hunk ./src/allmydata/test/test_backends.py 52
5729     subtree. I simulate just the parts of the filesystem that the current
5730     implementation of DAS backend needs. """
5731     def call_open(self, fname, mode):
5732+        assert isinstance(fname, basestring), fname
5733         fnamefp = FilePath(fname)
5734         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5735                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5736hunk ./src/allmydata/test/test_backends.py 104
5737                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5738 
5739     def call_stat(self, fname):
5740+        assert isinstance(fname, basestring), fname
5741         fnamefp = FilePath(fname)
5742         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5743                         "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5744hunk ./src/allmydata/test/test_backends.py 217
5745 
5746         mocktime.return_value = 0
5747         # Inspect incoming and fail unless it's empty.
5748-        incomingset = self.ss.backend.get_incoming('teststorage_index')
5749-        self.failUnlessReallyEqual(incomingset, set())
5750+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
5751+        self.failUnlessReallyEqual(incomingset, frozenset())
5752         
5753         # Populate incoming with the sharenum: 0.
5754hunk ./src/allmydata/test/test_backends.py 221
5755-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5756+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5757 
5758         # Inspect incoming and fail unless the sharenum: 0 is listed there.
5759hunk ./src/allmydata/test/test_backends.py 224
5760-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
5761+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
5762         
5763         # Attempt to create a second share writer with the same sharenum.
5764hunk ./src/allmydata/test/test_backends.py 227
5765-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5766+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5767 
5768         # Show that no sharewriter results from a remote_allocate_buckets
5769         # with the same si and sharenum, until BucketWriter.remote_close()
5770hunk ./src/allmydata/test/test_backends.py 280
5771         StorageServer object. """
5772 
5773         def call_listdir(dirname):
5774+            precondition(isinstance(dirname, basestring), dirname)
5775             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
5776             return ['0']
5777 
5778hunk ./src/allmydata/test/test_backends.py 287
5779         mocklistdir.side_effect = call_listdir
5780 
5781         def call_open(fname, mode):
5782+            precondition(isinstance(fname, basestring), fname)
5783             self.failUnlessReallyEqual(fname, sharefname)
5784             self.failUnlessEqual(mode[0], 'r', mode)
5785             self.failUnless('b' in mode, mode)
5786hunk ./src/allmydata/test/test_backends.py 297
5787 
5788         datalen = len(share_data)
5789         def call_getsize(fname):
5790+            precondition(isinstance(fname, basestring), fname)
5791             self.failUnlessReallyEqual(fname, sharefname)
5792             return datalen
5793         mockgetsize.side_effect = call_getsize
5794hunk ./src/allmydata/test/test_backends.py 303
5795 
5796         def call_exists(fname):
5797+            precondition(isinstance(fname, basestring), fname)
5798             self.failUnlessReallyEqual(fname, sharefname)
5799             return True
5800         mockexists.side_effect = call_exists
5801hunk ./src/allmydata/test/test_backends.py 321
5802         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
5803 
5804 
5805-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
5806-    @mock.patch('time.time')
5807-    @mock.patch('os.mkdir')
5808-    @mock.patch('__builtin__.open')
5809-    @mock.patch('os.listdir')
5810-    @mock.patch('os.path.isdir')
5811-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5812+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
5813+    def test_create_fs_backend(self):
5814         """ This tests whether a file system backend instance can be
5815         constructed. To pass the test, it has to use the
5816         filesystem in only the prescribed ways. """
5817hunk ./src/allmydata/test/test_backends.py 327
5818 
5819-        def call_open(fname, mode):
5820-            if fname == os.path.join(storedir,'bucket_counter.state'):
5821-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5822-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5823-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5824-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5825-                return StringIO()
5826-            else:
5827-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5828-        mockopen.side_effect = call_open
5829-
5830-        def call_isdir(fname):
5831-            if fname == os.path.join(storedir,'shares'):
5832-                return True
5833-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5834-                return True
5835-            else:
5836-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5837-        mockisdir.side_effect = call_isdir
5838-
5839-        def call_mkdir(fname, mode):
5840-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
5841-            self.failUnlessEqual(0777, mode)
5842-            if fname == storedir:
5843-                return None
5844-            elif fname == os.path.join(storedir,'shares'):
5845-                return None
5846-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5847-                return None
5848-            else:
5849-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
5850-        mockmkdir.side_effect = call_mkdir
5851-
5852         # Now begin the test.
5853hunk ./src/allmydata/test/test_backends.py 328
5854-        DASCore('teststoredir', expiration_policy)
5855-
5856-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5857-
5858+        DASCore(self.storedir, expiration_policy)
5859hunk ./src/allmydata/util/fileutil.py 7
5860 
5861 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5862 
5863+from allmydata.util.assertutil import precondition
5864+
5865 from twisted.python import log
5866hunk ./src/allmydata/util/fileutil.py 10
5867-from twisted.python.filepath import UnlistableError
5868+from twisted.python.filepath import FilePath, UnlistableError
5869 
5870 from pycryptopp.cipher.aes import AES
5871 
5872hunk ./src/allmydata/util/fileutil.py 210
5873             raise tx
5874         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5875 
5876+def fp_rmdir_if_empty(dirfp):
5877+    """ Remove the directory if it is empty. """
5878+    try:
5879+        os.rmdir(dirfp.path)
5880+    except OSError, e:
5881+        if e.errno != errno.ENOTEMPTY:
5882+            raise
5883+    else:
5884+        dirfp.changed()
5885+
5886 def rmtree(dirname):
5887     """
5888     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5889hunk ./src/allmydata/util/fileutil.py 257
5890         raise OSError, excs
5891 
5892 def fp_remove(dirfp):
5893+    """
5894+    An idempotent version of shutil.rmtree().  If the dir is already gone,
5895+    do nothing and return without raising an exception.  If this call
5896+    removes the dir, return without raising an exception.  If there is an
5897+    error that prevents removal or if the directory gets created again by
5898+    someone else after this deletes it and before this checks that it is
5899+    gone, raise an exception.
5900+    """
5901     try:
5902         dirfp.remove()
5903     except UnlistableError, e:
5904hunk ./src/allmydata/util/fileutil.py 270
5905         if e.originalException.errno != errno.ENOENT:
5906             raise
5907+    except OSError, e:
5908+        if e.errno != errno.ENOENT:
5909+            raise
5910 
5911 def rm_dir(dirname):
5912     # Renamed to be like shutil.rmtree and unlike rmdir.
5913hunk ./src/allmydata/util/fileutil.py 387
5914         import traceback
5915         traceback.print_exc()
5916 
5917-def get_disk_stats(whichdir, reserved_space=0):
5918+def get_disk_stats(whichdirfp, reserved_space=0):
5919     """Return disk statistics for the storage disk, in the form of a dict
5920     with the following fields.
5921       total:            total bytes on disk
5922hunk ./src/allmydata/util/fileutil.py 408
5923     you can pass how many bytes you would like to leave unused on this
5924     filesystem as reserved_space.
5925     """
5926+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5927 
5928     if have_GetDiskFreeSpaceExW:
5929         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5930hunk ./src/allmydata/util/fileutil.py 419
5931         n_free_for_nonroot = c_ulonglong(0)
5932         n_total            = c_ulonglong(0)
5933         n_free_for_root    = c_ulonglong(0)
5934-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5935+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5936                                                byref(n_total),
5937                                                byref(n_free_for_root))
5938         if retval == 0:
5939hunk ./src/allmydata/util/fileutil.py 424
5940             raise OSError("Windows error %d attempting to get disk statistics for %r"
5941-                          % (GetLastError(), whichdir))
5942+                          % (GetLastError(), whichdirfp.path))
5943         free_for_nonroot = n_free_for_nonroot.value
5944         total            = n_total.value
5945         free_for_root    = n_free_for_root.value
5946hunk ./src/allmydata/util/fileutil.py 433
5947         # <http://docs.python.org/library/os.html#os.statvfs>
5948         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5949         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5950-        s = os.statvfs(whichdir)
5951+        s = os.statvfs(whichdirfp.path)
5952 
5953         # on my mac laptop:
5954         #  statvfs(2) is a wrapper around statfs(2).
5955hunk ./src/allmydata/util/fileutil.py 460
5956              'avail': avail,
5957            }
5958 
5959-def get_available_space(whichdir, reserved_space):
5960+def get_available_space(whichdirfp, reserved_space):
5961     """Returns available space for share storage in bytes, or None if no
5962     API to get this information is available.
5963 
5964hunk ./src/allmydata/util/fileutil.py 472
5965     you can pass how many bytes you would like to leave unused on this
5966     filesystem as reserved_space.
5967     """
5968+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5969     try:
5970hunk ./src/allmydata/util/fileutil.py 474
5971-        return get_disk_stats(whichdir, reserved_space)['avail']
5972+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5973     except AttributeError:
5974         return None
5975hunk ./src/allmydata/util/fileutil.py 477
5976-    except EnvironmentError:
5977-        log.msg("OS call to get disk statistics failed")
5978-        return 0
5979}
5980[jacp16 or so
5981wilcoxjg@gmail.com**20110722070036
5982 Ignore-this: 7548785cad146056eede9a16b93b569f
5983] {
5984merger 0.0 (
5985hunk ./src/allmydata/_auto_deps.py 21
5986-    "Twisted >= 2.4.0",
5987+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
5988+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
5989+    # support asynchronous close.
5990+    "Twisted >= 10.1.0",
5991hunk ./src/allmydata/_auto_deps.py 21
5992-    "Twisted >= 2.4.0",
5993+    "Twisted >= 11.0",
5994)
5995hunk ./src/allmydata/storage/backends/das/core.py 2
5996 import os, re, weakref, struct, time, stat
5997+from twisted.application import service
5998+from twisted.python.filepath import UnlistableError
5999+from twisted.python.filepath import FilePath
6000+from zope.interface import implements
6001 
6002hunk ./src/allmydata/storage/backends/das/core.py 7
6003+import allmydata # for __full_version__
6004 from allmydata.interfaces import IStorageBackend
6005 from allmydata.storage.backends.base import Backend
6006hunk ./src/allmydata/storage/backends/das/core.py 10
6007-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6008+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6009 from allmydata.util.assertutil import precondition
6010hunk ./src/allmydata/storage/backends/das/core.py 12
6011-
6012-#from foolscap.api import Referenceable
6013-from twisted.application import service
6014-from twisted.python.filepath import UnlistableError
6015-
6016-from zope.interface import implements
6017 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6018 from allmydata.util import fileutil, idlib, log, time_format
6019hunk ./src/allmydata/storage/backends/das/core.py 14
6020-import allmydata # for __full_version__
6021-
6022-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6023-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6024 from allmydata.storage.lease import LeaseInfo
6025 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6026      create_mutable_sharefile
6027hunk ./src/allmydata/storage/backends/das/core.py 21
6028 from allmydata.storage.crawler import FSBucketCountingCrawler
6029 from allmydata.util.hashutil import constant_time_compare
6030 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6031-
6032-from zope.interface import implements
6033+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6034 
6035 # storage/
6036 # storage/shares/incoming
6037hunk ./src/allmydata/storage/backends/das/core.py 49
6038         self._setup_lease_checkerf(expiration_policy)
6039 
6040     def _setup_storage(self, storedir, readonly, reserved_space):
6041+        precondition(isinstance(storedir, FilePath), storedir)
6042         self.storedir = storedir
6043         self.readonly = readonly
6044         self.reserved_space = int(reserved_space)
6045hunk ./src/allmydata/storage/backends/das/core.py 83
6046 
6047     def get_incoming_shnums(self, storageindex):
6048         """ Return a frozenset of the shnum (as ints) of incoming shares. """
6049-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
6050+        incomingdir = si_si2dir(self.incomingdir, storageindex)
6051         try:
6052             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
6053             shnums = [ int(fp.basename) for fp in childfps ]
6054hunk ./src/allmydata/storage/backends/das/core.py 96
6055         """ Generate ImmutableShare objects for shares we have for this
6056         storageindex. ("Shares we have" means completed ones, excluding
6057         incoming ones.)"""
6058-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
6059+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
6060         try:
6061             for fp in finalstoragedir.children():
6062                 if is_num(fp):
6063hunk ./src/allmydata/storage/backends/das/core.py 111
6064         return fileutil.get_available_space(self.storedir, self.reserved_space)
6065 
6066     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6067-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
6068-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6069+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
6070+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6071         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
6072         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
6073         return bw
6074hunk ./src/allmydata/storage/backends/null/core.py 18
6075         return None
6076 
6077     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6078-       
6079-        immutableshare = ImmutableShare()
6080+        immutableshare = ImmutableShare()
6081         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
6082 
6083     def set_storage_server(self, ss):
6084hunk ./src/allmydata/storage/backends/null/core.py 24
6085         self.ss = ss
6086 
6087-    def get_incoming(self, storageindex):
6088-        return set()
6089+    def get_incoming_shnums(self, storageindex):
6090+        return frozenset()
6091 
6092 class ImmutableShare:
6093     sharetype = "immutable"
6094hunk ./src/allmydata/storage/common.py 19
6095 def si_a2b(ascii_storageindex):
6096     return base32.a2b(ascii_storageindex)
6097 
6098-def si_dir(startfp, storageindex):
6099+def si_si2dir(startfp, storageindex):
6100     sia = si_b2a(storageindex)
6101     return startfp.child(sia[:2]).child(sia)
6102hunk ./src/allmydata/storage/immutable.py 20
6103     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
6104         self.ss = ss
6105         self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
6106-
6107         self._canary = canary
6108         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
6109         self.closed = False
6110hunk ./src/allmydata/storage/lease.py 17
6111 
6112     def get_expiration_time(self):
6113         return self.expiration_time
6114+
6115     def get_grant_renew_time_time(self):
6116         # hack, based upon fixed 31day expiration period
6117         return self.expiration_time - 31*24*60*60
6118hunk ./src/allmydata/storage/lease.py 21
6119+
6120     def get_age(self):
6121         return time.time() - self.get_grant_renew_time_time()
6122 
6123hunk ./src/allmydata/storage/lease.py 32
6124          self.expiration_time) = struct.unpack(">L32s32sL", data)
6125         self.nodeid = None
6126         return self
6127+
6128     def to_immutable_data(self):
6129         return struct.pack(">L32s32sL",
6130                            self.owner_num,
6131hunk ./src/allmydata/storage/lease.py 45
6132                            int(self.expiration_time),
6133                            self.renew_secret, self.cancel_secret,
6134                            self.nodeid)
6135+
6136     def from_mutable_data(self, data):
6137         (self.owner_num,
6138          self.expiration_time,
6139hunk ./src/allmydata/storage/server.py 11
6140 from allmydata.util import fileutil, idlib, log, time_format
6141 import allmydata # for __full_version__
6142 
6143-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6144-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6145+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6146+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6147 from allmydata.storage.lease import LeaseInfo
6148 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6149      create_mutable_sharefile
6150hunk ./src/allmydata/storage/server.py 88
6151             else:
6152                 stats["mean"] = None
6153 
6154-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
6155-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
6156-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
6157+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
6158+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
6159+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
6160                              (0.999, "99_9_percentile", 1000)]
6161 
6162             for percentile, percentilestring, minnumtoobserve in orderstatlist:
6163hunk ./src/allmydata/storage/server.py 231
6164             header = f.read(32)
6165             f.close()
6166             if header[:32] == MutableShareFile.MAGIC:
6167+                # XXX  Can I exploit this code?
6168                 sf = MutableShareFile(filename, self)
6169                 # note: if the share has been migrated, the renew_lease()
6170                 # call will throw an exception, with information to help the
6171hunk ./src/allmydata/storage/server.py 237
6172                 # client update the lease.
6173             elif header[:4] == struct.pack(">L", 1):
6174+                # Check if version number is "1".
6175+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
6176                 sf = ShareFile(filename)
6177             else:
6178                 continue # non-sharefile
6179hunk ./src/allmydata/storage/server.py 285
6180             total_space_freed += sf.cancel_lease(cancel_secret)
6181 
6182         if found_buckets:
6183-            storagedir = si_dir(self.sharedir, storageindex)
6184+            # XXX  Yikes looks like code that shouldn't be in the server!
6185+            storagedir = si_si2dir(self.sharedir, storageindex)
6186             fp_rmdir_if_empty(storagedir)
6187 
6188         if self.stats_provider:
6189hunk ./src/allmydata/storage/server.py 301
6190             self.stats_provider.count('storage_server.bytes_added', consumed_size)
6191         del self._active_writers[bw]
6192 
6193-
6194     def remote_get_buckets(self, storageindex):
6195         start = time.time()
6196         self.count("get")
6197hunk ./src/allmydata/storage/server.py 329
6198         except StopIteration:
6199             return iter([])
6200 
6201+    #  XXX  As far as Zancas' grokking has gotten.
6202     def remote_slot_testv_and_readv_and_writev(self, storageindex,
6203                                                secrets,
6204                                                test_and_write_vectors,
6205hunk ./src/allmydata/storage/server.py 338
6206         self.count("writev")
6207         si_s = si_b2a(storageindex)
6208         log.msg("storage: slot_writev %s" % si_s)
6209-        si_dir = storage_index_to_dir(storageindex)
6210+
6211         (write_enabler, renew_secret, cancel_secret) = secrets
6212         # shares exist if there is a file for them
6213hunk ./src/allmydata/storage/server.py 341
6214-        bucketdir = os.path.join(self.sharedir, si_dir)
6215+        bucketdir = si_si2dir(self.sharedir, storageindex)
6216         shares = {}
6217         if os.path.isdir(bucketdir):
6218             for sharenum_s in os.listdir(bucketdir):
6219hunk ./src/allmydata/storage/server.py 430
6220         si_s = si_b2a(storageindex)
6221         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
6222                      facility="tahoe.storage", level=log.OPERATIONAL)
6223-        si_dir = storage_index_to_dir(storageindex)
6224         # shares exist if there is a file for them
6225hunk ./src/allmydata/storage/server.py 431
6226-        bucketdir = os.path.join(self.sharedir, si_dir)
6227+        bucketdir = si_si2dir(self.sharedir, storageindex)
6228         if not os.path.isdir(bucketdir):
6229             self.add_latency("readv", time.time() - start)
6230             return {}
6231hunk ./src/allmydata/test/test_backends.py 2
6232 from twisted.trial import unittest
6233-
6234 from twisted.python.filepath import FilePath
6235hunk ./src/allmydata/test/test_backends.py 3
6236-
6237 from allmydata.util.log import msg
6238hunk ./src/allmydata/test/test_backends.py 4
6239-
6240 from StringIO import StringIO
6241hunk ./src/allmydata/test/test_backends.py 5
6242-
6243 from allmydata.test.common_util import ReallyEqualMixin
6244 from allmydata.util.assertutil import _assert
6245hunk ./src/allmydata/test/test_backends.py 7
6246-
6247 import mock
6248 
6249 # This is the code that we're going to be testing.
6250hunk ./src/allmydata/test/test_backends.py 11
6251 from allmydata.storage.server import StorageServer
6252-
6253 from allmydata.storage.backends.das.core import DASCore
6254 from allmydata.storage.backends.null.core import NullCore
6255 
6256hunk ./src/allmydata/test/test_backends.py 14
6257-
6258-# The following share file contents was generated with
6259+# The following share file content was generated with
6260 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
6261hunk ./src/allmydata/test/test_backends.py 16
6262-# with share data == 'a'.
6263+# with share data == 'a'. The total size of this input
6264+# is 85 bytes.
6265 shareversionnumber = '\x00\x00\x00\x01'
6266 sharedatalength = '\x00\x00\x00\x01'
6267 numberofleases = '\x00\x00\x00\x01'
6268hunk ./src/allmydata/test/test_backends.py 21
6269-
6270 shareinputdata = 'a'
6271 ownernumber = '\x00\x00\x00\x00'
6272 renewsecret  = 'x'*32
6273hunk ./src/allmydata/test/test_backends.py 31
6274 client_data = shareinputdata + ownernumber + renewsecret + \
6275     cancelsecret + expirationtime + nextlease
6276 share_data = containerdata + client_data
6277-
6278-
6279 testnodeid = 'testnodeidxxxxxxxxxx'
6280 
6281 class MockStat:
6282hunk ./src/allmydata/test/test_backends.py 105
6283         mstat.st_mode = 16893 # a directory
6284         return mstat
6285 
6286+    def call_get_available_space(self, storedir, reservedspace):
6287+        # The test share defined above totals 85 bytes.
6288+        return 85 - reservedspace
6289+
6290+    def call_exists(self):
6291+        # I'm only called in the ImmutableShareFile constructor.
6292+        return False
6293+
6294     def setUp(self):
6295         msg( "%s.setUp()" % (self,))
6296         self.storedir = FilePath('teststoredir')
6297hunk ./src/allmydata/test/test_backends.py 147
6298         mockfpstat = self.mockfpstatp.__enter__()
6299         mockfpstat.side_effect = self.call_stat
6300 
6301+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6302+        mockget_available_space = self.mockget_available_space.__enter__()
6303+        mockget_available_space.side_effect = self.call_get_available_space
6304+
6305+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6306+        mockfpexists = self.mockfpexists.__enter__()
6307+        mockfpexists.side_effect = self.call_exists
6308+
6309     def tearDown(self):
6310         msg( "%s.tearDown()" % (self,))
6311hunk ./src/allmydata/test/test_backends.py 157
6312+        self.mockfpexists.__exit__()
6313+        self.mockget_available_space.__exit__()
6314         self.mockfpstatp.__exit__()
6315         self.mockstatp.__exit__()
6316         self.mockopenp.__exit__()
6317hunk ./src/allmydata/test/test_backends.py 166
6318         self.mockmkdirp.__exit__()
6319         self.mocklistdirp.__exit__()
6320 
6321+
6322 expiration_policy = {'enabled' : False,
6323                      'mode' : 'age',
6324                      'override_lease_duration' : None,
6325hunk ./src/allmydata/test/test_backends.py 182
6326         self.ss = StorageServer(testnodeid, backend=NullCore())
6327 
6328     @mock.patch('os.mkdir')
6329-
6330     @mock.patch('__builtin__.open')
6331     @mock.patch('os.listdir')
6332     @mock.patch('os.path.isdir')
6333hunk ./src/allmydata/test/test_backends.py 201
6334         filesystem backend. To pass the test, it mustn't use the filesystem
6335         outside of its configured storedir. """
6336 
6337-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
6338+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
6339 
6340 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
6341     """ This tests both the StorageServer and the DAS backend together. """
6342hunk ./src/allmydata/test/test_backends.py 205
6343+   
6344     def setUp(self):
6345         MockFiles.setUp(self)
6346         try:
6347hunk ./src/allmydata/test/test_backends.py 211
6348             self.backend = DASCore(self.storedir, expiration_policy)
6349             self.ss = StorageServer(testnodeid, self.backend)
6350-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6351-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
6352+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6353+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
6354         except:
6355             MockFiles.tearDown(self)
6356             raise
6357hunk ./src/allmydata/test/test_backends.py 233
6358         # Populate incoming with the sharenum: 0.
6359         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6360 
6361-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
6362-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6363+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
6364+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6365         
6366         # Attempt to create a second share writer with the same sharenum.
6367         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6368hunk ./src/allmydata/test/test_backends.py 257
6369 
6370         # Postclose: (Omnibus) failUnless written data is in final.
6371         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
6372-        contents = sharesinfinal[0].read_share_data(0,73)
6373+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
6374+        contents = sharesinfinal[0].read_share_data(0, 73)
6375         self.failUnlessReallyEqual(contents, client_data)
6376 
6377         # Exercise the case that the share we're asking to allocate is
6378hunk ./src/allmydata/test/test_backends.py 276
6379         mockget_available_space.side_effect = call_get_available_space
6380         
6381         
6382-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6383+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6384 
6385     @mock.patch('os.path.exists')
6386     @mock.patch('os.path.getsize')
6387}
6388[jacp17
6389wilcoxjg@gmail.com**20110722203244
6390 Ignore-this: e79a5924fb2eb786ee4e9737a8228f87
6391] {
6392hunk ./src/allmydata/storage/backends/das/core.py 14
6393 from allmydata.util.assertutil import precondition
6394 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6395 from allmydata.util import fileutil, idlib, log, time_format
6396+from allmydata.util.fileutil import fp_make_dirs
6397 from allmydata.storage.lease import LeaseInfo
6398 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6399      create_mutable_sharefile
6400hunk ./src/allmydata/storage/backends/das/core.py 19
6401 from allmydata.storage.immutable import BucketWriter, BucketReader
6402-from allmydata.storage.crawler import FSBucketCountingCrawler
6403+from allmydata.storage.crawler import BucketCountingCrawler
6404 from allmydata.util.hashutil import constant_time_compare
6405hunk ./src/allmydata/storage/backends/das/core.py 21
6406-from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6407+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
6408 _pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6409 
6410 # storage/
6411hunk ./src/allmydata/storage/backends/das/core.py 43
6412     implements(IStorageBackend)
6413     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
6414         Backend.__init__(self)
6415-
6416         self._setup_storage(storedir, readonly, reserved_space)
6417         self._setup_corruption_advisory()
6418         self._setup_bucket_counter()
6419hunk ./src/allmydata/storage/backends/das/core.py 72
6420 
6421     def _setup_bucket_counter(self):
6422         statefname = self.storedir.child("bucket_counter.state")
6423-        self.bucket_counter = FSBucketCountingCrawler(statefname)
6424+        self.bucket_counter = BucketCountingCrawler(statefname)
6425         self.bucket_counter.setServiceParent(self)
6426 
6427     def _setup_lease_checkerf(self, expiration_policy):
6428hunk ./src/allmydata/storage/backends/das/core.py 78
6429         statefile = self.storedir.child("lease_checker.state")
6430         historyfile = self.storedir.child("lease_checker.history")
6431-        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6432+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6433         self.lease_checker.setServiceParent(self)
6434 
6435     def get_incoming_shnums(self, storageindex):
6436hunk ./src/allmydata/storage/backends/das/core.py 168
6437             # it. Also construct the metadata.
6438             assert not finalhome.exists()
6439             fp_make_dirs(self.incominghome)
6440-            f = open(self.incominghome, 'wb')
6441+            f = self.incominghome.child(str(self.shnum))
6442             # The second field -- the four-byte share data length -- is no
6443             # longer used as of Tahoe v1.3.0, but we continue to write it in
6444             # there in case someone downgrades a storage server from >=
6445hunk ./src/allmydata/storage/backends/das/core.py 178
6446             # the largest length that can fit into the field. That way, even
6447             # if this does happen, the old < v1.3.0 server will still allow
6448             # clients to read the first part of the share.
6449-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6450-            f.close()
6451+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6452+            #f.close()
6453             self._lease_offset = max_size + 0x0c
6454             self._num_leases = 0
6455         else:
6456hunk ./src/allmydata/storage/backends/das/core.py 261
6457         f.write(data)
6458         f.close()
6459 
6460-    def _write_lease_record(self, f, lease_number, lease_info):
6461+    def _write_lease_record(self, lease_number, lease_info):
6462         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6463         f.seek(offset)
6464         assert f.tell() == offset
6465hunk ./src/allmydata/storage/backends/das/core.py 290
6466                 yield LeaseInfo().from_immutable_data(data)
6467 
6468     def add_lease(self, lease_info):
6469-        f = open(self.incominghome, 'rb+')
6470+        f = open(self.incominghome.path, 'rb+')
6471         num_leases = self._read_num_leases(f)
6472         self._write_lease_record(f, num_leases, lease_info)
6473         self._write_num_leases(f, num_leases+1)
6474hunk ./src/allmydata/storage/backends/das/expirer.py 1
6475-import time, os, pickle, struct
6476-from allmydata.storage.crawler import FSShareCrawler
6477+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
6478+from allmydata.storage.crawler import ShareCrawler
6479 from allmydata.storage.common import UnknownMutableContainerVersionError, \
6480      UnknownImmutableContainerVersionError
6481 from twisted.python import log as twlog
6482hunk ./src/allmydata/storage/backends/das/expirer.py 7
6483 
6484-class FSLeaseCheckingCrawler(FSShareCrawler):
6485+class LeaseCheckingCrawler(ShareCrawler):
6486     """I examine the leases on all shares, determining which are still valid
6487     and which have expired. I can remove the expired leases (if so
6488     configured), and the share will be deleted when the last lease is
6489hunk ./src/allmydata/storage/backends/das/expirer.py 66
6490         else:
6491             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
6492         self.sharetypes_to_expire = expiration_policy['sharetypes']
6493-        FSShareCrawler.__init__(self, statefile)
6494+        ShareCrawler.__init__(self, statefile)
6495 
6496     def add_initial_state(self):
6497         # we fill ["cycle-to-date"] here (even though they will be reset in
6498hunk ./src/allmydata/storage/crawler.py 1
6499-
6500 import os, time, struct
6501 import cPickle as pickle
6502 from twisted.internet import reactor
6503hunk ./src/allmydata/storage/crawler.py 11
6504 class TimeSliceExceeded(Exception):
6505     pass
6506 
6507-class FSShareCrawler(service.MultiService):
6508-    """A subcless of ShareCrawler is attached to a StorageServer, and
6509+class ShareCrawler(service.MultiService):
6510+    """A subclass of ShareCrawler is attached to a StorageServer, and
6511     periodically walks all of its shares, processing each one in some
6512     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
6513     since large servers can easily have a terabyte of shares, in several
6514hunk ./src/allmydata/storage/crawler.py 426
6515         pass
6516 
6517 
6518-class FSBucketCountingCrawler(FSShareCrawler):
6519+class BucketCountingCrawler(ShareCrawler):
6520     """I keep track of how many buckets are being managed by this server.
6521     This is equivalent to the number of distributed files and directories for
6522     which I am providing storage. The actual number of files+directories in
6523hunk ./src/allmydata/storage/crawler.py 440
6524     minimum_cycle_time = 60*60 # we don't need this more than once an hour
6525 
6526     def __init__(self, statefp, num_sample_prefixes=1):
6527-        FSShareCrawler.__init__(self, statefp)
6528+        ShareCrawler.__init__(self, statefp)
6529         self.num_sample_prefixes = num_sample_prefixes
6530 
6531     def add_initial_state(self):
6532hunk ./src/allmydata/test/test_backends.py 113
6533         # I'm only called in the ImmutableShareFile constructor.
6534         return False
6535 
6536+    def call_setContent(self, inputstring):
6537+        # XXX Good enough for expirer, not sure about elsewhere...
6538+        return True
6539+
6540     def setUp(self):
6541         msg( "%s.setUp()" % (self,))
6542         self.storedir = FilePath('teststoredir')
6543hunk ./src/allmydata/test/test_backends.py 159
6544         mockfpexists = self.mockfpexists.__enter__()
6545         mockfpexists.side_effect = self.call_exists
6546 
6547+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6548+        mocksetContent = self.mocksetContent.__enter__()
6549+        mocksetContent.side_effect = self.call_setContent
6550+
6551     def tearDown(self):
6552         msg( "%s.tearDown()" % (self,))
6553hunk ./src/allmydata/test/test_backends.py 165
6554+        self.mocksetContent.__exit__()
6555         self.mockfpexists.__exit__()
6556         self.mockget_available_space.__exit__()
6557         self.mockfpstatp.__exit__()
6558}
6559[jacp18
6560wilcoxjg@gmail.com**20110723031915
6561 Ignore-this: 21e7f22ac20e3f8af22ea2e9b755d6a5
6562] {
6563hunk ./src/allmydata/_auto_deps.py 21
6564     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6565     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6566 
6567-    "Twisted >= 2.4.0",
6568+v v v v v v v
6569+    "Twisted >= 11.0",
6570+*************
6571+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6572+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6573+    # support asynchronous close.
6574+    "Twisted >= 10.1.0",
6575+^ ^ ^ ^ ^ ^ ^
6576 
6577     # foolscap < 0.5.1 had a performance bug which spent
6578     # O(N**2) CPU for transferring large mutable files
6579hunk ./src/allmydata/storage/backends/das/core.py 168
6580             # it. Also construct the metadata.
6581             assert not finalhome.exists()
6582             fp_make_dirs(self.incominghome)
6583-            f = self.incominghome.child(str(self.shnum))
6584+            f = self.incominghome
6585             # The second field -- the four-byte share data length -- is no
6586             # longer used as of Tahoe v1.3.0, but we continue to write it in
6587             # there in case someone downgrades a storage server from >=
6588hunk ./src/allmydata/storage/backends/das/core.py 178
6589             # the largest length that can fit into the field. That way, even
6590             # if this does happen, the old < v1.3.0 server will still allow
6591             # clients to read the first part of the share.
6592-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6593-            #f.close()
6594+            print 'f: ',f
6595+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6596             self._lease_offset = max_size + 0x0c
6597             self._num_leases = 0
6598         else:
6599hunk ./src/allmydata/storage/backends/das/core.py 263
6600 
6601     def _write_lease_record(self, lease_number, lease_info):
6602         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6603-        f.seek(offset)
6604-        assert f.tell() == offset
6605-        f.write(lease_info.to_immutable_data())
6606+        fh = f.open()
6607+        try:
6608+            fh.seek(offset)
6609+            assert fh.tell() == offset
6610+            fh.write(lease_info.to_immutable_data())
6611+        finally:
6612+            fh.close()
6613 
6614     def _read_num_leases(self, f):
6615hunk ./src/allmydata/storage/backends/das/core.py 272
6616-        f.seek(0x08)
6617-        (num_leases,) = struct.unpack(">L", f.read(4))
6618+        fh = f.open()
6619+        try:
6620+            fh.seek(0x08)
6621+            ro = fh.read(4)
6622+            print "repr(ro): %s len(ro): %s" % (repr(ro), len(ro))
6623+            (num_leases,) = struct.unpack(">L", ro)
6624+        finally:
6625+            fh.close()
6626         return num_leases
6627 
6628     def _write_num_leases(self, f, num_leases):
6629hunk ./src/allmydata/storage/backends/das/core.py 283
6630-        f.seek(0x08)
6631-        f.write(struct.pack(">L", num_leases))
6632+        fh = f.open()
6633+        try:
6634+            fh.seek(0x08)
6635+            fh.write(struct.pack(">L", num_leases))
6636+        finally:
6637+            fh.close()
6638 
6639     def _truncate_leases(self, f, num_leases):
6640         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
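The pack/unpack calls in the hunks above imply a fixed share-file header: three big-endian 32-bit words (version, capped data length, lease count), with the lease count living at offset 0x08. A hedged sketch of just that layout, exercised on an in-memory file rather than a real share (the helper names here are illustrative, not the actual methods):

```python
# Sketch of the share-file header implied by the struct calls above:
# three big-endian 32-bit words, lease count at offset 0x08.
import io
import struct

def write_header(fh, max_size):
    # The data-length field is capped at 2**32-1 so a downgraded
    # pre-v1.3.0 server can still read the first part of the share.
    fh.write(struct.pack(">LLL", 1, min(2**32 - 1, max_size), 0))

def read_num_leases(fh):
    fh.seek(0x08)
    (num_leases,) = struct.unpack(">L", fh.read(4))
    return num_leases

def write_num_leases(fh, num_leases):
    fh.seek(0x08)
    fh.write(struct.pack(">L", num_leases))

share = io.BytesIO()
write_header(share, 1000)   # lease count starts at zero
write_num_leases(share, 3)
```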
6641hunk ./src/allmydata/storage/backends/das/core.py 304
6642                 yield LeaseInfo().from_immutable_data(data)
6643 
6644     def add_lease(self, lease_info):
6645-        self.incominghome, 'rb+')
6646-        num_leases = self._read_num_leases(f)
6647+        f = self.incominghome
6648+        num_leases = self._read_num_leases(f)
6649         self._write_lease_record(f, num_leases, lease_info)
6650         self._write_num_leases(f, num_leases+1)
6651hunk ./src/allmydata/storage/backends/das/core.py 308
6652-        f.close()
6653-
6654+
6655     def renew_lease(self, renew_secret, new_expire_time):
6656         for i,lease in enumerate(self.get_leases()):
6657             if constant_time_compare(lease.renew_secret, renew_secret):
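The `renew_lease` context lines above match the presented renew secret against each stored lease with `constant_time_compare`. A sketch of that loop, with stdlib `hmac.compare_digest` standing in for allmydata's `constant_time_compare` (an assumption about equivalent behaviour) and a bare tuple standing in for `LeaseInfo`:

```python
# Sketch of the renew_lease loop; hmac.compare_digest stands in for
# allmydata's constant_time_compare, and (secret, expire_time) tuples
# stand in for LeaseInfo objects.
import hmac

class FakeShare:
    def __init__(self, leases):
        self._leases = list(leases)   # [(renew_secret, expire_time), ...]

    def get_leases(self):
        return iter(self._leases)

    def renew_lease(self, renew_secret, new_expire_time):
        for i, (secret, _old) in enumerate(self.get_leases()):
            # constant-time comparison avoids leaking secret prefixes
            if hmac.compare_digest(secret, renew_secret):
                self._leases[i] = (secret, new_expire_time)
                return
        raise IndexError("no matching lease to renew")

share = FakeShare([(b'secret-one', 100), (b'secret-two', 100)])
share.renew_lease(b'secret-two', 500)
```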
6658hunk ./src/allmydata/test/test_backends.py 33
6659 share_data = containerdata + client_data
6660 testnodeid = 'testnodeidxxxxxxxxxx'
6661 
6662+
6663 class MockStat:
6664     def __init__(self):
6665         self.st_mode = None
6666hunk ./src/allmydata/test/test_backends.py 43
6667     code under test if it reads or writes outside of its prescribed
6668     subtree. I simulate just the parts of the filesystem that the current
6669     implementation of DAS backend needs. """
6670+
6671+    def setUp(self):
6672+        msg( "%s.setUp()" % (self,))
6673+        self.storedir = FilePath('teststoredir')
6674+        self.basedir = self.storedir.child('shares')
6675+        self.baseincdir = self.basedir.child('incoming')
6676+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6677+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6678+        self.shareincomingname = self.sharedirincomingname.child('0')
6679+        self.sharefilename = self.sharedirfinalname.child('0')
6680+        self.sharefilecontents = StringIO(share_data)
6681+
6682+        self.mocklistdirp = mock.patch('os.listdir')
6683+        mocklistdir = self.mocklistdirp.__enter__()
6684+        mocklistdir.side_effect = self.call_listdir
6685+
6686+        self.mockmkdirp = mock.patch('os.mkdir')
6687+        mockmkdir = self.mockmkdirp.__enter__()
6688+        mockmkdir.side_effect = self.call_mkdir
6689+
6690+        self.mockisdirp = mock.patch('os.path.isdir')
6691+        mockisdir = self.mockisdirp.__enter__()
6692+        mockisdir.side_effect = self.call_isdir
6693+
6694+        self.mockopenp = mock.patch('__builtin__.open')
6695+        mockopen = self.mockopenp.__enter__()
6696+        mockopen.side_effect = self.call_open
6697+
6698+        self.mockstatp = mock.patch('os.stat')
6699+        mockstat = self.mockstatp.__enter__()
6700+        mockstat.side_effect = self.call_stat
6701+
6702+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6703+        mockfpstat = self.mockfpstatp.__enter__()
6704+        mockfpstat.side_effect = self.call_stat
6705+
6706+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6707+        mockget_available_space = self.mockget_available_space.__enter__()
6708+        mockget_available_space.side_effect = self.call_get_available_space
6709+
6710+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6711+        mockfpexists = self.mockfpexists.__enter__()
6712+        mockfpexists.side_effect = self.call_exists
6713+
6714+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6715+        mocksetContent = self.mocksetContent.__enter__()
6716+        mocksetContent.side_effect = self.call_setContent
6717+
6718     def call_open(self, fname, mode):
6719         assert isinstance(fname, basestring), fname
6720         fnamefp = FilePath(fname)
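The `call_open` hunks above dispatch on the opened path: known paths inside the test subtree get canned contents, anything else inside the subtree looks like an empty file. A hypothetical sketch of that dispatcher idea (`MockOpener` is a stand-in, not the real test mixin), using the final share path that appears in the setUp above:

```python
# Hypothetical sketch of the call_open dispatcher: canned contents for
# known paths, empty files for anything else inside the subtree.
import io

SHAREPATH = 'teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'

class MockOpener:
    def __init__(self, known):
        self.known = known            # maps path string -> bytes

    def call_open(self, fname, mode):
        assert isinstance(fname, str), fname
        if fname in self.known:
            return io.BytesIO(self.known[fname])
        # Anything else opened inside the subtree appears to be empty.
        return io.BytesIO()

opener = MockOpener({SHAREPATH: b'share data'})
f = opener.call_open(SHAREPATH, 'rb')
```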
6721hunk ./src/allmydata/test/test_backends.py 107
6722             # current implementation of DAS backend, and we might want to
6723             # use this information in this test in the future...
6724             return StringIO()
6725+        elif fnamefp == self.shareincomingname:
6726+            print "repr(fnamefp): ", repr(fnamefp)
6727         else:
6728             # Anything else you open inside your subtree appears to be an
6729             # empty file.
6730hunk ./src/allmydata/test/test_backends.py 168
6731         # XXX Good enough for expirer, not sure about elsewhere...
6732         return True
6733 
6734-    def setUp(self):
6735-        msg( "%s.setUp()" % (self,))
6736-        self.storedir = FilePath('teststoredir')
6737-        self.basedir = self.storedir.child('shares')
6738-        self.baseincdir = self.basedir.child('incoming')
6739-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6740-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6741-        self.shareincomingname = self.sharedirincomingname.child('0')
6742-        self.sharefname = self.sharedirfinalname.child('0')
6743-
6744-        self.mocklistdirp = mock.patch('os.listdir')
6745-        mocklistdir = self.mocklistdirp.__enter__()
6746-        mocklistdir.side_effect = self.call_listdir
6747-
6748-        self.mockmkdirp = mock.patch('os.mkdir')
6749-        mockmkdir = self.mockmkdirp.__enter__()
6750-        mockmkdir.side_effect = self.call_mkdir
6751-
6752-        self.mockisdirp = mock.patch('os.path.isdir')
6753-        mockisdir = self.mockisdirp.__enter__()
6754-        mockisdir.side_effect = self.call_isdir
6755-
6756-        self.mockopenp = mock.patch('__builtin__.open')
6757-        mockopen = self.mockopenp.__enter__()
6758-        mockopen.side_effect = self.call_open
6759-
6760-        self.mockstatp = mock.patch('os.stat')
6761-        mockstat = self.mockstatp.__enter__()
6762-        mockstat.side_effect = self.call_stat
6763-
6764-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6765-        mockfpstat = self.mockfpstatp.__enter__()
6766-        mockfpstat.side_effect = self.call_stat
6767-
6768-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6769-        mockget_available_space = self.mockget_available_space.__enter__()
6770-        mockget_available_space.side_effect = self.call_get_available_space
6771-
6772-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6773-        mockfpexists = self.mockfpexists.__enter__()
6774-        mockfpexists.side_effect = self.call_exists
6775-
6776-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6777-        mocksetContent = self.mocksetContent.__enter__()
6778-        mocksetContent.side_effect = self.call_setContent
6779 
6780     def tearDown(self):
6781         msg( "%s.tearDown()" % (self,))
6782hunk ./src/allmydata/test/test_backends.py 239
6783         handling of simultaneous and successive attempts to write the same
6784         share.
6785         """
6786-
6787         mocktime.return_value = 0
6788         # Inspect incoming and fail unless it's empty.
6789         incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
6790}
6791
6792Context:
6793
6794[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
6795david-sarah@jacaranda.org**20110721234941
6796 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
6797]
6798[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
6799david-sarah@jacaranda.org**20110722000320
6800 Ignore-this: 55cd558b791526113db3f83c00ec328a
6801]
6802[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
6803david-sarah@jacaranda.org**20110721233658
6804 Ignore-this: 81b41745477163c9b39c0b59db91cc62
6805]
6806[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
6807david-sarah@jacaranda.org**20110722035402
6808 Ignore-this: 5d03f544c4154f088e26c7107494bf39
6809]
6810[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
6811david-sarah@jacaranda.org**20110722024907
6812 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
6813]
6814[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
6815david-sarah@jacaranda.org**20110718005949
6816 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
6817]
6818[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
6819david-sarah@jacaranda.org**20110717194315
6820 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
6821]
6822[README.txt: say that quickstart.rst is in the docs directory.
6823david-sarah@jacaranda.org**20110717192400
6824 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
6825]
6826[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
6827zooko@zooko.com**20110717114226
6828 Ignore-this: df222120d41447ce4102616921626c82
6829 fixes #1383
6830]
6831[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
6832david-sarah@jacaranda.org**20110716181813
6833 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
6834]
6835[docs: add missing link in NEWS.rst
6836zooko@zooko.com**20110712153307
6837 Ignore-this: be7b7eb81c03700b739daa1027d72b35
6838]
6839[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
6840zooko@zooko.com**20110712153229
6841 Ignore-this: 723c4f9e2211027c79d711715d972c5
6842 Also remove a couple of vestigial references to figleaf, which is long gone.
6843 fixes #1409 (remove contrib/fuse)
6844]
6845[add Protovis.js-based download-status timeline visualization
6846Brian Warner <warner@lothar.com>**20110629222606
6847 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
6848 
6849 provide status overlap info on the webapi t=json output, add decode/decrypt
6850 rate tooltips, add zoomin/zoomout buttons
6851]
6852[add more download-status data, fix tests
6853Brian Warner <warner@lothar.com>**20110629222555
6854 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
6855]
6856[prepare for viz: improve DownloadStatus events
6857Brian Warner <warner@lothar.com>**20110629222542
6858 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
6859 
6860 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
6861]
6862[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
6863zooko@zooko.com**20110629185711
6864 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
6865]
6866[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
6867david-sarah@jacaranda.org**20110130235809
6868 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
6869]
6870[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
6871david-sarah@jacaranda.org**20110626054124
6872 Ignore-this: abb864427a1b91bd10d5132b4589fd90
6873]
6874[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
6875david-sarah@jacaranda.org**20110623205528
6876 Ignore-this: c63e23146c39195de52fb17c7c49b2da
6877]
6878[Rename test_package_initialization.py to (much shorter) test_import.py .
6879Brian Warner <warner@lothar.com>**20110611190234
6880 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
6881 
6882 The former name was making my 'ls' listings hard to read, by forcing them
6883 down to just two columns.
6884]
6885[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
6886zooko@zooko.com**20110611163741
6887 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
6888 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
6889 fixes #1412
6890]
6891[wui: right-align the size column in the WUI
6892zooko@zooko.com**20110611153758
6893 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
6894 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
6895 fixes #1412
6896]
6897[docs: three minor fixes
6898zooko@zooko.com**20110610121656
6899 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
6900 CREDITS for arc for stats tweak
6901 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
6902 English usage tweak
6903]
6904[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
6905david-sarah@jacaranda.org**20110609223719
6906 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
6907]
6908[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
6909wilcoxjg@gmail.com**20110527120135
6910 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
6911 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
6912 NEWS.rst, stats.py: documentation of change to get_latencies
6913 stats.rst: now documents percentile modification in get_latencies
6914 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
6915 fixes #1392
6916]
6917[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
6918david-sarah@jacaranda.org**20110517011214
6919 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
6920]
6921[docs: convert NEWS to NEWS.rst and change all references to it.
6922david-sarah@jacaranda.org**20110517010255
6923 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
6924]
6925[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
6926david-sarah@jacaranda.org**20110512140559
6927 Ignore-this: 784548fc5367fac5450df1c46890876d
6928]
6929[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
6930david-sarah@jacaranda.org**20110130164923
6931 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
6932]
6933[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
6934zooko@zooko.com**20110128142006
6935 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
6936 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
6937]
6938[M-x whitespace-cleanup
6939zooko@zooko.com**20110510193653
6940 Ignore-this: dea02f831298c0f65ad096960e7df5c7
6941]
6942[docs: fix typo in running.rst, thanks to arch_o_median
6943zooko@zooko.com**20110510193633
6944 Ignore-this: ca06de166a46abbc61140513918e79e8
6945]
6946[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
6947david-sarah@jacaranda.org**20110204204902
6948 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
6949]
6950[relnotes.txt: forseeable -> foreseeable. refs #1342
6951david-sarah@jacaranda.org**20110204204116
6952 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
6953]
6954[replace remaining .html docs with .rst docs
6955zooko@zooko.com**20110510191650
6956 Ignore-this: d557d960a986d4ac8216d1677d236399
6957 Remove install.html (long since deprecated).
6958 Also replace some obsolete references to install.html with references to quickstart.rst.
6959 Fix some broken internal references within docs/historical/historical_known_issues.txt.
6960 Thanks to Ravi Pinjala and Patrick McDonald.
6961 refs #1227
6962]
6963[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
6964zooko@zooko.com**20110428055232
6965 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
6966]
6967[munin tahoe_files plugin: fix incorrect file count
6968francois@ctrlaltdel.ch**20110428055312
6969 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
6970 fixes #1391
6971]
6972[corrected "k must never be smaller than N" to "k must never be greater than N"
6973secorp@allmydata.org**20110425010308
6974 Ignore-this: 233129505d6c70860087f22541805eac
6975]
6976[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
6977david-sarah@jacaranda.org**20110411190738
6978 Ignore-this: 7847d26bc117c328c679f08a7baee519
6979]
6980[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
6981david-sarah@jacaranda.org**20110410155844
6982 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
6983]
6984[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
6985david-sarah@jacaranda.org**20110410155705
6986 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
6987]
6988[remove unused variable detected by pyflakes
6989zooko@zooko.com**20110407172231
6990 Ignore-this: 7344652d5e0720af822070d91f03daf9
6991]
6992[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
6993david-sarah@jacaranda.org**20110401202750
6994 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
6995]
6996[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
6997Brian Warner <warner@lothar.com>**20110325232511
6998 Ignore-this: d5307faa6900f143193bfbe14e0f01a
6999]
7000[control.py: remove all uses of s.get_serverid()
7001warner@lothar.com**20110227011203
7002 Ignore-this: f80a787953bd7fa3d40e828bde00e855
7003]
7004[web: remove some uses of s.get_serverid(), not all
7005warner@lothar.com**20110227011159
7006 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
7007]
7008[immutable/downloader/fetcher.py: remove all get_serverid() calls
7009warner@lothar.com**20110227011156
7010 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
7011]
7012[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
7013warner@lothar.com**20110227011153
7014 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
7015 
7016 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
7017 _shares_from_server dict was being popped incorrectly (using shnum as the
7018 index instead of serverid). I'm still thinking through the consequences of
7019 this bug. It was probably benign and really hard to detect. I think it would
7020 cause us to incorrectly believe that we're pulling too many shares from a
7021 server, and thus prefer a different server rather than asking for a second
7022 share from the first server. The diversity code is intended to spread out the
7023 number of shares simultaneously being requested from each server, but with
7024 this bug, it might be spreading out the total number of shares requested at
7025 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
7026 segment, so the effect doesn't last very long).
7027]
7028[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
7029warner@lothar.com**20110227011150
7030 Ignore-this: d8d56dd8e7b280792b40105e13664554
7031 
7032 test_download.py: create+check MyShare instances better, make sure they share
7033 Server objects, now that finder.py cares
7034]
7035[immutable/downloader/finder.py: reduce use of get_serverid(), one left
7036warner@lothar.com**20110227011146
7037 Ignore-this: 5785be173b491ae8a78faf5142892020
7038]
7039[immutable/offloaded.py: reduce use of get_serverid() a bit more
7040warner@lothar.com**20110227011142
7041 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
7042]
7043[immutable/upload.py: reduce use of get_serverid()
7044warner@lothar.com**20110227011138
7045 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
7046]
7047[immutable/checker.py: remove some uses of s.get_serverid(), not all
7048warner@lothar.com**20110227011134
7049 Ignore-this: e480a37efa9e94e8016d826c492f626e
7050]
7051[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
7052warner@lothar.com**20110227011132
7053 Ignore-this: 6078279ddf42b179996a4b53bee8c421
7054 MockIServer stubs
7055]
7056[upload.py: rearrange _make_trackers a bit, no behavior changes
7057warner@lothar.com**20110227011128
7058 Ignore-this: 296d4819e2af452b107177aef6ebb40f
7059]
7060[happinessutil.py: finally rename merge_peers to merge_servers
7061warner@lothar.com**20110227011124
7062 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
7063]
7064[test_upload.py: factor out FakeServerTracker
7065warner@lothar.com**20110227011120
7066 Ignore-this: 6c182cba90e908221099472cc159325b
7067]
7068[test_upload.py: server-vs-tracker cleanup
7069warner@lothar.com**20110227011115
7070 Ignore-this: 2915133be1a3ba456e8603885437e03
7071]
7072[happinessutil.py: server-vs-tracker cleanup
7073warner@lothar.com**20110227011111
7074 Ignore-this: b856c84033562d7d718cae7cb01085a9
7075]
7076[upload.py: more tracker-vs-server cleanup
7077warner@lothar.com**20110227011107
7078 Ignore-this: bb75ed2afef55e47c085b35def2de315
7079]
7080[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
7081warner@lothar.com**20110227011103
7082 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
7083]
7084[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
7085warner@lothar.com**20110227011100
7086 Ignore-this: 7ea858755cbe5896ac212a925840fe68
7087 
7088 No behavioral changes, just updating variable/method names and log messages.
7089 The effects outside these three files should be minimal: some exception
7090 messages changed (to say "server" instead of "peer"), and some internal class
7091 names were changed. A few things still use "peer" to minimize external
7092 changes, like UploadResults.timings["peer_selection"] and
7093 happinessutil.merge_peers, which can be changed later.
7094]
7095[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
7096warner@lothar.com**20110227011056
7097 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
7098]
7099[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
7100warner@lothar.com**20110227011051
7101 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
7102]
7103[test: increase timeout on a network test because Francois's ARM machine hit that timeout
7104zooko@zooko.com**20110317165909
7105 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
7106 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
7107]
7108[docs/configuration.rst: add a "Frontend Configuration" section
7109Brian Warner <warner@lothar.com>**20110222014323
7110 Ignore-this: 657018aa501fe4f0efef9851628444ca
7111 
7112 this points to docs/frontends/*.rst, which were previously underlinked
7113]
7114[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
7115"Brian Warner <warner@lothar.com>"**20110221061544
7116 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
7117]
7118[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
7119david-sarah@jacaranda.org**20110221015817
7120 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
7121]
7122[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
7123david-sarah@jacaranda.org**20110221020125
7124 Ignore-this: b0744ed58f161bf188e037bad077fc48
7125]
7126[Refactor StorageFarmBroker handling of servers
7127Brian Warner <warner@lothar.com>**20110221015804
7128 Ignore-this: 842144ed92f5717699b8f580eab32a51
7129 
7130 Pass around IServer instance instead of (peerid, rref) tuple. Replace
7131 "descriptor" with "server". Other replacements:
7132 
7133  get_all_servers -> get_connected_servers/get_known_servers
7134  get_servers_for_index -> get_servers_for_psi (now returns IServers)
7135 
7136 This change still needs to be pushed further down: lots of code is now
7137 getting the IServer and then distributing (peerid, rref) internally.
7138 Instead, it ought to distribute the IServer internally and delay
7139 extracting a serverid or rref until the last moment.
7140 
7141 no_network.py was updated to retain parallelism.
7142]
7143[TAG allmydata-tahoe-1.8.2
7144warner@lothar.com**20110131020101]
7145Patch bundle hash:
7146b252cbe0d88fe766eb11b708df50638546605a38