Ticket #999: jacp16Zancas20110722.darcs.patch

File jacp16Zancas20110722.darcs.patch, 294.8 KB (added by arch_o_median, at 2011-07-22T07:03:25Z)
Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
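The null backend idea can be sketched as a null-object pattern (the names below are illustrative stand-ins, not the patch's exact classes): every write is accepted and discarded, and the reported available space is unbounded, so server allocation logic can be exercised without touching a disk.

```python
class NullBucketWriter:
    """Accepts writes and throws the data away."""
    def remote_write(self, offset, data):
        return  # discard: no filesystem is involved

class NullBackend:
    """Mock-like backend that simulates unlimited space and no stored shares."""
    def get_available_space(self):
        return None  # None is treated as "no known limit"
    def get_bucket_shares(self, storage_index):
        return set()  # never has any existing shares
    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket):
        return NullBucketWriter()

backend = NullBackend()
writer = backend.make_bucket_writer(b"si", 0, 2**40)
writer.remote_write(0, b"x" * 1024)  # succeeds regardless of requested size
```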

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests, work done in pair-programming with Zancas

Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
  * work in progress intended to be unrecorded and never committed to trunk
  switch from os.path.join to filepath
  incomplete refactoring of common "stay in your subtree" tester code into a superclass
 

Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
  In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
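For readers unfamiliar with the second change: twisted.python.filepath.FilePath replaces string splicing with path objects that can answer questions like "is this inside that subtree?". Since Twisted may not be at hand, this sketch uses the stdlib's pathlib, which has the same shape (FilePath's .child() corresponds to the / operator here); the paths are made up.

```python
import posixpath
from pathlib import PurePosixPath

# The os.path.join style being converted away from:
old_style = posixpath.join("storage", "shares", "incoming")

# Path-object style, analogous to FilePath("storage").child("shares").child("incoming"):
storedir = PurePosixPath("storage")
incoming = storedir / "shares" / "incoming"

assert str(incoming) == old_style
# The kind of "stay in your subtree" check the tester superclass would centralize:
assert storedir in incoming.parents
```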

Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
  * another temporary patch for sharing work-in-progress
  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
 
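To illustrate why catching OSError there was a misfeature (a sketch with simplified logic, not Tahoe's actual fileutil code): swallowing the error makes a missing or broken store directory indistinguishable from a full disk.

```python
import os

def get_available_space_old(storedir, reserved_space):
    # Old behavior: any OSError silently becomes "0 bytes free".
    try:
        s = os.statvfs(storedir)
        return s.f_frsize * s.f_bavail - reserved_space
    except OSError:
        return 0

def get_available_space_new(storedir, reserved_space):
    # New behavior: a broken storedir raises instead of masquerading as full.
    s = os.statvfs(storedir)
    return s.f_frsize * s.f_bavail - reserved_space

# A missing directory looks like a full disk under the old scheme:
assert get_available_space_old("/no-such-storedir", 0) == 0
# ...but surfaces the real problem under the new one:
try:
    get_available_space_new("/no-such-storedir", 0)
except OSError:
    pass  # the caller now sees the real error
```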

Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
  * jacp16 or so

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """Write a new share to a mocked filesystem."""
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
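The core trick in the tests above is patching the builtin open with a mock whose side_effect function routes each expected filename to a canned response. A minimal Python 3 rendition of the same technique (the Python 2 original patches '__builtin__.open'; the filenames and code under test here are illustrative):

```python
from io import StringIO
from unittest import mock

def call_open(fname, mode="r"):
    # Route each expected filename to a canned result, failing on surprises.
    if fname == "testdir/bucket_counter.state":
        raise IOError(2, "No such file or directory: %r" % fname)
    elif fname == "testdir/lease_checker.history":
        return StringIO("history contents")
    raise AssertionError("unexpected open(%r)" % fname)

def code_under_test():
    # Stand-in for server code that reads its state files on startup.
    with open("testdir/lease_checker.history") as f:
        return f.read()

with mock.patch("builtins.open", side_effect=call_open):
    assert code_under_test() == "history contents"
```

Because the mock intercepts every open() in scope, the test never touches the real filesystem, which is exactly what distinguishes these tests from the pre-existing ones in test_storage.py.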
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
376 from twisted.application import service
377 
378 from zope.interface import implements
379-from allmydata.interfaces import RIStorageServer, IStatsProducer
380+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
381 from allmydata.util import fileutil, idlib, log, time_format
382 import allmydata # for __full_version__
383 
384hunk ./src/allmydata/storage/server.py 16
385 from allmydata.storage.lease import LeaseInfo
386 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
387      create_mutable_sharefile
388-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
389+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
390 from allmydata.storage.crawler import BucketCountingCrawler
391 from allmydata.storage.expirer import LeaseCheckingCrawler
392 
393hunk ./src/allmydata/storage/server.py 20
394+from zope.interface import implements
395+
396+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
397+# be started and stopped.
398+class Backend(service.MultiService):
399+    implements(IStatsProducer)
400+    def __init__(self):
401+        service.MultiService.__init__(self)
402+
403+    def get_bucket_shares(self):
404+        """XXX"""
405+        raise NotImplementedError
406+
407+    def get_share(self):
408+        """XXX"""
409+        raise NotImplementedError
410+
411+    def make_bucket_writer(self):
412+        """XXX"""
413+        raise NotImplementedError
414+
415+class NullBackend(Backend):
416+    def __init__(self):
417+        Backend.__init__(self)
418+
419+    def get_available_space(self):
420+        return None
421+
422+    def get_bucket_shares(self, storage_index):
423+        return set()
424+
425+    def get_share(self, storage_index, sharenum):
426+        return None
427+
428+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
429+        return NullBucketWriter()
430+
431+class FSBackend(Backend):
432+    def __init__(self, storedir, readonly=False, reserved_space=0):
433+        Backend.__init__(self)
434+
435+        self._setup_storage(storedir, readonly, reserved_space)
436+        self._setup_corruption_advisory()
437+        self._setup_bucket_counter()
438+        self._setup_lease_checkerf()
439+
440+    def _setup_storage(self, storedir, readonly, reserved_space):
441+        self.storedir = storedir
442+        self.readonly = readonly
443+        self.reserved_space = int(reserved_space)
444+        if self.reserved_space:
445+            if self.get_available_space() is None:
446+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
447+                        umid="0wZ27w", level=log.UNUSUAL)
448+
449+        self.sharedir = os.path.join(self.storedir, "shares")
450+        fileutil.make_dirs(self.sharedir)
451+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
452+        self._clean_incomplete()
453+
454+    def _clean_incomplete(self):
455+        fileutil.rm_dir(self.incomingdir)
456+        fileutil.make_dirs(self.incomingdir)
457+
458+    def _setup_corruption_advisory(self):
459+        # we don't actually create the corruption-advisory dir until necessary
460+        self.corruption_advisory_dir = os.path.join(self.storedir,
461+                                                    "corruption-advisories")
462+
463+    def _setup_bucket_counter(self):
464+        statefile = os.path.join(self.storedir, "bucket_counter.state")
465+        self.bucket_counter = BucketCountingCrawler(statefile)
466+        self.bucket_counter.setServiceParent(self)
467+
468+    def _setup_lease_checkerf(self):
469+        statefile = os.path.join(self.storedir, "lease_checker.state")
470+        historyfile = os.path.join(self.storedir, "lease_checker.history")
471+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
472+                                   expiration_enabled, expiration_mode,
473+                                   expiration_override_lease_duration,
474+                                   expiration_cutoff_date,
475+                                   expiration_sharetypes)
476+        self.lease_checker.setServiceParent(self)
477+
478+    def get_available_space(self):
479+        if self.readonly:
480+            return 0
481+        return fileutil.get_available_space(self.storedir, self.reserved_space)
482+
483+    def get_bucket_shares(self, storage_index):
484+        """Return a list of (shnum, pathname) tuples for files that hold
485+        shares for this storage_index. In each tuple, 'shnum' will always be
486+        the integer form of the last component of 'pathname'."""
487+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
488+        try:
489+            for f in os.listdir(storagedir):
490+                if NUM_RE.match(f):
491+                    filename = os.path.join(storagedir, f)
492+                    yield (int(f), filename)
493+        except OSError:
494+            # Commonly caused by there being no buckets at all.
495+            pass
496+
497 # storage/
498 # storage/shares/incoming
499 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
500hunk ./src/allmydata/storage/server.py 143
501     name = 'storage'
502     LeaseCheckerClass = LeaseCheckingCrawler
503 
504-    def __init__(self, storedir, nodeid, reserved_space=0,
505-                 discard_storage=False, readonly_storage=False,
506+    def __init__(self, nodeid, backend, reserved_space=0,
507+                 readonly_storage=False,
508                  stats_provider=None,
509                  expiration_enabled=False,
510                  expiration_mode="age",
511hunk ./src/allmydata/storage/server.py 155
512         assert isinstance(nodeid, str)
513         assert len(nodeid) == 20
514         self.my_nodeid = nodeid
515-        self.storedir = storedir
516-        sharedir = os.path.join(storedir, "shares")
517-        fileutil.make_dirs(sharedir)
518-        self.sharedir = sharedir
519-        # we don't actually create the corruption-advisory dir until necessary
520-        self.corruption_advisory_dir = os.path.join(storedir,
521-                                                    "corruption-advisories")
522-        self.reserved_space = int(reserved_space)
523-        self.no_storage = discard_storage
524-        self.readonly_storage = readonly_storage
525         self.stats_provider = stats_provider
526         if self.stats_provider:
527             self.stats_provider.register_producer(self)
528hunk ./src/allmydata/storage/server.py 158
529-        self.incomingdir = os.path.join(sharedir, 'incoming')
530-        self._clean_incomplete()
531-        fileutil.make_dirs(self.incomingdir)
532         self._active_writers = weakref.WeakKeyDictionary()
533hunk ./src/allmydata/storage/server.py 159
534+        self.backend = backend
535+        self.backend.setServiceParent(self)
536         log.msg("StorageServer created", facility="tahoe.storage")
537 
538hunk ./src/allmydata/storage/server.py 163
539-        if reserved_space:
540-            if self.get_available_space() is None:
541-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
542-                        umin="0wZ27w", level=log.UNUSUAL)
543-
544         self.latencies = {"allocate": [], # immutable
545                           "write": [],
546                           "close": [],
547hunk ./src/allmydata/storage/server.py 174
548                           "renew": [],
549                           "cancel": [],
550                           }
551-        self.add_bucket_counter()
552-
553-        statefile = os.path.join(self.storedir, "lease_checker.state")
554-        historyfile = os.path.join(self.storedir, "lease_checker.history")
555-        klass = self.LeaseCheckerClass
556-        self.lease_checker = klass(self, statefile, historyfile,
557-                                   expiration_enabled, expiration_mode,
558-                                   expiration_override_lease_duration,
559-                                   expiration_cutoff_date,
560-                                   expiration_sharetypes)
561-        self.lease_checker.setServiceParent(self)
562 
563     def __repr__(self):
564         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
565hunk ./src/allmydata/storage/server.py 178
566 
567-    def add_bucket_counter(self):
568-        statefile = os.path.join(self.storedir, "bucket_counter.state")
569-        self.bucket_counter = BucketCountingCrawler(self, statefile)
570-        self.bucket_counter.setServiceParent(self)
571-
572     def count(self, name, delta=1):
573         if self.stats_provider:
574             self.stats_provider.count("storage_server." + name, delta)
575hunk ./src/allmydata/storage/server.py 233
576             kwargs["facility"] = "tahoe.storage"
577         return log.msg(*args, **kwargs)
578 
579-    def _clean_incomplete(self):
580-        fileutil.rm_dir(self.incomingdir)
581-
582     def get_stats(self):
583         # remember: RIStatsProvider requires that our return dict
584         # contains numeric values.
585hunk ./src/allmydata/storage/server.py 269
586             stats['storage_server.total_bucket_count'] = bucket_count
587         return stats
588 
589-    def get_available_space(self):
590-        """Returns available space for share storage in bytes, or None if no
591-        API to get this information is available."""
592-
593-        if self.readonly_storage:
594-            return 0
595-        return fileutil.get_available_space(self.storedir, self.reserved_space)
596-
597     def allocated_size(self):
598         space = 0
599         for bw in self._active_writers:
600hunk ./src/allmydata/storage/server.py 276
601         return space
602 
603     def remote_get_version(self):
604-        remaining_space = self.get_available_space()
605+        remaining_space = self.backend.get_available_space()
606         if remaining_space is None:
607             # We're on a platform that has no API to get disk stats.
608             remaining_space = 2**64
609hunk ./src/allmydata/storage/server.py 301
610         self.count("allocate")
611         alreadygot = set()
612         bucketwriters = {} # k: shnum, v: BucketWriter
613-        si_dir = storage_index_to_dir(storage_index)
614-        si_s = si_b2a(storage_index)
615 
616hunk ./src/allmydata/storage/server.py 302
617+        si_s = si_b2a(storage_index)
618         log.msg("storage: allocate_buckets %s" % si_s)
619 
620         # in this implementation, the lease information (including secrets)
621hunk ./src/allmydata/storage/server.py 316
622 
623         max_space_per_bucket = allocated_size
624 
625-        remaining_space = self.get_available_space()
626+        remaining_space = self.backend.get_available_space()
627         limited = remaining_space is not None
628         if limited:
629             # this is a bit conservative, since some of this allocated_size()
630hunk ./src/allmydata/storage/server.py 329
631         # they asked about: this will save them a lot of work. Add or update
632         # leases for all of them: if they want us to hold shares for this
633         # file, they'll want us to hold leases for this file.
634-        for (shnum, fn) in self._get_bucket_shares(storage_index):
635+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
636             alreadygot.add(shnum)
637             sf = ShareFile(fn)
638             sf.add_or_renew_lease(lease_info)
639hunk ./src/allmydata/storage/server.py 335
640 
641         for shnum in sharenums:
642-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
643-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
644-            if os.path.exists(finalhome):
645+            share = self.backend.get_share(storage_index, shnum)
646+
647+            if not share:
648+                if (not limited) or (remaining_space >= max_space_per_bucket):
649+                    # ok! we need to create the new share file.
650+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
651+                                      max_space_per_bucket, lease_info, canary)
652+                    bucketwriters[shnum] = bw
653+                    self._active_writers[bw] = 1
654+                    if limited:
655+                        remaining_space -= max_space_per_bucket
656+                else:
657+                    # bummer! not enough space to accept this bucket
658+                    pass
659+
660+            elif share.is_complete():
661                 # great! we already have it. easy.
662                 pass
663hunk ./src/allmydata/storage/server.py 353
664-            elif os.path.exists(incominghome):
665+            elif not share.is_complete():
666                 # Note that we don't create BucketWriters for shnums that
667                 # have a partial share (in incoming/), so if a second upload
668                 # occurs while the first is still in progress, the second
669hunk ./src/allmydata/storage/server.py 359
670                 # uploader will use different storage servers.
671                 pass
672-            elif (not limited) or (remaining_space >= max_space_per_bucket):
673-                # ok! we need to create the new share file.
674-                bw = BucketWriter(self, incominghome, finalhome,
675-                                  max_space_per_bucket, lease_info, canary)
676-                if self.no_storage:
677-                    bw.throw_out_all_data = True
678-                bucketwriters[shnum] = bw
679-                self._active_writers[bw] = 1
680-                if limited:
681-                    remaining_space -= max_space_per_bucket
682-            else:
683-                # bummer! not enough space to accept this bucket
684-                pass
685-
686-        if bucketwriters:
687-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
688 
689         self.add_latency("allocate", time.time() - start)
690         return alreadygot, bucketwriters
691hunk ./src/allmydata/storage/server.py 437
692             self.stats_provider.count('storage_server.bytes_added', consumed_size)
693         del self._active_writers[bw]
694 
695-    def _get_bucket_shares(self, storage_index):
696-        """Return a list of (shnum, pathname) tuples for files that hold
697-        shares for this storage_index. In each tuple, 'shnum' will always be
698-        the integer form of the last component of 'pathname'."""
699-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
700-        try:
701-            for f in os.listdir(storagedir):
702-                if NUM_RE.match(f):
703-                    filename = os.path.join(storagedir, f)
704-                    yield (int(f), filename)
705-        except OSError:
706-            # Commonly caused by there being no buckets at all.
707-            pass
708 
709     def remote_get_buckets(self, storage_index):
710         start = time.time()
711hunk ./src/allmydata/storage/server.py 444
712         si_s = si_b2a(storage_index)
713         log.msg("storage: get_buckets %s" % si_s)
714         bucketreaders = {} # k: sharenum, v: BucketReader
715-        for shnum, filename in self._get_bucket_shares(storage_index):
716+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
717             bucketreaders[shnum] = BucketReader(self, filename,
718                                                 storage_index, shnum)
719         self.add_latency("get", time.time() - start)
720hunk ./src/allmydata/test/test_backends.py 10
721 import mock
722 
723 # This is the code that we're going to be testing.
724-from allmydata.storage.server import StorageServer
725+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
726 
727 # The following share file contents was generated with
728 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
729hunk ./src/allmydata/test/test_backends.py 21
730 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
731 
732 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
733+    @mock.patch('time.time')
734+    @mock.patch('os.mkdir')
735+    @mock.patch('__builtin__.open')
736+    @mock.patch('os.listdir')
737+    @mock.patch('os.path.isdir')
738+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
739+        """ This tests whether a server instance can be constructed
740+        with a null backend. The server instance fails the test if it
741+        tries to read or write to the file system. """
742+
743+        # Now begin the test.
744+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
745+
746+        self.failIf(mockisdir.called)
747+        self.failIf(mocklistdir.called)
748+        self.failIf(mockopen.called)
749+        self.failIf(mockmkdir.called)
750+
751+        # You passed!
752+
753+    @mock.patch('time.time')
754+    @mock.patch('os.mkdir')
755     @mock.patch('__builtin__.open')
756hunk ./src/allmydata/test/test_backends.py 44
757-    def test_create_server(self, mockopen):
758-        """ This tests whether a server instance can be constructed. """
759+    @mock.patch('os.listdir')
760+    @mock.patch('os.path.isdir')
761+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
762+        """ This tests whether a server instance can be constructed
763+        with a filesystem backend. To pass the test, it has to use the
764+        filesystem in only the prescribed ways. """
765 
766         def call_open(fname, mode):
767             if fname == 'testdir/bucket_counter.state':
768hunk ./src/allmydata/test/test_backends.py 58
769                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
770             elif fname == 'testdir/lease_checker.history':
771                 return StringIO()
772+            else:
773+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
774         mockopen.side_effect = call_open
775 
776         # Now begin the test.
777hunk ./src/allmydata/test/test_backends.py 63
778-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
779+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
780+
781+        self.failIf(mockisdir.called)
782+        self.failIf(mocklistdir.called)
783+        self.failIf(mockopen.called)
784+        self.failIf(mockmkdir.called)
785+        self.failIf(mocktime.called)
786 
787         # You passed!
788 
789hunk ./src/allmydata/test/test_backends.py 73
790-class TestServer(unittest.TestCase, ReallyEqualMixin):
791+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
792+    def setUp(self):
793+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
794+
795+    @mock.patch('os.mkdir')
796+    @mock.patch('__builtin__.open')
797+    @mock.patch('os.listdir')
798+    @mock.patch('os.path.isdir')
799+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
800+        """ Write a new share. """
801+
802+        # Now begin the test.
803+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
804+        bs[0].remote_write(0, 'a')
805+        self.failIf(mockisdir.called)
806+        self.failIf(mocklistdir.called)
807+        self.failIf(mockopen.called)
808+        self.failIf(mockmkdir.called)
809+
810+    @mock.patch('os.path.exists')
811+    @mock.patch('os.path.getsize')
812+    @mock.patch('__builtin__.open')
813+    @mock.patch('os.listdir')
814+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
815+        """ With a null backend, remote_get_buckets should find no
816+        shares, and the server must not read from or write to the
817+        filesystem while looking. """
821+
822+        # Now begin the test.
823+        bs = self.s.remote_get_buckets('teststorage_index')
824+
825+        self.failUnlessEqual(len(bs), 0)
826+        self.failIf(mocklistdir.called)
827+        self.failIf(mockopen.called)
828+        self.failIf(mockgetsize.called)
829+        self.failIf(mockexists.called)
830+
831+
832+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
833     @mock.patch('__builtin__.open')
834     def setUp(self, mockopen):
835         def call_open(fname, mode):
836hunk ./src/allmydata/test/test_backends.py 126
837                 return StringIO()
838         mockopen.side_effect = call_open
839 
840-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
841-
842+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
843 
844     @mock.patch('time.time')
845     @mock.patch('os.mkdir')
846hunk ./src/allmydata/test/test_backends.py 134
847     @mock.patch('os.listdir')
848     @mock.patch('os.path.isdir')
849     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
850-        """Handle a report of corruption."""
851+        """ Write a new share. """
852 
853         def call_listdir(dirname):
854             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
855hunk ./src/allmydata/test/test_backends.py 173
856         mockopen.side_effect = call_open
857         # Now begin the test.
858         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
859-        print bs
860         bs[0].remote_write(0, 'a')
861         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
862 
863hunk ./src/allmydata/test/test_backends.py 176
864-
865     @mock.patch('os.path.exists')
866     @mock.patch('os.path.getsize')
867     @mock.patch('__builtin__.open')
868hunk ./src/allmydata/test/test_backends.py 218
869 
870         self.failUnlessEqual(len(bs), 1)
871         b = bs[0]
872+        # These should match by definition; the next two cover edge cases whose behavior is less obvious.
873         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
874         # If you try to read past the end you get as much data as is there.
875         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
876hunk ./src/allmydata/test/test_backends.py 224
877         # If you start reading past the end of the file you get the empty string.
878         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
879+
880+
881}
882[a temp patch used as a snapshot
883wilcoxjg@gmail.com**20110626052732
884 Ignore-this: 95f05e314eaec870afa04c76d979aa44
885] {
886hunk ./docs/configuration.rst 637
887   [storage]
888   enabled = True
889   readonly = True
890-  sizelimit = 10000000000
891 
892 
893   [helper]
894hunk ./docs/garbage-collection.rst 16
895 
896 When a file or directory in the virtual filesystem is no longer referenced,
897 the space that its shares occupied on each storage server can be freed,
898-making room for other shares. Tahoe currently uses a garbage collection
899+making room for other shares. Tahoe uses a garbage collection
900 ("GC") mechanism to implement this space-reclamation process. Each share has
901 one or more "leases", which are managed by clients who want the
902 file/directory to be retained. The storage server accepts each share for a
903hunk ./docs/garbage-collection.rst 34
904 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
905 If lease renewal occurs quickly and with 100% reliability, then any renewal
906 time that is shorter than the lease duration will suffice, but a larger ratio
907-of duration-over-renewal-time will be more robust in the face of occasional
908+of lease duration to renewal time will be more robust in the face of occasional
909 delays or failures.
910 
911 The current recommended values for a small Tahoe grid are to renew the leases
912replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
913hunk ./src/allmydata/client.py 260
914             sharetypes.append("mutable")
915         expiration_sharetypes = tuple(sharetypes)
916 
917+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
918+            xyz
919+        xyz
920         ss = StorageServer(storedir, self.nodeid,
921                            reserved_space=reserved,
922                            discard_storage=discard,
923hunk ./src/allmydata/storage/crawler.py 234
924         f = open(tmpfile, "wb")
925         pickle.dump(self.state, f)
926         f.close()
927-        fileutil.move_into_place(tmpfile, self.statefile)
928+        fileutil.move_into_place(tmpfile, self.statefname)
929 
930     def startService(self):
931         # arrange things to look like we were just sleeping, so
932}
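The client.py hunk above only gestures at backend selection (the `xyz` placeholders are left unfinished, as the commit message admits). One plausible shape for what that selection might look like, driven by a `[storage]backend` config value — the backend names and the `choose_backend` helper are assumptions for illustration, not the author's code:

```python
# Hypothetical sketch: pick a backend by name from tahoe.cfg's
# [storage]backend option, defaulting to "filesystem".
def choose_backend(get_config):
    name = get_config("storage", "backend", "filesystem")
    # map config names to backend class names (names only, for the sketch)
    known = {"filesystem": "FSBackend", "null": "NullBackend"}
    if name not in known:
        raise ValueError("unknown storage backend %r" % (name,))
    return known[name]

# a get_config stub that always falls back to the default
print(choose_backend(lambda section, option, default: default))  # FSBackend
```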
933[snapshot of progress on backend implementation (not suitable for trunk)
934wilcoxjg@gmail.com**20110626053244
935 Ignore-this: 50c764af791c2b99ada8289546806a0a
936] {
937adddir ./src/allmydata/storage/backends
938adddir ./src/allmydata/storage/backends/das
939move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
940adddir ./src/allmydata/storage/backends/null
941hunk ./src/allmydata/interfaces.py 270
942         store that on disk.
943         """
944 
945+class IStorageBackend(Interface):
946+    """
947+    Objects of this kind live on the server side and are used by the
948+    storage server object.
949+    """
950+    def get_available_space(self, reserved_space):
951+        """ Returns available space for share storage in bytes, or
952+        None if this information is not available or if the available
953+        space is unlimited.
954+
955+        If the backend is configured for read-only mode then this will
956+        return 0.
957+
958+        reserved_space is how many bytes to subtract from the answer, so
959+        you can pass how many bytes you would like to leave unused on this
960+        filesystem as reserved_space. """
961+
962+    def get_bucket_shares(self):
963+        """XXX"""
964+
965+    def get_share(self):
966+        """XXX"""
967+
968+    def make_bucket_writer(self):
969+        """XXX"""
970+
971+class IStorageBackendShare(Interface):
972+    """
973+    This object contains as much as all of the share data.  It is intended
974+    for lazy evaluation such that in many use cases substantially less than
975+    all of the share data will be accessed.
976+    """
977+    def is_complete(self):
978+        """
979+        Returns the share state, or None if the share does not exist.
980+        """
981+
982 class IStorageBucketWriter(Interface):
983     """
984     Objects of this kind live on the client side.
985hunk ./src/allmydata/interfaces.py 2492
986 
987 class EmptyPathnameComponentError(Exception):
988     """The webapi disallows empty pathname components."""
989+
990+class IShareStore(Interface):
991+    pass
992+
993addfile ./src/allmydata/storage/backends/__init__.py
994addfile ./src/allmydata/storage/backends/das/__init__.py
995addfile ./src/allmydata/storage/backends/das/core.py
996hunk ./src/allmydata/storage/backends/das/core.py 1
997+from allmydata.interfaces import IStorageBackend
998+from allmydata.storage.backends.base import Backend
999+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1000+from allmydata.util.assertutil import precondition
1001+
1002+import os, re, weakref, struct, time
1003+
1004+from foolscap.api import Referenceable
1005+from twisted.application import service
1006+
1007+from zope.interface import implements
1008+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1009+from allmydata.util import fileutil, idlib, log, time_format
1010+import allmydata # for __full_version__
1011+
1012+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1013+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1014+from allmydata.storage.lease import LeaseInfo
1015+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1016+     create_mutable_sharefile
1017+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1018+from allmydata.storage.crawler import FSBucketCountingCrawler
1019+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1020+
1021+from zope.interface import implements
1022+
1023+class DASCore(Backend):
1024+    implements(IStorageBackend)
1025+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1026+        Backend.__init__(self)
1027+
1028+        self._setup_storage(storedir, readonly, reserved_space)
1029+        self._setup_corruption_advisory()
1030+        self._setup_bucket_counter()
1031+        self._setup_lease_checkerf(expiration_policy)
1032+
1033+    def _setup_storage(self, storedir, readonly, reserved_space):
1034+        self.storedir = storedir
1035+        self.readonly = readonly
1036+        self.reserved_space = int(reserved_space)
1037+        if self.reserved_space:
1038+            if self.get_available_space() is None:
1039+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1040+                        umid="0wZ27w", level=log.UNUSUAL)
1041+
1042+        self.sharedir = os.path.join(self.storedir, "shares")
1043+        fileutil.make_dirs(self.sharedir)
1044+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1045+        self._clean_incomplete()
1046+
1047+    def _clean_incomplete(self):
1048+        fileutil.rm_dir(self.incomingdir)
1049+        fileutil.make_dirs(self.incomingdir)
1050+
1051+    def _setup_corruption_advisory(self):
1052+        # we don't actually create the corruption-advisory dir until necessary
1053+        self.corruption_advisory_dir = os.path.join(self.storedir,
1054+                                                    "corruption-advisories")
1055+
1056+    def _setup_bucket_counter(self):
1057+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1058+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1059+        self.bucket_counter.setServiceParent(self)
1060+
1061+    def _setup_lease_checkerf(self, expiration_policy):
1062+        statefile = os.path.join(self.storedir, "lease_checker.state")
1063+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1064+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1065+        self.lease_checker.setServiceParent(self)
1066+
1067+    def get_available_space(self):
1068+        if self.readonly:
1069+            return 0
1070+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1071+
1072+    def get_shares(self, storage_index):
1073+        """Generate the FSBShare objects that correspond to the passed storage_index."""
1074+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1075+        try:
1076+            for f in os.listdir(finalstoragedir):
1077+                if NUM_RE.match(f):
1078+                    filename = os.path.join(finalstoragedir, f)
1079+                    yield FSBShare(filename, int(f))
1080+        except OSError:
1081+            # Commonly caused by there being no buckets at all.
1082+            pass
1083+       
1084+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1085+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1086+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1087+        return bw
1088+       
1089+
1090+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1091+# and share data. The share data is accessed by RIBucketWriter.write and
1092+# RIBucketReader.read . The lease information is not accessible through these
1093+# interfaces.
1094+
1095+# The share file has the following layout:
1096+#  0x00: share file version number, four bytes, current version is 1
1097+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1098+#  0x08: number of leases, four bytes big-endian
1099+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1100+#  A+0x0c = B: first lease. Lease format is:
1101+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1102+#   B+0x04: renew secret, 32 bytes (SHA256)
1103+#   B+0x24: cancel secret, 32 bytes (SHA256)
1104+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1105+#   B+0x48: next lease, or end of record
1106+
1107+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1108+# but it is still filled in by storage servers in case the storage server
1109+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1110+# share file is moved from one storage server to another. The value stored in
1111+# this field is truncated, so if the actual share data length is >= 2**32,
1112+# then the value stored in this field will be the actual share data length
1113+# modulo 2**32.
1114+
1115+class ImmutableShare:
1116+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1117+    sharetype = "immutable"
1118+
1119+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1120+        """ If max_size is not None then I won't allow more than
1121+        max_size to be written to me. If create=True then max_size
1122+        must not be None. """
1123+        precondition((max_size is not None) or (not create), max_size, create)
1124+        self.shnum = shnum
1125+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1126+        self._max_size = max_size
1127+        if create:
1128+            # touch the file, so later callers will see that we're working on
1129+            # it. Also construct the metadata.
1130+            assert not os.path.exists(self.fname)
1131+            fileutil.make_dirs(os.path.dirname(self.fname))
1132+            f = open(self.fname, 'wb')
1133+            # The second field -- the four-byte share data length -- is no
1134+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1135+            # there in case someone downgrades a storage server from >=
1136+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1137+            # server to another, etc. We do saturation -- a share data length
1138+            # larger than 2**32-1 (what can fit into the field) is marked as
1139+            # the largest length that can fit into the field. That way, even
1140+            # if this does happen, the old < v1.3.0 server will still allow
1141+            # clients to read the first part of the share.
1142+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1143+            f.close()
1144+            self._lease_offset = max_size + 0x0c
1145+            self._num_leases = 0
1146+        else:
1147+            f = open(self.fname, 'rb')
1148+            filesize = os.path.getsize(self.fname)
1149+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1150+            f.close()
1151+            if version != 1:
1152+                msg = "sharefile %s had version %d but we wanted 1" % \
1153+                      (self.fname, version)
1154+                raise UnknownImmutableContainerVersionError(msg)
1155+            self._num_leases = num_leases
1156+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1157+        self._data_offset = 0xc
1158+
1159+    def unlink(self):
1160+        os.unlink(self.fname)
1161+
1162+    def read_share_data(self, offset, length):
1163+        precondition(offset >= 0)
1164+        # Reads beyond the end of the data are truncated. Reads that start
1165+        # beyond the end of the data return an empty string.
1166+        seekpos = self._data_offset+offset
1167+        fsize = os.path.getsize(self.fname)
1168+        actuallength = max(0, min(length, fsize-seekpos))
1169+        if actuallength == 0:
1170+            return ""
1171+        f = open(self.fname, 'rb')
1172+        f.seek(seekpos)
1173+        return f.read(actuallength)
1174+
1175+    def write_share_data(self, offset, data):
1176+        length = len(data)
1177+        precondition(offset >= 0, offset)
1178+        if self._max_size is not None and offset+length > self._max_size:
1179+            raise DataTooLargeError(self._max_size, offset, length)
1180+        f = open(self.fname, 'rb+')
1181+        real_offset = self._data_offset+offset
1182+        f.seek(real_offset)
1183+        assert f.tell() == real_offset
1184+        f.write(data)
1185+        f.close()
1186+
1187+    def _write_lease_record(self, f, lease_number, lease_info):
1188+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1189+        f.seek(offset)
1190+        assert f.tell() == offset
1191+        f.write(lease_info.to_immutable_data())
1192+
1193+    def _read_num_leases(self, f):
1194+        f.seek(0x08)
1195+        (num_leases,) = struct.unpack(">L", f.read(4))
1196+        return num_leases
1197+
1198+    def _write_num_leases(self, f, num_leases):
1199+        f.seek(0x08)
1200+        f.write(struct.pack(">L", num_leases))
1201+
1202+    def _truncate_leases(self, f, num_leases):
1203+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1204+
1205+    def get_leases(self):
1206+        """Yields a LeaseInfo instance for all leases."""
1207+        f = open(self.fname, 'rb')
1208+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1209+        f.seek(self._lease_offset)
1210+        for i in range(num_leases):
1211+            data = f.read(self.LEASE_SIZE)
1212+            if data:
1213+                yield LeaseInfo().from_immutable_data(data)
1214+
1215+    def add_lease(self, lease_info):
1216+        f = open(self.fname, 'rb+')
1217+        num_leases = self._read_num_leases(f)
1218+        self._write_lease_record(f, num_leases, lease_info)
1219+        self._write_num_leases(f, num_leases+1)
1220+        f.close()
1221+
1222+    def renew_lease(self, renew_secret, new_expire_time):
1223+        for i,lease in enumerate(self.get_leases()):
1224+            if constant_time_compare(lease.renew_secret, renew_secret):
1225+                # yup. See if we need to update the owner time.
1226+                if new_expire_time > lease.expiration_time:
1227+                    # yes
1228+                    lease.expiration_time = new_expire_time
1229+                    f = open(self.fname, 'rb+')
1230+                    self._write_lease_record(f, i, lease)
1231+                    f.close()
1232+                return
1233+        raise IndexError("unable to renew non-existent lease")
1234+
1235+    def add_or_renew_lease(self, lease_info):
1236+        try:
1237+            self.renew_lease(lease_info.renew_secret,
1238+                             lease_info.expiration_time)
1239+        except IndexError:
1240+            self.add_lease(lease_info)
1241+
1242+
1243+    def cancel_lease(self, cancel_secret):
1244+        """Remove a lease with the given cancel_secret. If the last lease is
1245+        cancelled, the file will be removed. Return the number of bytes that
1246+        were freed (by truncating the list of leases, and possibly by
1247+        deleting the file). Raise IndexError if there was no lease with the
1248+        given cancel_secret.
1249+        """
1250+
1251+        leases = list(self.get_leases())
1252+        num_leases_removed = 0
1253+        for i,lease in enumerate(leases):
1254+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1255+                leases[i] = None
1256+                num_leases_removed += 1
1257+        if not num_leases_removed:
1258+            raise IndexError("unable to find matching lease to cancel")
1259+        if num_leases_removed:
1260+            # pack and write out the remaining leases. We write these out in
1261+            # the same order as they were added, so that if we crash while
1262+            # doing this, we won't lose any non-cancelled leases.
1263+            leases = [l for l in leases if l] # remove the cancelled leases
1264+            f = open(self.fname, 'rb+')
1265+            for i,lease in enumerate(leases):
1266+                self._write_lease_record(f, i, lease)
1267+            self._write_num_leases(f, len(leases))
1268+            self._truncate_leases(f, len(leases))
1269+            f.close()
1270+        space_freed = self.LEASE_SIZE * num_leases_removed
1271+        if not len(leases):
1272+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1273+            self.unlink()
1274+        return space_freed
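The share-file layout described in the comments above (four-byte version, four-byte saturated data length, four-byte lease count, all big-endian) can be exercised with a few lines of `struct`. This is a self-contained sketch of just the header, not the patch's ImmutableShare class:

```python
# The immutable share-file header is ">LLL": version, data length
# (saturated at 2**32-1 as the patch explains), number of leases.
import struct

def pack_header(max_size, num_leases=0, version=1):
    # saturate the length field rather than overflow it, so a pre-v1.3.0
    # server reading the file still serves the first part of the share
    return struct.pack(">LLL", version, min(2**32 - 1, max_size), num_leases)

def unpack_header(header):
    # the header occupies the first 0xc bytes of the file
    return struct.unpack(">LLL", header[:0xc])

version, length, leases = unpack_header(pack_header(2**40))
print(version, length == 2**32 - 1, leases)  # 1 True 0
```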
1275hunk ./src/allmydata/storage/backends/das/expirer.py 2
1276 import time, os, pickle, struct
1277-from allmydata.storage.crawler import ShareCrawler
1278-from allmydata.storage.shares import get_share_file
1279+from allmydata.storage.crawler import FSShareCrawler
1280 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1281      UnknownImmutableContainerVersionError
1282 from twisted.python import log as twlog
1283hunk ./src/allmydata/storage/backends/das/expirer.py 7
1284 
1285-class LeaseCheckingCrawler(ShareCrawler):
1286+class FSLeaseCheckingCrawler(FSShareCrawler):
1287     """I examine the leases on all shares, determining which are still valid
1288     and which have expired. I can remove the expired leases (if so
1289     configured), and the share will be deleted when the last lease is
1290hunk ./src/allmydata/storage/backends/das/expirer.py 50
1291     slow_start = 360 # wait 6 minutes after startup
1292     minimum_cycle_time = 12*60*60 # not more than twice per day
1293 
1294-    def __init__(self, statefile, historyfile,
1295-                 expiration_enabled, mode,
1296-                 override_lease_duration, # used if expiration_mode=="age"
1297-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1298-                 sharetypes):
1299+    def __init__(self, statefile, historyfile, expiration_policy):
1300         self.historyfile = historyfile
1301hunk ./src/allmydata/storage/backends/das/expirer.py 52
1302-        self.expiration_enabled = expiration_enabled
1303-        self.mode = mode
1304+        self.expiration_enabled = expiration_policy['enabled']
1305+        self.mode = expiration_policy['mode']
1306         self.override_lease_duration = None
1307         self.cutoff_date = None
1308         if self.mode == "age":
1309hunk ./src/allmydata/storage/backends/das/expirer.py 57
1310-            assert isinstance(override_lease_duration, (int, type(None)))
1311-            self.override_lease_duration = override_lease_duration # seconds
1312+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1313+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1314         elif self.mode == "cutoff-date":
1315hunk ./src/allmydata/storage/backends/das/expirer.py 60
1316-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1317+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1318-            assert cutoff_date is not None
1318+            assert expiration_policy['cutoff_date'] is not None
1319hunk ./src/allmydata/storage/backends/das/expirer.py 62
1320-            self.cutoff_date = cutoff_date
1321+            self.cutoff_date = expiration_policy['cutoff_date']
1322         else:
1323hunk ./src/allmydata/storage/backends/das/expirer.py 64
1324-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1325-        self.sharetypes_to_expire = sharetypes
1326-        ShareCrawler.__init__(self, statefile)
1327+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1328+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1329+        FSShareCrawler.__init__(self, statefile)
1330 
1331     def add_initial_state(self):
1332         # we fill ["cycle-to-date"] here (even though they will be reset in
1333hunk ./src/allmydata/storage/backends/das/expirer.py 156
1334 
1335     def process_share(self, sharefilename):
1336         # first, find out what kind of a share it is
1337-        sf = get_share_file(sharefilename)
1338+        f = open(sharefilename, "rb")
1339+        prefix = f.read(32)
1340+        f.close()
1341+        if prefix == MutableShareFile.MAGIC:
1342+            sf = MutableShareFile(sharefilename)
1343+        else:
1344+            # otherwise assume it's immutable
1345+            sf = FSBShare(sharefilename)
1346         sharetype = sf.sharetype
1347         now = time.time()
1348         s = self.stat(sharefilename)
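The expirer hunk above replaces individual constructor arguments with a single `expiration_policy` dict. A sketch of the dict shape and the validation the new constructor performs — the keys are inferred from the patch's own lookups, and `validate_policy` is an illustrative helper, not code from the patch:

```python
# Keys the reworked FSLeaseCheckingCrawler constructor reads from the
# expiration_policy dict: 'enabled', 'mode', and, depending on mode,
# 'override_lease_duration' or 'cutoff_date', plus 'sharetypes'.
def validate_policy(policy):
    mode = policy['mode']
    if mode == "age":
        assert isinstance(policy['override_lease_duration'], (int, type(None)))
    elif mode == "cutoff-date":
        assert isinstance(policy['cutoff_date'], int)  # seconds-since-epoch
    else:
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
    return True

policy = {
    'enabled': True,
    'mode': 'age',
    'override_lease_duration': 31 * 24 * 60 * 60,  # 31 days, in seconds
    'sharetypes': ('immutable', 'mutable'),
}
print(validate_policy(policy))  # True
```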
1349addfile ./src/allmydata/storage/backends/null/__init__.py
1350addfile ./src/allmydata/storage/backends/null/core.py
1351hunk ./src/allmydata/storage/backends/null/core.py 1
1352+from allmydata.storage.backends.base import Backend
1353+
1354+class NullCore(Backend):
1355+    def __init__(self):
1356+        Backend.__init__(self)
1357+
1358+    def get_available_space(self):
1359+        return None
1360+
1361+    def get_shares(self, storage_index):
1362+        return set()
1363+
1364+    def get_share(self, storage_index, sharenum):
1365+        return None
1366+
1367+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1368+        return NullBucketWriter()
1369hunk ./src/allmydata/storage/crawler.py 12
1370 class TimeSliceExceeded(Exception):
1371     pass
1372 
1373-class ShareCrawler(service.MultiService):
1374+class FSShareCrawler(service.MultiService):
1375     """A subclass of ShareCrawler is attached to a StorageServer, and
1376     periodically walks all of its shares, processing each one in some
1377     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1378hunk ./src/allmydata/storage/crawler.py 68
1379     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1380     minimum_cycle_time = 300 # don't run a cycle faster than this
1381 
1382-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1383+    def __init__(self, statefname, allowed_cpu_percentage=None):
1384         service.MultiService.__init__(self)
1385         if allowed_cpu_percentage is not None:
1386             self.allowed_cpu_percentage = allowed_cpu_percentage
1387hunk ./src/allmydata/storage/crawler.py 72
1388-        self.backend = backend
1389+        self.statefname = statefname
1390         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1391                          for i in range(2**10)]
1392         self.prefixes.sort()
1393hunk ./src/allmydata/storage/crawler.py 192
1394         #                            of the last bucket to be processed, or
1395         #                            None if we are sleeping between cycles
1396         try:
1397-            f = open(self.statefile, "rb")
1398+            f = open(self.statefname, "rb")
1399             state = pickle.load(f)
1400             f.close()
1401         except EnvironmentError:
1402hunk ./src/allmydata/storage/crawler.py 230
1403         else:
1404             last_complete_prefix = self.prefixes[lcpi]
1405         self.state["last-complete-prefix"] = last_complete_prefix
1406-        tmpfile = self.statefile + ".tmp"
1407+        tmpfile = self.statefname + ".tmp"
1408         f = open(tmpfile, "wb")
1409         pickle.dump(self.state, f)
1410         f.close()
1411hunk ./src/allmydata/storage/crawler.py 433
1412         pass
1413 
1414 
1415-class BucketCountingCrawler(ShareCrawler):
1416+class FSBucketCountingCrawler(FSShareCrawler):
1417     """I keep track of how many buckets are being managed by this server.
1418     This is equivalent to the number of distributed files and directories for
1419     which I am providing storage. The actual number of files+directories in
1420hunk ./src/allmydata/storage/crawler.py 446
1421 
1422     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1423 
1424-    def __init__(self, statefile, num_sample_prefixes=1):
1425-        ShareCrawler.__init__(self, statefile)
1426+    def __init__(self, statefname, num_sample_prefixes=1):
1427+        FSShareCrawler.__init__(self, statefname)
1428         self.num_sample_prefixes = num_sample_prefixes
1429 
1430     def add_initial_state(self):
1431hunk ./src/allmydata/storage/immutable.py 14
1432 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1433      DataTooLargeError
1434 
1435-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1436-# and share data. The share data is accessed by RIBucketWriter.write and
1437-# RIBucketReader.read . The lease information is not accessible through these
1438-# interfaces.
1439-
1440-# The share file has the following layout:
1441-#  0x00: share file version number, four bytes, current version is 1
1442-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1443-#  0x08: number of leases, four bytes big-endian
1444-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1445-#  A+0x0c = B: first lease. Lease format is:
1446-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1447-#   B+0x04: renew secret, 32 bytes (SHA256)
1448-#   B+0x24: cancel secret, 32 bytes (SHA256)
1449-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1450-#   B+0x48: next lease, or end of record
1451-
1452-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1453-# but it is still filled in by storage servers in case the storage server
1454-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1455-# share file is moved from one storage server to another. The value stored in
1456-# this field is truncated, so if the actual share data length is >= 2**32,
1457-# then the value stored in this field will be the actual share data length
1458-# modulo 2**32.
1459-
1460-class ShareFile:
1461-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1462-    sharetype = "immutable"
1463-
1464-    def __init__(self, filename, max_size=None, create=False):
1465-        """ If max_size is not None then I won't allow more than
1466-        max_size to be written to me. If create=True then max_size
1467-        must not be None. """
1468-        precondition((max_size is not None) or (not create), max_size, create)
1469-        self.home = filename
1470-        self._max_size = max_size
1471-        if create:
1472-            # touch the file, so later callers will see that we're working on
1473-            # it. Also construct the metadata.
1474-            assert not os.path.exists(self.home)
1475-            fileutil.make_dirs(os.path.dirname(self.home))
1476-            f = open(self.home, 'wb')
1477-            # The second field -- the four-byte share data length -- is no
1478-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1479-            # there in case someone downgrades a storage server from >=
1480-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1481-            # server to another, etc. We do saturation -- a share data length
1482-            # larger than 2**32-1 (what can fit into the field) is marked as
1483-            # the largest length that can fit into the field. That way, even
1484-            # if this does happen, the old < v1.3.0 server will still allow
1485-            # clients to read the first part of the share.
1486-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1487-            f.close()
1488-            self._lease_offset = max_size + 0x0c
1489-            self._num_leases = 0
1490-        else:
1491-            f = open(self.home, 'rb')
1492-            filesize = os.path.getsize(self.home)
1493-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1494-            f.close()
1495-            if version != 1:
1496-                msg = "sharefile %s had version %d but we wanted 1" % \
1497-                      (filename, version)
1498-                raise UnknownImmutableContainerVersionError(msg)
1499-            self._num_leases = num_leases
1500-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1501-        self._data_offset = 0xc
1502-
1503-    def unlink(self):
1504-        os.unlink(self.home)
1505-
1506-    def read_share_data(self, offset, length):
1507-        precondition(offset >= 0)
1508-        # Reads beyond the end of the data are truncated. Reads that start
1509-        # beyond the end of the data return an empty string.
1510-        seekpos = self._data_offset+offset
1511-        fsize = os.path.getsize(self.home)
1512-        actuallength = max(0, min(length, fsize-seekpos))
1513-        if actuallength == 0:
1514-            return ""
1515-        f = open(self.home, 'rb')
1516-        f.seek(seekpos)
1517-        return f.read(actuallength)
1518-
1519-    def write_share_data(self, offset, data):
1520-        length = len(data)
1521-        precondition(offset >= 0, offset)
1522-        if self._max_size is not None and offset+length > self._max_size:
1523-            raise DataTooLargeError(self._max_size, offset, length)
1524-        f = open(self.home, 'rb+')
1525-        real_offset = self._data_offset+offset
1526-        f.seek(real_offset)
1527-        assert f.tell() == real_offset
1528-        f.write(data)
1529-        f.close()
1530-
1531-    def _write_lease_record(self, f, lease_number, lease_info):
1532-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1533-        f.seek(offset)
1534-        assert f.tell() == offset
1535-        f.write(lease_info.to_immutable_data())
1536-
1537-    def _read_num_leases(self, f):
1538-        f.seek(0x08)
1539-        (num_leases,) = struct.unpack(">L", f.read(4))
1540-        return num_leases
1541-
1542-    def _write_num_leases(self, f, num_leases):
1543-        f.seek(0x08)
1544-        f.write(struct.pack(">L", num_leases))
1545-
1546-    def _truncate_leases(self, f, num_leases):
1547-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1548-
1549-    def get_leases(self):
1550-        """Yields a LeaseInfo instance for all leases."""
1551-        f = open(self.home, 'rb')
1552-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1553-        f.seek(self._lease_offset)
1554-        for i in range(num_leases):
1555-            data = f.read(self.LEASE_SIZE)
1556-            if data:
1557-                yield LeaseInfo().from_immutable_data(data)
1558-
1559-    def add_lease(self, lease_info):
1560-        f = open(self.home, 'rb+')
1561-        num_leases = self._read_num_leases(f)
1562-        self._write_lease_record(f, num_leases, lease_info)
1563-        self._write_num_leases(f, num_leases+1)
1564-        f.close()
1565-
1566-    def renew_lease(self, renew_secret, new_expire_time):
1567-        for i,lease in enumerate(self.get_leases()):
1568-            if constant_time_compare(lease.renew_secret, renew_secret):
1569-                # yup. See if we need to update the owner time.
1570-                if new_expire_time > lease.expiration_time:
1571-                    # yes
1572-                    lease.expiration_time = new_expire_time
1573-                    f = open(self.home, 'rb+')
1574-                    self._write_lease_record(f, i, lease)
1575-                    f.close()
1576-                return
1577-        raise IndexError("unable to renew non-existent lease")
1578-
1579-    def add_or_renew_lease(self, lease_info):
1580-        try:
1581-            self.renew_lease(lease_info.renew_secret,
1582-                             lease_info.expiration_time)
1583-        except IndexError:
1584-            self.add_lease(lease_info)
1585-
1586-
1587-    def cancel_lease(self, cancel_secret):
1588-        """Remove a lease with the given cancel_secret. If the last lease is
1589-        cancelled, the file will be removed. Return the number of bytes that
1590-        were freed (by truncating the list of leases, and possibly by
1591-        deleting the file. Raise IndexError if there was no lease with the
1592-        given cancel_secret.
1593-        """
1594-
1595-        leases = list(self.get_leases())
1596-        num_leases_removed = 0
1597-        for i,lease in enumerate(leases):
1598-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1599-                leases[i] = None
1600-                num_leases_removed += 1
1601-        if not num_leases_removed:
1602-            raise IndexError("unable to find matching lease to cancel")
1603-        if num_leases_removed:
1604-            # pack and write out the remaining leases. We write these out in
1605-            # the same order as they were added, so that if we crash while
1606-            # doing this, we won't lose any non-cancelled leases.
1607-            leases = [l for l in leases if l] # remove the cancelled leases
1608-            f = open(self.home, 'rb+')
1609-            for i,lease in enumerate(leases):
1610-                self._write_lease_record(f, i, lease)
1611-            self._write_num_leases(f, len(leases))
1612-            self._truncate_leases(f, len(leases))
1613-            f.close()
1614-        space_freed = self.LEASE_SIZE * num_leases_removed
1615-        if not len(leases):
1616-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1617-            self.unlink()
1618-        return space_freed
1619-class NullBucketWriter(Referenceable):
1620-    implements(RIBucketWriter)
1621-
1622-    def remote_write(self, offset, data):
1623-        return
1624-
1625 class BucketWriter(Referenceable):
1626     implements(RIBucketWriter)
1627 
1628hunk ./src/allmydata/storage/immutable.py 17
1629-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1630+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1631         self.ss = ss
1632hunk ./src/allmydata/storage/immutable.py 19
1633-        self.incominghome = incominghome
1634-        self.finalhome = finalhome
1635         self._max_size = max_size # don't allow the client to write more than this
1636         self._canary = canary
1637         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1638hunk ./src/allmydata/storage/immutable.py 24
1639         self.closed = False
1640         self.throw_out_all_data = False
1641-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1642+        self._sharefile = immutableshare
1643         # also, add our lease to the file now, so that other ones can be
1644         # added by simultaneous uploaders
1645         self._sharefile.add_lease(lease_info)
1646hunk ./src/allmydata/storage/server.py 16
1647 from allmydata.storage.lease import LeaseInfo
1648 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1649      create_mutable_sharefile
1650-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1651-from allmydata.storage.crawler import BucketCountingCrawler
1652-from allmydata.storage.expirer import LeaseCheckingCrawler
1653 
1654 from zope.interface import implements
1655 
1656hunk ./src/allmydata/storage/server.py 19
1657-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1658-# be started and stopped.
1659-class Backend(service.MultiService):
1660-    implements(IStatsProducer)
1661-    def __init__(self):
1662-        service.MultiService.__init__(self)
1663-
1664-    def get_bucket_shares(self):
1665-        """XXX"""
1666-        raise NotImplementedError
1667-
1668-    def get_share(self):
1669-        """XXX"""
1670-        raise NotImplementedError
1671-
1672-    def make_bucket_writer(self):
1673-        """XXX"""
1674-        raise NotImplementedError
1675-
1676-class NullBackend(Backend):
1677-    def __init__(self):
1678-        Backend.__init__(self)
1679-
1680-    def get_available_space(self):
1681-        return None
1682-
1683-    def get_bucket_shares(self, storage_index):
1684-        return set()
1685-
1686-    def get_share(self, storage_index, sharenum):
1687-        return None
1688-
1689-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1690-        return NullBucketWriter()
1691-
1692-class FSBackend(Backend):
1693-    def __init__(self, storedir, readonly=False, reserved_space=0):
1694-        Backend.__init__(self)
1695-
1696-        self._setup_storage(storedir, readonly, reserved_space)
1697-        self._setup_corruption_advisory()
1698-        self._setup_bucket_counter()
1699-        self._setup_lease_checkerf()
1700-
1701-    def _setup_storage(self, storedir, readonly, reserved_space):
1702-        self.storedir = storedir
1703-        self.readonly = readonly
1704-        self.reserved_space = int(reserved_space)
1705-        if self.reserved_space:
1706-            if self.get_available_space() is None:
1707-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1708-                        umid="0wZ27w", level=log.UNUSUAL)
1709-
1710-        self.sharedir = os.path.join(self.storedir, "shares")
1711-        fileutil.make_dirs(self.sharedir)
1712-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1713-        self._clean_incomplete()
1714-
1715-    def _clean_incomplete(self):
1716-        fileutil.rm_dir(self.incomingdir)
1717-        fileutil.make_dirs(self.incomingdir)
1718-
1719-    def _setup_corruption_advisory(self):
1720-        # we don't actually create the corruption-advisory dir until necessary
1721-        self.corruption_advisory_dir = os.path.join(self.storedir,
1722-                                                    "corruption-advisories")
1723-
1724-    def _setup_bucket_counter(self):
1725-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1726-        self.bucket_counter = BucketCountingCrawler(statefile)
1727-        self.bucket_counter.setServiceParent(self)
1728-
1729-    def _setup_lease_checkerf(self):
1730-        statefile = os.path.join(self.storedir, "lease_checker.state")
1731-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1732-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1733-                                   expiration_enabled, expiration_mode,
1734-                                   expiration_override_lease_duration,
1735-                                   expiration_cutoff_date,
1736-                                   expiration_sharetypes)
1737-        self.lease_checker.setServiceParent(self)
1738-
1739-    def get_available_space(self):
1740-        if self.readonly:
1741-            return 0
1742-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1743-
1744-    def get_bucket_shares(self, storage_index):
1745-        """Return a list of (shnum, pathname) tuples for files that hold
1746-        shares for this storage_index. In each tuple, 'shnum' will always be
1747-        the integer form of the last component of 'pathname'."""
1748-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1749-        try:
1750-            for f in os.listdir(storagedir):
1751-                if NUM_RE.match(f):
1752-                    filename = os.path.join(storagedir, f)
1753-                    yield (int(f), filename)
1754-        except OSError:
1755-            # Commonly caused by there being no buckets at all.
1756-            pass
1757-
1758 # storage/
1759 # storage/shares/incoming
1760 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1761hunk ./src/allmydata/storage/server.py 32
1762 # $SHARENUM matches this regex:
1763 NUM_RE=re.compile("^[0-9]+$")
1764 
1765-
1766-
1767 class StorageServer(service.MultiService, Referenceable):
1768     implements(RIStorageServer, IStatsProducer)
1769     name = 'storage'
1770hunk ./src/allmydata/storage/server.py 35
1771-    LeaseCheckerClass = LeaseCheckingCrawler
1772 
1773     def __init__(self, nodeid, backend, reserved_space=0,
1774                  readonly_storage=False,
1775hunk ./src/allmydata/storage/server.py 38
1776-                 stats_provider=None,
1777-                 expiration_enabled=False,
1778-                 expiration_mode="age",
1779-                 expiration_override_lease_duration=None,
1780-                 expiration_cutoff_date=None,
1781-                 expiration_sharetypes=("mutable", "immutable")):
1782+                 stats_provider=None ):
1783         service.MultiService.__init__(self)
1784         assert isinstance(nodeid, str)
1785         assert len(nodeid) == 20
1786hunk ./src/allmydata/storage/server.py 217
1787         # they asked about: this will save them a lot of work. Add or update
1788         # leases for all of them: if they want us to hold shares for this
1789         # file, they'll want us to hold leases for this file.
1790-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1791-            alreadygot.add(shnum)
1792-            sf = ShareFile(fn)
1793-            sf.add_or_renew_lease(lease_info)
1794-
1795-        for shnum in sharenums:
1796-            share = self.backend.get_share(storage_index, shnum)
1797+        for share in self.backend.get_shares(storage_index):
1798+            alreadygot.add(share.shnum)
1799+            share.add_or_renew_lease(lease_info)
1800 
1801hunk ./src/allmydata/storage/server.py 221
1802-            if not share:
1803-                if (not limited) or (remaining_space >= max_space_per_bucket):
1804-                    # ok! we need to create the new share file.
1805-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1806-                                      max_space_per_bucket, lease_info, canary)
1807-                    bucketwriters[shnum] = bw
1808-                    self._active_writers[bw] = 1
1809-                    if limited:
1810-                        remaining_space -= max_space_per_bucket
1811-                else:
1812-                    # bummer! not enough space to accept this bucket
1813-                    pass
1814+        for shnum in (sharenums - alreadygot):
1815+            if (not limited) or (remaining_space >= max_space_per_bucket):
1816+                #XXX Should the following line occur in the storage server constructor instead? OK: we need to create the new share file.
1817+                self.backend.set_storage_server(self)
1818+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1819+                                                     max_space_per_bucket, lease_info, canary)
1820+                bucketwriters[shnum] = bw
1821+                self._active_writers[bw] = 1
1822+                if limited:
1823+                    remaining_space -= max_space_per_bucket
1824 
1825hunk ./src/allmydata/storage/server.py 232
1826-            elif share.is_complete():
1827-                # great! we already have it. easy.
1828-                pass
1829-            elif not share.is_complete():
1830-                # Note that we don't create BucketWriters for shnums that
1831-                # have a partial share (in incoming/), so if a second upload
1832-                # occurs while the first is still in progress, the second
1833-                # uploader will use different storage servers.
1834-                pass
1835+        #XXX We should document this behavior later.
1836 
1837         self.add_latency("allocate", time.time() - start)
1838         return alreadygot, bucketwriters
1839hunk ./src/allmydata/storage/server.py 238
1840 
1841     def _iter_share_files(self, storage_index):
1842-        for shnum, filename in self._get_bucket_shares(storage_index):
1843+        for shnum, filename in self._get_shares(storage_index):
1844             f = open(filename, 'rb')
1845             header = f.read(32)
1846             f.close()
1847hunk ./src/allmydata/storage/server.py 318
1848         si_s = si_b2a(storage_index)
1849         log.msg("storage: get_buckets %s" % si_s)
1850         bucketreaders = {} # k: sharenum, v: BucketReader
1851-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1852+        for shnum, filename in self.backend.get_shares(storage_index):
1853             bucketreaders[shnum] = BucketReader(self, filename,
1854                                                 storage_index, shnum)
1855         self.add_latency("get", time.time() - start)
1856hunk ./src/allmydata/storage/server.py 334
1857         # since all shares get the same lease data, we just grab the leases
1858         # from the first share
1859         try:
1860-            shnum, filename = self._get_bucket_shares(storage_index).next()
1861+            shnum, filename = self._get_shares(storage_index).next()
1862             sf = ShareFile(filename)
1863             return sf.get_leases()
1864         except StopIteration:
1865hunk ./src/allmydata/storage/shares.py 1
1866-#! /usr/bin/python
1867-
1868-from allmydata.storage.mutable import MutableShareFile
1869-from allmydata.storage.immutable import ShareFile
1870-
1871-def get_share_file(filename):
1872-    f = open(filename, "rb")
1873-    prefix = f.read(32)
1874-    f.close()
1875-    if prefix == MutableShareFile.MAGIC:
1876-        return MutableShareFile(filename)
1877-    # otherwise assume it's immutable
1878-    return ShareFile(filename)
1879-
1880rmfile ./src/allmydata/storage/shares.py
1881hunk ./src/allmydata/test/common_util.py 20
1882 
1883 def flip_one_bit(s, offset=0, size=None):
1884     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1885-    than offset+size. """
1886+    than offset+size. Return the new string. """
1887     if size is None:
1888         size=len(s)-offset
1889     i = randrange(offset, offset+size)
1890hunk ./src/allmydata/test/test_backends.py 7
1891 
1892 from allmydata.test.common_util import ReallyEqualMixin
1893 
1894-import mock
1895+import mock, os
1896 
1897 # This is the code that we're going to be testing.
1898hunk ./src/allmydata/test/test_backends.py 10
1899-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1900+from allmydata.storage.server import StorageServer
1901+
1902+from allmydata.storage.backends.das.core import DASCore
1903+from allmydata.storage.backends.null.core import NullCore
1904+
1905 
1906 # The following share file contents was generated with
1907 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1908hunk ./src/allmydata/test/test_backends.py 22
1909 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1910 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1911 
1912-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1913+tempdir = 'teststoredir'
1914+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1915+sharefname = os.path.join(sharedirname, '0')
1916 
1917 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1918     @mock.patch('time.time')
1919hunk ./src/allmydata/test/test_backends.py 58
1920         filesystem in only the prescribed ways. """
1921 
1922         def call_open(fname, mode):
1923-            if fname == 'testdir/bucket_counter.state':
1924-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1925-            elif fname == 'testdir/lease_checker.state':
1926-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1927-            elif fname == 'testdir/lease_checker.history':
1928+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1929+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1930+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1931+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1932+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1933                 return StringIO()
1934             else:
1935                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1936hunk ./src/allmydata/test/test_backends.py 124
1937     @mock.patch('__builtin__.open')
1938     def setUp(self, mockopen):
1939         def call_open(fname, mode):
1940-            if fname == 'testdir/bucket_counter.state':
1941-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1942-            elif fname == 'testdir/lease_checker.state':
1943-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1944-            elif fname == 'testdir/lease_checker.history':
1945+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1946+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1947+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1948+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1949+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1950                 return StringIO()
1951         mockopen.side_effect = call_open
1952hunk ./src/allmydata/test/test_backends.py 131
1953-
1954-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1955+        expiration_policy = {'enabled' : False,
1956+                             'mode' : 'age',
1957+                             'override_lease_duration' : None,
1958+                             'cutoff_date' : None,
1959+                             'sharetypes' : None}
1960+        testbackend = DASCore(tempdir, expiration_policy)
1961+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
1962 
1963     @mock.patch('time.time')
1964     @mock.patch('os.mkdir')
1965hunk ./src/allmydata/test/test_backends.py 148
1966         """ Write a new share. """
1967 
1968         def call_listdir(dirname):
1969-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1970-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1971+            self.failUnlessReallyEqual(dirname, sharedirname)
1972+            raise OSError(2, "No such file or directory: '%s'" % sharedirname)
1973 
1974         mocklistdir.side_effect = call_listdir
1975 
1976hunk ./src/allmydata/test/test_backends.py 178
1977 
1978         sharefile = MockFile()
1979         def call_open(fname, mode):
1980-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1981+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1982             return sharefile
1983 
1984         mockopen.side_effect = call_open
1985hunk ./src/allmydata/test/test_backends.py 200
1986         StorageServer object. """
1987 
1988         def call_listdir(dirname):
1989-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1990+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1991             return ['0']
1992 
1993         mocklistdir.side_effect = call_listdir
1994}
1995[checkpoint patch
1996wilcoxjg@gmail.com**20110626165715
1997 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1998] {
1999hunk ./src/allmydata/storage/backends/das/core.py 21
2000 from allmydata.storage.lease import LeaseInfo
2001 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2002      create_mutable_sharefile
2003-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2004+from allmydata.storage.immutable import BucketWriter, BucketReader
2005 from allmydata.storage.crawler import FSBucketCountingCrawler
2006 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2007 
2008hunk ./src/allmydata/storage/backends/das/core.py 27
2009 from zope.interface import implements
2010 
2011+# $SHARENUM matches this regex:
2012+NUM_RE=re.compile("^[0-9]+$")
2013+
2014 class DASCore(Backend):
2015     implements(IStorageBackend)
2016     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2017hunk ./src/allmydata/storage/backends/das/core.py 80
2018         return fileutil.get_available_space(self.storedir, self.reserved_space)
2019 
2020     def get_shares(self, storage_index):
2021-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2022+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
2023         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2024         try:
2025             for f in os.listdir(finalstoragedir):
2026hunk ./src/allmydata/storage/backends/das/core.py 86
2027                 if NUM_RE.match(f):
2028                     filename = os.path.join(finalstoragedir, f)
2029-                    yield FSBShare(filename, int(f))
2030+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2031         except OSError:
2032             # Commonly caused by there being no buckets at all.
2033             pass
2034hunk ./src/allmydata/storage/backends/das/core.py 95
2035         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2036         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2037         return bw
2038+
2039+    def set_storage_server(self, ss):
2040+        self.ss = ss
2041         
2042 
2043 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2044hunk ./src/allmydata/storage/server.py 29
2045 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2046 # base-32 chars).
2047 
2048-# $SHARENUM matches this regex:
2049-NUM_RE=re.compile("^[0-9]+$")
2050 
2051 class StorageServer(service.MultiService, Referenceable):
2052     implements(RIStorageServer, IStatsProducer)
2053}
2054[checkpoint4
2055wilcoxjg@gmail.com**20110628202202
2056 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2057] {
2058hunk ./src/allmydata/storage/backends/das/core.py 96
2059         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2060         return bw
2061 
2062+    def make_bucket_reader(self, share):
2063+        return BucketReader(self.ss, share)
2064+
2065     def set_storage_server(self, ss):
2066         self.ss = ss
2067         
2068hunk ./src/allmydata/storage/backends/das/core.py 138
2069         must not be None. """
2070         precondition((max_size is not None) or (not create), max_size, create)
2071         self.shnum = shnum
2072+        self.storage_index = storageindex
2073         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2074         self._max_size = max_size
2075         if create:
2076hunk ./src/allmydata/storage/backends/das/core.py 173
2077             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2078         self._data_offset = 0xc
2079 
2080+    def get_shnum(self):
2081+        return self.shnum
2082+
2083     def unlink(self):
2084         os.unlink(self.fname)
2085 
2086hunk ./src/allmydata/storage/backends/null/core.py 2
2087 from allmydata.storage.backends.base import Backend
2088+from allmydata.storage.immutable import BucketWriter, BucketReader
2089 
2090 class NullCore(Backend):
2091     def __init__(self):
2092hunk ./src/allmydata/storage/backends/null/core.py 17
2093     def get_share(self, storage_index, sharenum):
2094         return None
2095 
2096-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2097-        return NullBucketWriter()
2098+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2099+       
2100+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2101+
2102+    def set_storage_server(self, ss):
2103+        self.ss = ss
2104+
2105+class ImmutableShare:
2106+    sharetype = "immutable"
2107+
2108+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2109+        """ If max_size is not None then I won't allow more than
2110+        max_size to be written to me. If create=True then max_size
2111+        must not be None. """
2112+        precondition((max_size is not None) or (not create), max_size, create)
2113+        self.shnum = shnum
2114+        self.storage_index = storageindex
2115+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2116+        self._max_size = max_size
2117+        if create:
2118+            # touch the file, so later callers will see that we're working on
2119+            # it. Also construct the metadata.
2120+            assert not os.path.exists(self.fname)
2121+            fileutil.make_dirs(os.path.dirname(self.fname))
2122+            f = open(self.fname, 'wb')
2123+            # The second field -- the four-byte share data length -- is no
2124+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2125+            # there in case someone downgrades a storage server from >=
2126+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2127+            # server to another, etc. We do saturation -- a share data length
2128+            # larger than 2**32-1 (what can fit into the field) is marked as
2129+            # the largest length that can fit into the field. That way, even
2130+            # if this does happen, the old < v1.3.0 server will still allow
2131+            # clients to read the first part of the share.
2132+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2133+            f.close()
2134+            self._lease_offset = max_size + 0x0c
2135+            self._num_leases = 0
2136+        else:
2137+            f = open(self.fname, 'rb')
2138+            filesize = os.path.getsize(self.fname)
2139+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2140+            f.close()
2141+            if version != 1:
2142+                msg = "sharefile %s had version %d but we wanted 1" % \
2143+                      (self.fname, version)
2144+                raise UnknownImmutableContainerVersionError(msg)
2145+            self._num_leases = num_leases
2146+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2147+        self._data_offset = 0xc
2148+
2149+    def get_shnum(self):
2150+        return self.shnum
2151+
2152+    def unlink(self):
2153+        os.unlink(self.fname)
2154+
2155+    def read_share_data(self, offset, length):
2156+        precondition(offset >= 0)
2157+        # Reads beyond the end of the data are truncated. Reads that start
2158+        # beyond the end of the data return an empty string.
2159+        seekpos = self._data_offset+offset
2160+        fsize = os.path.getsize(self.fname)
2161+        actuallength = max(0, min(length, fsize-seekpos))
2162+        if actuallength == 0:
2163+            return ""
2164+        f = open(self.fname, 'rb')
2165+        f.seek(seekpos)
2166+        return f.read(actuallength)
2167+
2168+    def write_share_data(self, offset, data):
2169+        length = len(data)
2170+        precondition(offset >= 0, offset)
2171+        if self._max_size is not None and offset+length > self._max_size:
2172+            raise DataTooLargeError(self._max_size, offset, length)
2173+        f = open(self.fname, 'rb+')
2174+        real_offset = self._data_offset+offset
2175+        f.seek(real_offset)
2176+        assert f.tell() == real_offset
2177+        f.write(data)
2178+        f.close()
2179+
2180+    def _write_lease_record(self, f, lease_number, lease_info):
2181+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2182+        f.seek(offset)
2183+        assert f.tell() == offset
2184+        f.write(lease_info.to_immutable_data())
2185+
2186+    def _read_num_leases(self, f):
2187+        f.seek(0x08)
2188+        (num_leases,) = struct.unpack(">L", f.read(4))
2189+        return num_leases
2190+
2191+    def _write_num_leases(self, f, num_leases):
2192+        f.seek(0x08)
2193+        f.write(struct.pack(">L", num_leases))
2194+
2195+    def _truncate_leases(self, f, num_leases):
2196+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2197+
2198+    def get_leases(self):
2199+        """Yields a LeaseInfo instance for all leases."""
2200+        f = open(self.fname, 'rb')
2201+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2202+        f.seek(self._lease_offset)
2203+        for i in range(num_leases):
2204+            data = f.read(self.LEASE_SIZE)
2205+            if data:
2206+                yield LeaseInfo().from_immutable_data(data)
2207+
2208+    def add_lease(self, lease_info):
2209+        f = open(self.fname, 'rb+')
2210+        num_leases = self._read_num_leases(f)
2211+        self._write_lease_record(f, num_leases, lease_info)
2212+        self._write_num_leases(f, num_leases+1)
2213+        f.close()
2214+
2215+    def renew_lease(self, renew_secret, new_expire_time):
2216+        for i,lease in enumerate(self.get_leases()):
2217+            if constant_time_compare(lease.renew_secret, renew_secret):
2218+                # yup. See if we need to update the owner time.
2219+                if new_expire_time > lease.expiration_time:
2220+                    # yes
2221+                    lease.expiration_time = new_expire_time
2222+                    f = open(self.fname, 'rb+')
2223+                    self._write_lease_record(f, i, lease)
2224+                    f.close()
2225+                return
2226+        raise IndexError("unable to renew non-existent lease")
2227+
2228+    def add_or_renew_lease(self, lease_info):
2229+        try:
2230+            self.renew_lease(lease_info.renew_secret,
2231+                             lease_info.expiration_time)
2232+        except IndexError:
2233+            self.add_lease(lease_info)
2234+
2235+
2236+    def cancel_lease(self, cancel_secret):
2237+        """Remove a lease with the given cancel_secret. If the last lease is
2238+        cancelled, the file will be removed. Return the number of bytes that
2239+        were freed (by truncating the list of leases, and possibly by
2240+        deleting the file). Raise IndexError if there was no lease with the
2241+        given cancel_secret.
2242+        """
2243+
2244+        leases = list(self.get_leases())
2245+        num_leases_removed = 0
2246+        for i,lease in enumerate(leases):
2247+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2248+                leases[i] = None
2249+                num_leases_removed += 1
2250+        if not num_leases_removed:
2251+            raise IndexError("unable to find matching lease to cancel")
2252+        if num_leases_removed:
2253+            # pack and write out the remaining leases. We write these out in
2254+            # the same order as they were added, so that if we crash while
2255+            # doing this, we won't lose any non-cancelled leases.
2256+            leases = [l for l in leases if l] # remove the cancelled leases
2257+            f = open(self.fname, 'rb+')
2258+            for i,lease in enumerate(leases):
2259+                self._write_lease_record(f, i, lease)
2260+            self._write_num_leases(f, len(leases))
2261+            self._truncate_leases(f, len(leases))
2262+            f.close()
2263+        space_freed = self.LEASE_SIZE * num_leases_removed
2264+        if not len(leases):
2265+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2266+            self.unlink()
2267+        return space_freed
2268hunk ./src/allmydata/storage/immutable.py 114
2269 class BucketReader(Referenceable):
2270     implements(RIBucketReader)
2271 
2272-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2273+    def __init__(self, ss, share):
2274         self.ss = ss
2275hunk ./src/allmydata/storage/immutable.py 116
2276-        self._share_file = ShareFile(sharefname)
2277-        self.storage_index = storage_index
2278-        self.shnum = shnum
2279+        self._share_file = share
2280+        self.storage_index = share.storage_index
2281+        self.shnum = share.shnum
2282 
2283     def __repr__(self):
2284         return "<%s %s %s>" % (self.__class__.__name__,
2285hunk ./src/allmydata/storage/server.py 316
2286         si_s = si_b2a(storage_index)
2287         log.msg("storage: get_buckets %s" % si_s)
2288         bucketreaders = {} # k: sharenum, v: BucketReader
2289-        for shnum, filename in self.backend.get_shares(storage_index):
2290-            bucketreaders[shnum] = BucketReader(self, filename,
2291-                                                storage_index, shnum)
2292+        self.backend.set_storage_server(self)
2293+        for share in self.backend.get_shares(storage_index):
2294+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2295         self.add_latency("get", time.time() - start)
2296         return bucketreaders
2297 
2298hunk ./src/allmydata/test/test_backends.py 25
2299 tempdir = 'teststoredir'
2300 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2301 sharefname = os.path.join(sharedirname, '0')
2302+expiration_policy = {'enabled' : False,
2303+                     'mode' : 'age',
2304+                     'override_lease_duration' : None,
2305+                     'cutoff_date' : None,
2306+                     'sharetypes' : None}
2307 
2308 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2309     @mock.patch('time.time')
2310hunk ./src/allmydata/test/test_backends.py 43
2311         tries to read or write to the file system. """
2312 
2313         # Now begin the test.
2314-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2315+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2316 
2317         self.failIf(mockisdir.called)
2318         self.failIf(mocklistdir.called)
2319hunk ./src/allmydata/test/test_backends.py 74
2320         mockopen.side_effect = call_open
2321 
2322         # Now begin the test.
2323-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2324+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2325 
2326         self.failIf(mockisdir.called)
2327         self.failIf(mocklistdir.called)
2328hunk ./src/allmydata/test/test_backends.py 86
2329 
2330 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2331     def setUp(self):
2332-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2333+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2334 
2335     @mock.patch('os.mkdir')
2336     @mock.patch('__builtin__.open')
2337hunk ./src/allmydata/test/test_backends.py 136
2338             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2339                 return StringIO()
2340         mockopen.side_effect = call_open
2341-        expiration_policy = {'enabled' : False,
2342-                             'mode' : 'age',
2343-                             'override_lease_duration' : None,
2344-                             'cutoff_date' : None,
2345-                             'sharetypes' : None}
2346         testbackend = DASCore(tempdir, expiration_policy)
2347         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2348 
2349}
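(Editor's note, not part of the patch: the `ImmutableShare.__init__` added above writes a 12-byte big-endian header of `(version, data length, lease count)` via `struct.pack(">LLL", ...)`, with share data starting at offset 0xc and the data-length field saturating at 2**32-1 for pre-1.3.0 compatibility. A minimal standalone sketch of that header arithmetic, with hypothetical helper names:)

```python
import struct

# Hypothetical helpers mirroring the header logic in ImmutableShare.__init__:
# a 12-byte big-endian header of (version, saturated data length, num_leases).
# Share data begins immediately after, at offset 0xc.

def pack_header(max_size, num_leases=0):
    # The data-length field saturates at 2**32 - 1 so that a share larger
    # than the field can hold is still partially readable by old servers,
    # as the comment in the patch explains.
    return struct.pack(">LLL", 1, min(2**32 - 1, max_size), num_leases)

def unpack_header(header):
    # Returns (version, data_length, num_leases); version is expected to be 1.
    return struct.unpack(">LLL", header)
```

For example, `unpack_header(pack_header(100))` yields `(1, 100, 0)`, while any `max_size` above 2**32-1 is clamped to 2**32-1 in the second field.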
2350[checkpoint5
2351wilcoxjg@gmail.com**20110705034626
2352 Ignore-this: 255780bd58299b0aa33c027e9d008262
2353] {
2354addfile ./src/allmydata/storage/backends/base.py
2355hunk ./src/allmydata/storage/backends/base.py 1
2356+from twisted.application import service
2357+
2358+class Backend(service.MultiService):
2359+    def __init__(self):
2360+        service.MultiService.__init__(self)
2361hunk ./src/allmydata/storage/backends/null/core.py 19
2362 
2363     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2364         
2365+        immutableshare = ImmutableShare()
2366         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2367 
2368     def set_storage_server(self, ss):
2369hunk ./src/allmydata/storage/backends/null/core.py 28
2370 class ImmutableShare:
2371     sharetype = "immutable"
2372 
2373-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2374+    def __init__(self):
2375         """ If max_size is not None then I won't allow more than
2376         max_size to be written to me. If create=True then max_size
2377         must not be None. """
2378hunk ./src/allmydata/storage/backends/null/core.py 32
2379-        precondition((max_size is not None) or (not create), max_size, create)
2380-        self.shnum = shnum
2381-        self.storage_index = storageindex
2382-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2383-        self._max_size = max_size
2384-        if create:
2385-            # touch the file, so later callers will see that we're working on
2386-            # it. Also construct the metadata.
2387-            assert not os.path.exists(self.fname)
2388-            fileutil.make_dirs(os.path.dirname(self.fname))
2389-            f = open(self.fname, 'wb')
2390-            # The second field -- the four-byte share data length -- is no
2391-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2392-            # there in case someone downgrades a storage server from >=
2393-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2394-            # server to another, etc. We do saturation -- a share data length
2395-            # larger than 2**32-1 (what can fit into the field) is marked as
2396-            # the largest length that can fit into the field. That way, even
2397-            # if this does happen, the old < v1.3.0 server will still allow
2398-            # clients to read the first part of the share.
2399-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2400-            f.close()
2401-            self._lease_offset = max_size + 0x0c
2402-            self._num_leases = 0
2403-        else:
2404-            f = open(self.fname, 'rb')
2405-            filesize = os.path.getsize(self.fname)
2406-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2407-            f.close()
2408-            if version != 1:
2409-                msg = "sharefile %s had version %d but we wanted 1" % \
2410-                      (self.fname, version)
2411-                raise UnknownImmutableContainerVersionError(msg)
2412-            self._num_leases = num_leases
2413-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2414-        self._data_offset = 0xc
2415+        pass
2416 
2417     def get_shnum(self):
2418         return self.shnum
2419hunk ./src/allmydata/storage/backends/null/core.py 54
2420         return f.read(actuallength)
2421 
2422     def write_share_data(self, offset, data):
2423-        length = len(data)
2424-        precondition(offset >= 0, offset)
2425-        if self._max_size is not None and offset+length > self._max_size:
2426-            raise DataTooLargeError(self._max_size, offset, length)
2427-        f = open(self.fname, 'rb+')
2428-        real_offset = self._data_offset+offset
2429-        f.seek(real_offset)
2430-        assert f.tell() == real_offset
2431-        f.write(data)
2432-        f.close()
2433+        pass
2434 
2435     def _write_lease_record(self, f, lease_number, lease_info):
2436         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2437hunk ./src/allmydata/storage/backends/null/core.py 84
2438             if data:
2439                 yield LeaseInfo().from_immutable_data(data)
2440 
2441-    def add_lease(self, lease_info):
2442-        f = open(self.fname, 'rb+')
2443-        num_leases = self._read_num_leases(f)
2444-        self._write_lease_record(f, num_leases, lease_info)
2445-        self._write_num_leases(f, num_leases+1)
2446-        f.close()
2447+    def add_lease(self, lease):
2448+        pass
2449 
2450     def renew_lease(self, renew_secret, new_expire_time):
2451         for i,lease in enumerate(self.get_leases()):
2452hunk ./src/allmydata/test/test_backends.py 32
2453                      'sharetypes' : None}
2454 
2455 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2456-    @mock.patch('time.time')
2457-    @mock.patch('os.mkdir')
2458-    @mock.patch('__builtin__.open')
2459-    @mock.patch('os.listdir')
2460-    @mock.patch('os.path.isdir')
2461-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2462-        """ This tests whether a server instance can be constructed
2463-        with a null backend. The server instance fails the test if it
2464-        tries to read or write to the file system. """
2465-
2466-        # Now begin the test.
2467-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2468-
2469-        self.failIf(mockisdir.called)
2470-        self.failIf(mocklistdir.called)
2471-        self.failIf(mockopen.called)
2472-        self.failIf(mockmkdir.called)
2473-
2474-        # You passed!
2475-
2476     @mock.patch('time.time')
2477     @mock.patch('os.mkdir')
2478     @mock.patch('__builtin__.open')
2479hunk ./src/allmydata/test/test_backends.py 53
2480                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2481         mockopen.side_effect = call_open
2482 
2483-        # Now begin the test.
2484-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2485-
2486-        self.failIf(mockisdir.called)
2487-        self.failIf(mocklistdir.called)
2488-        self.failIf(mockopen.called)
2489-        self.failIf(mockmkdir.called)
2490-        self.failIf(mocktime.called)
2491-
2492-        # You passed!
2493-
2494-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2495-    def setUp(self):
2496-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2497-
2498-    @mock.patch('os.mkdir')
2499-    @mock.patch('__builtin__.open')
2500-    @mock.patch('os.listdir')
2501-    @mock.patch('os.path.isdir')
2502-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2503-        """ Write a new share. """
2504-
2505-        # Now begin the test.
2506-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2507-        bs[0].remote_write(0, 'a')
2508-        self.failIf(mockisdir.called)
2509-        self.failIf(mocklistdir.called)
2510-        self.failIf(mockopen.called)
2511-        self.failIf(mockmkdir.called)
2512+        def call_isdir(fname):
2513+            if fname == os.path.join(tempdir,'shares'):
2514+                return True
2515+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2516+                return True
2517+            else:
2518+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2519+        mockisdir.side_effect = call_isdir
2520 
2521hunk ./src/allmydata/test/test_backends.py 62
2522-    @mock.patch('os.path.exists')
2523-    @mock.patch('os.path.getsize')
2524-    @mock.patch('__builtin__.open')
2525-    @mock.patch('os.listdir')
2526-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2527-        """ This tests whether the code correctly finds and reads
2528-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2529-        servers. There is a similar test in test_download, but that one
2530-        is from the perspective of the client and exercises a deeper
2531-        stack of code. This one is for exercising just the
2532-        StorageServer object. """
2533+        def call_mkdir(fname, mode):
2534+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2535+            self.failUnlessEqual(0777, mode)
2536+            if fname == tempdir:
2537+                return None
2538+            elif fname == os.path.join(tempdir,'shares'):
2539+                return None
2540+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2541+                return None
2542+            else:
2543+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2544+        mockmkdir.side_effect = call_mkdir
2545 
2546         # Now begin the test.
2547hunk ./src/allmydata/test/test_backends.py 76
2548-        bs = self.s.remote_get_buckets('teststorage_index')
2549+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2550 
2551hunk ./src/allmydata/test/test_backends.py 78
2552-        self.failUnlessEqual(len(bs), 0)
2553-        self.failIf(mocklistdir.called)
2554-        self.failIf(mockopen.called)
2555-        self.failIf(mockgetsize.called)
2556-        self.failIf(mockexists.called)
2557+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2558 
2559 
2560 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2561hunk ./src/allmydata/test/test_backends.py 193
2562         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2563 
2564 
2565+
2566+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2567+    @mock.patch('time.time')
2568+    @mock.patch('os.mkdir')
2569+    @mock.patch('__builtin__.open')
2570+    @mock.patch('os.listdir')
2571+    @mock.patch('os.path.isdir')
2572+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2573+        """ This tests whether a file system backend instance can be
2574+        constructed. To pass the test, it has to use the
2575+        filesystem in only the prescribed ways. """
2576+
2577+        def call_open(fname, mode):
2578+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2579+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2580+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2581+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2582+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2583+                return StringIO()
2584+            else:
2585+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2586+        mockopen.side_effect = call_open
2587+
2588+        def call_isdir(fname):
2589+            if fname == os.path.join(tempdir,'shares'):
2590+                return True
2591+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2592+                return True
2593+            else:
2594+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2595+        mockisdir.side_effect = call_isdir
2596+
2597+        def call_mkdir(fname, mode):
2598+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2599+            self.failUnlessEqual(0777, mode)
2600+            if fname == tempdir:
2601+                return None
2602+            elif fname == os.path.join(tempdir,'shares'):
2603+                return None
2604+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2605+                return None
2606+            else:
2607+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2608+        mockmkdir.side_effect = call_mkdir
2609+
2610+        # Now begin the test.
2611+        DASCore('teststoredir', expiration_policy)
2612+
2613+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2614}
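(Editor's note, not part of the patch: the lease methods above (`_write_lease_record`, `renew_lease`, `cancel_lease`) all rely on the same offset arithmetic: lease record i lives at `_lease_offset + i * LEASE_SIZE`, where `_lease_offset` is `max_size + 0xc` on create, or `filesize - num_leases * LEASE_SIZE` when reopening. A hedged sketch of that arithmetic, assuming the 72-byte immutable lease record size (4-byte owner number, two 32-byte secrets, 4-byte expiration):)

```python
# Hypothetical constants/helpers restating the offset math used by
# _write_lease_record above; LEASE_SIZE of 72 bytes is an assumption
# based on the immutable lease layout (4 + 32 + 32 + 4 bytes).
LEASE_SIZE = 72
DATA_OFFSET = 0xc  # share data starts right after the 12-byte header

def lease_offset_on_create(max_size):
    # On create, leases begin after the header plus the full data area.
    return max_size + DATA_OFFSET

def lease_record_offset(lease_offset, lease_number):
    # Lease records are fixed-size and packed back-to-back.
    return lease_offset + lease_number * LEASE_SIZE
```

This is why `cancel_lease` can rewrite the surviving leases in order and then truncate to `_lease_offset + num_leases * LEASE_SIZE` without disturbing the share data below them.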
2615[checkpoint 6
2616wilcoxjg@gmail.com**20110706190824
2617 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2618] {
2619hunk ./src/allmydata/interfaces.py 100
2620                          renew_secret=LeaseRenewSecret,
2621                          cancel_secret=LeaseCancelSecret,
2622                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2623-                         allocated_size=Offset, canary=Referenceable):
2624+                         allocated_size=Offset,
2625+                         canary=Referenceable):
2626         """
2627hunk ./src/allmydata/interfaces.py 103
2628-        @param storage_index: the index of the bucket to be created or
2629+        @param storage_index: the index of the shares to be created or
2630                               increfed.
2631hunk ./src/allmydata/interfaces.py 105
2632-        @param sharenums: these are the share numbers (probably between 0 and
2633-                          99) that the sender is proposing to store on this
2634-                          server.
2635-        @param renew_secret: This is the secret used to protect bucket refresh
2636+        @param renew_secret: This is the secret used to protect shares refresh
2637                              This secret is generated by the client and
2638                              stored for later comparison by the server. Each
2639                              server is given a different secret.
2640hunk ./src/allmydata/interfaces.py 109
2641-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2642-        @param canary: If the canary is lost before close(), the bucket is
2643+        @param cancel_secret: Like renew_secret, but protects shares decref.
2644+        @param sharenums: these are the share numbers (probably between 0 and
2645+                          99) that the sender is proposing to store on this
2646+                          server.
2647+        @param allocated_size: XXX The size of the shares the client wishes to store.
2648+        @param canary: If the canary is lost before close(), the shares are
2649                        deleted.
2650hunk ./src/allmydata/interfaces.py 116
2651+
2652         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2653                  already have and allocated is what we hereby agree to accept.
2654                  New leases are added for shares in both lists.
2655hunk ./src/allmydata/interfaces.py 128
2656                   renew_secret=LeaseRenewSecret,
2657                   cancel_secret=LeaseCancelSecret):
2658         """
2659-        Add a new lease on the given bucket. If the renew_secret matches an
2660+        Add a new lease on the given shares. If the renew_secret matches an
2661         existing lease, that lease will be renewed instead. If there is no
2662         bucket for the given storage_index, return silently. (note that in
2663         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2664hunk ./src/allmydata/storage/server.py 17
2665 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2666      create_mutable_sharefile
2667 
2668-from zope.interface import implements
2669-
2670 # storage/
2671 # storage/shares/incoming
2672 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2673hunk ./src/allmydata/test/test_backends.py 6
2674 from StringIO import StringIO
2675 
2676 from allmydata.test.common_util import ReallyEqualMixin
2677+from allmydata.util.assertutil import _assert
2678 
2679 import mock, os
2680 
2681hunk ./src/allmydata/test/test_backends.py 92
2682                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2683             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2684                 return StringIO()
2685+            else:
2686+                _assert(False, "The tester code doesn't recognize this case.") 
2687+
2688         mockopen.side_effect = call_open
2689         testbackend = DASCore(tempdir, expiration_policy)
2690         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2691hunk ./src/allmydata/test/test_backends.py 109
2692 
2693         def call_listdir(dirname):
2694             self.failUnlessReallyEqual(dirname, sharedirname)
2695-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2696+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2697 
2698         mocklistdir.side_effect = call_listdir
2699 
2700hunk ./src/allmydata/test/test_backends.py 113
2701+        def call_isdir(dirname):
2702+            self.failUnlessReallyEqual(dirname, sharedirname)
2703+            return True
2704+
2705+        mockisdir.side_effect = call_isdir
2706+
2707+        def call_mkdir(dirname, permissions):
2708+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2709+                self.fail()
2710+            else:
2711+                return True
2712+
2713+        mockmkdir.side_effect = call_mkdir
2714+
2715         class MockFile:
2716             def __init__(self):
2717                 self.buffer = ''
2718hunk ./src/allmydata/test/test_backends.py 156
2719             return sharefile
2720 
2721         mockopen.side_effect = call_open
2722+
2723         # Now begin the test.
2724         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2725         bs[0].remote_write(0, 'a')
2726hunk ./src/allmydata/test/test_backends.py 161
2727         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2728+
2729+        # Now test the allocated_size method.
2730+        spaceint = self.s.allocated_size()
2731 
2732     @mock.patch('os.path.exists')
2733     @mock.patch('os.path.getsize')
2734}
2735[checkpoint 7
2736wilcoxjg@gmail.com**20110706200820
2737 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2738] hunk ./src/allmydata/test/test_backends.py 164
2739         
2740         # Now test the allocated_size method.
2741         spaceint = self.s.allocated_size()
2742+        self.failUnlessReallyEqual(spaceint, 1)
2743 
2744     @mock.patch('os.path.exists')
2745     @mock.patch('os.path.getsize')
2746[checkpoint8
2747wilcoxjg@gmail.com**20110706223126
2748 Ignore-this: 97336180883cb798b16f15411179f827
2749   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2750] hunk ./src/allmydata/test/test_backends.py 32
2751                      'cutoff_date' : None,
2752                      'sharetypes' : None}
2753 
2754+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2755+    def setUp(self):
2756+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2757+
2758+    @mock.patch('os.mkdir')
2759+    @mock.patch('__builtin__.open')
2760+    @mock.patch('os.listdir')
2761+    @mock.patch('os.path.isdir')
2762+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2763+        """ Write a new share. """
2764+
2765+        # Now begin the test.
2766+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2767+        bs[0].remote_write(0, 'a')
2768+        self.failIf(mockisdir.called)
2769+        self.failIf(mocklistdir.called)
2770+        self.failIf(mockopen.called)
2771+        self.failIf(mockmkdir.called)
2772+
2773 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2774     @mock.patch('time.time')
2775     @mock.patch('os.mkdir')
2776[checkpoint 9
2777wilcoxjg@gmail.com**20110707042942
2778 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2779] {
2780hunk ./src/allmydata/storage/backends/das/core.py 88
2781                     filename = os.path.join(finalstoragedir, f)
2782                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2783         except OSError:
2784-            # Commonly caused by there being no buckets at all.
2785+            # Commonly caused by there being no shares at all.
2786             pass
2787         
2788     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2789hunk ./src/allmydata/storage/backends/das/core.py 141
2790         self.storage_index = storageindex
2791         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2792         self._max_size = max_size
2793+        self.incomingdir = os.path.join(sharedir, 'incoming')
2794+        si_dir = storage_index_to_dir(storageindex)
2795+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2796+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2797         if create:
2798             # touch the file, so later callers will see that we're working on
2799             # it. Also construct the metadata.
2800hunk ./src/allmydata/storage/backends/das/core.py 177
2801             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2802         self._data_offset = 0xc
2803 
2804+    def close(self):
2805+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2806+        fileutil.rename(self.incominghome, self.finalhome)
2807+        try:
2808+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2809+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2810+            # these directories lying around forever, but the delete might
2811+            # fail if we're working on another share for the same storage
2812+            # index (like ab/abcde/5). The alternative approach would be to
2813+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2814+            # ShareWriter), each of which is responsible for a single
2815+            # directory on disk, and have them use reference counting of
2816+            # their children to know when they should do the rmdir. This
2817+            # approach is simpler, but relies on os.rmdir refusing to delete
2818+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2819+            os.rmdir(os.path.dirname(self.incominghome))
2820+            # we also delete the grandparent (prefix) directory, .../ab ,
2821+            # again to avoid leaving directories lying around. This might
2822+            # fail if there is another bucket open that shares a prefix (like
2823+            # ab/abfff).
2824+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2825+            # we leave the great-grandparent (incoming/) directory in place.
2826+        except EnvironmentError:
2827+            # ignore the "can't rmdir because the directory is not empty"
2828+            # exceptions, those are normal consequences of the
2829+            # above-mentioned conditions.
2830+            pass
2832+
2833+    def stat(self):
2834+        return os.stat(self.finalhome)[stat.ST_SIZE]
2835+
2836     def get_shnum(self):
2837         return self.shnum
2838 
2839hunk ./src/allmydata/storage/immutable.py 7
2840 
2841 from zope.interface import implements
2842 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2843-from allmydata.util import base32, fileutil, log
2844+from allmydata.util import base32, log
2845 from allmydata.util.assertutil import precondition
2846 from allmydata.util.hashutil import constant_time_compare
2847 from allmydata.storage.lease import LeaseInfo
2848hunk ./src/allmydata/storage/immutable.py 44
2849     def remote_close(self):
2850         precondition(not self.closed)
2851         start = time.time()
2852-
2853-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2854-        fileutil.rename(self.incominghome, self.finalhome)
2855-        try:
2856-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2857-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2858-            # these directories lying around forever, but the delete might
2859-            # fail if we're working on another share for the same storage
2860-            # index (like ab/abcde/5). The alternative approach would be to
2861-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2862-            # ShareWriter), each of which is responsible for a single
2863-            # directory on disk, and have them use reference counting of
2864-            # their children to know when they should do the rmdir. This
2865-            # approach is simpler, but relies on os.rmdir refusing to delete
2866-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2867-            os.rmdir(os.path.dirname(self.incominghome))
2868-            # we also delete the grandparent (prefix) directory, .../ab ,
2869-            # again to avoid leaving directories lying around. This might
2870-            # fail if there is another bucket open that shares a prefix (like
2871-            # ab/abfff).
2872-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2873-            # we leave the great-grandparent (incoming/) directory in place.
2874-        except EnvironmentError:
2875-            # ignore the "can't rmdir because the directory is not empty"
2876-            # exceptions, those are normal consequences of the
2877-            # above-mentioned conditions.
2878-            pass
2879+        self._sharefile.close()
2880         self._sharefile = None
2881         self.closed = True
2882         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2883hunk ./src/allmydata/storage/immutable.py 49
2884 
2885-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2886+        filelen = self._sharefile.stat()
2887         self.ss.bucket_writer_closed(self, filelen)
2888         self.ss.add_latency("close", time.time() - start)
2889         self.ss.count("close")
2890hunk ./src/allmydata/storage/server.py 45
2891         self._active_writers = weakref.WeakKeyDictionary()
2892         self.backend = backend
2893         self.backend.setServiceParent(self)
2894+        self.backend.set_storage_server(self)
2895         log.msg("StorageServer created", facility="tahoe.storage")
2896 
2897         self.latencies = {"allocate": [], # immutable
2898hunk ./src/allmydata/storage/server.py 220
2899 
2900         for shnum in (sharenums - alreadygot):
2901             if (not limited) or (remaining_space >= max_space_per_bucket):
2902-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2903-                self.backend.set_storage_server(self)
2904                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2905                                                      max_space_per_bucket, lease_info, canary)
2906                 bucketwriters[shnum] = bw
2907hunk ./src/allmydata/test/test_backends.py 117
2908         mockopen.side_effect = call_open
2909         testbackend = DASCore(tempdir, expiration_policy)
2910         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2911-
2912+   
2913+    @mock.patch('allmydata.util.fileutil.get_available_space')
2914     @mock.patch('time.time')
2915     @mock.patch('os.mkdir')
2916     @mock.patch('__builtin__.open')
2917hunk ./src/allmydata/test/test_backends.py 124
2918     @mock.patch('os.listdir')
2919     @mock.patch('os.path.isdir')
2920-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2921+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2922+                             mockget_available_space):
2923         """ Write a new share. """
2924 
2925         def call_listdir(dirname):
2926hunk ./src/allmydata/test/test_backends.py 148
2927 
2928         mockmkdir.side_effect = call_mkdir
2929 
2930+        def call_get_available_space(storedir, reserved_space):
2931+            self.failUnlessReallyEqual(storedir, tempdir)
2932+            return 1
2933+
2934+        mockget_available_space.side_effect = call_get_available_space
2935+
2936         class MockFile:
2937             def __init__(self):
2938                 self.buffer = ''
2939hunk ./src/allmydata/test/test_backends.py 188
2940         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2941         bs[0].remote_write(0, 'a')
2942         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2943-       
2944+
2945+        # What happens when there's not enough space for the client's request?
2946+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2947+
2948         # Now test the allocated_size method.
2949         spaceint = self.s.allocated_size()
2950         self.failUnlessReallyEqual(spaceint, 1)
2951}
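[Editor's note: the `close()` method added in checkpoint 9 above relies on `os.rmdir` refusing to remove a non-empty directory, as its long comment explains. A minimal standalone sketch of that property, with a hypothetical `ab/abcde` bucket layout standing in for the real `storage/shares/incoming` tree:

```python
import os
import tempfile

# Sketch of the property the close() cleanup relies on: os.rmdir() refuses
# to remove a non-empty directory, so blindly attempting the rmdir and
# swallowing the resulting EnvironmentError is safe even while sibling
# shares for the same storage index are still being written.
base = tempfile.mkdtemp()
bucket = os.path.join(base, 'ab', 'abcde')    # hypothetical $PREFIX/$SI layout
os.makedirs(bucket)
open(os.path.join(bucket, '4'), 'w').close()  # a sibling share still in progress

try:
    os.rmdir(bucket)                          # fails: directory is not empty
    removed = True
except OSError:
    removed = False
```

This is why the comment warns "Do *not* use fileutil.rm_dir() here!": a recursive delete would not stop at a non-empty directory.]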
2952[checkpoint10
2953wilcoxjg@gmail.com**20110707172049
2954 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2955] {
2956hunk ./src/allmydata/test/test_backends.py 20
2957 # The following share file contents were generated with
2958 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2959 # with share data == 'a'.
2960-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2961+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2962+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2963+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2964 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2965 
2966hunk ./src/allmydata/test/test_backends.py 25
2967+testnodeid = 'testnodeidxxxxxxxxxx'
2968 tempdir = 'teststoredir'
2969 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2970 sharefname = os.path.join(sharedirname, '0')
2971hunk ./src/allmydata/test/test_backends.py 37
2972 
2973 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2974     def setUp(self):
2975-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2976+        self.s = StorageServer(testnodeid, backend=NullCore())
2977 
2978     @mock.patch('os.mkdir')
2979     @mock.patch('__builtin__.open')
2980hunk ./src/allmydata/test/test_backends.py 99
2981         mockmkdir.side_effect = call_mkdir
2982 
2983         # Now begin the test.
2984-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2985+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2986 
2987         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2988 
2989hunk ./src/allmydata/test/test_backends.py 119
2990 
2991         mockopen.side_effect = call_open
2992         testbackend = DASCore(tempdir, expiration_policy)
2993-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2994-   
2995+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2996+       
2997+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2998     @mock.patch('allmydata.util.fileutil.get_available_space')
2999     @mock.patch('time.time')
3000     @mock.patch('os.mkdir')
3001hunk ./src/allmydata/test/test_backends.py 129
3002     @mock.patch('os.listdir')
3003     @mock.patch('os.path.isdir')
3004     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3005-                             mockget_available_space):
3006+                             mockget_available_space, mockget_shares):
3007         """ Write a new share. """
3008 
3009         def call_listdir(dirname):
3010hunk ./src/allmydata/test/test_backends.py 139
3011         mocklistdir.side_effect = call_listdir
3012 
3013         def call_isdir(dirname):
3014+            #XXX Should there be any other tests here?
3015             self.failUnlessReallyEqual(dirname, sharedirname)
3016             return True
3017 
3018hunk ./src/allmydata/test/test_backends.py 159
3019 
3020         mockget_available_space.side_effect = call_get_available_space
3021 
3022+        mocktime.return_value = 0
3023+        class MockShare:
3024+            def __init__(self):
3025+                self.shnum = 1
3026+               
3027+            def add_or_renew_lease(elf, lease_info):
3028+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3029+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3030+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3031+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3032+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3033+               
3034+
3035+        share = MockShare()
3036+        def call_get_shares(storageindex):
3037+            return [share]
3038+
3039+        mockget_shares.side_effect = call_get_shares
3040+
3041         class MockFile:
3042             def __init__(self):
3043                 self.buffer = ''
3044hunk ./src/allmydata/test/test_backends.py 199
3045             def tell(self):
3046                 return self.pos
3047 
3048-        mocktime.return_value = 0
3049 
3050         sharefile = MockFile()
3051         def call_open(fname, mode):
3052}
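[Editor's note: these tests lean entirely on `mock.patch` plus `side_effect` callables to intercept filesystem calls. The patches target Python 2 (`__builtin__.open`, the standalone `mock` package); a sketch of the same `side_effect` technique using the stdlib `unittest.mock` of modern Python:

```python
import os
from unittest import mock  # stdlib successor of the standalone 'mock' package

# The patched os.listdir delegates every call to our callable, which can
# inspect arguments and raise the same OSError a missing share directory
# would produce on a real filesystem.
def call_listdir(dirname):
    raise OSError(2, "No such file or directory: '%s'" % dirname)

with mock.patch('os.listdir') as mocklistdir:
    mocklistdir.side_effect = call_listdir
    try:
        os.listdir('teststoredir/shares/or')
        saw_oserror = False
    except OSError:
        saw_oserror = True
```

The decorator form used in the tests above (`@mock.patch('os.listdir')`) is equivalent; note the decorators apply bottom-up, which is why the innermost decorator supplies the first mock argument.]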
3053[jacp 11
3054wilcoxjg@gmail.com**20110708213919
3055 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3056] {
3057hunk ./src/allmydata/storage/backends/das/core.py 144
3058         self.incomingdir = os.path.join(sharedir, 'incoming')
3059         si_dir = storage_index_to_dir(storageindex)
3060         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3061+        #XXX  self.fname and self.finalhome need to be resolved/merged.
3062         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3063         if create:
3064             # touch the file, so later callers will see that we're working on
3065hunk ./src/allmydata/storage/backends/das/core.py 208
3066         pass
3067         
3068     def stat(self):
3069-        return os.stat(self.finalhome)[stat.ST_SIZE]
3070+        return os.stat(self.finalhome).st_size
3071 
3072     def get_shnum(self):
3073         return self.shnum
3074hunk ./src/allmydata/storage/immutable.py 44
3075     def remote_close(self):
3076         precondition(not self.closed)
3077         start = time.time()
3078+
3079         self._sharefile.close()
3080hunk ./src/allmydata/storage/immutable.py 46
3081+        filelen = self._sharefile.stat()
3082         self._sharefile = None
3083hunk ./src/allmydata/storage/immutable.py 48
3084+
3085         self.closed = True
3086         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3087 
3088hunk ./src/allmydata/storage/immutable.py 52
3089-        filelen = self._sharefile.stat()
3090         self.ss.bucket_writer_closed(self, filelen)
3091         self.ss.add_latency("close", time.time() - start)
3092         self.ss.count("close")
3093hunk ./src/allmydata/storage/server.py 220
3094 
3095         for shnum in (sharenums - alreadygot):
3096             if (not limited) or (remaining_space >= max_space_per_bucket):
3097-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3098-                                                     max_space_per_bucket, lease_info, canary)
3099+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3100                 bucketwriters[shnum] = bw
3101                 self._active_writers[bw] = 1
3102                 if limited:
3103hunk ./src/allmydata/test/test_backends.py 20
3104 # The following share file contents were generated with
3105 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3106 # with share data == 'a'.
3107-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3108-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3109+renew_secret  = 'x'*32
3110+cancel_secret = 'y'*32
3111 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3112 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3113 
3114hunk ./src/allmydata/test/test_backends.py 27
3115 testnodeid = 'testnodeidxxxxxxxxxx'
3116 tempdir = 'teststoredir'
3117-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3118-sharefname = os.path.join(sharedirname, '0')
3119+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3120+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3121+shareincomingname = os.path.join(sharedirincomingname, '0')
3122+sharefname = os.path.join(sharedirfinalname, '0')
3123+
3124 expiration_policy = {'enabled' : False,
3125                      'mode' : 'age',
3126                      'override_lease_duration' : None,
3127hunk ./src/allmydata/test/test_backends.py 123
3128         mockopen.side_effect = call_open
3129         testbackend = DASCore(tempdir, expiration_policy)
3130         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3131-       
3132+
3133+    @mock.patch('allmydata.util.fileutil.rename')
3134+    @mock.patch('allmydata.util.fileutil.make_dirs')
3135+    @mock.patch('os.path.exists')
3136+    @mock.patch('os.stat')
3137     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3138     @mock.patch('allmydata.util.fileutil.get_available_space')
3139     @mock.patch('time.time')
3140hunk ./src/allmydata/test/test_backends.py 136
3141     @mock.patch('os.listdir')
3142     @mock.patch('os.path.isdir')
3143     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3144-                             mockget_available_space, mockget_shares):
3145+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3146+                             mockmake_dirs, mockrename):
3147         """ Write a new share. """
3148 
3149         def call_listdir(dirname):
3150hunk ./src/allmydata/test/test_backends.py 141
3151-            self.failUnlessReallyEqual(dirname, sharedirname)
3152+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3153             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3154 
3155         mocklistdir.side_effect = call_listdir
3156hunk ./src/allmydata/test/test_backends.py 148
3157 
3158         def call_isdir(dirname):
3159             #XXX Should there be any other tests here?
3160-            self.failUnlessReallyEqual(dirname, sharedirname)
3161+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3162             return True
3163 
3164         mockisdir.side_effect = call_isdir
3165hunk ./src/allmydata/test/test_backends.py 154
3166 
3167         def call_mkdir(dirname, permissions):
3168-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3169+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3170                 self.fail()
3171             else:
3172                 return True
3173hunk ./src/allmydata/test/test_backends.py 208
3174                 return self.pos
3175 
3176 
3177-        sharefile = MockFile()
3178+        fobj = MockFile()
3179         def call_open(fname, mode):
3180             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3181hunk ./src/allmydata/test/test_backends.py 211
3182-            return sharefile
3183+            return fobj
3184 
3185         mockopen.side_effect = call_open
3186 
3187hunk ./src/allmydata/test/test_backends.py 215
3188+        def call_make_dirs(dname):
3189+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3190+           
3191+        mockmake_dirs.side_effect = call_make_dirs
3192+
3193+        def call_rename(src, dst):
3194+           self.failUnlessReallyEqual(src, shareincomingname)
3195+           self.failUnlessReallyEqual(dst, sharefname)
3196+           
3197+        mockrename.side_effect = call_rename
3198+
3199+        def call_exists(fname):
3200+            self.failUnlessReallyEqual(fname, sharefname)
3201+
3202+        mockexists.side_effect = call_exists
3203+
3204         # Now begin the test.
3205         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3206         bs[0].remote_write(0, 'a')
3207hunk ./src/allmydata/test/test_backends.py 234
3208-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3209+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3210+        spaceint = self.s.allocated_size()
3211+        self.failUnlessReallyEqual(spaceint, 1)
3212+
3213+        bs[0].remote_close()
3214 
3215         # What happens when there's not enough space for the client's request?
3216hunk ./src/allmydata/test/test_backends.py 241
3217-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3218+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3219 
3220         # Now test the allocated_size method.
3221hunk ./src/allmydata/test/test_backends.py 244
3222-        spaceint = self.s.allocated_size()
3223-        self.failUnlessReallyEqual(spaceint, 1)
3224+        #self.failIf(mockexists.called, mockexists.call_args_list)
3225+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3226+        #self.failIf(mockrename.called, mockrename.call_args_list)
3227+        #self.failIf(mockstat.called, mockstat.call_args_list)
3228 
3229     @mock.patch('os.path.exists')
3230     @mock.patch('os.path.getsize')
3231}
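[Editor's note: the `share_file_data` literal built at the top of test_backends.py decomposes cleanly. Assuming the ShareFile v1 layout from Tahoe-LAFS 1.8.2 as the comment states — a 12-byte header of three big-endian uint32s (version, share-data size, lease count), the share data, then one lease record (owner number, 32-byte renew secret, 32-byte cancel secret, 32-bit expiration time) — the bytes reconstruct as:

```python
import struct

# Rebuild the share_file_data literal field by field (layout per the
# editor's reading of storage.immutable.ShareFile in Tahoe-LAFS v1.8.2).
renew_secret  = b'x' * 32
cancel_secret = b'y' * 32
header = struct.pack(">LLL", 1, 1, 1)          # version=1, data size=1, num_leases=1
lease  = struct.pack(">L", 0)                  # owner number 0
lease += renew_secret + cancel_secret
lease += struct.pack(">L", 31 * 24 * 60 * 60)  # 31-day expiration == b'\x00(\xde\x80'
share_file_data = header + b'a' + lease        # share data is the single byte 'a'

assert share_file_data == (b'\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01'
                           b'a\x00\x00\x00\x00' + renew_secret + cancel_secret +
                           b'\x00(\xde\x80')
```

In particular the trailing `'\x00(\xde\x80'` in the literal is just 2678400 (31 days in seconds) packed big-endian, matching the `mocktime() + 31*24*60*60` expiration checked in MockShare.]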
3232[checkpoint12 testing correct behavior with regard to incoming and final
3233wilcoxjg@gmail.com**20110710191915
3234 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3235] {
3236hunk ./src/allmydata/storage/backends/das/core.py 74
3237         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3238         self.lease_checker.setServiceParent(self)
3239 
3240+    def get_incoming(self, storageindex):
3241+        return set((1,))
3242+
3243     def get_available_space(self):
3244         if self.readonly:
3245             return 0
3246hunk ./src/allmydata/storage/server.py 77
3247         """Return a dict, indexed by category, that contains a dict of
3248         latency numbers for each category. If there are sufficient samples
3249         for unambiguous interpretation, each dict will contain the
3250-        following keys: mean, 01_0_percentile, 10_0_percentile,
3251+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3252         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3253         99_0_percentile, 99_9_percentile.  If there are insufficient
3254         samples for a given percentile to be interpreted unambiguously
3255hunk ./src/allmydata/storage/server.py 120
3256 
3257     def get_stats(self):
3258         # remember: RIStatsProvider requires that our return dict
3259-        # contains numeric values.
3260+        # contains numeric, or None values.
3261         stats = { 'storage_server.allocated': self.allocated_size(), }
3262         stats['storage_server.reserved_space'] = self.reserved_space
3263         for category,ld in self.get_latencies().items():
3264hunk ./src/allmydata/storage/server.py 185
3265         start = time.time()
3266         self.count("allocate")
3267         alreadygot = set()
3268+        incoming = set()
3269         bucketwriters = {} # k: shnum, v: BucketWriter
3270 
3271         si_s = si_b2a(storage_index)
3272hunk ./src/allmydata/storage/server.py 219
3273             alreadygot.add(share.shnum)
3274             share.add_or_renew_lease(lease_info)
3275 
3276-        for shnum in (sharenums - alreadygot):
3277+        # Fill 'incoming' with all shares that are already incoming; use a set operation since there's no need to operate on individual pieces.
3278+        incoming = self.backend.get_incoming(storageindex)
3279+
3280+        for shnum in ((sharenums - alreadygot) - incoming):
3281             if (not limited) or (remaining_space >= max_space_per_bucket):
3282                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3283                 bucketwriters[shnum] = bw
3284hunk ./src/allmydata/storage/server.py 229
3285                 self._active_writers[bw] = 1
3286                 if limited:
3287                     remaining_space -= max_space_per_bucket
3288-
3289-        #XXX We SHOULD DOCUMENT LATER.
3290+            else:
3291+                # Bummer: not enough space to accept this share.
3292+                pass
3293 
3294         self.add_latency("allocate", time.time() - start)
3295         return alreadygot, bucketwriters
3296hunk ./src/allmydata/storage/server.py 323
3297         self.add_latency("get", time.time() - start)
3298         return bucketreaders
3299 
3300-    def get_leases(self, storage_index):
3301+    def remote_get_incoming(self, storageindex):
3302+        incoming_share_set = self.backend.get_incoming(storageindex)
3303+        return incoming_share_set
3304+
3305+    def get_leases(self, storageindex):
3306         """Provide an iterator that yields all of the leases attached to this
3307         bucket. Each lease is returned as a LeaseInfo instance.
3308 
3309hunk ./src/allmydata/storage/server.py 337
3310         # since all shares get the same lease data, we just grab the leases
3311         # from the first share
3312         try:
3313-            shnum, filename = self._get_shares(storage_index).next()
3314+            shnum, filename = self._get_shares(storageindex).next()
3315             sf = ShareFile(filename)
3316             return sf.get_leases()
3317         except StopIteration:
3318hunk ./src/allmydata/test/test_backends.py 182
3319 
3320         share = MockShare()
3321         def call_get_shares(storageindex):
3322-            return [share]
3323+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3324+            return []#share]
3325 
3326         mockget_shares.side_effect = call_get_shares
3327 
3328hunk ./src/allmydata/test/test_backends.py 222
3329         mockmake_dirs.side_effect = call_make_dirs
3330 
3331         def call_rename(src, dst):
3332-           self.failUnlessReallyEqual(src, shareincomingname)
3333-           self.failUnlessReallyEqual(dst, sharefname)
3334+            self.failUnlessReallyEqual(src, shareincomingname)
3335+            self.failUnlessReallyEqual(dst, sharefname)
3336             
3337         mockrename.side_effect = call_rename
3338 
3339hunk ./src/allmydata/test/test_backends.py 233
3340         mockexists.side_effect = call_exists
3341 
3342         # Now begin the test.
3343+
3344+        # XXX (0) ???  Fail unless something is not properly set-up?
3345         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3346hunk ./src/allmydata/test/test_backends.py 236
3347+
3348+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3349+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3350+
3351+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3352+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3353+        # with the same si, until BucketWriter.remote_close() has been called.
3354+        # self.failIf(bsa)
3355+
3356+        # XXX (3) Inspect final and fail unless there's nothing there.
3357         bs[0].remote_write(0, 'a')
3358hunk ./src/allmydata/test/test_backends.py 247
3359+        # XXX (4a) Inspect final and fail unless share 0 is there.
3360+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3361         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3362         spaceint = self.s.allocated_size()
3363         self.failUnlessReallyEqual(spaceint, 1)
3364hunk ./src/allmydata/test/test_backends.py 253
3365 
3366+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3367         bs[0].remote_close()
3368 
3369         # What happens when there's not enough space for the client's request?
3370hunk ./src/allmydata/test/test_backends.py 260
3371         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3372 
3373         # Now test the allocated_size method.
3374-        #self.failIf(mockexists.called, mockexists.call_args_list)
3375+        # self.failIf(mockexists.called, mockexists.call_args_list)
3376         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3377         #self.failIf(mockrename.called, mockrename.call_args_list)
3378         #self.failIf(mockstat.called, mockstat.call_args_list)
3379}
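The test above walks a share through allocate → write → close, checking the `incoming` set before `remote_close()` and the `final` set after. A minimal in-memory sketch of that lifecycle (names are illustrative, not the Tahoe-LAFS API) looks like:

```python
# Hypothetical model of the incoming -> final share lifecycle probed by the
# test: allocation records shares as "incoming", close() promotes them to
# "final", and already-final shares are reported back as alreadygot.
class ShareLifecycle:
    def __init__(self):
        self.incoming = {}  # storage_index -> set of shnums being written
        self.final = {}     # storage_index -> set of completed shnums

    def allocate(self, storage_index, shnums):
        alreadygot = self.final.get(storage_index, set()) & shnums
        new = shnums - alreadygot
        self.incoming.setdefault(storage_index, set()).update(new)
        return alreadygot, new

    def close(self, storage_index, shnum):
        # remote_close() moves a share from incoming to final
        self.incoming[storage_index].discard(shnum)
        self.final.setdefault(storage_index, set()).add(shnum)

life = ShareLifecycle()
alreadygot, new = life.allocate('si', {0})
assert life.incoming['si'] == {0}
life.close('si', 0)
assert life.final['si'] == {0} and life.incoming['si'] == set()
```

A second `allocate` for the same storage index then reports share 0 in `alreadygot`, which is the behavior the XXX (2) comment wants pinned down.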
3380[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3381wilcoxjg@gmail.com**20110710195139
3382 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3383] {
3384hunk ./src/allmydata/storage/server.py 220
3385             share.add_or_renew_lease(lease_info)
3386 
3387         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3388-        incoming = self.backend.get_incoming(storageindex)
3389+        incoming = self.backend.get_incoming(storage_index)
3390 
3391         for shnum in ((sharenums - alreadygot) - incoming):
3392             if (not limited) or (remaining_space >= max_space_per_bucket):
3393hunk ./src/allmydata/storage/server.py 323
3394         self.add_latency("get", time.time() - start)
3395         return bucketreaders
3396 
3397-    def remote_get_incoming(self, storageindex):
3398-        incoming_share_set = self.backend.get_incoming(storageindex)
3399+    def remote_get_incoming(self, storage_index):
3400+        incoming_share_set = self.backend.get_incoming(storage_index)
3401         return incoming_share_set
3402 
3403hunk ./src/allmydata/storage/server.py 327
3404-    def get_leases(self, storageindex):
3405+    def get_leases(self, storage_index):
3406         """Provide an iterator that yields all of the leases attached to this
3407         bucket. Each lease is returned as a LeaseInfo instance.
3408 
3409hunk ./src/allmydata/storage/server.py 337
3410         # since all shares get the same lease data, we just grab the leases
3411         # from the first share
3412         try:
3413-            shnum, filename = self._get_shares(storageindex).next()
3414+            shnum, filename = self._get_shares(storage_index).next()
3415             sf = ShareFile(filename)
3416             return sf.get_leases()
3417         except StopIteration:
3418replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3419}
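The `replace ./src/allmydata/storage/server.py [A-Za-z_0-9] ...` line at the end of this patch is darcs' token-replace primitive: it renames a whole token everywhere in the file, where a token is a maximal run of the given character class. A hedged sketch of that semantics (illustrative, not the darcs implementation):

```python
import re

# Replace whole occurrences of a token, where token characters are drawn from
# token_chars. Identifiers that merely contain the token (e.g. a longer name
# with a suffix) are left untouched, because the match may not be adjacent to
# other token characters.
def token_replace(text, token_chars, old, new):
    pattern = re.compile(
        r'(?<![%s])%s(?![%s])' % (token_chars, re.escape(old), token_chars))
    return pattern.sub(new, text)

sample = "get_incoming(storageindex) # storageindexcache untouched"
renamed = token_replace(sample, "A-Za-z_0-9", "storageindex", "storage_index")
# -> "get_incoming(storage_index) # storageindexcache untouched"
```

This is why the hunks above only needed to touch lines where the rename crosses hunk boundaries; the `replace` directive handles every other whole-token occurrence.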
3420[adding comments to clarify what I'm about to do.
3421wilcoxjg@gmail.com**20110710220623
3422 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3423] {
3424hunk ./src/allmydata/storage/backends/das/core.py 8
3425 
3426 import os, re, weakref, struct, time
3427 
3428-from foolscap.api import Referenceable
3429+#from foolscap.api import Referenceable
3430 from twisted.application import service
3431 
3432 from zope.interface import implements
3433hunk ./src/allmydata/storage/backends/das/core.py 12
3434-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3435+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3436 from allmydata.util import fileutil, idlib, log, time_format
3437 import allmydata # for __full_version__
3438 
3439hunk ./src/allmydata/storage/server.py 219
3440             alreadygot.add(share.shnum)
3441             share.add_or_renew_lease(lease_info)
3442 
3443-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3444+        # fill incoming with all shares that are incoming use a set operation
3445+        # since there's no need to operate on individual pieces
3446         incoming = self.backend.get_incoming(storageindex)
3447 
3448         for shnum in ((sharenums - alreadygot) - incoming):
3449hunk ./src/allmydata/test/test_backends.py 245
3450         # with the same si, until BucketWriter.remote_close() has been called.
3451         # self.failIf(bsa)
3452 
3453-        # XXX (3) Inspect final and fail unless there's nothing there.
3454         bs[0].remote_write(0, 'a')
3455hunk ./src/allmydata/test/test_backends.py 246
3456-        # XXX (4a) Inspect final and fail unless share 0 is there.
3457-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3458         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3459         spaceint = self.s.allocated_size()
3460         self.failUnlessReallyEqual(spaceint, 1)
3461hunk ./src/allmydata/test/test_backends.py 250
3462 
3463-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3464+        # XXX (3) Inspect final and fail unless there's nothing there.
3465         bs[0].remote_close()
3466hunk ./src/allmydata/test/test_backends.py 252
3467+        # XXX (4a) Inspect final and fail unless share 0 is there.
3468+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3469 
3470         # What happens when there's not enough space for the client's request?
3471         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3472}
3473[branching back, no longer attempting to mock inside TestServerFSBackend
3474wilcoxjg@gmail.com**20110711190849
3475 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3476] {
3477hunk ./src/allmydata/storage/backends/das/core.py 75
3478         self.lease_checker.setServiceParent(self)
3479 
3480     def get_incoming(self, storageindex):
3481-        return set((1,))
3482-
3483-    def get_available_space(self):
3484-        if self.readonly:
3485-            return 0
3486-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3487+        """Return the set of incoming shnums."""
3488+        return set(os.listdir(self.incomingdir))
3489 
3490     def get_shares(self, storage_index):
3491         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3492hunk ./src/allmydata/storage/backends/das/core.py 90
3493             # Commonly caused by there being no shares at all.
3494             pass
3495         
3496+    def get_available_space(self):
3497+        if self.readonly:
3498+            return 0
3499+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3500+
3501     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3502         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3503         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3504hunk ./src/allmydata/test/test_backends.py 27
3505 
3506 testnodeid = 'testnodeidxxxxxxxxxx'
3507 tempdir = 'teststoredir'
3508-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3509-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3510+basedir = os.path.join(tempdir, 'shares')
3511+baseincdir = os.path.join(basedir, 'incoming')
3512+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3513+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3514 shareincomingname = os.path.join(sharedirincomingname, '0')
3515 sharefname = os.path.join(sharedirfinalname, '0')
3516 
3517hunk ./src/allmydata/test/test_backends.py 142
3518                              mockmake_dirs, mockrename):
3519         """ Write a new share. """
3520 
3521-        def call_listdir(dirname):
3522-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3523-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3524-
3525-        mocklistdir.side_effect = call_listdir
3526-
3527-        def call_isdir(dirname):
3528-            #XXX Should there be any other tests here?
3529-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3530-            return True
3531-
3532-        mockisdir.side_effect = call_isdir
3533-
3534-        def call_mkdir(dirname, permissions):
3535-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3536-                self.Fail
3537-            else:
3538-                return True
3539-
3540-        mockmkdir.side_effect = call_mkdir
3541-
3542-        def call_get_available_space(storedir, reserved_space):
3543-            self.failUnlessReallyEqual(storedir, tempdir)
3544-            return 1
3545-
3546-        mockget_available_space.side_effect = call_get_available_space
3547-
3548-        mocktime.return_value = 0
3549         class MockShare:
3550             def __init__(self):
3551                 self.shnum = 1
3552hunk ./src/allmydata/test/test_backends.py 152
3553                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3554                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3555                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3556-               
3557 
3558         share = MockShare()
3559hunk ./src/allmydata/test/test_backends.py 154
3560-        def call_get_shares(storageindex):
3561-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3562-            return []#share]
3563-
3564-        mockget_shares.side_effect = call_get_shares
3565 
3566         class MockFile:
3567             def __init__(self):
3568hunk ./src/allmydata/test/test_backends.py 176
3569             def tell(self):
3570                 return self.pos
3571 
3572-
3573         fobj = MockFile()
3574hunk ./src/allmydata/test/test_backends.py 177
3575+
3576+        directories = {}
3577+        def call_listdir(dirname):
3578+            if dirname not in directories:
3579+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3580+            else:
3581+                return directories[dirname].get_contents()
3582+
3583+        mocklistdir.side_effect = call_listdir
3584+
3585+        class MockDir:
3586+            def __init__(self, dirname):
3587+                self.name = dirname
3588+                self.contents = []
3589+   
3590+            def get_contents(self):
3591+                return self.contents
3592+
3593+        def call_isdir(dirname):
3594+            #XXX Should there be any other tests here?
3595+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3596+            return True
3597+
3598+        mockisdir.side_effect = call_isdir
3599+
3600+        def call_mkdir(dirname, permissions):
3601+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3602+                self.Fail
3603+            if dirname in directories:
3604+                raise OSError(17, "File exists: '%s'" % dirname)
3605+                self.Fail
3606+            elif dirname not in directories:
3607+                directories[dirname] = MockDir(dirname)
3608+                return True
3609+
3610+        mockmkdir.side_effect = call_mkdir
3611+
3612+        def call_get_available_space(storedir, reserved_space):
3613+            self.failUnlessReallyEqual(storedir, tempdir)
3614+            return 1
3615+
3616+        mockget_available_space.side_effect = call_get_available_space
3617+
3618+        mocktime.return_value = 0
3619+        def call_get_shares(storageindex):
3620+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3621+            return []#share]
3622+
3623+        mockget_shares.side_effect = call_get_shares
3624+
3625         def call_open(fname, mode):
3626             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3627             return fobj
3628}
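The `directories` dict introduced above backs `os.mkdir`/`os.listdir` with plain Python state so the test never touches a real filesystem. A condensed sketch of that idea (hypothetical helper, not the test's actual code):

```python
import errno

# Dict-backed stand-in for the mocked directory calls: mkdir on an existing
# name raises EEXIST, listdir on a missing name raises ENOENT, mirroring the
# OSError behavior the test's side_effect functions simulate.
class MockFS:
    def __init__(self):
        self.dirs = {}  # dirname -> list of entries

    def mkdir(self, dirname):
        if dirname in self.dirs:
            raise OSError(errno.EEXIST, "File exists: '%s'" % dirname)
        self.dirs[dirname] = []

    def listdir(self, dirname):
        if dirname not in self.dirs:
            raise OSError(errno.ENOENT,
                          "No such file or directory: '%s'" % dirname)
        return self.dirs[dirname]

fs = MockFS()
fs.mkdir('shares')
assert fs.listdir('shares') == []
```

Wiring `fs.mkdir` and `fs.listdir` in as `side_effect`s gives the mocks consistent state across calls, which the earlier per-call lambdas could not provide.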
3629[checkpoint12 TestServerFSBackend no longer mocks filesystem
3630wilcoxjg@gmail.com**20110711193357
3631 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3632] {
3633hunk ./src/allmydata/storage/backends/das/core.py 23
3634      create_mutable_sharefile
3635 from allmydata.storage.immutable import BucketWriter, BucketReader
3636 from allmydata.storage.crawler import FSBucketCountingCrawler
3637+from allmydata.util.hashutil import constant_time_compare
3638 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3639 
3640 from zope.interface import implements
3641hunk ./src/allmydata/storage/backends/das/core.py 28
3642 
3643+# storage/
3644+# storage/shares/incoming
3645+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3646+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3647+# storage/shares/$START/$STORAGEINDEX
3648+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3649+
3650+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3651+# base-32 chars).
3652 # $SHARENUM matches this regex:
3653 NUM_RE=re.compile("^[0-9]+$")
3654 
3655hunk ./src/allmydata/test/test_backends.py 126
3656         testbackend = DASCore(tempdir, expiration_policy)
3657         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3658 
3659-    @mock.patch('allmydata.util.fileutil.rename')
3660-    @mock.patch('allmydata.util.fileutil.make_dirs')
3661-    @mock.patch('os.path.exists')
3662-    @mock.patch('os.stat')
3663-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3664-    @mock.patch('allmydata.util.fileutil.get_available_space')
3665     @mock.patch('time.time')
3666hunk ./src/allmydata/test/test_backends.py 127
3667-    @mock.patch('os.mkdir')
3668-    @mock.patch('__builtin__.open')
3669-    @mock.patch('os.listdir')
3670-    @mock.patch('os.path.isdir')
3671-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3672-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3673-                             mockmake_dirs, mockrename):
3674+    def test_write_share(self, mocktime):
3675         """ Write a new share. """
3676 
3677         class MockShare:
3678hunk ./src/allmydata/test/test_backends.py 143
3679 
3680         share = MockShare()
3681 
3682-        class MockFile:
3683-            def __init__(self):
3684-                self.buffer = ''
3685-                self.pos = 0
3686-            def write(self, instring):
3687-                begin = self.pos
3688-                padlen = begin - len(self.buffer)
3689-                if padlen > 0:
3690-                    self.buffer += '\x00' * padlen
3691-                end = self.pos + len(instring)
3692-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3693-                self.pos = end
3694-            def close(self):
3695-                pass
3696-            def seek(self, pos):
3697-                self.pos = pos
3698-            def read(self, numberbytes):
3699-                return self.buffer[self.pos:self.pos+numberbytes]
3700-            def tell(self):
3701-                return self.pos
3702-
3703-        fobj = MockFile()
3704-
3705-        directories = {}
3706-        def call_listdir(dirname):
3707-            if dirname not in directories:
3708-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3709-            else:
3710-                return directories[dirname].get_contents()
3711-
3712-        mocklistdir.side_effect = call_listdir
3713-
3714-        class MockDir:
3715-            def __init__(self, dirname):
3716-                self.name = dirname
3717-                self.contents = []
3718-   
3719-            def get_contents(self):
3720-                return self.contents
3721-
3722-        def call_isdir(dirname):
3723-            #XXX Should there be any other tests here?
3724-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3725-            return True
3726-
3727-        mockisdir.side_effect = call_isdir
3728-
3729-        def call_mkdir(dirname, permissions):
3730-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3731-                self.Fail
3732-            if dirname in directories:
3733-                raise OSError(17, "File exists: '%s'" % dirname)
3734-                self.Fail
3735-            elif dirname not in directories:
3736-                directories[dirname] = MockDir(dirname)
3737-                return True
3738-
3739-        mockmkdir.side_effect = call_mkdir
3740-
3741-        def call_get_available_space(storedir, reserved_space):
3742-            self.failUnlessReallyEqual(storedir, tempdir)
3743-            return 1
3744-
3745-        mockget_available_space.side_effect = call_get_available_space
3746-
3747-        mocktime.return_value = 0
3748-        def call_get_shares(storageindex):
3749-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3750-            return []#share]
3751-
3752-        mockget_shares.side_effect = call_get_shares
3753-
3754-        def call_open(fname, mode):
3755-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3756-            return fobj
3757-
3758-        mockopen.side_effect = call_open
3759-
3760-        def call_make_dirs(dname):
3761-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3762-           
3763-        mockmake_dirs.side_effect = call_make_dirs
3764-
3765-        def call_rename(src, dst):
3766-            self.failUnlessReallyEqual(src, shareincomingname)
3767-            self.failUnlessReallyEqual(dst, sharefname)
3768-           
3769-        mockrename.side_effect = call_rename
3770-
3771-        def call_exists(fname):
3772-            self.failUnlessReallyEqual(fname, sharefname)
3773-
3774-        mockexists.side_effect = call_exists
3775-
3776         # Now begin the test.
3777 
3778         # XXX (0) ???  Fail unless something is not properly set-up?
3779}
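The layout comment added above says `$START` is the first 10 bits (two base-32 characters) of the encoded storage index. A sketch of that derivation, assuming standard RFC 4648 base-32 lowercased with padding stripped (Tahoe-LAFS has its own `si_b2a` helper, but the result agrees for this input):

```python
import base64, os

# Derive the two-level share path from the layout comment:
# storage/shares/$START/$STORAGEINDEX, where $START is the first two base-32
# characters (10 bits) of the encoded storage index.
def si_to_dir(storage_index_bytes):
    encoded = base64.b32encode(storage_index_bytes).decode('ascii')
    encoded = encoded.lower().rstrip('=')
    return os.path.join(encoded[:2], encoded)

path = si_to_dir(b'teststorage_index')
# -> 'or/orsxg5dtorxxeylhmvpws3temv4a', the directory name hard-coded in
#    test_backends.py above
```

The two-character bucket keeps any one shares directory from accumulating an unmanageable number of child directories.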
3780[JACP
3781wilcoxjg@gmail.com**20110711194407
3782 Ignore-this: b54745de777c4bb58d68d708f010bbb
3783] {
3784hunk ./src/allmydata/storage/backends/das/core.py 86
3785 
3786     def get_incoming(self, storageindex):
3787         """Return the set of incoming shnums."""
3788-        return set(os.listdir(self.incomingdir))
3789+        try:
3790+            incominglist = os.listdir(self.incomingdir)
3791+            print "incominglist: ", incominglist
3792+            return set(incominglist)
3793+        except OSError:
3794+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3795+            pass
3796 
3797     def get_shares(self, storage_index):
3798         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3799hunk ./src/allmydata/storage/server.py 17
3800 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3801      create_mutable_sharefile
3802 
3803-# storage/
3804-# storage/shares/incoming
3805-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3806-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3807-# storage/shares/$START/$STORAGEINDEX
3808-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3809-
3810-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3811-# base-32 chars).
3812-
3813-
3814 class StorageServer(service.MultiService, Referenceable):
3815     implements(RIStorageServer, IStatsProducer)
3816     name = 'storage'
3817}
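The `get_incoming` version in this patch swallows `OSError` with a bare `pass`, so a missing incoming directory makes it return `None`; the next patch changes that to `return set()`. A sketch of the corrected behavior (hypothetical helper name, with the errno check added as an illustration of the "I'd like to make this more specific" XXX):

```python
import os, errno

# A missing incoming directory means no shares are in flight for this storage
# index, so report the empty set. Any OSError other than ENOENT is re-raised
# rather than silently hidden.
def get_incoming_shnums(incomingsharesdir):
    try:
        return set(int(name) for name in os.listdir(incomingsharesdir))
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
        return set()

assert get_incoming_shnums('/nonexistent/path/for/this/sketch') == set()
```

Returning a set in every case matters because the caller computes `(sharenums - alreadygot) - incoming`, which raises `TypeError` if `incoming` is `None`.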
3818[testing get incoming
3819wilcoxjg@gmail.com**20110711210224
3820 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3821] {
3822hunk ./src/allmydata/storage/backends/das/core.py 87
3823     def get_incoming(self, storageindex):
3824         """Return the set of incoming shnums."""
3825         try:
3826-            incominglist = os.listdir(self.incomingdir)
3827+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3828+            incominglist = os.listdir(incomingsharesdir)
3829             print "incominglist: ", incominglist
3830             return set(incominglist)
3831         except OSError:
3832hunk ./src/allmydata/storage/backends/das/core.py 92
3833-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3834-            pass
3835-
3836+            # XXX I'd like to make this more specific. If there are no shares at all.
3837+            return set()
3838+           
3839     def get_shares(self, storage_index):
3840         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3841         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3842hunk ./src/allmydata/test/test_backends.py 149
3843         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3844 
3845         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3846+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3847         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3848 
3849hunk ./src/allmydata/test/test_backends.py 152
3850-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3851         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3852         # with the same si, until BucketWriter.remote_close() has been called.
3853         # self.failIf(bsa)
3854}
3855[ImmutableShareFile does not know its StorageIndex
3856wilcoxjg@gmail.com**20110711211424
3857 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3858] {
3859hunk ./src/allmydata/storage/backends/das/core.py 112
3860             return 0
3861         return fileutil.get_available_space(self.storedir, self.reserved_space)
3862 
3863-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3864-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3865+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3866+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3867+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3868+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3869         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3870         return bw
3871 
3872hunk ./src/allmydata/storage/backends/das/core.py 155
3873     LEASE_SIZE = struct.calcsize(">L32s32sL")
3874     sharetype = "immutable"
3875 
3876-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3877+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3878         """ If max_size is not None then I won't allow more than
3879         max_size to be written to me. If create=True then max_size
3880         must not be None. """
3881}
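This patch moves path construction out of `ImmutableShare` and into the backend: the share object receives `finalhome` and `incominghome` ready-made instead of deriving them from a storage index it would otherwise have to carry. A sketch of that split (directory names here are placeholders, not real Tahoe-LAFS paths):

```python
import os

# The backend computes both homes for a share and hands them to the share
# object; the share no longer needs the storage index or share number to
# locate itself on disk. Note shnum is stringified before joining, which is
# the fix a later patch in this series applies.
def share_homes(sharedir, si_dir, shnum):
    finalhome = os.path.join(sharedir, si_dir, str(shnum))
    incominghome = os.path.join(sharedir, 'incoming', si_dir, str(shnum))
    return finalhome, incominghome

final, incoming = share_homes('storage/shares', os.path.join('or', 'orsxg5dt'), 0)
```

Keeping both paths in one place also makes the finalize step a single `rename(incominghome, finalhome)` with no recomputation on the share side.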
3882[get_incoming correctly reports the 0 share after it has arrived
3883wilcoxjg@gmail.com**20110712025157
3884 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3885] {
3886hunk ./src/allmydata/storage/backends/das/core.py 1
3887+import os, re, weakref, struct, time, stat
3888+
3889 from allmydata.interfaces import IStorageBackend
3890 from allmydata.storage.backends.base import Backend
3891 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3892hunk ./src/allmydata/storage/backends/das/core.py 8
3893 from allmydata.util.assertutil import precondition
3894 
3895-import os, re, weakref, struct, time
3896-
3897 #from foolscap.api import Referenceable
3898 from twisted.application import service
3899 
3900hunk ./src/allmydata/storage/backends/das/core.py 89
3901         try:
3902             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3903             incominglist = os.listdir(incomingsharesdir)
3904-            print "incominglist: ", incominglist
3905-            return set(incominglist)
3906+            incomingshnums = [int(x) for x in incominglist]
3907+            return set(incomingshnums)
3908         except OSError:
3909             # XXX I'd like to make this more specific. If there are no shares at all.
3910             return set()
3911hunk ./src/allmydata/storage/backends/das/core.py 113
3912         return fileutil.get_available_space(self.storedir, self.reserved_space)
3913 
3914     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3915-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3916-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3917-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3918+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3919+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3920+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3921         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3922         return bw
3923 
3924hunk ./src/allmydata/storage/backends/das/core.py 160
3925         max_size to be written to me. If create=True then max_size
3926         must not be None. """
3927         precondition((max_size is not None) or (not create), max_size, create)
3928-        self.shnum = shnum
3929-        self.storage_index = storageindex
3930-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3931         self._max_size = max_size
3932hunk ./src/allmydata/storage/backends/das/core.py 161
3933-        self.incomingdir = os.path.join(sharedir, 'incoming')
3934-        si_dir = storage_index_to_dir(storageindex)
3935-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3936-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3937-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3938+        self.incominghome = incominghome
3939+        self.finalhome = finalhome
3940         if create:
3941             # touch the file, so later callers will see that we're working on
3942             # it. Also construct the metadata.
3943hunk ./src/allmydata/storage/backends/das/core.py 166
3944-            assert not os.path.exists(self.fname)
3945-            fileutil.make_dirs(os.path.dirname(self.fname))
3946-            f = open(self.fname, 'wb')
3947+            assert not os.path.exists(self.finalhome)
3948+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3949+            f = open(self.incominghome, 'wb')
3950             # The second field -- the four-byte share data length -- is no
3951             # longer used as of Tahoe v1.3.0, but we continue to write it in
3952             # there in case someone downgrades a storage server from >=
3953hunk ./src/allmydata/storage/backends/das/core.py 183
3954             self._lease_offset = max_size + 0x0c
3955             self._num_leases = 0
3956         else:
3957-            f = open(self.fname, 'rb')
3958-            filesize = os.path.getsize(self.fname)
3959+            f = open(self.finalhome, 'rb')
3960+            filesize = os.path.getsize(self.finalhome)
3961             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3962             f.close()
3963             if version != 1:
3964hunk ./src/allmydata/storage/backends/das/core.py 189
3965                 msg = "sharefile %s had version %d but we wanted 1" % \
3966-                      (self.fname, version)
3967+                      (self.finalhome, version)
3968                 raise UnknownImmutableContainerVersionError(msg)
3969             self._num_leases = num_leases
3970             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3971hunk ./src/allmydata/storage/backends/das/core.py 225
3972         pass
3973         
3974     def stat(self):
3975-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3976+        return os.stat(self.finalhome)[stat.ST_SIZE]
3977+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3978 
3979     def get_shnum(self):
3980         return self.shnum
3981hunk ./src/allmydata/storage/backends/das/core.py 232
3982 
3983     def unlink(self):
3984-        os.unlink(self.fname)
3985+        os.unlink(self.finalhome)
3986 
3987     def read_share_data(self, offset, length):
3988         precondition(offset >= 0)
3989hunk ./src/allmydata/storage/backends/das/core.py 239
3990         # Reads beyond the end of the data are truncated. Reads that start
3991         # beyond the end of the data return an empty string.
3992         seekpos = self._data_offset+offset
3993-        fsize = os.path.getsize(self.fname)
3994+        fsize = os.path.getsize(self.finalhome)
3995         actuallength = max(0, min(length, fsize-seekpos))
3996         if actuallength == 0:
3997             return ""
3998hunk ./src/allmydata/storage/backends/das/core.py 243
3999-        f = open(self.fname, 'rb')
4000+        f = open(self.finalhome, 'rb')
4001         f.seek(seekpos)
4002         return f.read(actuallength)
4003 
4004hunk ./src/allmydata/storage/backends/das/core.py 252
4005         precondition(offset >= 0, offset)
4006         if self._max_size is not None and offset+length > self._max_size:
4007             raise DataTooLargeError(self._max_size, offset, length)
4008-        f = open(self.fname, 'rb+')
4009+        f = open(self.incominghome, 'rb+')
4010         real_offset = self._data_offset+offset
4011         f.seek(real_offset)
4012         assert f.tell() == real_offset
4013hunk ./src/allmydata/storage/backends/das/core.py 279
4014 
4015     def get_leases(self):
4016         """Yields a LeaseInfo instance for all leases."""
4017-        f = open(self.fname, 'rb')
4018+        f = open(self.finalhome, 'rb')
4019         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4020         f.seek(self._lease_offset)
4021         for i in range(num_leases):
4022hunk ./src/allmydata/storage/backends/das/core.py 288
4023                 yield LeaseInfo().from_immutable_data(data)
4024 
4025     def add_lease(self, lease_info):
4026-        f = open(self.fname, 'rb+')
4027+        f = open(self.incominghome, 'rb+')
4028         num_leases = self._read_num_leases(f)
4029         self._write_lease_record(f, num_leases, lease_info)
4030         self._write_num_leases(f, num_leases+1)
4031hunk ./src/allmydata/storage/backends/das/core.py 301
4032                 if new_expire_time > lease.expiration_time:
4033                     # yes
4034                     lease.expiration_time = new_expire_time
4035-                    f = open(self.fname, 'rb+')
4036+                    f = open(self.finalhome, 'rb+')
4037                     self._write_lease_record(f, i, lease)
4038                     f.close()
4039                 return
4040hunk ./src/allmydata/storage/backends/das/core.py 336
4041             # the same order as they were added, so that if we crash while
4042             # doing this, we won't lose any non-cancelled leases.
4043             leases = [l for l in leases if l] # remove the cancelled leases
4044-            f = open(self.fname, 'rb+')
4045+            f = open(self.finalhome, 'rb+')
4046             for i,lease in enumerate(leases):
4047                 self._write_lease_record(f, i, lease)
4048             self._write_num_leases(f, len(leases))
4049hunk ./src/allmydata/storage/backends/das/core.py 344
4050             f.close()
4051         space_freed = self.LEASE_SIZE * num_leases_removed
4052         if not len(leases):
4053-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4054+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4055             self.unlink()
4056         return space_freed
4057hunk ./src/allmydata/test/test_backends.py 129
4058     @mock.patch('time.time')
4059     def test_write_share(self, mocktime):
4060         """ Write a new share. """
4061-
4062-        class MockShare:
4063-            def __init__(self):
4064-                self.shnum = 1
4065-               
4066-            def add_or_renew_lease(elf, lease_info):
4067-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4068-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4069-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4070-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4071-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4072-
4073-        share = MockShare()
4074-
4075         # Now begin the test.
4076 
4077         # XXX (0) ???  Fail unless something is not properly set-up?
4078hunk ./src/allmydata/test/test_backends.py 143
4079         # self.failIf(bsa)
4080 
4081         bs[0].remote_write(0, 'a')
4082-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4083+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4084         spaceint = self.s.allocated_size()
4085         self.failUnlessReallyEqual(spaceint, 1)
4086 
4087hunk ./src/allmydata/test/test_backends.py 161
4088         #self.failIf(mockrename.called, mockrename.call_args_list)
4089         #self.failIf(mockstat.called, mockstat.call_args_list)
4090 
4091+    def test_handle_incoming(self):
4092+        incomingset = self.s.backend.get_incoming('teststorage_index')
4093+        self.failUnlessReallyEqual(incomingset, set())
4094+
4095+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4096+       
4097+        incomingset = self.s.backend.get_incoming('teststorage_index')
4098+        self.failUnlessReallyEqual(incomingset, set((0,)))
4099+
4100+        bs[0].remote_close()
4101+        self.failUnlessReallyEqual(incomingset, set())
4102+
4103     @mock.patch('os.path.exists')
4104     @mock.patch('os.path.getsize')
4105     @mock.patch('__builtin__.open')
4106hunk ./src/allmydata/test/test_backends.py 223
4107         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4108 
4109 
4110-
4111 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4112     @mock.patch('time.time')
4113     @mock.patch('os.mkdir')
4114hunk ./src/allmydata/test/test_backends.py 271
4115         DASCore('teststoredir', expiration_policy)
4116 
4117         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4118+
4119}
4120[jacp14
4121wilcoxjg@gmail.com**20110712061211
4122 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4123] {
4124hunk ./src/allmydata/storage/backends/das/core.py 95
4125             # XXX I'd like to make this more specific. If there are no shares at all.
4126             return set()
4127             
4128-    def get_shares(self, storage_index):
4129+    def get_shares(self, storageindex):
4130         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
4131hunk ./src/allmydata/storage/backends/das/core.py 97
4132-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4133+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4134         try:
4135             for f in os.listdir(finalstoragedir):
4136                 if NUM_RE.match(f):
4137hunk ./src/allmydata/storage/backends/das/core.py 102
4138                     filename = os.path.join(finalstoragedir, f)
4139-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4140+                    yield ImmutableShare(filename, storageindex, f)
4141         except OSError:
4142             # Commonly caused by there being no shares at all.
4143             pass
4144hunk ./src/allmydata/storage/backends/das/core.py 115
4145     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4146         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4147         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4148-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4149+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4150         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4151         return bw
4152 
4153hunk ./src/allmydata/storage/backends/das/core.py 155
4154     LEASE_SIZE = struct.calcsize(">L32s32sL")
4155     sharetype = "immutable"
4156 
4157-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4158+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4159         """ If max_size is not None then I won't allow more than
4160         max_size to be written to me. If create=True then max_size
4161         must not be None. """
4162hunk ./src/allmydata/storage/backends/das/core.py 160
4163         precondition((max_size is not None) or (not create), max_size, create)
4164+        self.storageindex = storageindex
4165         self._max_size = max_size
4166         self.incominghome = incominghome
4167         self.finalhome = finalhome
4168hunk ./src/allmydata/storage/backends/das/core.py 164
4169+        self.shnum = shnum
4170         if create:
4171             # touch the file, so later callers will see that we're working on
4172             # it. Also construct the metadata.
4173hunk ./src/allmydata/storage/backends/das/core.py 212
4174             # their children to know when they should do the rmdir. This
4175             # approach is simpler, but relies on os.rmdir refusing to delete
4176             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4177+            #print "os.path.dirname(self.incominghome): "
4178+            #print os.path.dirname(self.incominghome)
4179             os.rmdir(os.path.dirname(self.incominghome))
4180             # we also delete the grandparent (prefix) directory, .../ab ,
4181             # again to avoid leaving directories lying around. This might
4182hunk ./src/allmydata/storage/immutable.py 93
4183     def __init__(self, ss, share):
4184         self.ss = ss
4185         self._share_file = share
4186-        self.storage_index = share.storage_index
4187+        self.storageindex = share.storageindex
4188         self.shnum = share.shnum
4189 
4190     def __repr__(self):
4191hunk ./src/allmydata/storage/immutable.py 98
4192         return "<%s %s %s>" % (self.__class__.__name__,
4193-                               base32.b2a_l(self.storage_index[:8], 60),
4194+                               base32.b2a_l(self.storageindex[:8], 60),
4195                                self.shnum)
4196 
4197     def remote_read(self, offset, length):
4198hunk ./src/allmydata/storage/immutable.py 110
4199 
4200     def remote_advise_corrupt_share(self, reason):
4201         return self.ss.remote_advise_corrupt_share("immutable",
4202-                                                   self.storage_index,
4203+                                                   self.storageindex,
4204                                                    self.shnum,
4205                                                    reason)
4206hunk ./src/allmydata/test/test_backends.py 20
4207 # The following share file contents was generated with
4208 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4209 # with share data == 'a'.
4210-renew_secret  = 'x'*32
4211-cancel_secret = 'y'*32
4212-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4213-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4214+shareversionnumber = '\x00\x00\x00\x01'
4215+sharedatalength = '\x00\x00\x00\x01'
4216+numberofleases = '\x00\x00\x00\x01'
4217+shareinputdata = 'a'
4218+ownernumber = '\x00\x00\x00\x00'
4219+renewsecret  = 'x'*32
4220+cancelsecret = 'y'*32
4221+expirationtime = '\x00(\xde\x80'
4222+nextlease = ''
4223+containerdata = shareversionnumber + sharedatalength + numberofleases
4224+client_data = shareinputdata + ownernumber + renewsecret + \
4225+    cancelsecret + expirationtime + nextlease
4226+share_data = containerdata + client_data
4227+
4228 
4229 testnodeid = 'testnodeidxxxxxxxxxx'
4230 tempdir = 'teststoredir'
4231hunk ./src/allmydata/test/test_backends.py 52
4232 
4233 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4234     def setUp(self):
4235-        self.s = StorageServer(testnodeid, backend=NullCore())
4236+        self.ss = StorageServer(testnodeid, backend=NullCore())
4237 
4238     @mock.patch('os.mkdir')
4239     @mock.patch('__builtin__.open')
4240hunk ./src/allmydata/test/test_backends.py 62
4241         """ Write a new share. """
4242 
4243         # Now begin the test.
4244-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4245+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4246         bs[0].remote_write(0, 'a')
4247         self.failIf(mockisdir.called)
4248         self.failIf(mocklistdir.called)
4249hunk ./src/allmydata/test/test_backends.py 133
4250                 _assert(False, "The tester code doesn't recognize this case.") 
4251 
4252         mockopen.side_effect = call_open
4253-        testbackend = DASCore(tempdir, expiration_policy)
4254-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4255+        self.backend = DASCore(tempdir, expiration_policy)
4256+        self.ss = StorageServer(testnodeid, self.backend)
4257+        self.ssinf = StorageServer(testnodeid, self.backend)
4258 
4259     @mock.patch('time.time')
4260     def test_write_share(self, mocktime):
4261hunk ./src/allmydata/test/test_backends.py 142
4262         """ Write a new share. """
4263         # Now begin the test.
4264 
4265-        # XXX (0) ???  Fail unless something is not properly set-up?
4266-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4267+        mocktime.return_value = 0
4268+        # Inspect incoming and fail unless it's empty.
4269+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4270+        self.failUnlessReallyEqual(incomingset, set())
4271+       
4272+        # Among other things, populate incoming with the sharenum: 0.
4273+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4274 
4275hunk ./src/allmydata/test/test_backends.py 150
4276-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4277-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4278-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4279+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4280+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4281+       
4282+        # Attempt to create a second share writer with the same share.
4283+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4284 
4285hunk ./src/allmydata/test/test_backends.py 156
4286-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4287+        # Show that no sharewriter results from a remote_allocate_buckets
4288         # with the same si, until BucketWriter.remote_close() has been called.
4289hunk ./src/allmydata/test/test_backends.py 158
4290-        # self.failIf(bsa)
4291+        self.failIf(bsa)
4292 
4293hunk ./src/allmydata/test/test_backends.py 160
4294+        # Write 'a' to shnum 0. Only tested together with close and read.
4295         bs[0].remote_write(0, 'a')
4296hunk ./src/allmydata/test/test_backends.py 162
4297-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4298-        spaceint = self.s.allocated_size()
4299+
4300+        # Test allocated size.
4301+        spaceint = self.ss.allocated_size()
4302         self.failUnlessReallyEqual(spaceint, 1)
4303 
4304         # XXX (3) Inspect final and fail unless there's nothing there.
4305hunk ./src/allmydata/test/test_backends.py 168
4306+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4307         bs[0].remote_close()
4308         # XXX (4a) Inspect final and fail unless share 0 is there.
4309hunk ./src/allmydata/test/test_backends.py 171
4310+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4311+        #contents = sharesinfinal[0].read_share_data(0,999)
4312+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4313         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4314 
4315         # What happens when there's not enough space for the client's request?
4316hunk ./src/allmydata/test/test_backends.py 177
4317-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4318+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4319 
4320         # Now test the allocated_size method.
4321         # self.failIf(mockexists.called, mockexists.call_args_list)
4322hunk ./src/allmydata/test/test_backends.py 185
4323         #self.failIf(mockrename.called, mockrename.call_args_list)
4324         #self.failIf(mockstat.called, mockstat.call_args_list)
4325 
4326-    def test_handle_incoming(self):
4327-        incomingset = self.s.backend.get_incoming('teststorage_index')
4328-        self.failUnlessReallyEqual(incomingset, set())
4329-
4330-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4331-       
4332-        incomingset = self.s.backend.get_incoming('teststorage_index')
4333-        self.failUnlessReallyEqual(incomingset, set((0,)))
4334-
4335-        bs[0].remote_close()
4336-        self.failUnlessReallyEqual(incomingset, set())
4337-
4338     @mock.patch('os.path.exists')
4339     @mock.patch('os.path.getsize')
4340     @mock.patch('__builtin__.open')
4341hunk ./src/allmydata/test/test_backends.py 208
4342             self.failUnless('r' in mode, mode)
4343             self.failUnless('b' in mode, mode)
4344 
4345-            return StringIO(share_file_data)
4346+            return StringIO(share_data)
4347         mockopen.side_effect = call_open
4348 
4349hunk ./src/allmydata/test/test_backends.py 211
4350-        datalen = len(share_file_data)
4351+        datalen = len(share_data)
4352         def call_getsize(fname):
4353             self.failUnlessReallyEqual(fname, sharefname)
4354             return datalen
4355hunk ./src/allmydata/test/test_backends.py 223
4356         mockexists.side_effect = call_exists
4357 
4358         # Now begin the test.
4359-        bs = self.s.remote_get_buckets('teststorage_index')
4360+        bs = self.ss.remote_get_buckets('teststorage_index')
4361 
4362         self.failUnlessEqual(len(bs), 1)
4363hunk ./src/allmydata/test/test_backends.py 226
4364-        b = bs[0]
4365+        b = bs['0']
4366         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4367hunk ./src/allmydata/test/test_backends.py 228
4368-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4369+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4370         # If you try to read past the end you get the as much data as is there.
4371hunk ./src/allmydata/test/test_backends.py 230
4372-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4373+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4374         # If you start reading past the end of the file you get the empty string.
4375         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4376 
4377}
4378[jacp14 or so
4379wilcoxjg@gmail.com**20110713060346
4380 Ignore-this: 7026810f60879d65b525d450e43ff87a
4381] {
4382hunk ./src/allmydata/storage/backends/das/core.py 102
4383             for f in os.listdir(finalstoragedir):
4384                 if NUM_RE.match(f):
4385                     filename = os.path.join(finalstoragedir, f)
4386-                    yield ImmutableShare(filename, storageindex, f)
4387+                    yield ImmutableShare(filename, storageindex, int(f))
4388         except OSError:
4389             # Commonly caused by there being no shares at all.
4390             pass
4391hunk ./src/allmydata/storage/backends/null/core.py 25
4392     def set_storage_server(self, ss):
4393         self.ss = ss
4394 
4395+    def get_incoming(self, storageindex):
4396+        return set()
4397+
4398 class ImmutableShare:
4399     sharetype = "immutable"
4400 
4401hunk ./src/allmydata/storage/immutable.py 19
4402 
4403     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4404         self.ss = ss
4405-        self._max_size = max_size # don't allow the client to write more than this
4406+        self._max_size = max_size # don't allow the client to write more than this
4407+
4408         self._canary = canary
4409         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4410         self.closed = False
4411hunk ./src/allmydata/test/test_backends.py 135
4412         mockopen.side_effect = call_open
4413         self.backend = DASCore(tempdir, expiration_policy)
4414         self.ss = StorageServer(testnodeid, self.backend)
4415-        self.ssinf = StorageServer(testnodeid, self.backend)
4416+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4417+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4418 
4419     @mock.patch('time.time')
4420     def test_write_share(self, mocktime):
4421hunk ./src/allmydata/test/test_backends.py 161
4422         # with the same si, until BucketWriter.remote_close() has been called.
4423         self.failIf(bsa)
4424 
4425-        # Write 'a' to shnum 0. Only tested together with close and read.
4426-        bs[0].remote_write(0, 'a')
4427-
4428         # Test allocated size.
4429         spaceint = self.ss.allocated_size()
4430         self.failUnlessReallyEqual(spaceint, 1)
4431hunk ./src/allmydata/test/test_backends.py 165
4432 
4433-        # XXX (3) Inspect final and fail unless there's nothing there.
4434+        # Write 'a' to shnum 0. Only tested together with close and read.
4435+        bs[0].remote_write(0, 'a')
4436+       
4437+        # Preclose: Inspect final, failUnless nothing there.
4438         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4439         bs[0].remote_close()
4440hunk ./src/allmydata/test/test_backends.py 171
4441-        # XXX (4a) Inspect final and fail unless share 0 is there.
4442-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4443-        #contents = sharesinfinal[0].read_share_data(0,999)
4444-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4445-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4446 
4447hunk ./src/allmydata/test/test_backends.py 172
4448-        # What happens when there's not enough space for the client's request?
4449-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4450+        # Postclose: (Omnibus) failUnless written data is in final.
4451+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4452+        contents = sharesinfinal[0].read_share_data(0,73)
4453+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4454 
4455hunk ./src/allmydata/test/test_backends.py 177
4456-        # Now test the allocated_size method.
4457-        # self.failIf(mockexists.called, mockexists.call_args_list)
4458-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4459-        #self.failIf(mockrename.called, mockrename.call_args_list)
4460-        #self.failIf(mockstat.called, mockstat.call_args_list)
4461+        # Cover interior of for share in get_shares loop.
4462+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4463+       
4464+    @mock.patch('time.time')
4465+    @mock.patch('allmydata.util.fileutil.get_available_space')
4466+    def test_out_of_space(self, mockget_available_space, mocktime):
4467+        mocktime.return_value = 0
4468+       
4469+        def call_get_available_space(dir, reserve):
4470+            return 0
4471+
4472+        mockget_available_space.side_effect = call_get_available_space
4473+       
4474+       
4475+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4476 
4477     @mock.patch('os.path.exists')
4478     @mock.patch('os.path.getsize')
4479hunk ./src/allmydata/test/test_backends.py 234
4480         bs = self.ss.remote_get_buckets('teststorage_index')
4481 
4482         self.failUnlessEqual(len(bs), 1)
4483-        b = bs['0']
4484+        b = bs[0]
4485         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4486         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4487         # If you try to read past the end you get the as much data as is there.
4488}
4489[temporary work-in-progress patch to be unrecorded
4490zooko@zooko.com**20110714003008
4491 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4492 tidy up a few tests, work done in pair-programming with Zancas
4493] {
4494hunk ./src/allmydata/storage/backends/das/core.py 65
4495         self._clean_incomplete()
4496 
4497     def _clean_incomplete(self):
4498-        fileutil.rm_dir(self.incomingdir)
4499+        fileutil.rmtree(self.incomingdir)
4500         fileutil.make_dirs(self.incomingdir)
4501 
4502     def _setup_corruption_advisory(self):
4503hunk ./src/allmydata/storage/immutable.py 1
4504-import os, stat, struct, time
4505+import os, time
4506 
4507 from foolscap.api import Referenceable
4508 
4509hunk ./src/allmydata/storage/server.py 1
4510-import os, re, weakref, struct, time
4511+import os, weakref, struct, time
4512 
4513 from foolscap.api import Referenceable
4514 from twisted.application import service
4515hunk ./src/allmydata/storage/server.py 7
4516 
4517 from zope.interface import implements
4518-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4519+from allmydata.interfaces import RIStorageServer, IStatsProducer
4520 from allmydata.util import fileutil, idlib, log, time_format
4521 import allmydata # for __full_version__
4522 
4523hunk ./src/allmydata/storage/server.py 313
4524         self.add_latency("get", time.time() - start)
4525         return bucketreaders
4526 
4527-    def remote_get_incoming(self, storageindex):
4528-        incoming_share_set = self.backend.get_incoming(storageindex)
4529-        return incoming_share_set
4530-
4531     def get_leases(self, storageindex):
4532         """Provide an iterator that yields all of the leases attached to this
4533         bucket. Each lease is returned as a LeaseInfo instance.
4534hunk ./src/allmydata/test/test_backends.py 3
4535 from twisted.trial import unittest
4536 
4537+from twisted.python.filepath import FilePath
4538+
4539 from StringIO import StringIO
4540 
4541 from allmydata.test.common_util import ReallyEqualMixin
4542hunk ./src/allmydata/test/test_backends.py 38
4543 
4544 
4545 testnodeid = 'testnodeidxxxxxxxxxx'
4546-tempdir = 'teststoredir'
4547-basedir = os.path.join(tempdir, 'shares')
4548+storedir = 'teststoredir'
4549+storedirfp = FilePath(storedir)
4550+basedir = os.path.join(storedir, 'shares')
4551 baseincdir = os.path.join(basedir, 'incoming')
4552 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4553 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4554hunk ./src/allmydata/test/test_backends.py 53
4555                      'cutoff_date' : None,
4556                      'sharetypes' : None}
4557 
4558-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4559+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4560+    """ NullBackend is just for testing and executable documentation, so
4561+    this test is actually a test of StorageServer in which we're using
4562+    NullBackend as helper code for the test, rather than a test of
4563+    NullBackend. """
4564     def setUp(self):
4565         self.ss = StorageServer(testnodeid, backend=NullCore())
4566 
4567hunk ./src/allmydata/test/test_backends.py 62
4568     @mock.patch('os.mkdir')
4569+
4570     @mock.patch('__builtin__.open')
4571     @mock.patch('os.listdir')
4572     @mock.patch('os.path.isdir')
4573hunk ./src/allmydata/test/test_backends.py 69
4574     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4575         """ Write a new share. """
4576 
4577-        # Now begin the test.
4578         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4579         bs[0].remote_write(0, 'a')
4580         self.failIf(mockisdir.called)
4581hunk ./src/allmydata/test/test_backends.py 83
4582     @mock.patch('os.listdir')
4583     @mock.patch('os.path.isdir')
4584     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4585-        """ This tests whether a server instance can be constructed
4586-        with a filesystem backend. To pass the test, it has to use the
4587-        filesystem in only the prescribed ways. """
4588+        """ This tests whether a server instance can be constructed with a
4589+        filesystem backend. To pass the test, it mustn't use the filesystem
4590+        outside of its configured storedir. """
4591 
4592         def call_open(fname, mode):
4593hunk ./src/allmydata/test/test_backends.py 88
4594-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4595-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4596-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4597-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4598-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4599+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4600+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4601+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4602+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4603+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4604                 return StringIO()
4605             else:
4606hunk ./src/allmydata/test/test_backends.py 95
4607-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4608+                fnamefp = FilePath(fname)
4609+                self.failUnless(storedirfp in fnamefp.parents(),
4610+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4611         mockopen.side_effect = call_open
4612 
4613         def call_isdir(fname):
4614hunk ./src/allmydata/test/test_backends.py 101
4615-            if fname == os.path.join(tempdir,'shares'):
4616+            if fname == os.path.join(storedir, 'shares'):
4617                 return True
4618hunk ./src/allmydata/test/test_backends.py 103
4619-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4620+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4621                 return True
4622             else:
4623                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4624hunk ./src/allmydata/test/test_backends.py 109
4625         mockisdir.side_effect = call_isdir
4626 
4627+        mocklistdir.return_value = []
4628+
4629         def call_mkdir(fname, mode):
4630hunk ./src/allmydata/test/test_backends.py 112
4631-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4632             self.failUnlessEqual(0777, mode)
4633hunk ./src/allmydata/test/test_backends.py 113
4634-            if fname == tempdir:
4635-                return None
4636-            elif fname == os.path.join(tempdir,'shares'):
4637-                return None
4638-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4639-                return None
4640-            else:
4641-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4642+            self.failUnlessIn(fname,
4643+                              [storedir,
4644+                               os.path.join(storedir, 'shares'),
4645+                               os.path.join(storedir, 'shares', 'incoming')],
4646+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4647         mockmkdir.side_effect = call_mkdir
4648 
4649         # Now begin the test.
4650hunk ./src/allmydata/test/test_backends.py 121
4651-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4652+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4653 
4654         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4655 
4656hunk ./src/allmydata/test/test_backends.py 126
4657 
4658-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4659+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4660+    """ This tests both the StorageServer xyz """
4661     @mock.patch('__builtin__.open')
4662     def setUp(self, mockopen):
4663         def call_open(fname, mode):
4664hunk ./src/allmydata/test/test_backends.py 131
4665-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4666-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4667-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4668-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4669-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4670+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4671+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4672+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4673+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4674+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4675                 return StringIO()
4676             else:
4677                 _assert(False, "The tester code doesn't recognize this case.") 
4678hunk ./src/allmydata/test/test_backends.py 141
4679 
4680         mockopen.side_effect = call_open
4681-        self.backend = DASCore(tempdir, expiration_policy)
4682+        self.backend = DASCore(storedir, expiration_policy)
4683         self.ss = StorageServer(testnodeid, self.backend)
4684hunk ./src/allmydata/test/test_backends.py 143
4685-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4686+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4687         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4688 
4689     @mock.patch('time.time')
4690hunk ./src/allmydata/test/test_backends.py 147
4691-    def test_write_share(self, mocktime):
4692-        """ Write a new share. """
4693-        # Now begin the test.
4694+    def test_write_and_read_share(self, mocktime):
4695+        """
4696+        Write a new share, read it, and test the server's (and FS backend's)
4697+        handling of simultaneous and successive attempts to write the same
4698+        share.
4699+        """
4700 
4701         mocktime.return_value = 0
4702         # Inspect incoming and fail unless it's empty.
4703hunk ./src/allmydata/test/test_backends.py 159
4704         incomingset = self.ss.backend.get_incoming('teststorage_index')
4705         self.failUnlessReallyEqual(incomingset, set())
4706         
4707-        # Among other things, populate incoming with the sharenum: 0.
4708+        # Populate incoming with the sharenum: 0.
4709         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4710 
4711         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4712hunk ./src/allmydata/test/test_backends.py 163
4713-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4714+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4715         
4716hunk ./src/allmydata/test/test_backends.py 165
4717-        # Attempt to create a second share writer with the same share.
4718+        # Attempt to create a second share writer with the same sharenum.
4719         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4720 
4721         # Show that no sharewriter results from a remote_allocate_buckets
4722hunk ./src/allmydata/test/test_backends.py 169
4723-        # with the same si, until BucketWriter.remote_close() has been called.
4724+        # with the same si and sharenum, until BucketWriter.remote_close()
4725+        # has been called.
4726         self.failIf(bsa)
4727 
4728         # Test allocated size.
4729hunk ./src/allmydata/test/test_backends.py 187
4730         # Postclose: (Omnibus) failUnless written data is in final.
4731         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4732         contents = sharesinfinal[0].read_share_data(0,73)
4733-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4734+        self.failUnlessReallyEqual(contents, client_data)
4735 
4736hunk ./src/allmydata/test/test_backends.py 189
4737-        # Cover interior of for share in get_shares loop.
4738-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4739+        # Exercise the case that the share we're asking to allocate is
4740+        # already (completely) uploaded.
4741+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4742         
4743     @mock.patch('time.time')
4744     @mock.patch('allmydata.util.fileutil.get_available_space')
4745hunk ./src/allmydata/test/test_backends.py 210
4746     @mock.patch('os.path.getsize')
4747     @mock.patch('__builtin__.open')
4748     @mock.patch('os.listdir')
4749-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4750+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4751         """ This tests whether the code correctly finds and reads
4752         shares written out by old (Tahoe-LAFS <= v1.8.2)
4753         servers. There is a similar test in test_download, but that one
4754hunk ./src/allmydata/test/test_backends.py 219
4755         StorageServer object. """
4756 
4757         def call_listdir(dirname):
4758-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4759+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4760             return ['0']
4761 
4762         mocklistdir.side_effect = call_listdir
4763hunk ./src/allmydata/test/test_backends.py 226
4764 
4765         def call_open(fname, mode):
4766             self.failUnlessReallyEqual(fname, sharefname)
4767-            self.failUnless('r' in mode, mode)
4768+            self.failUnlessEqual(mode[0], 'r', mode)
4769             self.failUnless('b' in mode, mode)
4770 
4771             return StringIO(share_data)
4772hunk ./src/allmydata/test/test_backends.py 268
4773         filesystem in only the prescribed ways. """
4774 
4775         def call_open(fname, mode):
4776-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4777-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4778-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4779-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4780-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4781+            if fname == os.path.join(storedir,'bucket_counter.state'):
4782+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4783+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4784+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4785+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4786                 return StringIO()
4787             else:
4788                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4789hunk ./src/allmydata/test/test_backends.py 279
4790         mockopen.side_effect = call_open
4791 
4792         def call_isdir(fname):
4793-            if fname == os.path.join(tempdir,'shares'):
4794+            if fname == os.path.join(storedir,'shares'):
4795                 return True
4796hunk ./src/allmydata/test/test_backends.py 281
4797-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4798+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4799                 return True
4800             else:
4801                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4802hunk ./src/allmydata/test/test_backends.py 290
4803         def call_mkdir(fname, mode):
4804             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4805             self.failUnlessEqual(0777, mode)
4806-            if fname == tempdir:
4807+            if fname == storedir:
4808                 return None
4809hunk ./src/allmydata/test/test_backends.py 292
4810-            elif fname == os.path.join(tempdir,'shares'):
4811+            elif fname == os.path.join(storedir,'shares'):
4812                 return None
4813hunk ./src/allmydata/test/test_backends.py 294
4814-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4815+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4816                 return None
4817             else:
4818                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4819hunk ./src/allmydata/util/fileutil.py 5
4820 Futz with files like a pro.
4821 """
4822 
4823-import sys, exceptions, os, stat, tempfile, time, binascii
4824+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4825 
4826 from twisted.python import log
4827 
4828hunk ./src/allmydata/util/fileutil.py 186
4829             raise tx
4830         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4831 
4832-def rm_dir(dirname):
4833+def rmtree(dirname):
4834     """
4835     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4836     already gone, do nothing and return without raising an exception.  If this
4837hunk ./src/allmydata/util/fileutil.py 205
4838             else:
4839                 remove(fullname)
4840         os.rmdir(dirname)
4841-    except Exception, le:
4842-        # Ignore "No such file or directory"
4843-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4844+    except EnvironmentError, le:
4845+        # Ignore "No such file or directory"; collect any other exception.
4846+        if le.args[0] != errno.ENOENT:
4847             excs.append(le)
4848hunk ./src/allmydata/util/fileutil.py 209
4849+    except Exception, le:
4850+        excs.append(le)
4851 
4852     # Okay, now we've recursively removed everything, ignoring any "No
4853     # such file or directory" errors, and collecting any other errors.
4854hunk ./src/allmydata/util/fileutil.py 222
4855             raise OSError, "Failed to remove dir for unknown reason."
4856         raise OSError, excs
4857 
4858+def rm_dir(dirname):
4859+    # Renamed to be like shutil.rmtree and unlike rmdir.
4860+    return rmtree(dirname)
4861 
4862 def remove_if_possible(f):
4863     try:
4864}
4865[work in progress intended to be unrecorded and never committed to trunk
4866zooko@zooko.com**20110714212139
4867 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4868 switch from os.path.join to filepath
4869 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4870 
4871] {
4872hunk ./src/allmydata/test/test_backends.py 3
4873 from twisted.trial import unittest
4874 
4875-from twisted.path.filepath import FilePath
4876+from twisted.python.filepath import FilePath
4877 
4878 from StringIO import StringIO
4879 
4880hunk ./src/allmydata/test/test_backends.py 10
4881 from allmydata.test.common_util import ReallyEqualMixin
4882 from allmydata.util.assertutil import _assert
4883 
4884-import mock, os
4885+import mock
4886 
4887 # This is the code that we're going to be testing.
4888 from allmydata.storage.server import StorageServer
4889hunk ./src/allmydata/test/test_backends.py 25
4890 shareversionnumber = '\x00\x00\x00\x01'
4891 sharedatalength = '\x00\x00\x00\x01'
4892 numberofleases = '\x00\x00\x00\x01'
4893+
4894 shareinputdata = 'a'
4895 ownernumber = '\x00\x00\x00\x00'
4896 renewsecret  = 'x'*32
4897hunk ./src/allmydata/test/test_backends.py 39
4898 
4899 
4900 testnodeid = 'testnodeidxxxxxxxxxx'
4901-storedir = 'teststoredir'
4902-storedirfp = FilePath(storedir)
4903-basedir = os.path.join(storedir, 'shares')
4904-baseincdir = os.path.join(basedir, 'incoming')
4905-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4906-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4907-shareincomingname = os.path.join(sharedirincomingname, '0')
4908-sharefname = os.path.join(sharedirfinalname, '0')
4909+
4910+class TestFilesMixin(unittest.TestCase):
4911+    def setUp(self):
4912+        self.storedir = FilePath('teststoredir')
4913+        self.basedir = self.storedir.child('shares')
4914+        self.baseincdir = self.basedir.child('incoming')
4915+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4916+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4917+        self.shareincomingname = self.sharedirincomingname.child('0')
4918+        self.sharefname = self.sharedirfinalname.child('0')
4919+
4920+    def call_open(self, fname, mode):
4921+        fnamefp = FilePath(fname)
4922+        if fnamefp == self.storedir.child('bucket_counter.state'):
4923+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4924+        elif fnamefp == self.storedir.child('lease_checker.state'):
4925+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4926+        elif fnamefp == self.storedir.child('lease_checker.history'):
4927+            return StringIO()
4928+        else:
4929+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4930+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4931+
4932+    def call_isdir(self, fname):
4933+        fnamefp = FilePath(fname)
4934+        if fnamefp == self.storedir.child('shares'):
4935+            return True
4936+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4937+            return True
4938+        else:
4939+            self.failUnless(self.storedir in fnamefp.parents(),
4940+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4941+
4942+    def call_mkdir(self, fname, mode):
4943+        self.failUnlessEqual(0777, mode)
4944+        fnamefp = FilePath(fname)
4945+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4946+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4947+
4948+
4949+    @mock.patch('os.mkdir')
4950+    @mock.patch('__builtin__.open')
4951+    @mock.patch('os.listdir')
4952+    @mock.patch('os.path.isdir')
4953+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4954+        mocklistdir.return_value = []
4955+        mockmkdir.side_effect = self.call_mkdir
4956+        mockisdir.side_effect = self.call_isdir
4957+        mockopen.side_effect = self.call_open
4959+       
4960+        test_func()
4961+       
4962+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4963 
4964 expiration_policy = {'enabled' : False,
4965                      'mode' : 'age',
4966hunk ./src/allmydata/test/test_backends.py 123
4967         self.failIf(mockopen.called)
4968         self.failIf(mockmkdir.called)
4969 
4970-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4971-    @mock.patch('time.time')
4972-    @mock.patch('os.mkdir')
4973-    @mock.patch('__builtin__.open')
4974-    @mock.patch('os.listdir')
4975-    @mock.patch('os.path.isdir')
4976-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4977+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
4978+    def test_create_server_fs_backend(self):
4979         """ This tests whether a server instance can be constructed with a
4980         filesystem backend. To pass the test, it mustn't use the filesystem
4981         outside of its configured storedir. """
4982hunk ./src/allmydata/test/test_backends.py 129
4983 
4984-        def call_open(fname, mode):
4985-            if fname == os.path.join(storedir, 'bucket_counter.state'):
4986-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4987-            elif fname == os.path.join(storedir, 'lease_checker.state'):
4988-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4989-            elif fname == os.path.join(storedir, 'lease_checker.history'):
4990-                return StringIO()
4991-            else:
4992-                fnamefp = FilePath(fname)
4993-                self.failUnless(storedirfp in fnamefp.parents(),
4994-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4995-        mockopen.side_effect = call_open
4996+        def _f():
4997+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4998 
4999hunk ./src/allmydata/test/test_backends.py 132
5000-        def call_isdir(fname):
5001-            if fname == os.path.join(storedir, 'shares'):
5002-                return True
5003-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5004-                return True
5005-            else:
5006-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5007-        mockisdir.side_effect = call_isdir
5008-
5009-        mocklistdir.return_value = []
5010-
5011-        def call_mkdir(fname, mode):
5012-            self.failUnlessEqual(0777, mode)
5013-            self.failUnlessIn(fname,
5014-                              [storedir,
5015-                               os.path.join(storedir, 'shares'),
5016-                               os.path.join(storedir, 'shares', 'incoming')],
5017-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5018-        mockmkdir.side_effect = call_mkdir
5019-
5020-        # Now begin the test.
5021-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5022-
5023-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5024+        self._help_test_stay_in_your_subtree(_f)
5025 
5026 
5027 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5028}
5029[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5030zooko@zooko.com**20110715191500
5031 Ignore-this: af33336789041800761e80510ea2f583
5032 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
5033] {
5034hunk ./src/allmydata/storage/backends/das/core.py 59
5035                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5036                         umid="0wZ27w", level=log.UNUSUAL)
5037 
5038-        self.sharedir = os.path.join(self.storedir, "shares")
5039-        fileutil.make_dirs(self.sharedir)
5040-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5041+        self.sharedir = self.storedir.child("shares")
5042+        fileutil.fp_make_dirs(self.sharedir)
5043+        self.incomingdir = self.sharedir.child('incoming')
5044         self._clean_incomplete()
5045 
5046     def _clean_incomplete(self):
5047hunk ./src/allmydata/storage/backends/das/core.py 65
5048-        fileutil.rmtree(self.incomingdir)
5049-        fileutil.make_dirs(self.incomingdir)
5050+        fileutil.fp_remove(self.incomingdir)
5051+        fileutil.fp_make_dirs(self.incomingdir)
5052 
5053     def _setup_corruption_advisory(self):
5054         # we don't actually create the corruption-advisory dir until necessary
5055hunk ./src/allmydata/storage/backends/das/core.py 70
5056-        self.corruption_advisory_dir = os.path.join(self.storedir,
5057-                                                    "corruption-advisories")
5058+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5059 
5060     def _setup_bucket_counter(self):
5061hunk ./src/allmydata/storage/backends/das/core.py 73
5062-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5063+        statefname = self.storedir.child("bucket_counter.state")
5064         self.bucket_counter = FSBucketCountingCrawler(statefname)
5065         self.bucket_counter.setServiceParent(self)
5066 
5067hunk ./src/allmydata/storage/backends/das/core.py 78
5068     def _setup_lease_checkerf(self, expiration_policy):
5069-        statefile = os.path.join(self.storedir, "lease_checker.state")
5070-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5071+        statefile = self.storedir.child("lease_checker.state")
5072+        historyfile = self.storedir.child("lease_checker.history")
5073         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5074         self.lease_checker.setServiceParent(self)
5075 
5076hunk ./src/allmydata/storage/backends/das/core.py 83
5077-    def get_incoming(self, storageindex):
5078+    def get_incoming_shnums(self, storageindex):
5079         """Return the set of incoming shnums."""
5080         try:
5081hunk ./src/allmydata/storage/backends/das/core.py 86
5082-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5083-            incominglist = os.listdir(incomingsharesdir)
5084-            incomingshnums = [int(x) for x in incominglist]
5085-            return set(incomingshnums)
5086-        except OSError:
5087-            # XXX I'd like to make this more specific. If there are no shares at all.
5088-            return set()
5090+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5091+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5092+            return frozenset(incomingshnums)
5093+        except UnlistableError:
5094+            # There is no shares directory at all.
5095+            return frozenset()
5096             
5097     def get_shares(self, storageindex):
5098         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5099hunk ./src/allmydata/storage/backends/das/core.py 96
5100-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5101+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5102         try:
5103hunk ./src/allmydata/storage/backends/das/core.py 98
5104-            for f in os.listdir(finalstoragedir):
5105-                if NUM_RE.match(f):
5106-                    filename = os.path.join(finalstoragedir, f)
5107-                    yield ImmutableShare(filename, storageindex, int(f))
5108-        except OSError:
5109-            # Commonly caused by there being no shares at all.
5110+            for f in finalstoragedir.children():
5111+                if NUM_RE.match(f.basename()):
5112+                    yield ImmutableShare(f, storageindex, int(f.basename()))
5113+        except UnlistableError:
5114+            # There is no shares directory at all.
5115             pass
5116         
5117     def get_available_space(self):
5118hunk ./src/allmydata/storage/backends/das/core.py 149
5119 # then the value stored in this field will be the actual share data length
5120 # modulo 2**32.
5121 
5122-class ImmutableShare:
5123+class ImmutableShare(object):
5124     LEASE_SIZE = struct.calcsize(">L32s32sL")
5125     sharetype = "immutable"
5126 
5127hunk ./src/allmydata/storage/backends/das/core.py 166
5128         if create:
5129             # touch the file, so later callers will see that we're working on
5130             # it. Also construct the metadata.
5131-            assert not os.path.exists(self.finalhome)
5132-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5133+            assert not self.finalhome.exists()
5134+            fileutil.fp_make_dirs(self.incominghome.parent())
5135             f = open(self.incominghome, 'wb')
5136             # The second field -- the four-byte share data length -- is no
5137             # longer used as of Tahoe v1.3.0, but we continue to write it in
5138hunk ./src/allmydata/storage/backends/das/core.py 316
5139         except IndexError:
5140             self.add_lease(lease_info)
5141 
5142-
5143     def cancel_lease(self, cancel_secret):
5144         """Remove a lease with the given cancel_secret. If the last lease is
5145         cancelled, the file will be removed. Return the number of bytes that
5146hunk ./src/allmydata/storage/common.py 19
5147 def si_a2b(ascii_storageindex):
5148     return base32.a2b(ascii_storageindex)
5149 
5150-def storage_index_to_dir(storageindex):
5151+def storage_index_to_dir(startfp, storageindex):
5152     sia = si_b2a(storageindex)
5153     return os.path.join(sia[:2], sia)
5154hunk ./src/allmydata/storage/server.py 210
5155 
5156         # fill incoming with all shares that are incoming use a set operation
5157         # since there's no need to operate on individual pieces
5158-        incoming = self.backend.get_incoming(storageindex)
5159+        incoming = self.backend.get_incoming_shnums(storageindex)
5160 
5161         for shnum in ((sharenums - alreadygot) - incoming):
5162             if (not limited) or (remaining_space >= max_space_per_bucket):
5163hunk ./src/allmydata/test/test_backends.py 5
5164 
5165 from twisted.python.filepath import FilePath
5166 
5167+from allmydata.util.log import msg
5168+
5169 from StringIO import StringIO
5170 
5171 from allmydata.test.common_util import ReallyEqualMixin
5172hunk ./src/allmydata/test/test_backends.py 42
5173 
5174 testnodeid = 'testnodeidxxxxxxxxxx'
5175 
5176-class TestFilesMixin(unittest.TestCase):
5177-    def setUp(self):
5178-        self.storedir = FilePath('teststoredir')
5179-        self.basedir = self.storedir.child('shares')
5180-        self.baseincdir = self.basedir.child('incoming')
5181-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5182-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5183-        self.shareincomingname = self.sharedirincomingname.child('0')
5184-        self.sharefname = self.sharedirfinalname.child('0')
5185+class MockStat:
5186+    def __init__(self):
5187+        self.st_mode = None
5188 
5189hunk ./src/allmydata/test/test_backends.py 46
5190+class MockFiles(unittest.TestCase):
5191+    """ I simulate a filesystem that the code under test can use. I flag the
5192+    code under test if it reads or writes outside of its prescribed
5193+    subtree. I simulate just the parts of the filesystem that the current
5194+    implementation of DAS backend needs. """
5195     def call_open(self, fname, mode):
5196         fnamefp = FilePath(fname)
5197hunk ./src/allmydata/test/test_backends.py 53
5198+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5199+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5200+
5201         if fnamefp == self.storedir.child('bucket_counter.state'):
5202             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5203         elif fnamefp == self.storedir.child('lease_checker.state'):
5204hunk ./src/allmydata/test/test_backends.py 61
5205             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5206         elif fnamefp == self.storedir.child('lease_checker.history'):
5207+            # This is separated out from the else clause below just because
5208+            # we know this particular file is going to be used by the
5209+            # current implementation of DAS backend, and we might want to
5210+            # use this information in this test in the future...
5211             return StringIO()
5212         else:
5213hunk ./src/allmydata/test/test_backends.py 67
5214-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5215-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5216+            # Anything else you open inside your subtree appears to be an
5217+            # empty file.
5218+            return StringIO()
5219 
5220     def call_isdir(self, fname):
5221         fnamefp = FilePath(fname)
5222hunk ./src/allmydata/test/test_backends.py 73
5223-        if fnamefp == self.storedir.child('shares'):
5224+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5225+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5228+
5229+        # The first two cases are separate from the else clause below just
5230+        # because we know that the current implementation of the DAS backend
5231+        # inspects these two directories and we might want to make use of
5232+        # that information in the tests in the future...
5233+        if fnamefp == self.storedir.child('shares'):
5234             return True
5235hunk ./src/allmydata/test/test_backends.py 84
5236-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5237+        elif fnamefp == self.storedir.child('shares').child('incoming'):
5238             return True
5239         else:
5240hunk ./src/allmydata/test/test_backends.py 87
5241-            self.failUnless(self.storedir in fnamefp.parents(),
5242-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5243+            # Anything else you open inside your subtree appears to be a
5244+            # directory.
5245+            return True
5246 
5247     def call_mkdir(self, fname, mode):
5248hunk ./src/allmydata/test/test_backends.py 92
5249-        self.failUnlessEqual(0777, mode)
5250         fnamefp = FilePath(fname)
5251         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5252                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5253hunk ./src/allmydata/test/test_backends.py 95
5254+        self.failUnlessEqual(0777, mode)
5255 
5256hunk ./src/allmydata/test/test_backends.py 97
5257+    def call_listdir(self, fname):
5258+        fnamefp = FilePath(fname)
5259+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5260+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5261 
5262hunk ./src/allmydata/test/test_backends.py 102
5263-    @mock.patch('os.mkdir')
5264-    @mock.patch('__builtin__.open')
5265-    @mock.patch('os.listdir')
5266-    @mock.patch('os.path.isdir')
5267-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5268-        mocklistdir.return_value = []
5269+    def call_stat(self, fname):
5270+        fnamefp = FilePath(fname)
5271+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5272+                        "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5273+
5274+        msg("%s.call_stat(%s)" % (self, fname,))
5275+        mstat = MockStat()
5276+        mstat.st_mode = 16893 # a directory
5277+        return mstat
5278+
5279+    def setUp(self):
5280+        msg( "%s.setUp()" % (self,))
5281+        self.storedir = FilePath('teststoredir')
5282+        self.basedir = self.storedir.child('shares')
5283+        self.baseincdir = self.basedir.child('incoming')
5284+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5285+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5286+        self.shareincomingname = self.sharedirincomingname.child('0')
5287+        self.sharefname = self.sharedirfinalname.child('0')
5288+
5289+        self.mocklistdirp = mock.patch('os.listdir')
5290+        mocklistdir = self.mocklistdirp.__enter__()
5291+        mocklistdir.side_effect = self.call_listdir
5292+
5293+        self.mockmkdirp = mock.patch('os.mkdir')
5294+        mockmkdir = self.mockmkdirp.__enter__()
5295         mockmkdir.side_effect = self.call_mkdir
5296hunk ./src/allmydata/test/test_backends.py 129
5297+
5298+        self.mockisdirp = mock.patch('os.path.isdir')
5299+        mockisdir = self.mockisdirp.__enter__()
5300         mockisdir.side_effect = self.call_isdir
5301hunk ./src/allmydata/test/test_backends.py 133
5302+
5303+        self.mockopenp = mock.patch('__builtin__.open')
5304+        mockopen = self.mockopenp.__enter__()
5305         mockopen.side_effect = self.call_open
5306hunk ./src/allmydata/test/test_backends.py 137
5307-        mocklistdir.return_value = []
5308-       
5309-        test_func()
5310-       
5311-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5312+
5313+        self.mockstatp = mock.patch('os.stat')
5314+        mockstat = self.mockstatp.__enter__()
5315+        mockstat.side_effect = self.call_stat
5316+
5317+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5318+        mockfpstat = self.mockfpstatp.__enter__()
5319+        mockfpstat.side_effect = self.call_stat
5320+
5321+    def tearDown(self):
5322+        msg( "%s.tearDown()" % (self,))
5323+        self.mockfpstatp.__exit__()
5324+        self.mockstatp.__exit__()
5325+        self.mockopenp.__exit__()
5326+        self.mockisdirp.__exit__()
5327+        self.mockmkdirp.__exit__()
5328+        self.mocklistdirp.__exit__()
5329 
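As an aside, the setUp()/tearDown() patching style above (entering each mock.patch by hand and stopping it again in tearDown) can be sketched with the stdlib's unittest.mock; the class and test names here are hypothetical:

```python
import os
import unittest
from unittest import mock  # the standalone 'mock' package used above has the same API


class MockListdirSketch(unittest.TestCase):
    """Start each patcher in setUp and stop it in tearDown, so every
    test method in the class sees the patched call without needing a
    per-method @mock.patch decorator."""
    def setUp(self):
        self.listdir_patcher = mock.patch('os.listdir')
        # start()/stop() is the documented spelling of __enter__/__exit__
        mocklistdir = self.listdir_patcher.start()
        mocklistdir.return_value = []

    def tearDown(self):
        self.listdir_patcher.stop()

    def test_listdir_is_mocked(self):
        # os.listdir is replaced for the duration of the test
        self.assertEqual(os.listdir('/no/such/dir'), [])
```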
5330 expiration_policy = {'enabled' : False,
5331                      'mode' : 'age',
5332hunk ./src/allmydata/test/test_backends.py 184
5333         self.failIf(mockopen.called)
5334         self.failIf(mockmkdir.called)
5335 
5336-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5337+class TestServerConstruction(MockFiles, ReallyEqualMixin):
5338     def test_create_server_fs_backend(self):
5339         """ This tests whether a server instance can be constructed with a
5340         filesystem backend. To pass the test, it mustn't use the filesystem
5341hunk ./src/allmydata/test/test_backends.py 190
5342         outside of its configured storedir. """
5343 
5344-        def _f():
5345-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5346+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5347 
5348hunk ./src/allmydata/test/test_backends.py 192
5349-        self._help_test_stay_in_your_subtree(_f)
5350-
5351-
5352-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5353-    """ This tests both the StorageServer xyz """
5354-    @mock.patch('__builtin__.open')
5355-    def setUp(self, mockopen):
5356-        def call_open(fname, mode):
5357-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5358-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5359-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5360-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5361-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5362-                return StringIO()
5363-            else:
5364-                _assert(False, "The tester code doesn't recognize this case.") 
5365-
5366-        mockopen.side_effect = call_open
5367-        self.backend = DASCore(storedir, expiration_policy)
5368-        self.ss = StorageServer(testnodeid, self.backend)
5369-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
5370-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5371+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
5372+    """ This tests both the StorageServer and the DAS backend together. """
5373+    def setUp(self):
5374+        MockFiles.setUp(self)
5375+        try:
5376+            self.backend = DASCore(self.storedir, expiration_policy)
5377+            self.ss = StorageServer(testnodeid, self.backend)
5378+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
5379+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5380+        except:
5381+            MockFiles.tearDown(self)
5382+            raise
5383 
5384     @mock.patch('time.time')
5385     def test_write_and_read_share(self, mocktime):
5386hunk ./src/allmydata/util/fileutil.py 8
5387 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5388 
5389 from twisted.python import log
5390+from twisted.python.filepath import UnlistableError
5391 
5392 from pycryptopp.cipher.aes import AES
5393 
5394hunk ./src/allmydata/util/fileutil.py 187
5395             raise tx
5396         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5397 
5398+def fp_make_dirs(dirfp):
5399+    """
5400+    An idempotent version of FilePath.makedirs().  If the dir already
5401+    exists, do nothing and return without raising an exception.  If this
5402+    call creates the dir, return without raising an exception.  If there is
5403+    an error that prevents creation or if the directory gets deleted after
5404+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5405+    exists, raise an exception.
5406+    """
5408+    tx = None
5409+    try:
5410+        dirfp.makedirs()
5411+    except OSError, x:
5412+        tx = x
5413+
5414+    if not dirfp.isdir():
5415+        if tx:
5416+            raise tx
5417+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5418+
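The contract described in the fp_make_dirs() docstring can be illustrated with plain os.makedirs standing in for FilePath.makedirs() (a simplification; the helper name is made up):

```python
import errno
import os


def make_dirs_idempotent(path):
    """Create path and its parents; succeed silently if it already
    exists, and re-raise the original error if the directory is still
    absent afterwards."""
    caught = None
    try:
        os.makedirs(path)
    except OSError as e:
        caught = e
    if not os.path.isdir(path):
        if caught is not None:
            raise caught
        # careful not to construct an IOError with a 2-tuple, as that
        # has a special meaning
        raise IOError("unknown error prevented creation of directory: %s" % path)
```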
5419 def rmtree(dirname):
5420     """
5421     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5422hunk ./src/allmydata/util/fileutil.py 244
5423             raise OSError, "Failed to remove dir for unknown reason."
5424         raise OSError, excs
5425 
5426+def fp_remove(dirfp):
5427+    try:
5428+        dirfp.remove()
5429+    except UnlistableError, e:
5430+        if e.originalException.errno != errno.ENOENT:
5431+            raise
5432+
5433 def rm_dir(dirname):
5434     # Renamed to be like shutil.rmtree and unlike rmdir.
5435     return rmtree(dirname)
5436}
5437[another temporary patch for sharing work-in-progress
5438zooko@zooko.com**20110720055918
5439 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
5440 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
5441 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
5442 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units as much as possible...)
5443 
5444] {
5445hunk ./src/allmydata/storage/backends/das/core.py 5
5446 
5447 from allmydata.interfaces import IStorageBackend
5448 from allmydata.storage.backends.base import Backend
5449-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5450+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5451 from allmydata.util.assertutil import precondition
5452 
5453 #from foolscap.api import Referenceable
5454hunk ./src/allmydata/storage/backends/das/core.py 10
5455 from twisted.application import service
5456+from twisted.python.filepath import UnlistableError
5457 
5458 from zope.interface import implements
5459 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
5460hunk ./src/allmydata/storage/backends/das/core.py 17
5461 from allmydata.util import fileutil, idlib, log, time_format
5462 import allmydata # for __full_version__
5463 
5464-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5465-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5466+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5467+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5468 from allmydata.storage.lease import LeaseInfo
5469 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5470      create_mutable_sharefile
5471hunk ./src/allmydata/storage/backends/das/core.py 41
5472 # $SHARENUM matches this regex:
5473 NUM_RE=re.compile("^[0-9]+$")
5474 
5475+def is_num(fp):
5476+    return NUM_RE.match(fp.basename())
5477+
5478 class DASCore(Backend):
5479     implements(IStorageBackend)
5480     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5481hunk ./src/allmydata/storage/backends/das/core.py 58
5482         self.storedir = storedir
5483         self.readonly = readonly
5484         self.reserved_space = int(reserved_space)
5485-        if self.reserved_space:
5486-            if self.get_available_space() is None:
5487-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5488-                        umid="0wZ27w", level=log.UNUSUAL)
5489-
5490         self.sharedir = self.storedir.child("shares")
5491         fileutil.fp_make_dirs(self.sharedir)
5492         self.incomingdir = self.sharedir.child('incoming')
5493hunk ./src/allmydata/storage/backends/das/core.py 62
5494         self._clean_incomplete()
5495+        if self.reserved_space and (self.get_available_space() is None):
5496+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5497+                    umid="0wZ27w", level=log.UNUSUAL)
5498+
5499 
5500     def _clean_incomplete(self):
5501         fileutil.fp_remove(self.incomingdir)
5502hunk ./src/allmydata/storage/backends/das/core.py 87
5503         self.lease_checker.setServiceParent(self)
5504 
5505     def get_incoming_shnums(self, storageindex):
5506-        """Return the set of incoming shnums."""
5507+        """ Return a frozenset of the shnums (as ints) of incoming shares. """
5508+        incomingdir = si_dir(self.incomingdir, storageindex)
5509         try:
5510hunk ./src/allmydata/storage/backends/das/core.py 90
5511-           
5512-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5513-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5514-            return frozenset(incomingshnums)
5515+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
5516+            shnums = [ int(fp.basename()) for fp in childfps ]
5517+            return frozenset(shnums)
5518         except UnlistableError:
5519             # There is no shares directory at all.
5520             return frozenset()
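A hedged sketch of the filtering get_incoming_shnums() performs, using os.listdir on a plain path instead of FilePath.children() (the standalone function name is invented):

```python
import os
import re

# $SHARENUM matches this regex, as in the backend code above:
NUM_RE = re.compile("^[0-9]+$")


def incoming_shnums(incomingdir):
    """Return a frozenset of the shnums (as ints) found as numeric
    child names of incomingdir; a missing directory means no shares."""
    try:
        names = os.listdir(incomingdir)
    except OSError:
        # There is no incoming directory at all.
        return frozenset()
    return frozenset(int(name) for name in names if NUM_RE.match(name))
```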
5521hunk ./src/allmydata/storage/backends/das/core.py 98
5522             
5523     def get_shares(self, storageindex):
5524-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5525+        """ Generate ImmutableShare objects for shares we have for this
5526+        storageindex. ("Shares we have" means completed ones, excluding
5527+        incoming ones.)"""
5528         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5529         try:
5530hunk ./src/allmydata/storage/backends/das/core.py 103
5531-            for f in finalstoragedir.listdir():
5532-                if NUM_RE.match(f.basename):
5533-                    yield ImmutableShare(f, storageindex, int(f))
5534+            for fp in finalstoragedir.children():
5535+                if is_num(fp):
5536+                    yield ImmutableShare(fp, storageindex)
5537         except UnlistableError:
5538             # There is no shares directory at all.
5539             pass
5540hunk ./src/allmydata/storage/backends/das/core.py 116
5541         return fileutil.get_available_space(self.storedir, self.reserved_space)
5542 
5543     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
5544-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
5545-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
5546+        finalhome = si_dir(self.sharedir, storageindex).child(str(shnum))
5547+        incominghome = si_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
5548         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
5549         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
5550         return bw
5551hunk ./src/allmydata/storage/backends/das/expirer.py 50
5552     slow_start = 360 # wait 6 minutes after startup
5553     minimum_cycle_time = 12*60*60 # not more than twice per day
5554 
5555-    def __init__(self, statefile, historyfile, expiration_policy):
5556-        self.historyfile = historyfile
5557+    def __init__(self, statefile, historyfp, expiration_policy):
5558+        self.historyfp = historyfp
5559         self.expiration_enabled = expiration_policy['enabled']
5560         self.mode = expiration_policy['mode']
5561         self.override_lease_duration = None
5562hunk ./src/allmydata/storage/backends/das/expirer.py 80
5563             self.state["cycle-to-date"].setdefault(k, so_far[k])
5564 
5565         # initialize history
5566-        if not os.path.exists(self.historyfile):
5567+        if not self.historyfp.exists():
5568             history = {} # cyclenum -> dict
5569hunk ./src/allmydata/storage/backends/das/expirer.py 82
5570-            f = open(self.historyfile, "wb")
5571-            pickle.dump(history, f)
5572-            f.close()
5573+            self.historyfp.setContent(pickle.dumps(history))
5574 
5575     def create_empty_cycle_dict(self):
5576         recovered = self.create_empty_recovered_dict()
5577hunk ./src/allmydata/storage/backends/das/expirer.py 305
5578         # copy() needs to become a deepcopy
5579         h["space-recovered"] = s["space-recovered"].copy()
5580 
5581-        history = pickle.load(open(self.historyfile, "rb"))
5582+        history = pickle.loads(self.historyfp.getContent())
5583         history[cycle] = h
5584         while len(history) > 10:
5585             oldcycles = sorted(history.keys())
5586hunk ./src/allmydata/storage/backends/das/expirer.py 310
5587             del history[oldcycles[0]]
5588-        f = open(self.historyfile, "wb")
5589-        pickle.dump(history, f)
5590-        f.close()
5591+        self.historyfp.setContent(pickle.dumps(history))
5592 
5593     def get_state(self):
5594         """In addition to the crawler state described in
5595hunk ./src/allmydata/storage/backends/das/expirer.py 379
5596         progress = self.get_progress()
5597 
5598         state = ShareCrawler.get_state(self) # does a shallow copy
5599-        history = pickle.load(open(self.historyfile, "rb"))
5600+        history = pickle.loads(self.historyfp.getContent())
5601         state["history"] = history
5602 
5603         if not progress["cycle-in-progress"]:
5604hunk ./src/allmydata/storage/common.py 19
5605 def si_a2b(ascii_storageindex):
5606     return base32.a2b(ascii_storageindex)
5607 
5608-def storage_index_to_dir(startfp, storageindex):
5609+def si_dir(startfp, storageindex):
5610     sia = si_b2a(storageindex)
5611hunk ./src/allmydata/storage/common.py 21
5612-    return os.path.join(sia[:2], sia)
5613+    return startfp.child(sia[:2]).child(sia)
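For illustration, the two-level layout si_dir() produces can be sketched with the stdlib base32 codec standing in for Tahoe's si_b2a() (RFC 3548 base32, lowercased and unpadded — an assumption worth checking against allmydata.util.base32):

```python
import base64


def si_dir_sketch(startdir, storageindex):
    """Map a binary storage index to '<prefix>/<full>' under startdir:
    the first two characters of the base32 encoding name a prefix
    directory that shards the shares tree, and the full encoding names
    the leaf directory."""
    sia = base64.b32encode(storageindex).decode('ascii').lower().rstrip('=')
    return '%s/%s/%s' % (startdir, sia[:2], sia)
```

This reproduces the 'or/orsxg5dtorxxeylhmvpws3temv4a' directory that appears in the tests for the storage index 'teststorage_index'.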
5614hunk ./src/allmydata/storage/crawler.py 68
5615     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
5616     minimum_cycle_time = 300 # don't run a cycle faster than this
5617 
5618-    def __init__(self, statefname, allowed_cpu_percentage=None):
5619+    def __init__(self, statefp, allowed_cpu_percentage=None):
5620         service.MultiService.__init__(self)
5621         if allowed_cpu_percentage is not None:
5622             self.allowed_cpu_percentage = allowed_cpu_percentage
5623hunk ./src/allmydata/storage/crawler.py 72
5624-        self.statefname = statefname
5625+        self.statefp = statefp
5626         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
5627                          for i in range(2**10)]
5628         self.prefixes.sort()
5629hunk ./src/allmydata/storage/crawler.py 192
5630         #                            of the last bucket to be processed, or
5631         #                            None if we are sleeping between cycles
5632         try:
5633-            f = open(self.statefname, "rb")
5634-            state = pickle.load(f)
5635-            f.close()
5636+            state = pickle.loads(self.statefp.getContent())
5637         except EnvironmentError:
5638             state = {"version": 1,
5639                      "last-cycle-finished": None,
5640hunk ./src/allmydata/storage/crawler.py 228
5641         else:
5642             last_complete_prefix = self.prefixes[lcpi]
5643         self.state["last-complete-prefix"] = last_complete_prefix
5644-        tmpfile = self.statefname + ".tmp"
5645-        f = open(tmpfile, "wb")
5646-        pickle.dump(self.state, f)
5647-        f.close()
5648-        fileutil.move_into_place(tmpfile, self.statefname)
5649+        self.statefp.setContent(pickle.dumps(self.state))
5650 
5651     def startService(self):
5652         # arrange things to look like we were just sleeping, so
5653hunk ./src/allmydata/storage/crawler.py 440
5654 
5655     minimum_cycle_time = 60*60 # we don't need this more than once an hour
5656 
5657-    def __init__(self, statefname, num_sample_prefixes=1):
5658-        FSShareCrawler.__init__(self, statefname)
5659+    def __init__(self, statefp, num_sample_prefixes=1):
5660+        FSShareCrawler.__init__(self, statefp)
5661         self.num_sample_prefixes = num_sample_prefixes
5662 
5663     def add_initial_state(self):
5664hunk ./src/allmydata/storage/server.py 11
5665 from allmydata.util import fileutil, idlib, log, time_format
5666 import allmydata # for __full_version__
5667 
5668-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5669-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5670+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5671+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5672 from allmydata.storage.lease import LeaseInfo
5673 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5674      create_mutable_sharefile
5675hunk ./src/allmydata/storage/server.py 173
5676         # to a particular owner.
5677         start = time.time()
5678         self.count("allocate")
5679-        alreadygot = set()
5680         incoming = set()
5681         bucketwriters = {} # k: shnum, v: BucketWriter
5682 
5683hunk ./src/allmydata/storage/server.py 199
5684             remaining_space -= self.allocated_size()
5685         # self.readonly_storage causes remaining_space <= 0
5686 
5687-        # fill alreadygot with all shares that we have, not just the ones
5688+        # Fill alreadygot with all shares that we have, not just the ones
5689         # they asked about: this will save them a lot of work. Add or update
5690         # leases for all of them: if they want us to hold shares for this
5691hunk ./src/allmydata/storage/server.py 202
5692-        # file, they'll want us to hold leases for this file.
5693+        # file, they'll want us to hold leases for all the shares of it.
5694+        alreadygot = set()
5695         for share in self.backend.get_shares(storageindex):
5696hunk ./src/allmydata/storage/server.py 205
5697-            alreadygot.add(share.shnum)
5698             share.add_or_renew_lease(lease_info)
5699hunk ./src/allmydata/storage/server.py 206
5700+            alreadygot.add(share.shnum)
5701 
5702hunk ./src/allmydata/storage/server.py 208
5703-        # fill incoming with all shares that are incoming use a set operation
5704-        # since there's no need to operate on individual pieces
5705+        # all share numbers that are incoming
5706         incoming = self.backend.get_incoming_shnums(storageindex)
5707 
5708         for shnum in ((sharenums - alreadygot) - incoming):
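The allocation decision above is plain set arithmetic; a toy example with hypothetical values:

```python
# Which shares still need a BucketWriter?  Everything the client asked
# for, minus what we already hold, minus what is already being uploaded.
sharenums = set([0, 1, 2, 3])        # requested by the client
alreadygot = set([1])                # completed shares we hold (leases renewed)
incoming = frozenset([2])            # uploads already in flight
to_allocate = (sharenums - alreadygot) - incoming
```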
5709hunk ./src/allmydata/storage/server.py 282
5710             total_space_freed += sf.cancel_lease(cancel_secret)
5711 
5712         if found_buckets:
5713-            storagedir = os.path.join(self.sharedir,
5714-                                      storage_index_to_dir(storageindex))
5715-            if not os.listdir(storagedir):
5716-                os.rmdir(storagedir)
5717+            storagedir = si_dir(self.sharedir, storageindex)
5718+            fp_rmdir_if_empty(storagedir)
5719 
5720         if self.stats_provider:
5721             self.stats_provider.count('storage_server.bytes_freed',
5722hunk ./src/allmydata/test/test_backends.py 52
5723     subtree. I simulate just the parts of the filesystem that the current
5724     implementation of DAS backend needs. """
5725     def call_open(self, fname, mode):
5726+        assert isinstance(fname, basestring), fname
5727         fnamefp = FilePath(fname)
5728         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5729                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5730hunk ./src/allmydata/test/test_backends.py 104
5731                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5732 
5733     def call_stat(self, fname):
5734+        assert isinstance(fname, basestring), fname
5735         fnamefp = FilePath(fname)
5736         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5737                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5738hunk ./src/allmydata/test/test_backends.py 217
5739 
5740         mocktime.return_value = 0
5741         # Inspect incoming and fail unless it's empty.
5742-        incomingset = self.ss.backend.get_incoming('teststorage_index')
5743-        self.failUnlessReallyEqual(incomingset, set())
5744+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
5745+        self.failUnlessReallyEqual(incomingset, frozenset())
5746         
5747         # Populate incoming with the sharenum: 0.
5748hunk ./src/allmydata/test/test_backends.py 221
5749-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5750+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5751 
5752         # Inspect incoming and fail unless the sharenum: 0 is listed there.
5753hunk ./src/allmydata/test/test_backends.py 224
5754-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
5755+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
5756         
5757         # Attempt to create a second share writer with the same sharenum.
5758hunk ./src/allmydata/test/test_backends.py 227
5759-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5760+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5761 
5762         # Show that no sharewriter results from a remote_allocate_buckets
5763         # with the same si and sharenum, until BucketWriter.remote_close()
5764hunk ./src/allmydata/test/test_backends.py 280
5765         StorageServer object. """
5766 
5767         def call_listdir(dirname):
5768+            precondition(isinstance(dirname, basestring), dirname)
5769             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
5770             return ['0']
5771 
5772hunk ./src/allmydata/test/test_backends.py 287
5773         mocklistdir.side_effect = call_listdir
5774 
5775         def call_open(fname, mode):
5776+            precondition(isinstance(fname, basestring), fname)
5777             self.failUnlessReallyEqual(fname, sharefname)
5778             self.failUnlessEqual(mode[0], 'r', mode)
5779             self.failUnless('b' in mode, mode)
5780hunk ./src/allmydata/test/test_backends.py 297
5781 
5782         datalen = len(share_data)
5783         def call_getsize(fname):
5784+            precondition(isinstance(fname, basestring), fname)
5785             self.failUnlessReallyEqual(fname, sharefname)
5786             return datalen
5787         mockgetsize.side_effect = call_getsize
5788hunk ./src/allmydata/test/test_backends.py 303
5789 
5790         def call_exists(fname):
5791+            precondition(isinstance(fname, basestring), fname)
5792             self.failUnlessReallyEqual(fname, sharefname)
5793             return True
5794         mockexists.side_effect = call_exists
5795hunk ./src/allmydata/test/test_backends.py 321
5796         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
5797 
5798 
5799-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
5800-    @mock.patch('time.time')
5801-    @mock.patch('os.mkdir')
5802-    @mock.patch('__builtin__.open')
5803-    @mock.patch('os.listdir')
5804-    @mock.patch('os.path.isdir')
5805-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5806+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
5807+    def test_create_fs_backend(self):
5808         """ This tests whether a file system backend instance can be
5809         constructed. To pass the test, it has to use the
5810         filesystem in only the prescribed ways. """
5811hunk ./src/allmydata/test/test_backends.py 327
5812 
5813-        def call_open(fname, mode):
5814-            if fname == os.path.join(storedir,'bucket_counter.state'):
5815-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5816-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5817-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5818-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5819-                return StringIO()
5820-            else:
5821-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5822-        mockopen.side_effect = call_open
5823-
5824-        def call_isdir(fname):
5825-            if fname == os.path.join(storedir,'shares'):
5826-                return True
5827-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5828-                return True
5829-            else:
5830-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5831-        mockisdir.side_effect = call_isdir
5832-
5833-        def call_mkdir(fname, mode):
5834-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
5835-            self.failUnlessEqual(0777, mode)
5836-            if fname == storedir:
5837-                return None
5838-            elif fname == os.path.join(storedir,'shares'):
5839-                return None
5840-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5841-                return None
5842-            else:
5843-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
5844-        mockmkdir.side_effect = call_mkdir
5845-
5846         # Now begin the test.
5847hunk ./src/allmydata/test/test_backends.py 328
5848-        DASCore('teststoredir', expiration_policy)
5849-
5850-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5851-
5852+        DASCore(self.storedir, expiration_policy)
5853hunk ./src/allmydata/util/fileutil.py 7
5854 
5855 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5856 
5857+from allmydata.util.assertutil import precondition
5858+
5859 from twisted.python import log
5860hunk ./src/allmydata/util/fileutil.py 10
5861-from twisted.python.filepath import UnlistableError
5862+from twisted.python.filepath import FilePath, UnlistableError
5863 
5864 from pycryptopp.cipher.aes import AES
5865 
5866hunk ./src/allmydata/util/fileutil.py 210
5867             raise tx
5868         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5869 
5870+def fp_rmdir_if_empty(dirfp):
5871+    """ Remove the directory if it is empty. """
5872+    try:
5873+        os.rmdir(dirfp.path)
5874+    except OSError, e:
5875+        if e.errno != errno.ENOTEMPTY:
5876+            raise
5877+    else:
5878+        dirfp.changed()
5879+
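The same "ignore the one expected failure" shape as fp_rmdir_if_empty() above, sketched on a plain path (helper name invented; note that POSIX permits EEXIST as well as ENOTEMPTY for a non-empty directory):

```python
import errno
import os


def rmdir_if_empty(path):
    """Remove path only if it is an empty directory; a non-empty
    directory is silently left alone, any other failure propagates."""
    try:
        os.rmdir(path)
    except OSError as e:
        # POSIX allows either errno for a non-empty directory
        if e.errno not in (errno.ENOTEMPTY, errno.EEXIST):
            raise
```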
5880 def rmtree(dirname):
5881     """
5882     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5883hunk ./src/allmydata/util/fileutil.py 257
5884         raise OSError, excs
5885 
5886 def fp_remove(dirfp):
5887+    """
5888+    An idempotent version of shutil.rmtree().  If the dir is already gone,
5889+    do nothing and return without raising an exception.  If this call
5890+    removes the dir, return without raising an exception.  If there is an
5891+    error that prevents removal or if the directory gets created again by
5892+    someone else after this deletes it and before this checks that it is
5893+    gone, raise an exception.
5894+    """
5895     try:
5896         dirfp.remove()
5897     except UnlistableError, e:
5898hunk ./src/allmydata/util/fileutil.py 270
5899         if e.originalException.errno != errno.ENOENT:
5900             raise
5901+    except OSError, e:
5902+        if e.errno != errno.ENOENT:
5903+            raise
5904 
5905 def rm_dir(dirname):
5906     # Renamed to be like shutil.rmtree and unlike rmdir.
5907hunk ./src/allmydata/util/fileutil.py 387
5908         import traceback
5909         traceback.print_exc()
5910 
5911-def get_disk_stats(whichdir, reserved_space=0):
5912+def get_disk_stats(whichdirfp, reserved_space=0):
5913     """Return disk statistics for the storage disk, in the form of a dict
5914     with the following fields.
5915       total:            total bytes on disk
5916hunk ./src/allmydata/util/fileutil.py 408
5917     you can pass how many bytes you would like to leave unused on this
5918     filesystem as reserved_space.
5919     """
5920+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5921 
5922     if have_GetDiskFreeSpaceExW:
5923         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5924hunk ./src/allmydata/util/fileutil.py 419
5925         n_free_for_nonroot = c_ulonglong(0)
5926         n_total            = c_ulonglong(0)
5927         n_free_for_root    = c_ulonglong(0)
5928-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5929+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5930                                                byref(n_total),
5931                                                byref(n_free_for_root))
5932         if retval == 0:
5933hunk ./src/allmydata/util/fileutil.py 424
5934             raise OSError("Windows error %d attempting to get disk statistics for %r"
5935-                          % (GetLastError(), whichdir))
5936+                          % (GetLastError(), whichdirfp.path))
5937         free_for_nonroot = n_free_for_nonroot.value
5938         total            = n_total.value
5939         free_for_root    = n_free_for_root.value
5940hunk ./src/allmydata/util/fileutil.py 433
5941         # <http://docs.python.org/library/os.html#os.statvfs>
5942         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5943         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5944-        s = os.statvfs(whichdir)
5945+        s = os.statvfs(whichdirfp.path)
5946 
5947         # on my mac laptop:
5948         #  statvfs(2) is a wrapper around statfs(2).
5949hunk ./src/allmydata/util/fileutil.py 460
5950              'avail': avail,
5951            }
5952 
5953-def get_available_space(whichdir, reserved_space):
5954+def get_available_space(whichdirfp, reserved_space):
5955     """Returns available space for share storage in bytes, or None if no
5956     API to get this information is available.
5957 
5958hunk ./src/allmydata/util/fileutil.py 472
5959     you can pass how many bytes you would like to leave unused on this
5960     filesystem as reserved_space.
5961     """
5962+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5963     try:
5964hunk ./src/allmydata/util/fileutil.py 474
5965-        return get_disk_stats(whichdir, reserved_space)['avail']
5966+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5967     except AttributeError:
5968         return None
5969hunk ./src/allmydata/util/fileutil.py 477
5970-    except EnvironmentError:
5971-        log.msg("OS call to get disk statistics failed")
5972-        return 0
5973}
5974[jacp16 or so
5975wilcoxjg@gmail.com**20110722070036
5976 Ignore-this: 7548785cad146056eede9a16b93b569f
5977] {
5978hunk ./src/allmydata/_auto_deps.py 19
5979 
5980     "zope.interface",
5981 
5982-    "Twisted >= 2.4.0",
5983+    "Twisted >= 11.0",
5984 
5985     # foolscap < 0.5.1 had a performance bug which spent
5986     # O(N**2) CPU for transferring large mutable files
5987hunk ./src/allmydata/storage/backends/das/core.py 2
5988 import os, re, weakref, struct, time, stat
5989+from twisted.application import service
5990+from twisted.python.filepath import UnlistableError
5991+from twisted.python.filepath import FilePath
5992+from zope.interface import implements
5993 
5994hunk ./src/allmydata/storage/backends/das/core.py 7
5995+import allmydata # for __full_version__
5996 from allmydata.interfaces import IStorageBackend
5997 from allmydata.storage.backends.base import Backend
5998hunk ./src/allmydata/storage/backends/das/core.py 10
5999-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6000+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6001 from allmydata.util.assertutil import precondition
6002hunk ./src/allmydata/storage/backends/das/core.py 12
6003-
6004-#from foolscap.api import Referenceable
6005-from twisted.application import service
6006-from twisted.python.filepath import UnlistableError
6007-
6008-from zope.interface import implements
6009 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6010 from allmydata.util import fileutil, idlib, log, time_format
6011hunk ./src/allmydata/storage/backends/das/core.py 14
6012-import allmydata # for __full_version__
6013-
6014-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6015-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6016 from allmydata.storage.lease import LeaseInfo
6017 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6018      create_mutable_sharefile
6019hunk ./src/allmydata/storage/backends/das/core.py 21
6020 from allmydata.storage.crawler import FSBucketCountingCrawler
6021 from allmydata.util.hashutil import constant_time_compare
6022 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6023-
6024-from zope.interface import implements
6025+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6026 
6027 # storage/
6028 # storage/shares/incoming
6029hunk ./src/allmydata/storage/backends/das/core.py 49
6030         self._setup_lease_checkerf(expiration_policy)
6031 
6032     def _setup_storage(self, storedir, readonly, reserved_space):
6033+        precondition(isinstance(storedir, FilePath)) 
6034         self.storedir = storedir
6035         self.readonly = readonly
6036         self.reserved_space = int(reserved_space)
6037hunk ./src/allmydata/storage/backends/das/core.py 83
6038 
6039     def get_incoming_shnums(self, storageindex):
6040         """ Return a frozenset of the shnum (as ints) of incoming shares. """
6041-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
6042+        incomingdir = si_si2dir(self.incomingdir, storageindex)
6043         try:
6044             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
6045             shnums = [ int(fp.basename) for fp in childfps ]
6046hunk ./src/allmydata/storage/backends/das/core.py 96
6047         """ Generate ImmutableShare objects for shares we have for this
6048         storageindex. ("Shares we have" means completed ones, excluding
6049         incoming ones.)"""
6050-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
6051+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
6052         try:
6053             for fp in finalstoragedir.children():
6054                 if is_num(fp):
6055hunk ./src/allmydata/storage/backends/das/core.py 111
6056         return fileutil.get_available_space(self.storedir, self.reserved_space)
6057 
6058     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6059-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
6060-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6061+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
6062+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6063         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
6064         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
6065         return bw
6066hunk ./src/allmydata/storage/backends/null/core.py 18
6067         return None
6068 
6069     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6070-       
6071-        immutableshare = ImmutableShare()
6072+        immutableshare = ImmutableShare()
6073         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
6074 
6075     def set_storage_server(self, ss):
6076hunk ./src/allmydata/storage/backends/null/core.py 24
6077         self.ss = ss
6078 
6079-    def get_incoming(self, storageindex):
6080-        return set()
6081+    def get_incoming_shnums(self, storageindex):
6082+        return frozenset()
6083 
6084 class ImmutableShare:
6085     sharetype = "immutable"
6086hunk ./src/allmydata/storage/common.py 19
6087 def si_a2b(ascii_storageindex):
6088     return base32.a2b(ascii_storageindex)
6089 
6090-def si_dir(startfp, storageindex):
6091+def si_si2dir(startfp, storageindex):
6092     sia = si_b2a(storageindex)
6093     return startfp.child(sia[:2]).child(sia)
6094hunk ./src/allmydata/storage/immutable.py 20
6095     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
6096         self.ss = ss
6097         self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
6098-
6099         self._canary = canary
6100         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
6101         self.closed = False
6102hunk ./src/allmydata/storage/lease.py 17
6103 
6104     def get_expiration_time(self):
6105         return self.expiration_time
6106+
6107     def get_grant_renew_time_time(self):
6108         # hack, based upon fixed 31day expiration period
6109         return self.expiration_time - 31*24*60*60
6110hunk ./src/allmydata/storage/lease.py 21
6111+
6112     def get_age(self):
6113         return time.time() - self.get_grant_renew_time_time()
6114 
6115hunk ./src/allmydata/storage/lease.py 32
6116          self.expiration_time) = struct.unpack(">L32s32sL", data)
6117         self.nodeid = None
6118         return self
6119+
6120     def to_immutable_data(self):
6121         return struct.pack(">L32s32sL",
6122                            self.owner_num,
6123hunk ./src/allmydata/storage/lease.py 45
6124                            int(self.expiration_time),
6125                            self.renew_secret, self.cancel_secret,
6126                            self.nodeid)
6127+
6128     def from_mutable_data(self, data):
6129         (self.owner_num,
6130          self.expiration_time,
6131hunk ./src/allmydata/storage/server.py 11
6132 from allmydata.util import fileutil, idlib, log, time_format
6133 import allmydata # for __full_version__
6134 
6135-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6136-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6137+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6138+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6139 from allmydata.storage.lease import LeaseInfo
6140 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6141      create_mutable_sharefile
6142hunk ./src/allmydata/storage/server.py 88
6143             else:
6144                 stats["mean"] = None
6145 
6146-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
6147-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
6148-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
6149+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
6150+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
6151+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
6152                              (0.999, "99_9_percentile", 1000)]
6153 
6154             for percentile, percentilestring, minnumtoobserve in orderstatlist:
6155hunk ./src/allmydata/storage/server.py 231
6156             header = f.read(32)
6157             f.close()
6158             if header[:32] == MutableShareFile.MAGIC:
6159+                # XXX  Can I exploit this code?
6160                 sf = MutableShareFile(filename, self)
6161                 # note: if the share has been migrated, the renew_lease()
6162                 # call will throw an exception, with information to help the
6163hunk ./src/allmydata/storage/server.py 237
6164                 # client update the lease.
6165             elif header[:4] == struct.pack(">L", 1):
6166+                # Check if version number is "1".
6167+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
6168                 sf = ShareFile(filename)
6169             else:
6170                 continue # non-sharefile
6171hunk ./src/allmydata/storage/server.py 285
6172             total_space_freed += sf.cancel_lease(cancel_secret)
6173 
6174         if found_buckets:
6175-            storagedir = si_dir(self.sharedir, storageindex)
6176+            # XXX  Yikes looks like code that shouldn't be in the server!
6177+            storagedir = si_si2dir(self.sharedir, storageindex)
6178             fp_rmdir_if_empty(storagedir)
6179 
6180         if self.stats_provider:
6181hunk ./src/allmydata/storage/server.py 301
6182             self.stats_provider.count('storage_server.bytes_added', consumed_size)
6183         del self._active_writers[bw]
6184 
6185-
6186     def remote_get_buckets(self, storageindex):
6187         start = time.time()
6188         self.count("get")
6189hunk ./src/allmydata/storage/server.py 329
6190         except StopIteration:
6191             return iter([])
6192 
6193+    #  XXX  As far as Zancas' grockery has gotten.
6194     def remote_slot_testv_and_readv_and_writev(self, storageindex,
6195                                                secrets,
6196                                                test_and_write_vectors,
6197hunk ./src/allmydata/storage/server.py 338
6198         self.count("writev")
6199         si_s = si_b2a(storageindex)
6200         log.msg("storage: slot_writev %s" % si_s)
6201-        si_dir = storage_index_to_dir(storageindex)
6202+       
6203         (write_enabler, renew_secret, cancel_secret) = secrets
6204         # shares exist if there is a file for them
6205hunk ./src/allmydata/storage/server.py 341
6206-        bucketdir = os.path.join(self.sharedir, si_dir)
6207+        bucketdir = si_si2dir(self.sharedir, storageindex)
6208         shares = {}
6209         if os.path.isdir(bucketdir):
6210             for sharenum_s in os.listdir(bucketdir):
6211hunk ./src/allmydata/storage/server.py 430
6212         si_s = si_b2a(storageindex)
6213         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
6214                      facility="tahoe.storage", level=log.OPERATIONAL)
6215-        si_dir = storage_index_to_dir(storageindex)
6216         # shares exist if there is a file for them
6217hunk ./src/allmydata/storage/server.py 431
6218-        bucketdir = os.path.join(self.sharedir, si_dir)
6219+        bucketdir = si_si2dir(self.sharedir, storageindex)
6220         if not os.path.isdir(bucketdir):
6221             self.add_latency("readv", time.time() - start)
6222             return {}
6223hunk ./src/allmydata/test/test_backends.py 2
6224 from twisted.trial import unittest
6225-
6226 from twisted.python.filepath import FilePath
6227hunk ./src/allmydata/test/test_backends.py 3
6228-
6229 from allmydata.util.log import msg
6230hunk ./src/allmydata/test/test_backends.py 4
6231-
6232 from StringIO import StringIO
6233hunk ./src/allmydata/test/test_backends.py 5
6234-
6235 from allmydata.test.common_util import ReallyEqualMixin
6236 from allmydata.util.assertutil import _assert
6237hunk ./src/allmydata/test/test_backends.py 7
6238-
6239 import mock
6240 
6241 # This is the code that we're going to be testing.
6242hunk ./src/allmydata/test/test_backends.py 11
6243 from allmydata.storage.server import StorageServer
6244-
6245 from allmydata.storage.backends.das.core import DASCore
6246 from allmydata.storage.backends.null.core import NullCore
6247 
6248hunk ./src/allmydata/test/test_backends.py 14
6249-
6250-# The following share file contents was generated with
6251+# The following share file content was generated with
6252 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
6253hunk ./src/allmydata/test/test_backends.py 16
6254-# with share data == 'a'.
6255+# with share data == 'a'. The total size of this input
6256+# is 85 bytes.
6257 shareversionnumber = '\x00\x00\x00\x01'
6258 sharedatalength = '\x00\x00\x00\x01'
6259 numberofleases = '\x00\x00\x00\x01'
6260hunk ./src/allmydata/test/test_backends.py 21
6261-
6262 shareinputdata = 'a'
6263 ownernumber = '\x00\x00\x00\x00'
6264 renewsecret  = 'x'*32
6265hunk ./src/allmydata/test/test_backends.py 31
6266 client_data = shareinputdata + ownernumber + renewsecret + \
6267     cancelsecret + expirationtime + nextlease
6268 share_data = containerdata + client_data
6269-
6270-
6271 testnodeid = 'testnodeidxxxxxxxxxx'
6272 
6273 class MockStat:
6274hunk ./src/allmydata/test/test_backends.py 105
6275         mstat.st_mode = 16893 # a directory
6276         return mstat
6277 
6278+    def call_get_available_space(self, storedir, reservedspace):
6279+        # The input vector has an input size of 85.
6280+        return 85 - reservedspace
6281+
6282+    def call_exists(self):
6283+        # I'm only called in the ImmutableShareFile constructor.
6284+        return False
6285+
6286     def setUp(self):
6287         msg( "%s.setUp()" % (self,))
6288         self.storedir = FilePath('teststoredir')
6289hunk ./src/allmydata/test/test_backends.py 147
6290         mockfpstat = self.mockfpstatp.__enter__()
6291         mockfpstat.side_effect = self.call_stat
6292 
6293+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6294+        mockget_available_space = self.mockget_available_space.__enter__()
6295+        mockget_available_space.side_effect = self.call_get_available_space
6296+
6297+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6298+        mockfpexists = self.mockfpexists.__enter__()
6299+        mockfpexists.side_effect = self.call_exists
6300+
6301     def tearDown(self):
6302         msg( "%s.tearDown()" % (self,))
6303hunk ./src/allmydata/test/test_backends.py 157
6304+        self.mockfpexists.__exit__()
6305+        self.mockget_available_space.__exit__()
6306         self.mockfpstatp.__exit__()
6307         self.mockstatp.__exit__()
6308         self.mockopenp.__exit__()
6309hunk ./src/allmydata/test/test_backends.py 166
6310         self.mockmkdirp.__exit__()
6311         self.mocklistdirp.__exit__()
6312 
6313+
6314 expiration_policy = {'enabled' : False,
6315                      'mode' : 'age',
6316                      'override_lease_duration' : None,
6317hunk ./src/allmydata/test/test_backends.py 182
6318         self.ss = StorageServer(testnodeid, backend=NullCore())
6319 
6320     @mock.patch('os.mkdir')
6321-
6322     @mock.patch('__builtin__.open')
6323     @mock.patch('os.listdir')
6324     @mock.patch('os.path.isdir')
6325hunk ./src/allmydata/test/test_backends.py 201
6326         filesystem backend. To pass the test, it mustn't use the filesystem
6327         outside of its configured storedir. """
6328 
6329-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
6330+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
6331 
6332 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
6333     """ This tests both the StorageServer and the DAS backend together. """
6334hunk ./src/allmydata/test/test_backends.py 205
6335+   
6336     def setUp(self):
6337         MockFiles.setUp(self)
6338         try:
6339hunk ./src/allmydata/test/test_backends.py 211
6340             self.backend = DASCore(self.storedir, expiration_policy)
6341             self.ss = StorageServer(testnodeid, self.backend)
6342-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6343-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
6344+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6345+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
6346         except:
6347             MockFiles.tearDown(self)
6348             raise
6349hunk ./src/allmydata/test/test_backends.py 233
6350         # Populate incoming with the sharenum: 0.
6351         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6352 
6353-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
6354-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6355+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
6356+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6357         
6358         # Attempt to create a second share writer with the same sharenum.
6359         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6360hunk ./src/allmydata/test/test_backends.py 257
6361 
6362         # Postclose: (Omnibus) failUnless written data is in final.
6363         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
6364-        contents = sharesinfinal[0].read_share_data(0,73)
6365+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
6366+        contents = sharesinfinal[0].read_share_data(0, 73)
6367         self.failUnlessReallyEqual(contents, client_data)
6368 
6369         # Exercise the case that the share we're asking to allocate is
6370hunk ./src/allmydata/test/test_backends.py 276
6371         mockget_available_space.side_effect = call_get_available_space
6372         
6373         
6374-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6375+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6376 
6377     @mock.patch('os.path.exists')
6378     @mock.patch('os.path.getsize')
6379}
6380
6381Context:
6382
6383[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
6384david-sarah@jacaranda.org**20110718005949
6385 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
6386]
6387[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
6388david-sarah@jacaranda.org**20110717194315
6389 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
6390]
6391[README.txt: say that quickstart.rst is in the docs directory.
6392david-sarah@jacaranda.org**20110717192400
6393 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
6394]
6395[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
6396zooko@zooko.com**20110717114226
6397 Ignore-this: df222120d41447ce4102616921626c82
6398 fixes #1383
6399]
6400[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
6401david-sarah@jacaranda.org**20110716181813
6402 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
6403]
6404[docs: add missing link in NEWS.rst
6405zooko@zooko.com**20110712153307
6406 Ignore-this: be7b7eb81c03700b739daa1027d72b35
6407]
6408[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
6409zooko@zooko.com**20110712153229
6410 Ignore-this: 723c4f9e2211027c79d711715d972c5
6411 Also remove a couple of vestigial references to figleaf, which is long gone.
6412 fixes #1409 (remove contrib/fuse)
6413]
6414[add Protovis.js-based download-status timeline visualization
6415Brian Warner <warner@lothar.com>**20110629222606
6416 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
6417 
6418 provide status overlap info on the webapi t=json output, add decode/decrypt
6419 rate tooltips, add zoomin/zoomout buttons
6420]
6421[add more download-status data, fix tests
6422Brian Warner <warner@lothar.com>**20110629222555
6423 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
6424]
6425[prepare for viz: improve DownloadStatus events
6426Brian Warner <warner@lothar.com>**20110629222542
6427 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
6428 
6429 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
6430]
6431[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
6432zooko@zooko.com**20110629185711
6433 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
6434]
6435[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
6436david-sarah@jacaranda.org**20110130235809
6437 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
6438]
6439[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
6440david-sarah@jacaranda.org**20110626054124
6441 Ignore-this: abb864427a1b91bd10d5132b4589fd90
6442]
6443[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
6444david-sarah@jacaranda.org**20110623205528
6445 Ignore-this: c63e23146c39195de52fb17c7c49b2da
6446]
6447[Rename test_package_initialization.py to (much shorter) test_import.py .
6448Brian Warner <warner@lothar.com>**20110611190234
6449 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
6450 
6451 The former name was making my 'ls' listings hard to read, by forcing them
6452 down to just two columns.
6453]
6454[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
6455zooko@zooko.com**20110611163741
6456 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
6457 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
6458 fixes #1412
6459]
6460[wui: right-align the size column in the WUI
6461zooko@zooko.com**20110611153758
6462 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
6463 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
6464 fixes #1412
6465]
6466[docs: three minor fixes
6467zooko@zooko.com**20110610121656
6468 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
6469 CREDITS for arc for stats tweak
6470 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
6471 English usage tweak
6472]
6473[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
6474david-sarah@jacaranda.org**20110609223719
6475 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
6476]
6477[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
6478wilcoxjg@gmail.com**20110527120135
6479 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
6480 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
6481 NEWS.rst, stats.py: documentation of change to get_latencies
6482 stats.rst: now documents percentile modification in get_latencies
6483 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
6484 fixes #1392
6485]
6486[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
6487david-sarah@jacaranda.org**20110517011214
6488 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
6489]
6490[docs: convert NEWS to NEWS.rst and change all references to it.
6491david-sarah@jacaranda.org**20110517010255
6492 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
6493]
6494[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
6495david-sarah@jacaranda.org**20110512140559
6496 Ignore-this: 784548fc5367fac5450df1c46890876d
6497]
6498[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
6499david-sarah@jacaranda.org**20110130164923
6500 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
6501]
6502[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
6503zooko@zooko.com**20110128142006
6504 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
6505 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
6506]
6507[M-x whitespace-cleanup
6508zooko@zooko.com**20110510193653
6509 Ignore-this: dea02f831298c0f65ad096960e7df5c7
6510]
6511[docs: fix typo in running.rst, thanks to arch_o_median
6512zooko@zooko.com**20110510193633
6513 Ignore-this: ca06de166a46abbc61140513918e79e8
6514]
6515[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
6516david-sarah@jacaranda.org**20110204204902
6517 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
6518]
6519[relnotes.txt: forseeable -> foreseeable. refs #1342
6520david-sarah@jacaranda.org**20110204204116
6521 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
6522]
6523[replace remaining .html docs with .rst docs
6524zooko@zooko.com**20110510191650
6525 Ignore-this: d557d960a986d4ac8216d1677d236399
6526 Remove install.html (long since deprecated).
6527 Also replace some obsolete references to install.html with references to quickstart.rst.
6528 Fix some broken internal references within docs/historical/historical_known_issues.txt.
6529 Thanks to Ravi Pinjala and Patrick McDonald.
6530 refs #1227
6531]
6532[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
6533zooko@zooko.com**20110428055232
6534 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
6535]
6536[munin tahoe_files plugin: fix incorrect file count
6537francois@ctrlaltdel.ch**20110428055312
6538 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
6539 fixes #1391
6540]
6541[corrected "k must never be smaller than N" to "k must never be greater than N"
6542secorp@allmydata.org**20110425010308
6543 Ignore-this: 233129505d6c70860087f22541805eac
6544]
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
]
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
]
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
]
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
]
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
]
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
]
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
]
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
]
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
]
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might be spreading out the total number of shares requested at
 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
 segment, so the effect doesn't last very long).
]
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
]
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
]
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
]
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
]
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
]
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
]
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
]
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
]
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
]
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
]
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
]
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
]
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
]
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
]
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
]
[test_client.py, upload.py: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
]
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
]
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
]
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
]
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
]
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
]
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
]
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101]
Patch bundle hash:
e2a14fe8b6971f0f873f025bfaabc1f06047552d