Ticket #999: work-in-progress-on-tests-from-pair-programming-with-Zancas.darcs.patch

File work-in-progress-on-tests-from-pair-programming-with-Zancas.darcs.patch, 222.1 KB (added by zooko, at 2011-07-14T00:31:09Z)
24 patches for repository /home/zooko/playground/tahoe-lafs/pristine:

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy, not for production

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend. It is a mock-like object.

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests, work done in pair-programming with Zancas
New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents were generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a share can be written to a (mocked) filesystem. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
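The tests in the patch above all hinge on one technique: patching the built-in `open()` with a `side_effect` function so the code under test never touches the real filesystem. Here is a minimal, modernized sketch of that technique (Python 3 `unittest.mock` rather than the Python 2 `mock` package used in the patch; `read_state` and the canned filenames are illustrative stand-ins, not the patch's real code):

```python
import io
from unittest import mock

def read_state(path):
    # Hypothetical code-under-test: read a state file, tolerating absence,
    # like StorageServer does for bucket_counter.state / lease_checker.*.
    try:
        with open(path) as f:
            return f.read()
    except IOError:
        return None

def call_open(fname, mode='r', *args, **kwargs):
    # Route each expected filename to a canned response, as the patch's
    # call_open() helpers do; anything unexpected fails the test.
    if fname == 'testdir/bucket_counter.state':
        raise IOError(2, "No such file or directory")
    elif fname == 'testdir/lease_checker.history':
        return io.StringIO('history-contents')
    raise AssertionError('unexpected open(%r)' % (fname,))

# Patch builtins.open for the duration of the block only.
with mock.patch('builtins.open', side_effect=call_open):
    missing = read_state('testdir/bucket_counter.state')
    history = read_state('testdir/lease_checker.history')

print(missing, history)  # -> None history-contents
```

The `side_effect` callable both simulates missing files (by raising `IOError`) and serves fake file contents (via `StringIO`), which is exactly how the patch drives `StorageServer` through its startup and share-reading paths without a disk.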
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy, not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
    def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
 
 # The following share file contents were generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a null backend. The server instance fails the test if it
+        tries to read or write to the file system. """
+
+        # Now begin the test.
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+        # You passed!
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 44
-    def test_create_server(self, mockopen):
-        """ This tests whether a server instance can be constructed. """
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a filesystem backend. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
746             if fname == 'testdir/bucket_counter.state':
747hunk ./src/allmydata/test/test_backends.py 58
748                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
749             elif fname == 'testdir/lease_checker.history':
750                 return StringIO()
751+            else:
752+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
753         mockopen.side_effect = call_open
754 
755         # Now begin the test.
756hunk ./src/allmydata/test/test_backends.py 63
757-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
758+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
759+
760+        self.failIf(mockisdir.called)
761+        self.failIf(mocklistdir.called)
762+        self.failIf(mockopen.called)
763+        self.failIf(mockmkdir.called)
764+        self.failIf(mocktime.called)
765 
766         # You passed!
767 
768hunk ./src/allmydata/test/test_backends.py 73
769-class TestServer(unittest.TestCase, ReallyEqualMixin):
770+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
771+    def setUp(self):
772+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
773+
774+    @mock.patch('os.mkdir')
775+    @mock.patch('__builtin__.open')
776+    @mock.patch('os.listdir')
777+    @mock.patch('os.path.isdir')
778+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
779+        """ Write a new share. """
780+
781+        # Now begin the test.
782+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
783+        bs[0].remote_write(0, 'a')
784+        self.failIf(mockisdir.called)
785+        self.failIf(mocklistdir.called)
786+        self.failIf(mockopen.called)
787+        self.failIf(mockmkdir.called)
788+
789+    @mock.patch('os.path.exists')
790+    @mock.patch('os.path.getsize')
791+    @mock.patch('__builtin__.open')
792+    @mock.patch('os.listdir')
793+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
794+        """ With a null backend there are no stored shares, so
795+        remote_get_buckets should return no buckets and must not
796+        touch the filesystem. (The similar test in test_download is
797+        from the perspective of the client and exercises a deeper
798+        stack of code; this one exercises just the StorageServer
799+        object.) """
800+
801+        # Now begin the test.
802+        bs = self.s.remote_get_buckets('teststorage_index')
803+
804+        self.failUnlessEqual(len(bs), 0)
805+        self.failIf(mocklistdir.called)
806+        self.failIf(mockopen.called)
807+        self.failIf(mockgetsize.called)
808+        self.failIf(mockexists.called)
809+
810+
811+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
812     @mock.patch('__builtin__.open')
813     def setUp(self, mockopen):
814         def call_open(fname, mode):
815hunk ./src/allmydata/test/test_backends.py 126
816                 return StringIO()
817         mockopen.side_effect = call_open
818 
819-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
820-
821+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
822 
823     @mock.patch('time.time')
824     @mock.patch('os.mkdir')
825hunk ./src/allmydata/test/test_backends.py 134
826     @mock.patch('os.listdir')
827     @mock.patch('os.path.isdir')
828     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
829-        """Handle a report of corruption."""
830+        """ Write a new share. """
831 
832         def call_listdir(dirname):
833             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
834hunk ./src/allmydata/test/test_backends.py 173
835         mockopen.side_effect = call_open
836         # Now begin the test.
837         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
838-        print bs
839         bs[0].remote_write(0, 'a')
840         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
841 
842hunk ./src/allmydata/test/test_backends.py 176
843-
844     @mock.patch('os.path.exists')
845     @mock.patch('os.path.getsize')
846     @mock.patch('__builtin__.open')
847hunk ./src/allmydata/test/test_backends.py 218
848 
849         self.failUnlessEqual(len(bs), 1)
850         b = bs[0]
851+        # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
852         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
853         # If you try to read past the end you get as much data as is there.
854         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
855hunk ./src/allmydata/test/test_backends.py 224
856         # If you start reading past the end of the file you get the empty string.
857         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
858+
859+
860}
861[snapshot of progress on backend implementation (not suitable for trunk)
862wilcoxjg@gmail.com**20110626053244
863 Ignore-this: 50c764af791c2b99ada8289546806a0a
864] {
865adddir ./src/allmydata/storage/backends
866adddir ./src/allmydata/storage/backends/das
867move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
868adddir ./src/allmydata/storage/backends/null
869hunk ./src/allmydata/interfaces.py 270
870         store that on disk.
871         """
872 
873+class IStorageBackend(Interface):
874+    """
875+    Objects of this kind live on the server side and are used by the
876+    storage server object.
877+    """
878+    def get_available_space(self, reserved_space):
879+        """ Returns available space for share storage in bytes, or
880+        None if this information is not available or if the available
881+        space is unlimited.
882+
883+        If the backend is configured for read-only mode then this will
884+        return 0.
885+
886+        reserved_space is the number of bytes to subtract from the answer:
887+        pass the number of bytes you would like to leave unused on this
888+        filesystem. """
889+
890+    def get_bucket_shares(self):
891+        """XXX"""
892+
893+    def get_share(self):
894+        """XXX"""
895+
896+    def make_bucket_writer(self):
897+        """XXX"""
898+
899+class IStorageBackendShare(Interface):
900+    """
901+    This object provides access to up to all of the share data.  It is
902+    intended to support lazy evaluation, such that in many use cases
903+    substantially less than all of the share data will be accessed.
904+    """
905+    def is_complete(self):
906+        """
907+        Returns the share state, or None if the share does not exist.
908+        """
909+
910 class IStorageBucketWriter(Interface):
911     """
912     Objects of this kind live on the client side.
913hunk ./src/allmydata/interfaces.py 2492
914 
915 class EmptyPathnameComponentError(Exception):
916     """The webapi disallows empty pathname components."""
917+
918+class IShareStore(Interface):
919+    pass
920+
921addfile ./src/allmydata/storage/backends/__init__.py
922addfile ./src/allmydata/storage/backends/das/__init__.py
923addfile ./src/allmydata/storage/backends/das/core.py
924hunk ./src/allmydata/storage/backends/das/core.py 1
925+from allmydata.interfaces import IStorageBackend
926+from allmydata.storage.backends.base import Backend
927+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
928+from allmydata.util.assertutil import precondition
929+
930+import os, re, weakref, struct, time, stat
931+
932+from foolscap.api import Referenceable
933+from twisted.application import service
934+
935+from zope.interface import implements
936+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
937+from allmydata.util import fileutil, idlib, log, time_format
938+import allmydata # for __full_version__
939+
940+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
941+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
942+from allmydata.storage.lease import LeaseInfo
943+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
944+     create_mutable_sharefile
945+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
946+from allmydata.storage.crawler import FSBucketCountingCrawler
947+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
948+
949+from zope.interface import implements
950+
951+class DASCore(Backend):
952+    implements(IStorageBackend)
953+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
954+        Backend.__init__(self)
955+
956+        self._setup_storage(storedir, readonly, reserved_space)
957+        self._setup_corruption_advisory()
958+        self._setup_bucket_counter()
959+        self._setup_lease_checkerf(expiration_policy)
960+
961+    def _setup_storage(self, storedir, readonly, reserved_space):
962+        self.storedir = storedir
963+        self.readonly = readonly
964+        self.reserved_space = int(reserved_space)
965+        if self.reserved_space:
966+            if self.get_available_space() is None:
967+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
968+                        umid="0wZ27w", level=log.UNUSUAL)
969+
970+        self.sharedir = os.path.join(self.storedir, "shares")
971+        fileutil.make_dirs(self.sharedir)
972+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
973+        self._clean_incomplete()
974+
975+    def _clean_incomplete(self):
976+        fileutil.rm_dir(self.incomingdir)
977+        fileutil.make_dirs(self.incomingdir)
978+
979+    def _setup_corruption_advisory(self):
980+        # we don't actually create the corruption-advisory dir until necessary
981+        self.corruption_advisory_dir = os.path.join(self.storedir,
982+                                                    "corruption-advisories")
983+
984+    def _setup_bucket_counter(self):
985+        statefname = os.path.join(self.storedir, "bucket_counter.state")
986+        self.bucket_counter = FSBucketCountingCrawler(statefname)
987+        self.bucket_counter.setServiceParent(self)
988+
989+    def _setup_lease_checkerf(self, expiration_policy):
990+        statefile = os.path.join(self.storedir, "lease_checker.state")
991+        historyfile = os.path.join(self.storedir, "lease_checker.history")
992+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
993+        self.lease_checker.setServiceParent(self)
994+
995+    def get_available_space(self):
996+        if self.readonly:
997+            return 0
998+        return fileutil.get_available_space(self.storedir, self.reserved_space)
999+
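The contract of get_available_space above (read-only means 0, reserved_space subtracted from the free-space figure) can be sketched standalone. This is a hypothetical stand-in for allmydata.util.fileutil.get_available_space, not the patch's code, and assumes a POSIX statvfs:

```python
import os

def available_space(storedir, readonly=False, reserved_space=0):
    """Sketch of the DASCore.get_available_space contract: 0 when
    read-only, None when the platform gives no disk statistics,
    otherwise free bytes minus the operator's reservation (floored at 0)."""
    if readonly:
        return 0
    try:
        st = os.statvfs(storedir)  # POSIX only; absent on Windows
    except AttributeError:
        return None
    return max(0, st.f_frsize * st.f_bavail - reserved_space)
```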
1000+    def get_shares(self, storage_index):
1001+        """Yield the FSBShare objects that correspond to the passed storage_index."""
1002+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1003+        try:
1004+            for f in os.listdir(finalstoragedir):
1005+                if NUM_RE.match(f):
1006+                    filename = os.path.join(finalstoragedir, f)
1007+                    yield FSBShare(filename, int(f))
1008+        except OSError:
1009+            # Commonly caused by there being no buckets at all.
1010+            pass
1011+       
1012+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1013+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1014+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1015+        return bw
1016+       
1017+
1018+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1019+# and share data. The share data is accessed by RIBucketWriter.write and
1020+# RIBucketReader.read . The lease information is not accessible through these
1021+# interfaces.
1022+
1023+# The share file has the following layout:
1024+#  0x00: share file version number, four bytes, current version is 1
1025+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1026+#  0x08: number of leases, four bytes big-endian
1027+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1028+#  A+0x0c = B: first lease. Lease format is:
1029+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1030+#   B+0x04: renew secret, 32 bytes (SHA256)
1031+#   B+0x24: cancel secret, 32 bytes (SHA256)
1032+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1033+#   B+0x48: next lease, or end of record
1034+
1035+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1036+# but it is still filled in by storage servers in case the storage server
1037+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1038+# share file is moved from one storage server to another. The value stored in
1039+# this field is truncated, so if the actual share data length is >= 2**32,
1040+# then the value stored in this field will be the actual share data length
1041+# modulo 2**32.
1042+
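The 12-byte header described above can be round-tripped with a short struct sketch (a standalone illustration assuming only the layout documented in the comment, not part of the patch):

```python
import struct

HEADER_FORMAT = ">LLL"  # version, saturated share-data length, lease count

def pack_header(share_data_len, num_leases):
    # Field two saturates at 2**32-1 (see Footnote 1): an old server
    # reading a moved share still sees a plausible, truncated length.
    return struct.pack(HEADER_FORMAT, 1, min(2**32 - 1, share_data_len), num_leases)

def unpack_header(data):
    # Share data begins at offset 0x0c, immediately after these fields.
    return struct.unpack(HEADER_FORMAT, data[:0xc])
```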
1043+class ImmutableShare:
1044+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1045+    sharetype = "immutable"
1046+
1047+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1048+        """ If max_size is not None then I won't allow more than
1049+        max_size to be written to me. If create=True then max_size
1050+        must not be None. """
1051+        precondition((max_size is not None) or (not create), max_size, create)
1052+        self.shnum = shnum
1053+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1054+        self._max_size = max_size
1055+        if create:
1056+            # touch the file, so later callers will see that we're working on
1057+            # it. Also construct the metadata.
1058+            assert not os.path.exists(self.fname)
1059+            fileutil.make_dirs(os.path.dirname(self.fname))
1060+            f = open(self.fname, 'wb')
1061+            # The second field -- the four-byte share data length -- is no
1062+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1063+            # there in case someone downgrades a storage server from >=
1064+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1065+            # server to another, etc. We do saturation -- a share data length
1066+            # larger than 2**32-1 (what can fit into the field) is marked as
1067+            # the largest length that can fit into the field. That way, even
1068+            # if this does happen, the old < v1.3.0 server will still allow
1069+            # clients to read the first part of the share.
1070+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1071+            f.close()
1072+            self._lease_offset = max_size + 0x0c
1073+            self._num_leases = 0
1074+        else:
1075+            f = open(self.fname, 'rb')
1076+            filesize = os.path.getsize(self.fname)
1077+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1078+            f.close()
1079+            if version != 1:
1080+                msg = "sharefile %s had version %d but we wanted 1" % \
1081+                      (self.fname, version)
1082+                raise UnknownImmutableContainerVersionError(msg)
1083+            self._num_leases = num_leases
1084+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1085+        self._data_offset = 0xc
1086+
1087+    def unlink(self):
1088+        os.unlink(self.fname)
1089+
1090+    def read_share_data(self, offset, length):
1091+        precondition(offset >= 0)
1092+        # Reads beyond the end of the data are truncated. Reads that start
1093+        # beyond the end of the data return an empty string.
1094+        seekpos = self._data_offset+offset
1095+        fsize = os.path.getsize(self.fname)
1096+        actuallength = max(0, min(length, fsize-seekpos))
1097+        if actuallength == 0:
1098+            return ""
1099+        f = open(self.fname, 'rb')
1100+        f.seek(seekpos)
1101+        return f.read(actuallength)
1102+
1103+    def write_share_data(self, offset, data):
1104+        length = len(data)
1105+        precondition(offset >= 0, offset)
1106+        if self._max_size is not None and offset+length > self._max_size:
1107+            raise DataTooLargeError(self._max_size, offset, length)
1108+        f = open(self.fname, 'rb+')
1109+        real_offset = self._data_offset+offset
1110+        f.seek(real_offset)
1111+        assert f.tell() == real_offset
1112+        f.write(data)
1113+        f.close()
1114+
1115+    def _write_lease_record(self, f, lease_number, lease_info):
1116+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1117+        f.seek(offset)
1118+        assert f.tell() == offset
1119+        f.write(lease_info.to_immutable_data())
1120+
1121+    def _read_num_leases(self, f):
1122+        f.seek(0x08)
1123+        (num_leases,) = struct.unpack(">L", f.read(4))
1124+        return num_leases
1125+
1126+    def _write_num_leases(self, f, num_leases):
1127+        f.seek(0x08)
1128+        f.write(struct.pack(">L", num_leases))
1129+
1130+    def _truncate_leases(self, f, num_leases):
1131+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1132+
1133+    def get_leases(self):
1134+        """Yields a LeaseInfo instance for all leases."""
1135+        f = open(self.fname, 'rb')
1136+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1137+        f.seek(self._lease_offset)
1138+        for i in range(num_leases):
1139+            data = f.read(self.LEASE_SIZE)
1140+            if data:
1141+                yield LeaseInfo().from_immutable_data(data)
1142+
1143+    def add_lease(self, lease_info):
1144+        f = open(self.fname, 'rb+')
1145+        num_leases = self._read_num_leases(f)
1146+        self._write_lease_record(f, num_leases, lease_info)
1147+        self._write_num_leases(f, num_leases+1)
1148+        f.close()
1149+
1150+    def renew_lease(self, renew_secret, new_expire_time):
1151+        for i,lease in enumerate(self.get_leases()):
1152+            if constant_time_compare(lease.renew_secret, renew_secret):
1153+                # yup. See if we need to update the owner time.
1154+                if new_expire_time > lease.expiration_time:
1155+                    # yes
1156+                    lease.expiration_time = new_expire_time
1157+                    f = open(self.fname, 'rb+')
1158+                    self._write_lease_record(f, i, lease)
1159+                    f.close()
1160+                return
1161+        raise IndexError("unable to renew non-existent lease")
1162+
1163+    def add_or_renew_lease(self, lease_info):
1164+        try:
1165+            self.renew_lease(lease_info.renew_secret,
1166+                             lease_info.expiration_time)
1167+        except IndexError:
1168+            self.add_lease(lease_info)
1169+
1170+
1171+    def cancel_lease(self, cancel_secret):
1172+        """Remove a lease with the given cancel_secret. If the last lease is
1173+        cancelled, the file will be removed. Return the number of bytes that
1174+        deleting the file). Raise IndexError if there was no lease with the
1175+        deleting the file. Raise IndexError if there was no lease with the
1176+        given cancel_secret.
1177+        """
1178+
1179+        leases = list(self.get_leases())
1180+        num_leases_removed = 0
1181+        for i,lease in enumerate(leases):
1182+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1183+                leases[i] = None
1184+                num_leases_removed += 1
1185+        if not num_leases_removed:
1186+            raise IndexError("unable to find matching lease to cancel")
1187+        if num_leases_removed:
1188+            # pack and write out the remaining leases. We write these out in
1189+            # the same order as they were added, so that if we crash while
1190+            # doing this, we won't lose any non-cancelled leases.
1191+            leases = [l for l in leases if l] # remove the cancelled leases
1192+            f = open(self.fname, 'rb+')
1193+            for i,lease in enumerate(leases):
1194+                self._write_lease_record(f, i, lease)
1195+            self._write_num_leases(f, len(leases))
1196+            self._truncate_leases(f, len(leases))
1197+            f.close()
1198+        space_freed = self.LEASE_SIZE * num_leases_removed
1199+        if not len(leases):
1200+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1201+            self.unlink()
1202+        return space_freed
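The lease bookkeeping in ImmutableShare relies on fixed-size records appended after the share data; the offset arithmetic used by _write_lease_record can be checked in isolation (an illustrative sketch using the format string from the class above):

```python
import struct

LEASE_FORMAT = ">L32s32sL"  # owner number, renew secret, cancel secret, expiration
LEASE_SIZE = struct.calcsize(LEASE_FORMAT)

def lease_record_offset(lease_offset_base, lease_number):
    # Records are stored back to back after the share data, so the nth
    # lease starts LEASE_SIZE*n bytes past the first one.
    return lease_offset_base + lease_number * LEASE_SIZE
```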
1203hunk ./src/allmydata/storage/backends/das/expirer.py 2
1204 import time, os, pickle, struct
1205-from allmydata.storage.crawler import ShareCrawler
1206-from allmydata.storage.shares import get_share_file
1207+from allmydata.storage.crawler import FSShareCrawler
1208 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1209      UnknownImmutableContainerVersionError
1210 from twisted.python import log as twlog
1211hunk ./src/allmydata/storage/backends/das/expirer.py 7
1212 
1213-class LeaseCheckingCrawler(ShareCrawler):
1214+class FSLeaseCheckingCrawler(FSShareCrawler):
1215     """I examine the leases on all shares, determining which are still valid
1216     and which have expired. I can remove the expired leases (if so
1217     configured), and the share will be deleted when the last lease is
1218hunk ./src/allmydata/storage/backends/das/expirer.py 50
1219     slow_start = 360 # wait 6 minutes after startup
1220     minimum_cycle_time = 12*60*60 # not more than twice per day
1221 
1222-    def __init__(self, statefile, historyfile,
1223-                 expiration_enabled, mode,
1224-                 override_lease_duration, # used if expiration_mode=="age"
1225-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1226-                 sharetypes):
1227+    def __init__(self, statefile, historyfile, expiration_policy):
1228         self.historyfile = historyfile
1229hunk ./src/allmydata/storage/backends/das/expirer.py 52
1230-        self.expiration_enabled = expiration_enabled
1231-        self.mode = mode
1232+        self.expiration_enabled = expiration_policy['enabled']
1233+        self.mode = expiration_policy['mode']
1234         self.override_lease_duration = None
1235         self.cutoff_date = None
1236         if self.mode == "age":
1237hunk ./src/allmydata/storage/backends/das/expirer.py 57
1238-            assert isinstance(override_lease_duration, (int, type(None)))
1239-            self.override_lease_duration = override_lease_duration # seconds
1240+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1241+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1242         elif self.mode == "cutoff-date":
1243hunk ./src/allmydata/storage/backends/das/expirer.py 60
1244-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1245+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1246             assert cutoff_date is not None
1247hunk ./src/allmydata/storage/backends/das/expirer.py 62
1248-            self.cutoff_date = cutoff_date
1249+            self.cutoff_date = expiration_policy['cutoff_date']
1250         else:
1251hunk ./src/allmydata/storage/backends/das/expirer.py 64
1252-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1253-        self.sharetypes_to_expire = sharetypes
1254-        ShareCrawler.__init__(self, statefile)
1255+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1256+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1257+        FSShareCrawler.__init__(self, statefile)
1258 
1259     def add_initial_state(self):
1260         # we fill ["cycle-to-date"] here (even though they will be reset in
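For reference, the expiration_policy dict that the rewritten __init__ above consumes would look something like this (illustrative values; only the key names are taken from the hunk above):

```python
# Keys read by FSLeaseCheckingCrawler.__init__ in the hunk above.
expiration_policy = {
    'enabled': True,
    'mode': 'age',                              # or 'cutoff-date'
    'override_lease_duration': 31 * 24 * 3600,  # seconds; consulted when mode == 'age'
    'cutoff_date': None,                        # seconds-since-epoch; consulted when mode == 'cutoff-date'
    'sharetypes': ('mutable', 'immutable'),
}
```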
1261hunk ./src/allmydata/storage/backends/das/expirer.py 156
1262 
1263     def process_share(self, sharefilename):
1264         # first, find out what kind of a share it is
1265-        sf = get_share_file(sharefilename)
1266+        f = open(sharefilename, "rb")
1267+        prefix = f.read(32)
1268+        f.close()
1269+        if prefix == MutableShareFile.MAGIC:
1270+            sf = MutableShareFile(sharefilename)
1271+        else:
1272+            # otherwise assume it's immutable
1273+            sf = FSBShare(sharefilename)
1274         sharetype = sf.sharetype
1275         now = time.time()
1276         s = self.stat(sharefilename)
1277addfile ./src/allmydata/storage/backends/null/__init__.py
1278addfile ./src/allmydata/storage/backends/null/core.py
1279hunk ./src/allmydata/storage/backends/null/core.py 1
1280+from allmydata.storage.backends.base import Backend
1281+
1282+class NullCore(Backend):
1283+    def __init__(self):
1284+        Backend.__init__(self)
1285+
1286+    def get_available_space(self):
1287+        return None
1288+
1289+    def get_shares(self, storage_index):
1290+        return set()
1291+
1292+    def get_share(self, storage_index, sharenum):
1293+        return None
1294+
1295+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1296+        return NullBucketWriter()
1297hunk ./src/allmydata/storage/crawler.py 12
1298 class TimeSliceExceeded(Exception):
1299     pass
1300 
1301-class ShareCrawler(service.MultiService):
1302+class FSShareCrawler(service.MultiService):
1303     """A subclass of ShareCrawler is attached to a StorageServer, and
1304     periodically walks all of its shares, processing each one in some
1305     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1306hunk ./src/allmydata/storage/crawler.py 68
1307     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1308     minimum_cycle_time = 300 # don't run a cycle faster than this
1309 
1310-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1311+    def __init__(self, statefname, allowed_cpu_percentage=None):
1312         service.MultiService.__init__(self)
1313         if allowed_cpu_percentage is not None:
1314             self.allowed_cpu_percentage = allowed_cpu_percentage
1315hunk ./src/allmydata/storage/crawler.py 72
1316-        self.backend = backend
1317+        self.statefname = statefname
1318         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1319                          for i in range(2**10)]
1320         self.prefixes.sort()
1321hunk ./src/allmydata/storage/crawler.py 192
1322         #                            of the last bucket to be processed, or
1323         #                            None if we are sleeping between cycles
1324         try:
1325-            f = open(self.statefile, "rb")
1326+            f = open(self.statefname, "rb")
1327             state = pickle.load(f)
1328             f.close()
1329         except EnvironmentError:
1330hunk ./src/allmydata/storage/crawler.py 230
1331         else:
1332             last_complete_prefix = self.prefixes[lcpi]
1333         self.state["last-complete-prefix"] = last_complete_prefix
1334-        tmpfile = self.statefile + ".tmp"
1335+        tmpfile = self.statefname + ".tmp"
1336         f = open(tmpfile, "wb")
1337         pickle.dump(self.state, f)
1338         f.close()
1339hunk ./src/allmydata/storage/crawler.py 433
1340         pass
1341 
1342 
1343-class BucketCountingCrawler(ShareCrawler):
1344+class FSBucketCountingCrawler(FSShareCrawler):
1345     """I keep track of how many buckets are being managed by this server.
1346     This is equivalent to the number of distributed files and directories for
1347     which I am providing storage. The actual number of files+directories in
1348hunk ./src/allmydata/storage/crawler.py 446
1349 
1350     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1351 
1352-    def __init__(self, statefile, num_sample_prefixes=1):
1353-        ShareCrawler.__init__(self, statefile)
1354+    def __init__(self, statefname, num_sample_prefixes=1):
1355+        FSShareCrawler.__init__(self, statefname)
1356         self.num_sample_prefixes = num_sample_prefixes
1357 
1358     def add_initial_state(self):
1359hunk ./src/allmydata/storage/immutable.py 14
1360 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1361      DataTooLargeError
1362 
1363-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1364-# and share data. The share data is accessed by RIBucketWriter.write and
1365-# RIBucketReader.read . The lease information is not accessible through these
1366-# interfaces.
1367-
1368-# The share file has the following layout:
1369-#  0x00: share file version number, four bytes, current version is 1
1370-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1371-#  0x08: number of leases, four bytes big-endian
1372-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1373-#  A+0x0c = B: first lease. Lease format is:
1374-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1375-#   B+0x04: renew secret, 32 bytes (SHA256)
1376-#   B+0x24: cancel secret, 32 bytes (SHA256)
1377-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1378-#   B+0x48: next lease, or end of record
1379-
1380-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1381-# but it is still filled in by storage servers in case the storage server
1382-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1383-# share file is moved from one storage server to another. The value stored in
1384-# this field is truncated, so if the actual share data length is >= 2**32,
1385-# then the value stored in this field will be the actual share data length
1386-# modulo 2**32.
1387-
1388-class ShareFile:
1389-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1390-    sharetype = "immutable"
1391-
1392-    def __init__(self, filename, max_size=None, create=False):
1393-        """ If max_size is not None then I won't allow more than
1394-        max_size to be written to me. If create=True then max_size
1395-        must not be None. """
1396-        precondition((max_size is not None) or (not create), max_size, create)
1397-        self.home = filename
1398-        self._max_size = max_size
1399-        if create:
1400-            # touch the file, so later callers will see that we're working on
1401-            # it. Also construct the metadata.
1402-            assert not os.path.exists(self.home)
1403-            fileutil.make_dirs(os.path.dirname(self.home))
1404-            f = open(self.home, 'wb')
1405-            # The second field -- the four-byte share data length -- is no
1406-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1407-            # there in case someone downgrades a storage server from >=
1408-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1409-            # server to another, etc. We do saturation -- a share data length
1410-            # larger than 2**32-1 (what can fit into the field) is marked as
1411-            # the largest length that can fit into the field. That way, even
1412-            # if this does happen, the old < v1.3.0 server will still allow
1413-            # clients to read the first part of the share.
1414-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1415-            f.close()
1416-            self._lease_offset = max_size + 0x0c
1417-            self._num_leases = 0
1418-        else:
1419-            f = open(self.home, 'rb')
1420-            filesize = os.path.getsize(self.home)
1421-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1422-            f.close()
1423-            if version != 1:
1424-                msg = "sharefile %s had version %d but we wanted 1" % \
1425-                      (filename, version)
1426-                raise UnknownImmutableContainerVersionError(msg)
1427-            self._num_leases = num_leases
1428-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1429-        self._data_offset = 0xc
1430-
1431-    def unlink(self):
1432-        os.unlink(self.home)
1433-
1434-    def read_share_data(self, offset, length):
1435-        precondition(offset >= 0)
1436-        # Reads beyond the end of the data are truncated. Reads that start
1437-        # beyond the end of the data return an empty string.
1438-        seekpos = self._data_offset+offset
1439-        fsize = os.path.getsize(self.home)
1440-        actuallength = max(0, min(length, fsize-seekpos))
1441-        if actuallength == 0:
1442-            return ""
1443-        f = open(self.home, 'rb')
1444-        f.seek(seekpos)
1445-        return f.read(actuallength)
1446-
1447-    def write_share_data(self, offset, data):
1448-        length = len(data)
1449-        precondition(offset >= 0, offset)
1450-        if self._max_size is not None and offset+length > self._max_size:
1451-            raise DataTooLargeError(self._max_size, offset, length)
1452-        f = open(self.home, 'rb+')
1453-        real_offset = self._data_offset+offset
1454-        f.seek(real_offset)
1455-        assert f.tell() == real_offset
1456-        f.write(data)
1457-        f.close()
1458-
1459-    def _write_lease_record(self, f, lease_number, lease_info):
1460-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1461-        f.seek(offset)
1462-        assert f.tell() == offset
1463-        f.write(lease_info.to_immutable_data())
1464-
1465-    def _read_num_leases(self, f):
1466-        f.seek(0x08)
1467-        (num_leases,) = struct.unpack(">L", f.read(4))
1468-        return num_leases
1469-
1470-    def _write_num_leases(self, f, num_leases):
1471-        f.seek(0x08)
1472-        f.write(struct.pack(">L", num_leases))
1473-
1474-    def _truncate_leases(self, f, num_leases):
1475-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1476-
1477-    def get_leases(self):
1478-        """Yields a LeaseInfo instance for all leases."""
1479-        f = open(self.home, 'rb')
1480-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1481-        f.seek(self._lease_offset)
1482-        for i in range(num_leases):
1483-            data = f.read(self.LEASE_SIZE)
1484-            if data:
1485-                yield LeaseInfo().from_immutable_data(data)
1486-
1487-    def add_lease(self, lease_info):
1488-        f = open(self.home, 'rb+')
1489-        num_leases = self._read_num_leases(f)
1490-        self._write_lease_record(f, num_leases, lease_info)
1491-        self._write_num_leases(f, num_leases+1)
1492-        f.close()
1493-
1494-    def renew_lease(self, renew_secret, new_expire_time):
1495-        for i,lease in enumerate(self.get_leases()):
1496-            if constant_time_compare(lease.renew_secret, renew_secret):
1497-                # yup. See if we need to update the owner time.
1498-                if new_expire_time > lease.expiration_time:
1499-                    # yes
1500-                    lease.expiration_time = new_expire_time
1501-                    f = open(self.home, 'rb+')
1502-                    self._write_lease_record(f, i, lease)
1503-                    f.close()
1504-                return
1505-        raise IndexError("unable to renew non-existent lease")
1506-
1507-    def add_or_renew_lease(self, lease_info):
1508-        try:
1509-            self.renew_lease(lease_info.renew_secret,
1510-                             lease_info.expiration_time)
1511-        except IndexError:
1512-            self.add_lease(lease_info)
1513-
1514-
1515-    def cancel_lease(self, cancel_secret):
1516-        """Remove a lease with the given cancel_secret. If the last lease is
1517-        cancelled, the file will be removed. Return the number of bytes that
1518-        were freed (by truncating the list of leases, and possibly by
1519-        deleting the file. Raise IndexError if there was no lease with the
1520-        given cancel_secret.
1521-        """
1522-
1523-        leases = list(self.get_leases())
1524-        num_leases_removed = 0
1525-        for i,lease in enumerate(leases):
1526-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1527-                leases[i] = None
1528-                num_leases_removed += 1
1529-        if not num_leases_removed:
1530-            raise IndexError("unable to find matching lease to cancel")
1531-        if num_leases_removed:
1532-            # pack and write out the remaining leases. We write these out in
1533-            # the same order as they were added, so that if we crash while
1534-            # doing this, we won't lose any non-cancelled leases.
1535-            leases = [l for l in leases if l] # remove the cancelled leases
1536-            f = open(self.home, 'rb+')
1537-            for i,lease in enumerate(leases):
1538-                self._write_lease_record(f, i, lease)
1539-            self._write_num_leases(f, len(leases))
1540-            self._truncate_leases(f, len(leases))
1541-            f.close()
1542-        space_freed = self.LEASE_SIZE * num_leases_removed
1543-        if not len(leases):
1544-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1545-            self.unlink()
1546-        return space_freed
1547-class NullBucketWriter(Referenceable):
1548-    implements(RIBucketWriter)
1549-
1550-    def remote_write(self, offset, data):
1551-        return
1552-
1553 class BucketWriter(Referenceable):
1554     implements(RIBucketWriter)
1555 
1556hunk ./src/allmydata/storage/immutable.py 17
1557-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1558+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1559         self.ss = ss
1560hunk ./src/allmydata/storage/immutable.py 19
1561-        self.incominghome = incominghome
1562-        self.finalhome = finalhome
1563         self._max_size = max_size # don't allow the client to write more than this
1564         self._canary = canary
1565         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1566hunk ./src/allmydata/storage/immutable.py 24
1567         self.closed = False
1568         self.throw_out_all_data = False
1569-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1570+        self._sharefile = immutableshare
1571         # also, add our lease to the file now, so that other ones can be
1572         # added by simultaneous uploaders
1573         self._sharefile.add_lease(lease_info)
1574hunk ./src/allmydata/storage/server.py 16
1575 from allmydata.storage.lease import LeaseInfo
1576 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1577      create_mutable_sharefile
1578-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1579-from allmydata.storage.crawler import BucketCountingCrawler
1580-from allmydata.storage.expirer import LeaseCheckingCrawler
1581 
1582 from zope.interface import implements
1583 
1584hunk ./src/allmydata/storage/server.py 19
1585-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1586-# be started and stopped.
1587-class Backend(service.MultiService):
1588-    implements(IStatsProducer)
1589-    def __init__(self):
1590-        service.MultiService.__init__(self)
1591-
1592-    def get_bucket_shares(self):
1593-        """XXX"""
1594-        raise NotImplementedError
1595-
1596-    def get_share(self):
1597-        """XXX"""
1598-        raise NotImplementedError
1599-
1600-    def make_bucket_writer(self):
1601-        """XXX"""
1602-        raise NotImplementedError
1603-
1604-class NullBackend(Backend):
1605-    def __init__(self):
1606-        Backend.__init__(self)
1607-
1608-    def get_available_space(self):
1609-        return None
1610-
1611-    def get_bucket_shares(self, storage_index):
1612-        return set()
1613-
1614-    def get_share(self, storage_index, sharenum):
1615-        return None
1616-
1617-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1618-        return NullBucketWriter()
1619-
1620-class FSBackend(Backend):
1621-    def __init__(self, storedir, readonly=False, reserved_space=0):
1622-        Backend.__init__(self)
1623-
1624-        self._setup_storage(storedir, readonly, reserved_space)
1625-        self._setup_corruption_advisory()
1626-        self._setup_bucket_counter()
1627-        self._setup_lease_checkerf()
1628-
1629-    def _setup_storage(self, storedir, readonly, reserved_space):
1630-        self.storedir = storedir
1631-        self.readonly = readonly
1632-        self.reserved_space = int(reserved_space)
1633-        if self.reserved_space:
1634-            if self.get_available_space() is None:
1635-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1636-                        umid="0wZ27w", level=log.UNUSUAL)
1637-
1638-        self.sharedir = os.path.join(self.storedir, "shares")
1639-        fileutil.make_dirs(self.sharedir)
1640-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1641-        self._clean_incomplete()
1642-
1643-    def _clean_incomplete(self):
1644-        fileutil.rm_dir(self.incomingdir)
1645-        fileutil.make_dirs(self.incomingdir)
1646-
1647-    def _setup_corruption_advisory(self):
1648-        # we don't actually create the corruption-advisory dir until necessary
1649-        self.corruption_advisory_dir = os.path.join(self.storedir,
1650-                                                    "corruption-advisories")
1651-
1652-    def _setup_bucket_counter(self):
1653-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1654-        self.bucket_counter = BucketCountingCrawler(statefile)
1655-        self.bucket_counter.setServiceParent(self)
1656-
1657-    def _setup_lease_checkerf(self):
1658-        statefile = os.path.join(self.storedir, "lease_checker.state")
1659-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1660-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1661-                                   expiration_enabled, expiration_mode,
1662-                                   expiration_override_lease_duration,
1663-                                   expiration_cutoff_date,
1664-                                   expiration_sharetypes)
1665-        self.lease_checker.setServiceParent(self)
1666-
1667-    def get_available_space(self):
1668-        if self.readonly:
1669-            return 0
1670-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1671-
1672-    def get_bucket_shares(self, storage_index):
1673-        """Return a list of (shnum, pathname) tuples for files that hold
1674-        shares for this storage_index. In each tuple, 'shnum' will always be
1675-        the integer form of the last component of 'pathname'."""
1676-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1677-        try:
1678-            for f in os.listdir(storagedir):
1679-                if NUM_RE.match(f):
1680-                    filename = os.path.join(storagedir, f)
1681-                    yield (int(f), filename)
1682-        except OSError:
1683-            # Commonly caused by there being no buckets at all.
1684-            pass
1685-
1686 # storage/
1687 # storage/shares/incoming
1688 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1689hunk ./src/allmydata/storage/server.py 32
1690 # $SHARENUM matches this regex:
1691 NUM_RE=re.compile("^[0-9]+$")
1692 
1693-
1694-
1695 class StorageServer(service.MultiService, Referenceable):
1696     implements(RIStorageServer, IStatsProducer)
1697     name = 'storage'
1698hunk ./src/allmydata/storage/server.py 35
1699-    LeaseCheckerClass = LeaseCheckingCrawler
1700 
1701     def __init__(self, nodeid, backend, reserved_space=0,
1702                  readonly_storage=False,
1703hunk ./src/allmydata/storage/server.py 38
1704-                 stats_provider=None,
1705-                 expiration_enabled=False,
1706-                 expiration_mode="age",
1707-                 expiration_override_lease_duration=None,
1708-                 expiration_cutoff_date=None,
1709-                 expiration_sharetypes=("mutable", "immutable")):
1710+                 stats_provider=None ):
1711         service.MultiService.__init__(self)
1712         assert isinstance(nodeid, str)
1713         assert len(nodeid) == 20
1714hunk ./src/allmydata/storage/server.py 217
1715         # they asked about: this will save them a lot of work. Add or update
1716         # leases for all of them: if they want us to hold shares for this
1717         # file, they'll want us to hold leases for this file.
1718-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1719-            alreadygot.add(shnum)
1720-            sf = ShareFile(fn)
1721-            sf.add_or_renew_lease(lease_info)
1722-
1723-        for shnum in sharenums:
1724-            share = self.backend.get_share(storage_index, shnum)
1725+        for share in self.backend.get_shares(storage_index):
1726+            alreadygot.add(share.shnum)
1727+            share.add_or_renew_lease(lease_info)
1728 
1729hunk ./src/allmydata/storage/server.py 221
1730-            if not share:
1731-                if (not limited) or (remaining_space >= max_space_per_bucket):
1732-                    # ok! we need to create the new share file.
1733-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1734-                                      max_space_per_bucket, lease_info, canary)
1735-                    bucketwriters[shnum] = bw
1736-                    self._active_writers[bw] = 1
1737-                    if limited:
1738-                        remaining_space -= max_space_per_bucket
1739-                else:
1740-                    # bummer! not enough space to accept this bucket
1741-                    pass
1742+        for shnum in (sharenums - alreadygot):
1743+            if (not limited) or (remaining_space >= max_space_per_bucket):
1744+                #XXX Should the following line occur in the storage server constructor instead? OK: we need to create the new share file.
1745+                self.backend.set_storage_server(self)
1746+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1747+                                                     max_space_per_bucket, lease_info, canary)
1748+                bucketwriters[shnum] = bw
1749+                self._active_writers[bw] = 1
1750+                if limited:
1751+                    remaining_space -= max_space_per_bucket
1752 
1753hunk ./src/allmydata/storage/server.py 232
1754-            elif share.is_complete():
1755-                # great! we already have it. easy.
1756-                pass
1757-            elif not share.is_complete():
1758-                # Note that we don't create BucketWriters for shnums that
1759-                # have a partial share (in incoming/), so if a second upload
1760-                # occurs while the first is still in progress, the second
1761-                # uploader will use different storage servers.
1762-                pass
1763+        #XXX We should document this case later.
1764 
1765         self.add_latency("allocate", time.time() - start)
1766         return alreadygot, bucketwriters
1767hunk ./src/allmydata/storage/server.py 238
1768 
1769     def _iter_share_files(self, storage_index):
1770-        for shnum, filename in self._get_bucket_shares(storage_index):
1771+        for shnum, filename in self._get_shares(storage_index):
1772             f = open(filename, 'rb')
1773             header = f.read(32)
1774             f.close()
1775hunk ./src/allmydata/storage/server.py 318
1776         si_s = si_b2a(storage_index)
1777         log.msg("storage: get_buckets %s" % si_s)
1778         bucketreaders = {} # k: sharenum, v: BucketReader
1779-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1780+        for shnum, filename in self.backend.get_shares(storage_index):
1781             bucketreaders[shnum] = BucketReader(self, filename,
1782                                                 storage_index, shnum)
1783         self.add_latency("get", time.time() - start)
1784hunk ./src/allmydata/storage/server.py 334
1785         # since all shares get the same lease data, we just grab the leases
1786         # from the first share
1787         try:
1788-            shnum, filename = self._get_bucket_shares(storage_index).next()
1789+            shnum, filename = self._get_shares(storage_index).next()
1790             sf = ShareFile(filename)
1791             return sf.get_leases()
1792         except StopIteration:
1793hunk ./src/allmydata/storage/shares.py 1
1794-#! /usr/bin/python
1795-
1796-from allmydata.storage.mutable import MutableShareFile
1797-from allmydata.storage.immutable import ShareFile
1798-
1799-def get_share_file(filename):
1800-    f = open(filename, "rb")
1801-    prefix = f.read(32)
1802-    f.close()
1803-    if prefix == MutableShareFile.MAGIC:
1804-        return MutableShareFile(filename)
1805-    # otherwise assume it's immutable
1806-    return ShareFile(filename)
1807-
1808rmfile ./src/allmydata/storage/shares.py
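Editorial note: the removed shares.py above dispatched on a 32-byte prefix read from the share file to decide between mutable and immutable containers. As a standalone sketch of that sniffing step (the function name and parameterized magic are illustrative; the real code compares against MutableShareFile.MAGIC):

```python
def classify_share(prefix, mutable_magic):
    # The removed get_share_file() read the first 32 bytes of a share
    # file and compared them against the mutable container magic;
    # anything else is assumed to be an immutable share.
    if prefix[:len(mutable_magic)] == mutable_magic:
        return "mutable"
    return "immutable"
```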
1809hunk ./src/allmydata/test/common_util.py 20
1810 
1811 def flip_one_bit(s, offset=0, size=None):
1812     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1813-    than offset+size. """
1814+    than offset+size. Return the new string. """
1815     if size is None:
1816         size=len(s)-offset
1817     i = randrange(offset, offset+size)
1818hunk ./src/allmydata/test/test_backends.py 7
1819 
1820 from allmydata.test.common_util import ReallyEqualMixin
1821 
1822-import mock
1823+import mock, os
1824 
1825 # This is the code that we're going to be testing.
1826hunk ./src/allmydata/test/test_backends.py 10
1827-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1828+from allmydata.storage.server import StorageServer
1829+
1830+from allmydata.storage.backends.das.core import DASCore
1831+from allmydata.storage.backends.null.core import NullCore
1832+
1833 
1834 # The following share file contents was generated with
1835 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1836hunk ./src/allmydata/test/test_backends.py 22
1837 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1838 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1839 
1840-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1841+tempdir = 'teststoredir'
1842+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1843+sharefname = os.path.join(sharedirname, '0')
1844 
1845 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1846     @mock.patch('time.time')
1847hunk ./src/allmydata/test/test_backends.py 58
1848         filesystem in only the prescribed ways. """
1849 
1850         def call_open(fname, mode):
1851-            if fname == 'testdir/bucket_counter.state':
1852-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1853-            elif fname == 'testdir/lease_checker.state':
1854-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1855-            elif fname == 'testdir/lease_checker.history':
1856+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1857+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1858+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1859+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1860+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1861                 return StringIO()
1862             else:
1863                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1864hunk ./src/allmydata/test/test_backends.py 124
1865     @mock.patch('__builtin__.open')
1866     def setUp(self, mockopen):
1867         def call_open(fname, mode):
1868-            if fname == 'testdir/bucket_counter.state':
1869-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1870-            elif fname == 'testdir/lease_checker.state':
1871-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1872-            elif fname == 'testdir/lease_checker.history':
1873+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1874+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1875+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1876+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1877+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1878                 return StringIO()
1879         mockopen.side_effect = call_open
1880hunk ./src/allmydata/test/test_backends.py 131
1881-
1882-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1883+        expiration_policy = {'enabled' : False,
1884+                             'mode' : 'age',
1885+                             'override_lease_duration' : None,
1886+                             'cutoff_date' : None,
1887+                             'sharetypes' : None}
1888+        testbackend = DASCore(tempdir, expiration_policy)
1889+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1890 
1891     @mock.patch('time.time')
1892     @mock.patch('os.mkdir')
1893hunk ./src/allmydata/test/test_backends.py 148
1894         """ Write a new share. """
1895 
1896         def call_listdir(dirname):
1897-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1898-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1899+            self.failUnlessReallyEqual(dirname, sharedirname)
1900+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1901 
1902         mocklistdir.side_effect = call_listdir
1903 
1904hunk ./src/allmydata/test/test_backends.py 178
1905 
1906         sharefile = MockFile()
1907         def call_open(fname, mode):
1908-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1909+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1910             return sharefile
1911 
1912         mockopen.side_effect = call_open
1913hunk ./src/allmydata/test/test_backends.py 200
1914         StorageServer object. """
1915 
1916         def call_listdir(dirname):
1917-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1918+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1919             return ['0']
1920 
1921         mocklistdir.side_effect = call_listdir
1922}
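Editorial note: the call_open helpers in the test hunks above whitelist exactly which files the server may touch, raising for anything unexpected. A minimal standalone sketch of that pattern (Python 3 spelling with unittest.mock; the patch itself targets Python 2's __builtin__.open, and the file names here mirror the tests):

```python
from io import StringIO
from unittest import mock
import os

def make_fake_open(tempdir):
    """Return an open() replacement that only permits the crawler
    state files, mirroring the call_open helpers in the tests above."""
    def call_open(fname, mode):
        if fname == os.path.join(tempdir, 'bucket_counter.state'):
            # Simulate a fresh store: no state file exists yet.
            raise IOError(2, "No such file or directory: '%s'" % fname)
        elif fname == os.path.join(tempdir, 'lease_checker.history'):
            return StringIO()
        raise AssertionError("unexpected open(%r, %r)" % (fname, mode))
    return call_open

# Usage sketch:
# with mock.patch('builtins.open', side_effect=make_fake_open('teststoredir')):
#     ... construct the server and fail on any non-whitelisted open() ...
```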
1923[checkpoint patch
1924wilcoxjg@gmail.com**20110626165715
1925 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1926] {
1927hunk ./src/allmydata/storage/backends/das/core.py 21
1928 from allmydata.storage.lease import LeaseInfo
1929 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1930      create_mutable_sharefile
1931-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1932+from allmydata.storage.immutable import BucketWriter, BucketReader
1933 from allmydata.storage.crawler import FSBucketCountingCrawler
1934 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1935 
1936hunk ./src/allmydata/storage/backends/das/core.py 27
1937 from zope.interface import implements
1938 
1939+# $SHARENUM matches this regex:
1940+NUM_RE=re.compile("^[0-9]+$")
1941+
1942 class DASCore(Backend):
1943     implements(IStorageBackend)
1944     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1945hunk ./src/allmydata/storage/backends/das/core.py 80
1946         return fileutil.get_available_space(self.storedir, self.reserved_space)
1947 
1948     def get_shares(self, storage_index):
1949-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1950+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
1951         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1952         try:
1953             for f in os.listdir(finalstoragedir):
1954hunk ./src/allmydata/storage/backends/das/core.py 86
1955                 if NUM_RE.match(f):
1956                     filename = os.path.join(finalstoragedir, f)
1957-                    yield FSBShare(filename, int(f))
1958+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1959         except OSError:
1960             # Commonly caused by there being no buckets at all.
1961             pass
1962hunk ./src/allmydata/storage/backends/das/core.py 95
1963         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1964         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1965         return bw
1966+
1967+    def set_storage_server(self, ss):
1968+        self.ss = ss
1969         
1970 
1971 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1972hunk ./src/allmydata/storage/server.py 29
1973 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1974 # base-32 chars).
1975 
1976-# $SHARENUM matches this regex:
1977-NUM_RE=re.compile("^[0-9]+$")
1978 
1979 class StorageServer(service.MultiService, Referenceable):
1980     implements(RIStorageServer, IStatsProducer)
1981}
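Editorial note: the ImmutableShare code in these hunks reads and writes the v1 container header described in the layout comments earlier (version, share data length, lease count, packed big-endian). A standalone sketch of that header and its saturating length field:

```python
import struct

def pack_share_header(max_size):
    # v1 immutable share header: version, share data length, lease count.
    # The length field is four bytes, so it saturates at 2**32-1; an old
    # (< Tahoe v1.3.0) server can then still serve the start of the share.
    return struct.pack(">LLL", 1, min(2**32 - 1, max_size), 0)

def unpack_share_header(header):
    # The header occupies the first 0xc bytes of the share file.
    version, length, num_leases = struct.unpack(">LLL", header[:0xc])
    if version != 1:
        raise ValueError("sharefile had version %d but we wanted 1" % version)
    return length, num_leases
```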
1982[checkpoint4
1983wilcoxjg@gmail.com**20110628202202
1984 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
1985] {
1986hunk ./src/allmydata/storage/backends/das/core.py 96
1987         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1988         return bw
1989 
1990+    def make_bucket_reader(self, share):
1991+        return BucketReader(self.ss, share)
1992+
1993     def set_storage_server(self, ss):
1994         self.ss = ss
1995         
1996hunk ./src/allmydata/storage/backends/das/core.py 138
1997         must not be None. """
1998         precondition((max_size is not None) or (not create), max_size, create)
1999         self.shnum = shnum
2000+        self.storage_index = storageindex
2001         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2002         self._max_size = max_size
2003         if create:
2004hunk ./src/allmydata/storage/backends/das/core.py 173
2005             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2006         self._data_offset = 0xc
2007 
2008+    def get_shnum(self):
2009+        return self.shnum
2010+
2011     def unlink(self):
2012         os.unlink(self.fname)
2013 
2014hunk ./src/allmydata/storage/backends/null/core.py 2
2015 from allmydata.storage.backends.base import Backend
2016+from allmydata.storage.immutable import BucketWriter, BucketReader
2017 
2018 class NullCore(Backend):
2019     def __init__(self):
2020hunk ./src/allmydata/storage/backends/null/core.py 17
2021     def get_share(self, storage_index, sharenum):
2022         return None
2023 
2024-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2025-        return NullBucketWriter()
2026+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2027+
2028+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2029+
2030+    def set_storage_server(self, ss):
2031+        self.ss = ss
2032+
2033+class ImmutableShare:
2034+    sharetype = "immutable"
2035+
2036+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2037+        """ If max_size is not None then I won't allow more than
2038+        max_size to be written to me. If create=True then max_size
2039+        must not be None. """
2040+        precondition((max_size is not None) or (not create), max_size, create)
2041+        self.shnum = shnum
2042+        self.storage_index = storageindex
2043+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2044+        self._max_size = max_size
2045+        if create:
2046+            # touch the file, so later callers will see that we're working on
2047+            # it. Also construct the metadata.
2048+            assert not os.path.exists(self.fname)
2049+            fileutil.make_dirs(os.path.dirname(self.fname))
2050+            f = open(self.fname, 'wb')
2051+            # The second field -- the four-byte share data length -- is no
2052+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2053+            # there in case someone downgrades a storage server from >=
2054+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2055+            # server to another, etc. We do saturation -- a share data length
2056+            # larger than 2**32-1 (what can fit into the field) is marked as
2057+            # the largest length that can fit into the field. That way, even
2058+            # if this does happen, the old < v1.3.0 server will still allow
2059+            # clients to read the first part of the share.
2060+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2061+            f.close()
2062+            self._lease_offset = max_size + 0x0c
2063+            self._num_leases = 0
2064+        else:
2065+            f = open(self.fname, 'rb')
2066+            filesize = os.path.getsize(self.fname)
2067+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2068+            f.close()
2069+            if version != 1:
2070+                msg = "sharefile %s had version %d but we wanted 1" % \
2071+                      (self.fname, version)
2072+                raise UnknownImmutableContainerVersionError(msg)
2073+            self._num_leases = num_leases
2074+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2075+        self._data_offset = 0xc
2076+
2077+    def get_shnum(self):
2078+        return self.shnum
2079+
2080+    def unlink(self):
2081+        os.unlink(self.fname)
2082+
2083+    def read_share_data(self, offset, length):
2084+        precondition(offset >= 0)
2085+        # Reads beyond the end of the data are truncated. Reads that start
2086+        # beyond the end of the data return an empty string.
2087+        seekpos = self._data_offset+offset
2088+        fsize = os.path.getsize(self.fname)
2089+        actuallength = max(0, min(length, fsize-seekpos))
2090+        if actuallength == 0:
2091+            return ""
2092+        f = open(self.fname, 'rb')
2093+        f.seek(seekpos)
2094+        return f.read(actuallength)
2095+
2096+    def write_share_data(self, offset, data):
2097+        length = len(data)
2098+        precondition(offset >= 0, offset)
2099+        if self._max_size is not None and offset+length > self._max_size:
2100+            raise DataTooLargeError(self._max_size, offset, length)
2101+        f = open(self.fname, 'rb+')
2102+        real_offset = self._data_offset+offset
2103+        f.seek(real_offset)
2104+        assert f.tell() == real_offset
2105+        f.write(data)
2106+        f.close()
2107+
2108+    def _write_lease_record(self, f, lease_number, lease_info):
2109+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2110+        f.seek(offset)
2111+        assert f.tell() == offset
2112+        f.write(lease_info.to_immutable_data())
2113+
2114+    def _read_num_leases(self, f):
2115+        f.seek(0x08)
2116+        (num_leases,) = struct.unpack(">L", f.read(4))
2117+        return num_leases
2118+
2119+    def _write_num_leases(self, f, num_leases):
2120+        f.seek(0x08)
2121+        f.write(struct.pack(">L", num_leases))
2122+
2123+    def _truncate_leases(self, f, num_leases):
2124+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2125+
2126+    def get_leases(self):
2127+        """Yields a LeaseInfo instance for all leases."""
2128+        f = open(self.fname, 'rb')
2129+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2130+        f.seek(self._lease_offset)
2131+        for i in range(num_leases):
2132+            data = f.read(self.LEASE_SIZE)
2133+            if data:
2134+                yield LeaseInfo().from_immutable_data(data)
2135+
2136+    def add_lease(self, lease_info):
2137+        f = open(self.fname, 'rb+')
2138+        num_leases = self._read_num_leases(f)
2139+        self._write_lease_record(f, num_leases, lease_info)
2140+        self._write_num_leases(f, num_leases+1)
2141+        f.close()
2142+
2143+    def renew_lease(self, renew_secret, new_expire_time):
2144+        for i,lease in enumerate(self.get_leases()):
2145+            if constant_time_compare(lease.renew_secret, renew_secret):
2146+                # yup. See if we need to update the owner time.
2147+                if new_expire_time > lease.expiration_time:
2148+                    # yes
2149+                    lease.expiration_time = new_expire_time
2150+                    f = open(self.fname, 'rb+')
2151+                    self._write_lease_record(f, i, lease)
2152+                    f.close()
2153+                return
2154+        raise IndexError("unable to renew non-existent lease")
2155+
2156+    def add_or_renew_lease(self, lease_info):
2157+        try:
2158+            self.renew_lease(lease_info.renew_secret,
2159+                             lease_info.expiration_time)
2160+        except IndexError:
2161+            self.add_lease(lease_info)
2162+
2163+
2164+    def cancel_lease(self, cancel_secret):
2165+        """Remove a lease with the given cancel_secret. If the last lease is
2166+        cancelled, the file will be removed. Return the number of bytes that
2167+        were freed (by truncating the list of leases, and possibly by
2168+        deleting the file). Raise IndexError if there was no lease with the
2169+        given cancel_secret.
2170+        """
2171+
2172+        leases = list(self.get_leases())
2173+        num_leases_removed = 0
2174+        for i,lease in enumerate(leases):
2175+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2176+                leases[i] = None
2177+                num_leases_removed += 1
2178+        if not num_leases_removed:
2179+            raise IndexError("unable to find matching lease to cancel")
2180+        if num_leases_removed:
2181+            # pack and write out the remaining leases. We write these out in
2182+            # the same order as they were added, so that if we crash while
2183+            # doing this, we won't lose any non-cancelled leases.
2184+            leases = [l for l in leases if l] # remove the cancelled leases
2185+            f = open(self.fname, 'rb+')
2186+            for i,lease in enumerate(leases):
2187+                self._write_lease_record(f, i, lease)
2188+            self._write_num_leases(f, len(leases))
2189+            self._truncate_leases(f, len(leases))
2190+            f.close()
2191+        space_freed = self.LEASE_SIZE * num_leases_removed
2192+        if not len(leases):
2193+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2194+            self.unlink()
2195+        return space_freed
2196hunk ./src/allmydata/storage/immutable.py 114
2197 class BucketReader(Referenceable):
2198     implements(RIBucketReader)
2199 
2200-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2201+    def __init__(self, ss, share):
2202         self.ss = ss
2203hunk ./src/allmydata/storage/immutable.py 116
2204-        self._share_file = ShareFile(sharefname)
2205-        self.storage_index = storage_index
2206-        self.shnum = shnum
2207+        self._share_file = share
2208+        self.storage_index = share.storage_index
2209+        self.shnum = share.shnum
2210 
2211     def __repr__(self):
2212         return "<%s %s %s>" % (self.__class__.__name__,
2213hunk ./src/allmydata/storage/server.py 316
2214         si_s = si_b2a(storage_index)
2215         log.msg("storage: get_buckets %s" % si_s)
2216         bucketreaders = {} # k: sharenum, v: BucketReader
2217-        for shnum, filename in self.backend.get_shares(storage_index):
2218-            bucketreaders[shnum] = BucketReader(self, filename,
2219-                                                storage_index, shnum)
2220+        self.backend.set_storage_server(self)
2221+        for share in self.backend.get_shares(storage_index):
2222+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2223         self.add_latency("get", time.time() - start)
2224         return bucketreaders
2225 
2226hunk ./src/allmydata/test/test_backends.py 25
2227 tempdir = 'teststoredir'
2228 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2229 sharefname = os.path.join(sharedirname, '0')
2230+expiration_policy = {'enabled' : False,
2231+                     'mode' : 'age',
2232+                     'override_lease_duration' : None,
2233+                     'cutoff_date' : None,
2234+                     'sharetypes' : None}
2235 
2236 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2237     @mock.patch('time.time')
2238hunk ./src/allmydata/test/test_backends.py 43
2239         tries to read or write to the file system. """
2240 
2241         # Now begin the test.
2242-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2243+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2244 
2245         self.failIf(mockisdir.called)
2246         self.failIf(mocklistdir.called)
2247hunk ./src/allmydata/test/test_backends.py 74
2248         mockopen.side_effect = call_open
2249 
2250         # Now begin the test.
2251-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2252+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2253 
2254         self.failIf(mockisdir.called)
2255         self.failIf(mocklistdir.called)
2256hunk ./src/allmydata/test/test_backends.py 86
2257 
2258 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2259     def setUp(self):
2260-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2261+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2262 
2263     @mock.patch('os.mkdir')
2264     @mock.patch('__builtin__.open')
2265hunk ./src/allmydata/test/test_backends.py 136
2266             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2267                 return StringIO()
2268         mockopen.side_effect = call_open
2269-        expiration_policy = {'enabled' : False,
2270-                             'mode' : 'age',
2271-                             'override_lease_duration' : None,
2272-                             'cutoff_date' : None,
2273-                             'sharetypes' : None}
2274         testbackend = DASCore(tempdir, expiration_policy)
2275         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2276 
2277}
2278[checkpoint5
2279wilcoxjg@gmail.com**20110705034626
2280 Ignore-this: 255780bd58299b0aa33c027e9d008262
2281] {
2282addfile ./src/allmydata/storage/backends/base.py
2283hunk ./src/allmydata/storage/backends/base.py 1
2284+from twisted.application import service
2285+
2286+class Backend(service.MultiService):
2287+    def __init__(self):
2288+        service.MultiService.__init__(self)
2289hunk ./src/allmydata/storage/backends/null/core.py 19
2290 
2291     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2292         
2293+        immutableshare = ImmutableShare()
2294         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2295 
2296     def set_storage_server(self, ss):
2297hunk ./src/allmydata/storage/backends/null/core.py 28
2298 class ImmutableShare:
2299     sharetype = "immutable"
2300 
2301-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2302+    def __init__(self):
2303         """ If max_size is not None then I won't allow more than
2304         max_size to be written to me. If create=True then max_size
2305         must not be None. """
2306hunk ./src/allmydata/storage/backends/null/core.py 32
2307-        precondition((max_size is not None) or (not create), max_size, create)
2308-        self.shnum = shnum
2309-        self.storage_index = storageindex
2310-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2311-        self._max_size = max_size
2312-        if create:
2313-            # touch the file, so later callers will see that we're working on
2314-            # it. Also construct the metadata.
2315-            assert not os.path.exists(self.fname)
2316-            fileutil.make_dirs(os.path.dirname(self.fname))
2317-            f = open(self.fname, 'wb')
2318-            # The second field -- the four-byte share data length -- is no
2319-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2320-            # there in case someone downgrades a storage server from >=
2321-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2322-            # server to another, etc. We do saturation -- a share data length
2323-            # larger than 2**32-1 (what can fit into the field) is marked as
2324-            # the largest length that can fit into the field. That way, even
2325-            # if this does happen, the old < v1.3.0 server will still allow
2326-            # clients to read the first part of the share.
2327-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2328-            f.close()
2329-            self._lease_offset = max_size + 0x0c
2330-            self._num_leases = 0
2331-        else:
2332-            f = open(self.fname, 'rb')
2333-            filesize = os.path.getsize(self.fname)
2334-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2335-            f.close()
2336-            if version != 1:
2337-                msg = "sharefile %s had version %d but we wanted 1" % \
2338-                      (self.fname, version)
2339-                raise UnknownImmutableContainerVersionError(msg)
2340-            self._num_leases = num_leases
2341-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2342-        self._data_offset = 0xc
2343+        pass
2344 
2345     def get_shnum(self):
2346         return self.shnum
2347hunk ./src/allmydata/storage/backends/null/core.py 54
2348         return f.read(actuallength)
2349 
2350     def write_share_data(self, offset, data):
2351-        length = len(data)
2352-        precondition(offset >= 0, offset)
2353-        if self._max_size is not None and offset+length > self._max_size:
2354-            raise DataTooLargeError(self._max_size, offset, length)
2355-        f = open(self.fname, 'rb+')
2356-        real_offset = self._data_offset+offset
2357-        f.seek(real_offset)
2358-        assert f.tell() == real_offset
2359-        f.write(data)
2360-        f.close()
2361+        pass
2362 
2363     def _write_lease_record(self, f, lease_number, lease_info):
2364         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2365hunk ./src/allmydata/storage/backends/null/core.py 84
2366             if data:
2367                 yield LeaseInfo().from_immutable_data(data)
2368 
2369-    def add_lease(self, lease_info):
2370-        f = open(self.fname, 'rb+')
2371-        num_leases = self._read_num_leases(f)
2372-        self._write_lease_record(f, num_leases, lease_info)
2373-        self._write_num_leases(f, num_leases+1)
2374-        f.close()
2375+    def add_lease(self, lease):
2376+        pass
2377 
2378     def renew_lease(self, renew_secret, new_expire_time):
2379         for i,lease in enumerate(self.get_leases()):
2380hunk ./src/allmydata/test/test_backends.py 32
2381                      'sharetypes' : None}
2382 
2383 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2384-    @mock.patch('time.time')
2385-    @mock.patch('os.mkdir')
2386-    @mock.patch('__builtin__.open')
2387-    @mock.patch('os.listdir')
2388-    @mock.patch('os.path.isdir')
2389-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2390-        """ This tests whether a server instance can be constructed
2391-        with a null backend. The server instance fails the test if it
2392-        tries to read or write to the file system. """
2393-
2394-        # Now begin the test.
2395-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2396-
2397-        self.failIf(mockisdir.called)
2398-        self.failIf(mocklistdir.called)
2399-        self.failIf(mockopen.called)
2400-        self.failIf(mockmkdir.called)
2401-
2402-        # You passed!
2403-
2404     @mock.patch('time.time')
2405     @mock.patch('os.mkdir')
2406     @mock.patch('__builtin__.open')
2407hunk ./src/allmydata/test/test_backends.py 53
2408                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2409         mockopen.side_effect = call_open
2410 
2411-        # Now begin the test.
2412-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2413-
2414-        self.failIf(mockisdir.called)
2415-        self.failIf(mocklistdir.called)
2416-        self.failIf(mockopen.called)
2417-        self.failIf(mockmkdir.called)
2418-        self.failIf(mocktime.called)
2419-
2420-        # You passed!
2421-
2422-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2423-    def setUp(self):
2424-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2425-
2426-    @mock.patch('os.mkdir')
2427-    @mock.patch('__builtin__.open')
2428-    @mock.patch('os.listdir')
2429-    @mock.patch('os.path.isdir')
2430-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2431-        """ Write a new share. """
2432-
2433-        # Now begin the test.
2434-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2435-        bs[0].remote_write(0, 'a')
2436-        self.failIf(mockisdir.called)
2437-        self.failIf(mocklistdir.called)
2438-        self.failIf(mockopen.called)
2439-        self.failIf(mockmkdir.called)
2440+        def call_isdir(fname):
2441+            if fname == os.path.join(tempdir,'shares'):
2442+                return True
2443+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2444+                return True
2445+            else:
2446+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2447+        mockisdir.side_effect = call_isdir
2448 
2449hunk ./src/allmydata/test/test_backends.py 62
2450-    @mock.patch('os.path.exists')
2451-    @mock.patch('os.path.getsize')
2452-    @mock.patch('__builtin__.open')
2453-    @mock.patch('os.listdir')
2454-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2455-        """ This tests whether the code correctly finds and reads
2456-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2457-        servers. There is a similar test in test_download, but that one
2458-        is from the perspective of the client and exercises a deeper
2459-        stack of code. This one is for exercising just the
2460-        StorageServer object. """
2461+        def call_mkdir(fname, mode):
2462+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2463+            self.failUnlessEqual(0777, mode)
2464+            if fname == tempdir:
2465+                return None
2466+            elif fname == os.path.join(tempdir,'shares'):
2467+                return None
2468+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2469+                return None
2470+            else:
2471+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2472+        mockmkdir.side_effect = call_mkdir
2473 
2474         # Now begin the test.
2475hunk ./src/allmydata/test/test_backends.py 76
2476-        bs = self.s.remote_get_buckets('teststorage_index')
2477+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2478 
2479hunk ./src/allmydata/test/test_backends.py 78
2480-        self.failUnlessEqual(len(bs), 0)
2481-        self.failIf(mocklistdir.called)
2482-        self.failIf(mockopen.called)
2483-        self.failIf(mockgetsize.called)
2484-        self.failIf(mockexists.called)
2485+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2486 
2487 
2488 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2489hunk ./src/allmydata/test/test_backends.py 193
2490         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2491 
2492 
2493+
2494+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2495+    @mock.patch('time.time')
2496+    @mock.patch('os.mkdir')
2497+    @mock.patch('__builtin__.open')
2498+    @mock.patch('os.listdir')
2499+    @mock.patch('os.path.isdir')
2500+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2501+        """ This tests whether a file system backend instance can be
2502+        constructed. To pass the test, it has to use the
2503+        filesystem in only the prescribed ways. """
2504+
2505+        def call_open(fname, mode):
2506+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2507+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2508+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2509+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2510+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2511+                return StringIO()
2512+            else:
2513+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2514+        mockopen.side_effect = call_open
2515+
2516+        def call_isdir(fname):
2517+            if fname == os.path.join(tempdir,'shares'):
2518+                return True
2519+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2520+                return True
2521+            else:
2522+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2523+        mockisdir.side_effect = call_isdir
2524+
2525+        def call_mkdir(fname, mode):
2526+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2527+            self.failUnlessEqual(0777, mode)
2528+            if fname == tempdir:
2529+                return None
2530+            elif fname == os.path.join(tempdir,'shares'):
2531+                return None
2532+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2533+                return None
2534+            else:
2535+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2536+        mockmkdir.side_effect = call_mkdir
2537+
2538+        # Now begin the test.
2539+        DASCore('teststoredir', expiration_policy)
2540+
2541+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2542}
2543[checkpoint 6
2544wilcoxjg@gmail.com**20110706190824
2545 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2546] {
2547hunk ./src/allmydata/interfaces.py 100
2548                          renew_secret=LeaseRenewSecret,
2549                          cancel_secret=LeaseCancelSecret,
2550                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2551-                         allocated_size=Offset, canary=Referenceable):
2552+                         allocated_size=Offset,
2553+                         canary=Referenceable):
2554         """
2555hunk ./src/allmydata/interfaces.py 103
2556-        @param storage_index: the index of the bucket to be created or
2557+        @param storage_index: the index of the shares to be created or
2558                               increfed.
2559hunk ./src/allmydata/interfaces.py 105
2560-        @param sharenums: these are the share numbers (probably between 0 and
2561-                          99) that the sender is proposing to store on this
2562-                          server.
2563-        @param renew_secret: This is the secret used to protect bucket refresh
2564+        @param renew_secret: This is the secret used to protect shares refresh
2565                              This secret is generated by the client and
2566                              stored for later comparison by the server. Each
2567                              server is given a different secret.
2568hunk ./src/allmydata/interfaces.py 109
2569-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2570-        @param canary: If the canary is lost before close(), the bucket is
2571+        @param cancel_secret: Like renew_secret, but protects shares decref.
2572+        @param sharenums: these are the share numbers (probably between 0 and
2573+                          99) that the sender is proposing to store on this
2574+                          server.
2575+        @param allocated_size: XXX The size of the shares the client wishes to store.
2576+        @param canary: If the canary is lost before close(), the shares are
2577                        deleted.
2578hunk ./src/allmydata/interfaces.py 116
2579+
2580         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2581                  already have and allocated is what we hereby agree to accept.
2582                  New leases are added for shares in both lists.
2583hunk ./src/allmydata/interfaces.py 128
2584                   renew_secret=LeaseRenewSecret,
2585                   cancel_secret=LeaseCancelSecret):
2586         """
2587-        Add a new lease on the given bucket. If the renew_secret matches an
2588+        Add a new lease on the given shares. If the renew_secret matches an
2589         existing lease, that lease will be renewed instead. If there is no
2590         bucket for the given storage_index, return silently. (note that in
2591         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2592hunk ./src/allmydata/storage/server.py 17
2593 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2594      create_mutable_sharefile
2595 
2596-from zope.interface import implements
2597-
2598 # storage/
2599 # storage/shares/incoming
2600 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2601hunk ./src/allmydata/test/test_backends.py 6
2602 from StringIO import StringIO
2603 
2604 from allmydata.test.common_util import ReallyEqualMixin
2605+from allmydata.util.assertutil import _assert
2606 
2607 import mock, os
2608 
2609hunk ./src/allmydata/test/test_backends.py 92
2610                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2611             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2612                 return StringIO()
2613+            else:
2614+                _assert(False, "The tester code doesn't recognize this case.") 
2615+
2616         mockopen.side_effect = call_open
2617         testbackend = DASCore(tempdir, expiration_policy)
2618         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2619hunk ./src/allmydata/test/test_backends.py 109
2620 
2621         def call_listdir(dirname):
2622             self.failUnlessReallyEqual(dirname, sharedirname)
2623-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2624+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2625 
2626         mocklistdir.side_effect = call_listdir
2627 
2628hunk ./src/allmydata/test/test_backends.py 113
2629+        def call_isdir(dirname):
2630+            self.failUnlessReallyEqual(dirname, sharedirname)
2631+            return True
2632+
2633+        mockisdir.side_effect = call_isdir
2634+
2635+        def call_mkdir(dirname, permissions):
2636+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2637+                self.fail("Server with FS backend tried to mkdir '%s' with permissions '%s'" % (dirname, permissions))
2638+            else:
2639+                return True
2640+
2641+        mockmkdir.side_effect = call_mkdir
2642+
2643         class MockFile:
2644             def __init__(self):
2645                 self.buffer = ''
2646hunk ./src/allmydata/test/test_backends.py 156
2647             return sharefile
2648 
2649         mockopen.side_effect = call_open
2650+
2651         # Now begin the test.
2652         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2653         bs[0].remote_write(0, 'a')
2654hunk ./src/allmydata/test/test_backends.py 161
2655         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2656+       
2657+        # Now test the allocated_size method.
2658+        spaceint = self.s.allocated_size()
2659 
2660     @mock.patch('os.path.exists')
2661     @mock.patch('os.path.getsize')
2662}
2663[checkpoint 7
2664wilcoxjg@gmail.com**20110706200820
2665 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2666] hunk ./src/allmydata/test/test_backends.py 164
2667         
2668         # Now test the allocated_size method.
2669         spaceint = self.s.allocated_size()
2670+        self.failUnlessReallyEqual(spaceint, 1)
2671 
2672     @mock.patch('os.path.exists')
2673     @mock.patch('os.path.getsize')
2674[checkpoint8
2675wilcoxjg@gmail.com**20110706223126
2676 Ignore-this: 97336180883cb798b16f15411179f827
2677   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2678] hunk ./src/allmydata/test/test_backends.py 32
2679                      'cutoff_date' : None,
2680                      'sharetypes' : None}
2681 
2682+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2683+    def setUp(self):
2684+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2685+
2686+    @mock.patch('os.mkdir')
2687+    @mock.patch('__builtin__.open')
2688+    @mock.patch('os.listdir')
2689+    @mock.patch('os.path.isdir')
2690+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2691+        """ Write a new share. """
2692+
2693+        # Now begin the test.
2694+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2695+        bs[0].remote_write(0, 'a')
2696+        self.failIf(mockisdir.called)
2697+        self.failIf(mocklistdir.called)
2698+        self.failIf(mockopen.called)
2699+        self.failIf(mockmkdir.called)
2700+
2701 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2702     @mock.patch('time.time')
2703     @mock.patch('os.mkdir')
2704[checkpoint 9
2705wilcoxjg@gmail.com**20110707042942
2706 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2707] {
2708hunk ./src/allmydata/storage/backends/das/core.py 88
2709                     filename = os.path.join(finalstoragedir, f)
2710                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2711         except OSError:
2712-            # Commonly caused by there being no buckets at all.
2713+            # Commonly caused by there being no shares at all.
2714             pass
2715         
2716     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2717hunk ./src/allmydata/storage/backends/das/core.py 141
2718         self.storage_index = storageindex
2719         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2720         self._max_size = max_size
2721+        self.incomingdir = os.path.join(sharedir, 'incoming')
2722+        si_dir = storage_index_to_dir(storageindex)
2723+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2724+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2725         if create:
2726             # touch the file, so later callers will see that we're working on
2727             # it. Also construct the metadata.
2728hunk ./src/allmydata/storage/backends/das/core.py 177
2729             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2730         self._data_offset = 0xc
2731 
2732+    def close(self):
2733+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2734+        fileutil.rename(self.incominghome, self.finalhome)
2735+        try:
2736+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2737+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2738+            # these directories lying around forever, but the delete might
2739+            # fail if we're working on another share for the same storage
2740+            # index (like ab/abcde/5). The alternative approach would be to
2741+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2742+            # ShareWriter), each of which is responsible for a single
2743+            # directory on disk, and have them use reference counting of
2744+            # their children to know when they should do the rmdir. This
2745+            # approach is simpler, but relies on os.rmdir refusing to delete
2746+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2747+            os.rmdir(os.path.dirname(self.incominghome))
2748+            # we also delete the grandparent (prefix) directory, .../ab ,
2749+            # again to avoid leaving directories lying around. This might
2750+            # fail if there is another bucket open that shares a prefix (like
2751+            # ab/abfff).
2752+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2753+            # we leave the great-grandparent (incoming/) directory in place.
2754+        except EnvironmentError:
2755+            # ignore the "can't rmdir because the directory is not empty"
2756+            # exceptions, those are normal consequences of the
2757+            # above-mentioned conditions.
2758+            pass
2759+        pass
2760+       
2761+    def stat(self):
2762+        return os.stat(self.finalhome)[stat.ST_SIZE]
2763+
2764     def get_shnum(self):
2765         return self.shnum
2766 
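The close() method added above is the incoming-to-final dance: move the finished share into place, then opportunistically rmdir the now-possibly-empty incoming parents, relying on os.rmdir refusing to delete a non-empty directory. A stand-alone sketch of the same pattern (hypothetical paths, not the real backend):

```python
import os, tempfile

def finalize_share(incominghome, finalhome):
    """Move a completed share from incoming/ to its final home,
    pruning empty incoming directories on a best-effort basis."""
    os.makedirs(os.path.dirname(finalhome), exist_ok=True)
    os.rename(incominghome, finalhome)
    try:
        # Remove .../ab/abcde, then .../ab. os.rmdir raises if another
        # share is still being written under them, which we ignore.
        os.rmdir(os.path.dirname(incominghome))
        os.rmdir(os.path.dirname(os.path.dirname(incominghome)))
    except OSError:
        pass

root = tempfile.mkdtemp()
inc = os.path.join(root, 'incoming', 'ab', 'abcde', '4')
fin = os.path.join(root, 'shares', 'ab', 'abcde', '4')
os.makedirs(os.path.dirname(inc))
open(inc, 'wb').close()
finalize_share(inc, fin)
# fin now exists; the empty abcde/ and ab/ under incoming/ are gone.
```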
2767hunk ./src/allmydata/storage/immutable.py 7
2768 
2769 from zope.interface import implements
2770 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2771-from allmydata.util import base32, fileutil, log
2772+from allmydata.util import base32, log
2773 from allmydata.util.assertutil import precondition
2774 from allmydata.util.hashutil import constant_time_compare
2775 from allmydata.storage.lease import LeaseInfo
2776hunk ./src/allmydata/storage/immutable.py 44
2777     def remote_close(self):
2778         precondition(not self.closed)
2779         start = time.time()
2780-
2781-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2782-        fileutil.rename(self.incominghome, self.finalhome)
2783-        try:
2784-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2785-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2786-            # these directories lying around forever, but the delete might
2787-            # fail if we're working on another share for the same storage
2788-            # index (like ab/abcde/5). The alternative approach would be to
2789-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2790-            # ShareWriter), each of which is responsible for a single
2791-            # directory on disk, and have them use reference counting of
2792-            # their children to know when they should do the rmdir. This
2793-            # approach is simpler, but relies on os.rmdir refusing to delete
2794-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2795-            os.rmdir(os.path.dirname(self.incominghome))
2796-            # we also delete the grandparent (prefix) directory, .../ab ,
2797-            # again to avoid leaving directories lying around. This might
2798-            # fail if there is another bucket open that shares a prefix (like
2799-            # ab/abfff).
2800-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2801-            # we leave the great-grandparent (incoming/) directory in place.
2802-        except EnvironmentError:
2803-            # ignore the "can't rmdir because the directory is not empty"
2804-            # exceptions, those are normal consequences of the
2805-            # above-mentioned conditions.
2806-            pass
2807+        self._sharefile.close()
2808         self._sharefile = None
2809         self.closed = True
2810         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2811hunk ./src/allmydata/storage/immutable.py 49
2812 
2813-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2814+        filelen = self._sharefile.stat()
2815         self.ss.bucket_writer_closed(self, filelen)
2816         self.ss.add_latency("close", time.time() - start)
2817         self.ss.count("close")
2818hunk ./src/allmydata/storage/server.py 45
2819         self._active_writers = weakref.WeakKeyDictionary()
2820         self.backend = backend
2821         self.backend.setServiceParent(self)
2822+        self.backend.set_storage_server(self)
2823         log.msg("StorageServer created", facility="tahoe.storage")
2824 
2825         self.latencies = {"allocate": [], # immutable
2826hunk ./src/allmydata/storage/server.py 220
2827 
2828         for shnum in (sharenums - alreadygot):
2829             if (not limited) or (remaining_space >= max_space_per_bucket):
2830-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2831-                self.backend.set_storage_server(self)
2832                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2833                                                      max_space_per_bucket, lease_info, canary)
2834                 bucketwriters[shnum] = bw
2835hunk ./src/allmydata/test/test_backends.py 117
2836         mockopen.side_effect = call_open
2837         testbackend = DASCore(tempdir, expiration_policy)
2838         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2839-
2840+   
2841+    @mock.patch('allmydata.util.fileutil.get_available_space')
2842     @mock.patch('time.time')
2843     @mock.patch('os.mkdir')
2844     @mock.patch('__builtin__.open')
2845hunk ./src/allmydata/test/test_backends.py 124
2846     @mock.patch('os.listdir')
2847     @mock.patch('os.path.isdir')
2848-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2849+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2850+                             mockget_available_space):
2851         """ Write a new share. """
2852 
2853         def call_listdir(dirname):
2854hunk ./src/allmydata/test/test_backends.py 148
2855 
2856         mockmkdir.side_effect = call_mkdir
2857 
2858+        def call_get_available_space(storedir, reserved_space):
2859+            self.failUnlessReallyEqual(storedir, tempdir)
2860+            return 1
2861+
2862+        mockget_available_space.side_effect = call_get_available_space
2863+
2864         class MockFile:
2865             def __init__(self):
2866                 self.buffer = ''
2867hunk ./src/allmydata/test/test_backends.py 188
2868         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2869         bs[0].remote_write(0, 'a')
2870         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2871-       
2872+
2873+        # What happens when there's not enough space for the client's request?
2874+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2875+
2876         # Now test the allocated_size method.
2877         spaceint = self.s.allocated_size()
2878         self.failUnlessReallyEqual(spaceint, 1)
2879}
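The stat() method introduced in checkpoint 9 just reports the on-disk size of the final share file via os.stat. The index form os.stat(path)[stat.ST_SIZE] and the attribute form os.stat(path).st_size are equivalent; note that ST_SIZE lives in the stat module, not on os.stat. A quick sketch:

```python
import os, stat, tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b'abcde')
os.close(fd)

size_by_index = os.stat(path)[stat.ST_SIZE]  # stat-module constant as index
size_by_attr = os.stat(path).st_size         # attribute form, same value
```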
2880[checkpoint10
2881wilcoxjg@gmail.com**20110707172049
2882 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2883] {
2884hunk ./src/allmydata/test/test_backends.py 20
2885 # The following share file contents was generated with
2886 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2887 # with share data == 'a'.
2888-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2889+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2890+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2891+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2892 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2893 
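The fixture rewritten here is easier to read once decoded. The 12-byte prefix is the v1 immutable share header, three big-endian uint32 fields (my reading of the layout: version, data size, lease count, all 1 here), and the lease record ends with a uint32 expiration of 0x0028de80 = 2678400 seconds, i.e. the 31-day lease the tests later assert against. A sketch decoding the fixture:

```python
import struct

renew_secret = b'x' * 32
cancel_secret = b'y' * 32
share_data = b'a\x00\x00\x00\x00' + renew_secret + cancel_secret + b'\x00(\xde\x80'
share_file_data = b'\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data

# 12-byte header: three big-endian uint32 fields.
version, data_size, num_leases = struct.unpack(">LLL", share_file_data[:12])

# The lease record ends with a uint32 expiration timestamp.
expiration = struct.unpack(">L", share_data[-4:])[0]  # 31 days in seconds
```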
2894hunk ./src/allmydata/test/test_backends.py 25
2895+testnodeid = 'testnodeidxxxxxxxxxx'
2896 tempdir = 'teststoredir'
2897 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2898 sharefname = os.path.join(sharedirname, '0')
2899hunk ./src/allmydata/test/test_backends.py 37
2900 
2901 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2902     def setUp(self):
2903-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2904+        self.s = StorageServer(testnodeid, backend=NullCore())
2905 
2906     @mock.patch('os.mkdir')
2907     @mock.patch('__builtin__.open')
2908hunk ./src/allmydata/test/test_backends.py 99
2909         mockmkdir.side_effect = call_mkdir
2910 
2911         # Now begin the test.
2912-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2913+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2914 
2915         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2916 
2917hunk ./src/allmydata/test/test_backends.py 119
2918 
2919         mockopen.side_effect = call_open
2920         testbackend = DASCore(tempdir, expiration_policy)
2921-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2922-   
2923+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2924+       
2925+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2926     @mock.patch('allmydata.util.fileutil.get_available_space')
2927     @mock.patch('time.time')
2928     @mock.patch('os.mkdir')
2929hunk ./src/allmydata/test/test_backends.py 129
2930     @mock.patch('os.listdir')
2931     @mock.patch('os.path.isdir')
2932     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2933-                             mockget_available_space):
2934+                             mockget_available_space, mockget_shares):
2935         """ Write a new share. """
2936 
2937         def call_listdir(dirname):
2938hunk ./src/allmydata/test/test_backends.py 139
2939         mocklistdir.side_effect = call_listdir
2940 
2941         def call_isdir(dirname):
2942+            #XXX Should there be any other tests here?
2943             self.failUnlessReallyEqual(dirname, sharedirname)
2944             return True
2945 
2946hunk ./src/allmydata/test/test_backends.py 159
2947 
2948         mockget_available_space.side_effect = call_get_available_space
2949 
2950+        mocktime.return_value = 0
2951+        class MockShare:
2952+            def __init__(self):
2953+                self.shnum = 1
2954+               
2955+            def add_or_renew_lease(elf, lease_info):
2956+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
2957+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
2958+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
2959+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
2960+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
2961+               
2962+
2963+        share = MockShare()
2964+        def call_get_shares(storageindex):
2965+            return [share]
2966+
2967+        mockget_shares.side_effect = call_get_shares
2968+
2969         class MockFile:
2970             def __init__(self):
2971                 self.buffer = ''
2972hunk ./src/allmydata/test/test_backends.py 199
2973             def tell(self):
2974                 return self.pos
2975 
2976-        mocktime.return_value = 0
2977 
2978         sharefile = MockFile()
2979         def call_open(fname, mode):
2980}
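Throughout these tests the mocks are wired with side_effect callables that both assert on the arguments they receive and return canned values, turning each patched filesystem call into an inline expectation (call_listdir, call_isdir, call_open, and so on). A minimal stand-alone sketch of the idiom:

```python
from unittest import mock
import os

calls = []

def fake_listdir(dirname):
    # Record and check the argument, then return canned directory
    # contents, like the call_listdir helper in the patch above.
    calls.append(dirname)
    assert dirname == '/fake/sharedir'
    return ['0']

with mock.patch('os.listdir', side_effect=fake_listdir):
    entries = os.listdir('/fake/sharedir')  # routed through fake_listdir
```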
2981[jacp 11
2982wilcoxjg@gmail.com**20110708213919
2983 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
2984] {
2985hunk ./src/allmydata/storage/backends/das/core.py 144
2986         self.incomingdir = os.path.join(sharedir, 'incoming')
2987         si_dir = storage_index_to_dir(storageindex)
2988         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2989+        #XXX  self.fname and self.finalhome need to be resolve/merged.
2990         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2991         if create:
2992             # touch the file, so later callers will see that we're working on
2993hunk ./src/allmydata/storage/backends/das/core.py 208
2994         pass
2995         
2996     def stat(self):
2997-        return os.stat(self.finalhome)[stat.ST_SIZE]
2998+        return os.stat(self.finalhome).st_size
2999 
3000     def get_shnum(self):
3001         return self.shnum
3002hunk ./src/allmydata/storage/immutable.py 44
3003     def remote_close(self):
3004         precondition(not self.closed)
3005         start = time.time()
3006+
3007         self._sharefile.close()
3008hunk ./src/allmydata/storage/immutable.py 46
3009+        filelen = self._sharefile.stat()
3010         self._sharefile = None
3011hunk ./src/allmydata/storage/immutable.py 48
3012+
3013         self.closed = True
3014         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3015 
3016hunk ./src/allmydata/storage/immutable.py 52
3017-        filelen = self._sharefile.stat()
3018         self.ss.bucket_writer_closed(self, filelen)
3019         self.ss.add_latency("close", time.time() - start)
3020         self.ss.count("close")
3021hunk ./src/allmydata/storage/server.py 220
3022 
3023         for shnum in (sharenums - alreadygot):
3024             if (not limited) or (remaining_space >= max_space_per_bucket):
3025-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3026-                                                     max_space_per_bucket, lease_info, canary)
3027+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3028                 bucketwriters[shnum] = bw
3029                 self._active_writers[bw] = 1
3030                 if limited:
3031hunk ./src/allmydata/test/test_backends.py 20
3032 # The following share file contents was generated with
3033 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3034 # with share data == 'a'.
3035-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3036-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3037+renew_secret  = 'x'*32
3038+cancel_secret = 'y'*32
3039 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3040 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3041 
3042hunk ./src/allmydata/test/test_backends.py 27
3043 testnodeid = 'testnodeidxxxxxxxxxx'
3044 tempdir = 'teststoredir'
3045-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3046-sharefname = os.path.join(sharedirname, '0')
3047+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3048+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3049+shareincomingname = os.path.join(sharedirincomingname, '0')
3050+sharefname = os.path.join(sharedirfinalname, '0')
3051+
3052 expiration_policy = {'enabled' : False,
3053                      'mode' : 'age',
3054                      'override_lease_duration' : None,
3055hunk ./src/allmydata/test/test_backends.py 123
3056         mockopen.side_effect = call_open
3057         testbackend = DASCore(tempdir, expiration_policy)
3058         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3059-       
3060+
3061+    @mock.patch('allmydata.util.fileutil.rename')
3062+    @mock.patch('allmydata.util.fileutil.make_dirs')
3063+    @mock.patch('os.path.exists')
3064+    @mock.patch('os.stat')
3065     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3066     @mock.patch('allmydata.util.fileutil.get_available_space')
3067     @mock.patch('time.time')
3068hunk ./src/allmydata/test/test_backends.py 136
3069     @mock.patch('os.listdir')
3070     @mock.patch('os.path.isdir')
3071     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3072-                             mockget_available_space, mockget_shares):
3073+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3074+                             mockmake_dirs, mockrename):
3075         """ Write a new share. """
3076 
3077         def call_listdir(dirname):
3078hunk ./src/allmydata/test/test_backends.py 141
3079-            self.failUnlessReallyEqual(dirname, sharedirname)
3080+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3081             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3082 
3083         mocklistdir.side_effect = call_listdir
3084hunk ./src/allmydata/test/test_backends.py 148
3085 
3086         def call_isdir(dirname):
3087             #XXX Should there be any other tests here?
3088-            self.failUnlessReallyEqual(dirname, sharedirname)
3089+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3090             return True
3091 
3092         mockisdir.side_effect = call_isdir
3093hunk ./src/allmydata/test/test_backends.py 154
3094 
3095         def call_mkdir(dirname, permissions):
3096-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3097+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3098                 self.Fail
3099             else:
3100                 return True
3101hunk ./src/allmydata/test/test_backends.py 208
3102                 return self.pos
3103 
3104 
3105-        sharefile = MockFile()
3106+        fobj = MockFile()
3107         def call_open(fname, mode):
3108             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3109hunk ./src/allmydata/test/test_backends.py 211
3110-            return sharefile
3111+            return fobj
3112 
3113         mockopen.side_effect = call_open
3114 
3115hunk ./src/allmydata/test/test_backends.py 215
3116+        def call_make_dirs(dname):
3117+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3118+           
3119+        mockmake_dirs.side_effect = call_make_dirs
3120+
3121+        def call_rename(src, dst):
3122+           self.failUnlessReallyEqual(src, shareincomingname)
3123+           self.failUnlessReallyEqual(dst, sharefname)
3124+           
3125+        mockrename.side_effect = call_rename
3126+
3127+        def call_exists(fname):
3128+            self.failUnlessReallyEqual(fname, sharefname)
3129+
3130+        mockexists.side_effect = call_exists
3131+
3132         # Now begin the test.
3133         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3134         bs[0].remote_write(0, 'a')
3135hunk ./src/allmydata/test/test_backends.py 234
3136-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3137+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3138+        spaceint = self.s.allocated_size()
3139+        self.failUnlessReallyEqual(spaceint, 1)
3140+
3141+        bs[0].remote_close()
3142 
3143         # What happens when there's not enough space for the client's request?
3144hunk ./src/allmydata/test/test_backends.py 241
3145-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3146+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3147 
3148         # Now test the allocated_size method.
3149hunk ./src/allmydata/test/test_backends.py 244
3150-        spaceint = self.s.allocated_size()
3151-        self.failUnlessReallyEqual(spaceint, 1)
3152+        #self.failIf(mockexists.called, mockexists.call_args_list)
3153+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3154+        #self.failIf(mockrename.called, mockrename.call_args_list)
3155+        #self.failIf(mockstat.called, mockstat.call_args_list)
3156 
3157     @mock.patch('os.path.exists')
3158     @mock.patch('os.path.getsize')
3159}
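The MockFile helper used by test_write_share is an in-memory stand-in for the share file, implementing just enough of the file protocol (write/seek/tell/read) for ShareFile's header and data writes. A self-contained sketch of such an object; the exact buffer semantics are my reconstruction, not copied from the patch:

```python
class MockFile(object):
    """Minimal in-memory file object supporting positioned writes."""
    def __init__(self):
        self.buffer = b''
        self.pos = 0

    def seek(self, pos):
        self.pos = pos

    def tell(self):
        return self.pos

    def write(self, data):
        # Zero-fill any gap, then splice data in at the current position.
        if self.pos > len(self.buffer):
            self.buffer += b'\x00' * (self.pos - len(self.buffer))
        self.buffer = (self.buffer[:self.pos] + data +
                       self.buffer[self.pos + len(data):])
        self.pos += len(data)

    def read(self, n=None):
        end = len(self.buffer) if n is None else self.pos + n
        data = self.buffer[self.pos:end]
        self.pos = end
        return data

    def close(self):
        pass

f = MockFile()
f.write(b'\x00\x00\x00\x01')  # first header word
f.seek(12)
f.write(b'a')                 # share data after a 12-byte header
```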
3160[checkpoint12 testing correct behavior with regard to incoming and final
3161wilcoxjg@gmail.com**20110710191915
3162 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3163] {
3164hunk ./src/allmydata/storage/backends/das/core.py 74
3165         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3166         self.lease_checker.setServiceParent(self)
3167 
3168+    def get_incoming(self, storageindex):
3169+        return set((1,))
3170+
3171     def get_available_space(self):
3172         if self.readonly:
3173             return 0
3174hunk ./src/allmydata/storage/server.py 77
3175         """Return a dict, indexed by category, that contains a dict of
3176         latency numbers for each category. If there are sufficient samples
3177         for unambiguous interpretation, each dict will contain the
3178-        following keys: mean, 01_0_percentile, 10_0_percentile,
3179+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3180         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3181         99_0_percentile, 99_9_percentile.  If there are insufficient
3182         samples for a given percentile to be interpreted unambiguously
3183hunk ./src/allmydata/storage/server.py 120
3184 
3185     def get_stats(self):
3186         # remember: RIStatsProvider requires that our return dict
3187-        # contains numeric values.
3188+        # contains numeric, or None values.
3189         stats = { 'storage_server.allocated': self.allocated_size(), }
3190         stats['storage_server.reserved_space'] = self.reserved_space
3191         for category,ld in self.get_latencies().items():
3192hunk ./src/allmydata/storage/server.py 185
3193         start = time.time()
3194         self.count("allocate")
3195         alreadygot = set()
3196+        incoming = set()
3197         bucketwriters = {} # k: shnum, v: BucketWriter
3198 
3199         si_s = si_b2a(storage_index)
3200hunk ./src/allmydata/storage/server.py 219
3201             alreadygot.add(share.shnum)
3202             share.add_or_renew_lease(lease_info)
3203 
3204-        for shnum in (sharenums - alreadygot):
3205+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3206+        incoming = self.backend.get_incoming(storageindex)
3207+
3208+        for shnum in ((sharenums - alreadygot) - incoming):
3209             if (not limited) or (remaining_space >= max_space_per_bucket):
3210                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3211                 bucketwriters[shnum] = bw
3212hunk ./src/allmydata/storage/server.py 229
3213                 self._active_writers[bw] = 1
3214                 if limited:
3215                     remaining_space -= max_space_per_bucket
3216-
3217-        #XXX We SHOULD DOCUMENT LATER.
3218+            else:
3219+                # Bummer: not enough space to accept this share.
3220+                pass
3221 
3222         self.add_latency("allocate", time.time() - start)
3223         return alreadygot, bucketwriters
3224hunk ./src/allmydata/storage/server.py 323
3225         self.add_latency("get", time.time() - start)
3226         return bucketreaders
3227 
3228-    def get_leases(self, storage_index):
3229+    def remote_get_incoming(self, storageindex):
3230+        incoming_share_set = self.backend.get_incoming(storageindex)
3231+        return incoming_share_set
3232+
3233+    def get_leases(self, storageindex):
3234         """Provide an iterator that yields all of the leases attached to this
3235         bucket. Each lease is returned as a LeaseInfo instance.
3236 
3237hunk ./src/allmydata/storage/server.py 337
3238         # since all shares get the same lease data, we just grab the leases
3239         # from the first share
3240         try:
3241-            shnum, filename = self._get_shares(storage_index).next()
3242+            shnum, filename = self._get_shares(storageindex).next()
3243             sf = ShareFile(filename)
3244             return sf.get_leases()
3245         except StopIteration:
3246hunk ./src/allmydata/test/test_backends.py 182
3247 
3248         share = MockShare()
3249         def call_get_shares(storageindex):
3250-            return [share]
3251+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3252+            return []#share]
3253 
3254         mockget_shares.side_effect = call_get_shares
3255 
3256hunk ./src/allmydata/test/test_backends.py 222
3257         mockmake_dirs.side_effect = call_make_dirs
3258 
3259         def call_rename(src, dst):
3260-           self.failUnlessReallyEqual(src, shareincomingname)
3261-           self.failUnlessReallyEqual(dst, sharefname)
3262+            self.failUnlessReallyEqual(src, shareincomingname)
3263+            self.failUnlessReallyEqual(dst, sharefname)
3264             
3265         mockrename.side_effect = call_rename
3266 
3267hunk ./src/allmydata/test/test_backends.py 233
3268         mockexists.side_effect = call_exists
3269 
3270         # Now begin the test.
3271+
3272+        # XXX (0) ???  Fail unless something is not properly set-up?
3273         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3274hunk ./src/allmydata/test/test_backends.py 236
3275+
3276+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3277+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3278+
3279+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3280+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3281+        # with the same si, until BucketWriter.remote_close() has been called.
3282+        # self.failIf(bsa)
3283+
3284+        # XXX (3) Inspect final and fail unless there's nothing there.
3285         bs[0].remote_write(0, 'a')
3286hunk ./src/allmydata/test/test_backends.py 247
3287+        # XXX (4a) Inspect final and fail unless share 0 is there.
3288+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3289         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3290         spaceint = self.s.allocated_size()
3291         self.failUnlessReallyEqual(spaceint, 1)
3292hunk ./src/allmydata/test/test_backends.py 253
3293 
3294+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3295         bs[0].remote_close()
3296 
3297         # What happens when there's not enough space for the client's request?
3298hunk ./src/allmydata/test/test_backends.py 260
3299         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3300 
3301         # Now test the allocated_size method.
3302-        #self.failIf(mockexists.called, mockexists.call_args_list)
3303+        # self.failIf(mockexists.called, mockexists.call_args_list)
3304         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3305         #self.failIf(mockrename.called, mockrename.call_args_list)
3306         #self.failIf(mockstat.called, mockstat.call_args_list)
3307}
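checkpoint12's allocation loop decides which shares get BucketWriters with pure set arithmetic: requested shares, minus those already stored, minus those currently incoming. A stand-alone sketch of that selection (function and variable names are illustrative):

```python
def shares_to_allocate(sharenums, alreadygot, incoming):
    """Requested shares we still need writers for: not already on
    disk, and not mid-upload on another connection."""
    return (set(sharenums) - set(alreadygot)) - set(incoming)

# A client asks for shares 0-3; share 1 is already final and share 2
# is mid-upload in incoming/, so only 0 and 3 get BucketWriters.
needed = shares_to_allocate({0, 1, 2, 3}, alreadygot={1}, incoming={2})
```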
3308[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3309wilcoxjg@gmail.com**20110710195139
3310 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3311] {
3312hunk ./src/allmydata/storage/server.py 220
3313             share.add_or_renew_lease(lease_info)
3314 
3315         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3316-        incoming = self.backend.get_incoming(storageindex)
3317+        incoming = self.backend.get_incoming(storage_index)
3318 
3319         for shnum in ((sharenums - alreadygot) - incoming):
3320             if (not limited) or (remaining_space >= max_space_per_bucket):
3321hunk ./src/allmydata/storage/server.py 323
3322         self.add_latency("get", time.time() - start)
3323         return bucketreaders
3324 
3325-    def remote_get_incoming(self, storageindex):
3326-        incoming_share_set = self.backend.get_incoming(storageindex)
3327+    def remote_get_incoming(self, storage_index):
3328+        incoming_share_set = self.backend.get_incoming(storage_index)
3329         return incoming_share_set
3330 
3331hunk ./src/allmydata/storage/server.py 327
3332-    def get_leases(self, storageindex):
3333+    def get_leases(self, storage_index):
3334         """Provide an iterator that yields all of the leases attached to this
3335         bucket. Each lease is returned as a LeaseInfo instance.
3336 
3337hunk ./src/allmydata/storage/server.py 337
3338         # since all shares get the same lease data, we just grab the leases
3339         # from the first share
3340         try:
3341-            shnum, filename = self._get_shares(storageindex).next()
3342+            shnum, filename = self._get_shares(storage_index).next()
3343             sf = ShareFile(filename)
3344             return sf.get_leases()
3345         except StopIteration:
3346replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3347}
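The darcs `replace` primitive on the last line of this patch is a token-wise rename: as I read it, every stand-alone run of the token characters [A-Za-z_0-9] equal to storage_index becomes storageindex, while longer identifiers that merely contain it are left alone. An equivalent regex sketch using lookaround boundaries built from the same character class:

```python
import re

# Emulate "replace FILE [A-Za-z_0-9] storage_index storageindex":
# match only where the token is not flanked by other token characters.
token = re.compile(r"(?<![A-Za-z_0-9])storage_index(?![A-Za-z_0-9])")

line = "def get_leases(self, storage_index): si = storage_index_to_dir(storage_index)"
renamed = token.sub("storageindex", line)
# storage_index_to_dir survives intact; bare storage_index is renamed.
```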
3348[adding comments to clarify what I'm about to do.
3349wilcoxjg@gmail.com**20110710220623
3350 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3351] {
3352hunk ./src/allmydata/storage/backends/das/core.py 8
3353 
3354 import os, re, weakref, struct, time
3355 
3356-from foolscap.api import Referenceable
3357+#from foolscap.api import Referenceable
3358 from twisted.application import service
3359 
3360 from zope.interface import implements
3361hunk ./src/allmydata/storage/backends/das/core.py 12
3362-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3363+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3364 from allmydata.util import fileutil, idlib, log, time_format
3365 import allmydata # for __full_version__
3366 
3367hunk ./src/allmydata/storage/server.py 219
3368             alreadygot.add(share.shnum)
3369             share.add_or_renew_lease(lease_info)
3370 
3371-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3372+        # fill incoming with all shares that are incoming use a set operation
3373+        # since there's no need to operate on individual pieces
3374         incoming = self.backend.get_incoming(storageindex)
3375 
3376         for shnum in ((sharenums - alreadygot) - incoming):
3377hunk ./src/allmydata/test/test_backends.py 245
3378         # with the same si, until BucketWriter.remote_close() has been called.
3379         # self.failIf(bsa)
3380 
3381-        # XXX (3) Inspect final and fail unless there's nothing there.
3382         bs[0].remote_write(0, 'a')
3383hunk ./src/allmydata/test/test_backends.py 246
3384-        # XXX (4a) Inspect final and fail unless share 0 is there.
3385-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3386         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3387         spaceint = self.s.allocated_size()
3388         self.failUnlessReallyEqual(spaceint, 1)
3389hunk ./src/allmydata/test/test_backends.py 250
3390 
3391-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3392+        # XXX (3) Inspect final and fail unless there's nothing there.
3393         bs[0].remote_close()
3394hunk ./src/allmydata/test/test_backends.py 252
3395+        # XXX (4a) Inspect final and fail unless share 0 is there.
3396+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3397 
3398         # What happens when there's not enough space for the client's request?
3399         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3400}
3401[branching back, no longer attempting to mock inside TestServerFSBackend
3402wilcoxjg@gmail.com**20110711190849
3403 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3404] {
3405hunk ./src/allmydata/storage/backends/das/core.py 75
3406         self.lease_checker.setServiceParent(self)
3407 
3408     def get_incoming(self, storageindex):
3409-        return set((1,))
3410-
3411-    def get_available_space(self):
3412-        if self.readonly:
3413-            return 0
3414-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3415+        """Return the set of incoming shnums."""
3416+        return set(os.listdir(self.incomingdir))
3417 
3418     def get_shares(self, storage_index):
3419         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3420hunk ./src/allmydata/storage/backends/das/core.py 90
3421             # Commonly caused by there being no shares at all.
3422             pass
3423         
3424+    def get_available_space(self):
3425+        if self.readonly:
3426+            return 0
3427+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3428+
3429     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3430         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3431         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3432hunk ./src/allmydata/test/test_backends.py 27
3433 
3434 testnodeid = 'testnodeidxxxxxxxxxx'
3435 tempdir = 'teststoredir'
3436-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3437-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3438+basedir = os.path.join(tempdir, 'shares')
3439+baseincdir = os.path.join(basedir, 'incoming')
3440+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3441+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3442 shareincomingname = os.path.join(sharedirincomingname, '0')
3443 sharefname = os.path.join(sharedirfinalname, '0')
3444 
3445hunk ./src/allmydata/test/test_backends.py 142
3446                              mockmake_dirs, mockrename):
3447         """ Write a new share. """
3448 
3449-        def call_listdir(dirname):
3450-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3451-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3452-
3453-        mocklistdir.side_effect = call_listdir
3454-
3455-        def call_isdir(dirname):
3456-            #XXX Should there be any other tests here?
3457-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3458-            return True
3459-
3460-        mockisdir.side_effect = call_isdir
3461-
3462-        def call_mkdir(dirname, permissions):
3463-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3464-                self.Fail
3465-            else:
3466-                return True
3467-
3468-        mockmkdir.side_effect = call_mkdir
3469-
3470-        def call_get_available_space(storedir, reserved_space):
3471-            self.failUnlessReallyEqual(storedir, tempdir)
3472-            return 1
3473-
3474-        mockget_available_space.side_effect = call_get_available_space
3475-
3476-        mocktime.return_value = 0
3477         class MockShare:
3478             def __init__(self):
3479                 self.shnum = 1
3480hunk ./src/allmydata/test/test_backends.py 152
3481                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3482                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3483                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3484-               
3485 
3486         share = MockShare()
3487hunk ./src/allmydata/test/test_backends.py 154
3488-        def call_get_shares(storageindex):
3489-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3490-            return []#share]
3491-
3492-        mockget_shares.side_effect = call_get_shares
3493 
3494         class MockFile:
3495             def __init__(self):
3496hunk ./src/allmydata/test/test_backends.py 176
3497             def tell(self):
3498                 return self.pos
3499 
3500-
3501         fobj = MockFile()
3502hunk ./src/allmydata/test/test_backends.py 177
3503+
3504+        directories = {}
3505+        def call_listdir(dirname):
3506+            if dirname not in directories:
3507+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3508+            else:
3509+                return directories[dirname].get_contents()
3510+
3511+        mocklistdir.side_effect = call_listdir
3512+
3513+        class MockDir:
3514+            def __init__(self, dirname):
3515+                self.name = dirname
3516+                self.contents = []
3517+   
3518+            def get_contents(self):
3519+                return self.contents
3520+
3521+        def call_isdir(dirname):
3522+            #XXX Should there be any other tests here?
3523+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3524+            return True
3525+
3526+        mockisdir.side_effect = call_isdir
3527+
3528+        def call_mkdir(dirname, permissions):
3529+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3530+                self.Fail
3531+            if dirname in directories:
3532+                raise OSError(17, "File exists: '%s'" % dirname)
3533+                self.Fail
3534+            elif dirname not in directories:
3535+                directories[dirname] = MockDir(dirname)
3536+                return True
3537+
3538+        mockmkdir.side_effect = call_mkdir
3539+
3540+        def call_get_available_space(storedir, reserved_space):
3541+            self.failUnlessReallyEqual(storedir, tempdir)
3542+            return 1
3543+
3544+        mockget_available_space.side_effect = call_get_available_space
3545+
3546+        mocktime.return_value = 0
3547+        def call_get_shares(storageindex):
3548+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3549+            return []#share]
3550+
3551+        mockget_shares.side_effect = call_get_shares
3552+
3553         def call_open(fname, mode):
3554             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3555             return fobj
3556}
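(Editorial note, not part of the patch: the hunk above grows a dict-backed fake filesystem — `directories`, `MockDir`, and `call_listdir`/`call_mkdir` wired in as `side_effect`s. A minimal, self-contained sketch of that technique in modern Python; names like `make_fake_fs` are illustrative, not from the patch.)

```python
import os
from unittest import mock

def make_fake_fs():
    """Return (directories, fake_listdir, fake_mkdir): side_effect callables
    backed by a plain dict, so os.listdir/os.mkdir can be patched without
    touching the real filesystem -- the approach the patch above takes."""
    directories = {}  # dirname -> list of entry names

    def fake_listdir(dirname):
        if dirname not in directories:
            raise OSError(2, "No such file or directory: '%s'" % dirname)
        return directories[dirname]

    def fake_mkdir(dirname, mode=0o777):
        if dirname in directories:
            raise OSError(17, "File exists: '%s'" % dirname)
        directories[dirname] = []

    return directories, fake_listdir, fake_mkdir

# usage inside a test method:
directories, fake_listdir, fake_mkdir = make_fake_fs()
with mock.patch('os.mkdir', side_effect=fake_mkdir), \
     mock.patch('os.listdir', side_effect=fake_listdir):
    os.mkdir('teststoredir/shares/or')
    assert os.listdir('teststoredir/shares/or') == []
```

Because the mocks share one `directories` dict, a later `listdir` observes what an earlier `mkdir` created, which is exactly what the write-share test needs.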
[checkpoint12 TestServerFSBackend no longer mocks filesystem
wilcoxjg@gmail.com**20110711193357
 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
] {
hunk ./src/allmydata/storage/backends/das/core.py 23
      create_mutable_sharefile
 from allmydata.storage.immutable import BucketWriter, BucketReader
 from allmydata.storage.crawler import FSBucketCountingCrawler
+from allmydata.util.hashutil import constant_time_compare
 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/das/core.py 28
 
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
 # $SHARENUM matches this regex:
 NUM_RE=re.compile("^[0-9]+$")
 
hunk ./src/allmydata/test/test_backends.py 126
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
 
-    @mock.patch('allmydata.util.fileutil.rename')
-    @mock.patch('allmydata.util.fileutil.make_dirs')
-    @mock.patch('os.path.exists')
-    @mock.patch('os.stat')
-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
-    @mock.patch('allmydata.util.fileutil.get_available_space')
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 127
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
-                             mockmake_dirs, mockrename):
+    def test_write_share(self, mocktime):
         """ Write a new share. """
 
         class MockShare:
hunk ./src/allmydata/test/test_backends.py 143
 
         share = MockShare()
 
-        class MockFile:
-            def __init__(self):
-                self.buffer = ''
-                self.pos = 0
-            def write(self, instring):
-                begin = self.pos
-                padlen = begin - len(self.buffer)
-                if padlen > 0:
-                    self.buffer += '\x00' * padlen
-                end = self.pos + len(instring)
-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
-                self.pos = end
-            def close(self):
-                pass
-            def seek(self, pos):
-                self.pos = pos
-            def read(self, numberbytes):
-                return self.buffer[self.pos:self.pos+numberbytes]
-            def tell(self):
-                return self.pos
-
-        fobj = MockFile()
-
-        directories = {}
-        def call_listdir(dirname):
-            if dirname not in directories:
-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
-            else:
-                return directories[dirname].get_contents()
-
-        mocklistdir.side_effect = call_listdir
-
-        class MockDir:
-            def __init__(self, dirname):
-                self.name = dirname
-                self.contents = []
-   
-            def get_contents(self):
-                return self.contents
-
-        def call_isdir(dirname):
-            #XXX Should there be any other tests here?
-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
-            return True
-
-        mockisdir.side_effect = call_isdir
-
-        def call_mkdir(dirname, permissions):
-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
-                self.Fail
-            if dirname in directories:
-                raise OSError(17, "File exists: '%s'" % dirname)
-                self.Fail
-            elif dirname not in directories:
-                directories[dirname] = MockDir(dirname)
-                return True
-
-        mockmkdir.side_effect = call_mkdir
-
-        def call_get_available_space(storedir, reserved_space):
-            self.failUnlessReallyEqual(storedir, tempdir)
-            return 1
-
-        mockget_available_space.side_effect = call_get_available_space
-
-        mocktime.return_value = 0
-        def call_get_shares(storageindex):
-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
-            return []#share]
-
-        mockget_shares.side_effect = call_get_shares
-
-        def call_open(fname, mode):
-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
-            return fobj
-
-        mockopen.side_effect = call_open
-
-        def call_make_dirs(dname):
-            self.failUnlessReallyEqual(dname, sharedirfinalname)
-           
-        mockmake_dirs.side_effect = call_make_dirs
-
-        def call_rename(src, dst):
-            self.failUnlessReallyEqual(src, shareincomingname)
-            self.failUnlessReallyEqual(dst, sharefname)
-           
-        mockrename.side_effect = call_rename
-
-        def call_exists(fname):
-            self.failUnlessReallyEqual(fname, sharefname)
-
-        mockexists.side_effect = call_exists
-
         # Now begin the test.
 
         # XXX (0) ???  Fail unless something is not properly set-up?
}
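(Editorial note, not part of the patch: the layout comment added above says `$START` is the first 10 bits of `$STORAGEINDEX`, i.e. its first 2 base-32 characters. A sketch of that path derivation, assuming a lowercase unpadded base-32 encoding approximating `allmydata.storage.common.si_b2a`/`storage_index_to_dir`; it reproduces the `'or'/'orsxg5dtorxxeylhmvpws3temv4a'` constants used in test_backends.py.)

```python
import base64, os

def si_b2a(storageindex):
    # lowercase, unpadded base-32 -- approximates Tahoe's si_b2a
    return base64.b32encode(storageindex).decode('ascii').lower().rstrip('=')

def storage_index_to_dir(storageindex):
    """$START/$STORAGEINDEX, where $START is the first 2 base-32 chars
    (the first 10 bits) of the encoded storage index."""
    sia = si_b2a(storageindex)
    return os.path.join(sia[:2], sia)

# final home of share 0 for the test's storage index:
finalhome = os.path.join('storage', 'shares',
                         storage_index_to_dir(b'teststorage_index'), '0')
```

Two base-32 characters carry 10 bits, so `$START` fans shares out over up to 1024 subdirectories.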
[JACP
wilcoxjg@gmail.com**20110711194407
 Ignore-this: b54745de777c4bb58d68d708f010bbb
] {
hunk ./src/allmydata/storage/backends/das/core.py 86
 
     def get_incoming(self, storageindex):
         """Return the set of incoming shnums."""
-        return set(os.listdir(self.incomingdir))
+        try:
+            incominglist = os.listdir(self.incomingdir)
+            print "incominglist: ", incominglist
+            return set(incominglist)
+        except OSError:
+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
+            pass
 
     def get_shares(self, storage_index):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/server.py 17
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
 
-# storage/
-# storage/shares/incoming
-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
-# storage/shares/$START/$STORAGEINDEX
-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
-
-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
-# base-32 chars).
-
-
 class StorageServer(service.MultiService, Referenceable):
     implements(RIStorageServer, IStatsProducer)
     name = 'storage'
}
[testing get incoming
wilcoxjg@gmail.com**20110711210224
 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
] {
hunk ./src/allmydata/storage/backends/das/core.py 87
     def get_incoming(self, storageindex):
         """Return the set of incoming shnums."""
         try:
-            incominglist = os.listdir(self.incomingdir)
+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
+            incominglist = os.listdir(incomingsharesdir)
             print "incominglist: ", incominglist
             return set(incominglist)
         except OSError:
hunk ./src/allmydata/storage/backends/das/core.py 92
-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
-            pass
-
+            # XXX I'd like to make this more specific. If there are no shares at all.
+            return set()
+           
     def get_shares(self, storage_index):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
hunk ./src/allmydata/test/test_backends.py 149
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
hunk ./src/allmydata/test/test_backends.py 152
-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
         # with the same si, until BucketWriter.remote_close() has been called.
         # self.failIf(bsa)
}
3783[ImmutableShareFile does not know its StorageIndex
3784wilcoxjg@gmail.com**20110711211424
3785 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3786] {
3787hunk ./src/allmydata/storage/backends/das/core.py 112
3788             return 0
3789         return fileutil.get_available_space(self.storedir, self.reserved_space)
3790 
3791-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3792-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3793+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3794+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3795+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3796+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3797         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3798         return bw
3799 
3800hunk ./src/allmydata/storage/backends/das/core.py 155
3801     LEASE_SIZE = struct.calcsize(">L32s32sL")
3802     sharetype = "immutable"
3803 
3804-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3805+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3806         """ If max_size is not None then I won't allow more than
3807         max_size to be written to me. If create=True then max_size
3808         must not be None. """
3809}
3810[get_incoming correctly reports the 0 share after it has arrived
3811wilcoxjg@gmail.com**20110712025157
3812 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3813] {
3814hunk ./src/allmydata/storage/backends/das/core.py 1
3815+import os, re, weakref, struct, time, stat
3816+
3817 from allmydata.interfaces import IStorageBackend
3818 from allmydata.storage.backends.base import Backend
3819 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3820hunk ./src/allmydata/storage/backends/das/core.py 8
3821 from allmydata.util.assertutil import precondition
3822 
3823-import os, re, weakref, struct, time
3824-
3825 #from foolscap.api import Referenceable
3826 from twisted.application import service
3827 
3828hunk ./src/allmydata/storage/backends/das/core.py 89
3829         try:
3830             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3831             incominglist = os.listdir(incomingsharesdir)
3832-            print "incominglist: ", incominglist
3833-            return set(incominglist)
3834+            incomingshnums = [int(x) for x in incominglist]
3835+            return set(incomingshnums)
3836         except OSError:
3837             # XXX I'd like to make this more specific. If there are no shares at all.
3838             return set()
3839hunk ./src/allmydata/storage/backends/das/core.py 113
3840         return fileutil.get_available_space(self.storedir, self.reserved_space)
3841 
3842     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3843-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3844-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3845-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3846+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3847+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3848+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3849         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3850         return bw
3851 
3852hunk ./src/allmydata/storage/backends/das/core.py 160
3853         max_size to be written to me. If create=True then max_size
3854         must not be None. """
3855         precondition((max_size is not None) or (not create), max_size, create)
3856-        self.shnum = shnum
3857-        self.storage_index = storageindex
3858-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3859         self._max_size = max_size
3860hunk ./src/allmydata/storage/backends/das/core.py 161
3861-        self.incomingdir = os.path.join(sharedir, 'incoming')
3862-        si_dir = storage_index_to_dir(storageindex)
3863-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3864-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3865-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3866+        self.incominghome = incominghome
3867+        self.finalhome = finalhome
3868         if create:
3869             # touch the file, so later callers will see that we're working on
3870             # it. Also construct the metadata.
3871hunk ./src/allmydata/storage/backends/das/core.py 166
3872-            assert not os.path.exists(self.fname)
3873-            fileutil.make_dirs(os.path.dirname(self.fname))
3874-            f = open(self.fname, 'wb')
3875+            assert not os.path.exists(self.finalhome)
3876+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3877+            f = open(self.incominghome, 'wb')
3878             # The second field -- the four-byte share data length -- is no
3879             # longer used as of Tahoe v1.3.0, but we continue to write it in
3880             # there in case someone downgrades a storage server from >=
3881hunk ./src/allmydata/storage/backends/das/core.py 183
3882             self._lease_offset = max_size + 0x0c
3883             self._num_leases = 0
3884         else:
3885-            f = open(self.fname, 'rb')
3886-            filesize = os.path.getsize(self.fname)
3887+            f = open(self.finalhome, 'rb')
3888+            filesize = os.path.getsize(self.finalhome)
3889             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3890             f.close()
3891             if version != 1:
3892hunk ./src/allmydata/storage/backends/das/core.py 189
3893                 msg = "sharefile %s had version %d but we wanted 1" % \
3894-                      (self.fname, version)
3895+                      (self.finalhome, version)
3896                 raise UnknownImmutableContainerVersionError(msg)
3897             self._num_leases = num_leases
3898             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3899hunk ./src/allmydata/storage/backends/das/core.py 225
3900         pass
3901         
3902     def stat(self):
3903-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3904+        return os.stat(self.finalhome)[stat.ST_SIZE]
3905+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3906 
3907     def get_shnum(self):
3908         return self.shnum
3909hunk ./src/allmydata/storage/backends/das/core.py 232
3910 
3911     def unlink(self):
3912-        os.unlink(self.fname)
3913+        os.unlink(self.finalhome)
3914 
3915     def read_share_data(self, offset, length):
3916         precondition(offset >= 0)
3917hunk ./src/allmydata/storage/backends/das/core.py 239
3918         # Reads beyond the end of the data are truncated. Reads that start
3919         # beyond the end of the data return an empty string.
3920         seekpos = self._data_offset+offset
3921-        fsize = os.path.getsize(self.fname)
3922+        fsize = os.path.getsize(self.finalhome)
3923         actuallength = max(0, min(length, fsize-seekpos))
3924         if actuallength == 0:
3925             return ""
3926hunk ./src/allmydata/storage/backends/das/core.py 243
3927-        f = open(self.fname, 'rb')
3928+        f = open(self.finalhome, 'rb')
3929         f.seek(seekpos)
3930         return f.read(actuallength)
3931 
3932hunk ./src/allmydata/storage/backends/das/core.py 252
3933         precondition(offset >= 0, offset)
3934         if self._max_size is not None and offset+length > self._max_size:
3935             raise DataTooLargeError(self._max_size, offset, length)
3936-        f = open(self.fname, 'rb+')
3937+        f = open(self.incominghome, 'rb+')
3938         real_offset = self._data_offset+offset
3939         f.seek(real_offset)
3940         assert f.tell() == real_offset
3941hunk ./src/allmydata/storage/backends/das/core.py 279
3942 
3943     def get_leases(self):
3944         """Yields a LeaseInfo instance for all leases."""
3945-        f = open(self.fname, 'rb')
3946+        f = open(self.finalhome, 'rb')
3947         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3948         f.seek(self._lease_offset)
3949         for i in range(num_leases):
3950hunk ./src/allmydata/storage/backends/das/core.py 288
3951                 yield LeaseInfo().from_immutable_data(data)
3952 
3953     def add_lease(self, lease_info):
3954-        f = open(self.fname, 'rb+')
3955+        f = open(self.incominghome, 'rb+')
3956         num_leases = self._read_num_leases(f)
3957         self._write_lease_record(f, num_leases, lease_info)
3958         self._write_num_leases(f, num_leases+1)
3959hunk ./src/allmydata/storage/backends/das/core.py 301
3960                 if new_expire_time > lease.expiration_time:
3961                     # yes
3962                     lease.expiration_time = new_expire_time
3963-                    f = open(self.fname, 'rb+')
3964+                    f = open(self.finalhome, 'rb+')
3965                     self._write_lease_record(f, i, lease)
3966                     f.close()
3967                 return
3968hunk ./src/allmydata/storage/backends/das/core.py 336
3969             # the same order as they were added, so that if we crash while
3970             # doing this, we won't lose any non-cancelled leases.
3971             leases = [l for l in leases if l] # remove the cancelled leases
3972-            f = open(self.fname, 'rb+')
3973+            f = open(self.finalhome, 'rb+')
3974             for i,lease in enumerate(leases):
3975                 self._write_lease_record(f, i, lease)
3976             self._write_num_leases(f, len(leases))
3977hunk ./src/allmydata/storage/backends/das/core.py 344
3978             f.close()
3979         space_freed = self.LEASE_SIZE * num_leases_removed
3980         if not len(leases):
3981-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
3982+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
3983             self.unlink()
3984         return space_freed
3985hunk ./src/allmydata/test/test_backends.py 129
3986     @mock.patch('time.time')
3987     def test_write_share(self, mocktime):
3988         """ Write a new share. """
3989-
3990-        class MockShare:
3991-            def __init__(self):
3992-                self.shnum = 1
3993-               
3994-            def add_or_renew_lease(elf, lease_info):
3995-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3996-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3997-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3998-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3999-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4000-
4001-        share = MockShare()
4002-
4003         # Now begin the test.
4004 
4005         # XXX (0) ???  Fail unless something is not properly set-up?
4006hunk ./src/allmydata/test/test_backends.py 143
4007         # self.failIf(bsa)
4008 
4009         bs[0].remote_write(0, 'a')
4010-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4011+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4012         spaceint = self.s.allocated_size()
4013         self.failUnlessReallyEqual(spaceint, 1)
4014 
4015hunk ./src/allmydata/test/test_backends.py 161
4016         #self.failIf(mockrename.called, mockrename.call_args_list)
4017         #self.failIf(mockstat.called, mockstat.call_args_list)
4018 
4019+    def test_handle_incoming(self):
4020+        incomingset = self.s.backend.get_incoming('teststorage_index')
4021+        self.failUnlessReallyEqual(incomingset, set())
4022+
4023+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4024+       
4025+        incomingset = self.s.backend.get_incoming('teststorage_index')
4026+        self.failUnlessReallyEqual(incomingset, set((0,)))
4027+
4028+        bs[0].remote_close()
4029+        self.failUnlessReallyEqual(incomingset, set())
4030+
4031     @mock.patch('os.path.exists')
4032     @mock.patch('os.path.getsize')
4033     @mock.patch('__builtin__.open')
4034hunk ./src/allmydata/test/test_backends.py 223
4035         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4036 
4037 
4038-
4039 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4040     @mock.patch('time.time')
4041     @mock.patch('os.mkdir')
4042hunk ./src/allmydata/test/test_backends.py 271
4043         DASCore('teststoredir', expiration_policy)
4044 
4045         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4046+
4047}
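The tests in the patch above lean on `mock.patch` to intercept every filesystem call, so the code under test can never touch a real disk. A minimal, self-contained sketch of that technique, using Python 3's `unittest.mock` (the patch itself targets Python 2's standalone `mock` library and `__builtin__.open`); `count_shares` here is an illustrative stand-in, not Tahoe-LAFS code:

```python
import os
from unittest import mock

def count_shares(sharedir):
    """Illustrative stand-in for backend code that lists a share directory."""
    return len(os.listdir(sharedir))

# Patch os.listdir so the call never reaches the real filesystem; the
# directory named below does not need to exist.
with mock.patch('os.listdir', return_value=['0', '1']) as mocklistdir:
    n = count_shares('/no/such/dir')

assert n == 2
mocklistdir.assert_called_once_with('/no/such/dir')
```

The assertions on the mock afterwards are the same idea as the `self.failIf(mocklistdir.called, ...)` checks in the patch: they verify not just the result but exactly which filesystem calls the code made.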
4048[jacp14
4049wilcoxjg@gmail.com**20110712061211
4050 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4051] {
4052hunk ./src/allmydata/storage/backends/das/core.py 95
4053             # XXX I'd like to make this more specific. If there are no shares at all.
4054             return set()
4055             
4056-    def get_shares(self, storage_index):
4057+    def get_shares(self, storageindex):
4058         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
4059hunk ./src/allmydata/storage/backends/das/core.py 97
4060-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4061+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4062         try:
4063             for f in os.listdir(finalstoragedir):
4064                 if NUM_RE.match(f):
4065hunk ./src/allmydata/storage/backends/das/core.py 102
4066                     filename = os.path.join(finalstoragedir, f)
4067-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4068+                    yield ImmutableShare(filename, storageindex, f)
4069         except OSError:
4070             # Commonly caused by there being no shares at all.
4071             pass
4072hunk ./src/allmydata/storage/backends/das/core.py 115
4073     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4074         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4075         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4076-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4077+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4078         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4079         return bw
4080 
4081hunk ./src/allmydata/storage/backends/das/core.py 155
4082     LEASE_SIZE = struct.calcsize(">L32s32sL")
4083     sharetype = "immutable"
4084 
4085-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4086+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4087         """ If max_size is not None then I won't allow more than
4088         max_size to be written to me. If create=True then max_size
4089         must not be None. """
4090hunk ./src/allmydata/storage/backends/das/core.py 160
4091         precondition((max_size is not None) or (not create), max_size, create)
4092+        self.storageindex = storageindex
4093         self._max_size = max_size
4094         self.incominghome = incominghome
4095         self.finalhome = finalhome
4096hunk ./src/allmydata/storage/backends/das/core.py 164
4097+        self.shnum = shnum
4098         if create:
4099             # touch the file, so later callers will see that we're working on
4100             # it. Also construct the metadata.
4101hunk ./src/allmydata/storage/backends/das/core.py 212
4102             # their children to know when they should do the rmdir. This
4103             # approach is simpler, but relies on os.rmdir refusing to delete
4104             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4105+            #print "os.path.dirname(self.incominghome): "
4106+            #print os.path.dirname(self.incominghome)
4107             os.rmdir(os.path.dirname(self.incominghome))
4108             # we also delete the grandparent (prefix) directory, .../ab ,
4109             # again to avoid leaving directories lying around. This might
4110hunk ./src/allmydata/storage/immutable.py 93
4111     def __init__(self, ss, share):
4112         self.ss = ss
4113         self._share_file = share
4114-        self.storage_index = share.storage_index
4115+        self.storageindex = share.storageindex
4116         self.shnum = share.shnum
4117 
4118     def __repr__(self):
4119hunk ./src/allmydata/storage/immutable.py 98
4120         return "<%s %s %s>" % (self.__class__.__name__,
4121-                               base32.b2a_l(self.storage_index[:8], 60),
4122+                               base32.b2a_l(self.storageindex[:8], 60),
4123                                self.shnum)
4124 
4125     def remote_read(self, offset, length):
4126hunk ./src/allmydata/storage/immutable.py 110
4127 
4128     def remote_advise_corrupt_share(self, reason):
4129         return self.ss.remote_advise_corrupt_share("immutable",
4130-                                                   self.storage_index,
4131+                                                   self.storageindex,
4132                                                    self.shnum,
4133                                                    reason)
4134hunk ./src/allmydata/test/test_backends.py 20
4135 # The following share file contents was generated with
4136 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4137 # with share data == 'a'.
4138-renew_secret  = 'x'*32
4139-cancel_secret = 'y'*32
4140-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4141-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4142+shareversionnumber = '\x00\x00\x00\x01'
4143+sharedatalength = '\x00\x00\x00\x01'
4144+numberofleases = '\x00\x00\x00\x01'
4145+shareinputdata = 'a'
4146+ownernumber = '\x00\x00\x00\x00'
4147+renewsecret  = 'x'*32
4148+cancelsecret = 'y'*32
4149+expirationtime = '\x00(\xde\x80'
4150+nextlease = ''
4151+containerdata = shareversionnumber + sharedatalength + numberofleases
4152+client_data = shareinputdata + ownernumber + renewsecret + \
4153+    cancelsecret + expirationtime + nextlease
4154+share_data = containerdata + client_data
4155+
4156 
4157 testnodeid = 'testnodeidxxxxxxxxxx'
4158 tempdir = 'teststoredir'
4159hunk ./src/allmydata/test/test_backends.py 52
4160 
4161 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4162     def setUp(self):
4163-        self.s = StorageServer(testnodeid, backend=NullCore())
4164+        self.ss = StorageServer(testnodeid, backend=NullCore())
4165 
4166     @mock.patch('os.mkdir')
4167     @mock.patch('__builtin__.open')
4168hunk ./src/allmydata/test/test_backends.py 62
4169         """ Write a new share. """
4170 
4171         # Now begin the test.
4172-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4173+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4174         bs[0].remote_write(0, 'a')
4175         self.failIf(mockisdir.called)
4176         self.failIf(mocklistdir.called)
4177hunk ./src/allmydata/test/test_backends.py 133
4178                 _assert(False, "The tester code doesn't recognize this case.") 
4179 
4180         mockopen.side_effect = call_open
4181-        testbackend = DASCore(tempdir, expiration_policy)
4182-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4183+        self.backend = DASCore(tempdir, expiration_policy)
4184+        self.ss = StorageServer(testnodeid, self.backend)
4185+        self.ssinf = StorageServer(testnodeid, self.backend)
4186 
4187     @mock.patch('time.time')
4188     def test_write_share(self, mocktime):
4189hunk ./src/allmydata/test/test_backends.py 142
4190         """ Write a new share. """
4191         # Now begin the test.
4192 
4193-        # XXX (0) ???  Fail unless something is not properly set-up?
4194-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4195+        mocktime.return_value = 0
4196+        # Inspect incoming and fail unless it's empty.
4197+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4198+        self.failUnlessReallyEqual(incomingset, set())
4199+       
4200+        # Among other things, populate incoming with the sharenum: 0.
4201+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4202 
4203hunk ./src/allmydata/test/test_backends.py 150
4204-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4205-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4206-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4207+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4208+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4209+       
4210+        # Attempt to create a second share writer with the same share.
4211+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4212 
4213hunk ./src/allmydata/test/test_backends.py 156
4214-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4215+        # Show that no sharewriter results from a remote_allocate_buckets
4216         # with the same si, until BucketWriter.remote_close() has been called.
4217hunk ./src/allmydata/test/test_backends.py 158
4218-        # self.failIf(bsa)
4219+        self.failIf(bsa)
4220 
4221hunk ./src/allmydata/test/test_backends.py 160
4222+        # Write 'a' to shnum 0. Only tested together with close and read.
4223         bs[0].remote_write(0, 'a')
4224hunk ./src/allmydata/test/test_backends.py 162
4225-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4226-        spaceint = self.s.allocated_size()
4227+
4228+        # Test allocated size.
4229+        spaceint = self.ss.allocated_size()
4230         self.failUnlessReallyEqual(spaceint, 1)
4231 
4232         # XXX (3) Inspect final and fail unless there's nothing there.
4233hunk ./src/allmydata/test/test_backends.py 168
4234+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4235         bs[0].remote_close()
4236         # XXX (4a) Inspect final and fail unless share 0 is there.
4237hunk ./src/allmydata/test/test_backends.py 171
4238+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4239+        #contents = sharesinfinal[0].read_share_data(0,999)
4240+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4241         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4242 
4243         # What happens when there's not enough space for the client's request?
4244hunk ./src/allmydata/test/test_backends.py 177
4245-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4246+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4247 
4248         # Now test the allocated_size method.
4249         # self.failIf(mockexists.called, mockexists.call_args_list)
4250hunk ./src/allmydata/test/test_backends.py 185
4251         #self.failIf(mockrename.called, mockrename.call_args_list)
4252         #self.failIf(mockstat.called, mockstat.call_args_list)
4253 
4254-    def test_handle_incoming(self):
4255-        incomingset = self.s.backend.get_incoming('teststorage_index')
4256-        self.failUnlessReallyEqual(incomingset, set())
4257-
4258-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4259-       
4260-        incomingset = self.s.backend.get_incoming('teststorage_index')
4261-        self.failUnlessReallyEqual(incomingset, set((0,)))
4262-
4263-        bs[0].remote_close()
4264-        self.failUnlessReallyEqual(incomingset, set())
4265-
4266     @mock.patch('os.path.exists')
4267     @mock.patch('os.path.getsize')
4268     @mock.patch('__builtin__.open')
4269hunk ./src/allmydata/test/test_backends.py 208
4270             self.failUnless('r' in mode, mode)
4271             self.failUnless('b' in mode, mode)
4272 
4273-            return StringIO(share_file_data)
4274+            return StringIO(share_data)
4275         mockopen.side_effect = call_open
4276 
4277hunk ./src/allmydata/test/test_backends.py 211
4278-        datalen = len(share_file_data)
4279+        datalen = len(share_data)
4280         def call_getsize(fname):
4281             self.failUnlessReallyEqual(fname, sharefname)
4282             return datalen
4283hunk ./src/allmydata/test/test_backends.py 223
4284         mockexists.side_effect = call_exists
4285 
4286         # Now begin the test.
4287-        bs = self.s.remote_get_buckets('teststorage_index')
4288+        bs = self.ss.remote_get_buckets('teststorage_index')
4289 
4290         self.failUnlessEqual(len(bs), 1)
4291hunk ./src/allmydata/test/test_backends.py 226
4292-        b = bs[0]
4293+        b = bs['0']
4294         # These should match by definition; the next two cases cover behaviors that are not (completely) unambiguous.
4295hunk ./src/allmydata/test/test_backends.py 228
4296-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4297+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4298         # If you try to read past the end you get as much data as is there.
4299hunk ./src/allmydata/test/test_backends.py 230
4300-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4301+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4302         # If you start reading past the end of the file you get the empty string.
4303         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4304 
4305}
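The constants introduced in the hunk above (shareversionnumber, sharedatalength, numberofleases, ownernumber, renewsecret, cancelsecret, expirationtime) spell out the v1 immutable share container byte-by-byte. A cross-check sketch, assuming only what those constants and the `LEASE_SIZE = struct.calcsize(">L32s32sL")` definition already state:

```python
import struct

# Container header: version, share-data length, number of leases,
# each a big-endian uint32 -- matching the three '\x00\x00\x00\x01'
# constants in the test.
container_header = struct.pack('>LLL', 1, 1, 1)
assert container_header == b'\x00\x00\x00\x01' * 3

# Lease record: owner number (4) + renew secret (32) + cancel secret (32)
# + expiration time (4), the same format string ImmutableShare uses.
LEASE_SIZE = struct.calcsize('>L32s32sL')
assert LEASE_SIZE == 72

# client_data = one byte of share data ('a') plus one lease record,
# which is why the tests read exactly 73 bytes.
assert len(b'a') + LEASE_SIZE == 73
```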
4306[jacp14 or so
4307wilcoxjg@gmail.com**20110713060346
4308 Ignore-this: 7026810f60879d65b525d450e43ff87a
4309] {
4310hunk ./src/allmydata/storage/backends/das/core.py 102
4311             for f in os.listdir(finalstoragedir):
4312                 if NUM_RE.match(f):
4313                     filename = os.path.join(finalstoragedir, f)
4314-                    yield ImmutableShare(filename, storageindex, f)
4315+                    yield ImmutableShare(filename, storageindex, int(f))
4316         except OSError:
4317             # Commonly caused by there being no shares at all.
4318             pass
4319hunk ./src/allmydata/storage/backends/null/core.py 25
4320     def set_storage_server(self, ss):
4321         self.ss = ss
4322 
4323+    def get_incoming(self, storageindex):
4324+        return set()
4325+
4326 class ImmutableShare:
4327     sharetype = "immutable"
4328 
4329hunk ./src/allmydata/storage/immutable.py 19
4330 
4331     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4332         self.ss = ss
4333-        self._max_size = max_size # don't allow the client to write more than this
4334+        self._max_size = max_size # don't allow the client to write more than this
4335+        print self.ss._active_writers.keys()
4336         self._canary = canary
4337         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4338         self.closed = False
4339hunk ./src/allmydata/test/test_backends.py 135
4340         mockopen.side_effect = call_open
4341         self.backend = DASCore(tempdir, expiration_policy)
4342         self.ss = StorageServer(testnodeid, self.backend)
4343-        self.ssinf = StorageServer(testnodeid, self.backend)
4344+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4345+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4346 
4347     @mock.patch('time.time')
4348     def test_write_share(self, mocktime):
4349hunk ./src/allmydata/test/test_backends.py 161
4350         # with the same si, until BucketWriter.remote_close() has been called.
4351         self.failIf(bsa)
4352 
4353-        # Write 'a' to shnum 0. Only tested together with close and read.
4354-        bs[0].remote_write(0, 'a')
4355-
4356         # Test allocated size.
4357         spaceint = self.ss.allocated_size()
4358         self.failUnlessReallyEqual(spaceint, 1)
4359hunk ./src/allmydata/test/test_backends.py 165
4360 
4361-        # XXX (3) Inspect final and fail unless there's nothing there.
4362+        # Write 'a' to shnum 0. Only tested together with close and read.
4363+        bs[0].remote_write(0, 'a')
4364+       
4365+        # Preclose: Inspect final, failUnless nothing there.
4366         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4367         bs[0].remote_close()
4368hunk ./src/allmydata/test/test_backends.py 171
4369-        # XXX (4a) Inspect final and fail unless share 0 is there.
4370-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4371-        #contents = sharesinfinal[0].read_share_data(0,999)
4372-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4373-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4374 
4375hunk ./src/allmydata/test/test_backends.py 172
4376-        # What happens when there's not enough space for the client's request?
4377-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4378+        # Postclose: (Omnibus) failUnless written data is in final.
4379+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4380+        contents = sharesinfinal[0].read_share_data(0,73)
4381+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4382 
4383hunk ./src/allmydata/test/test_backends.py 177
4384-        # Now test the allocated_size method.
4385-        # self.failIf(mockexists.called, mockexists.call_args_list)
4386-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4387-        #self.failIf(mockrename.called, mockrename.call_args_list)
4388-        #self.failIf(mockstat.called, mockstat.call_args_list)
4389+        # Cover interior of for share in get_shares loop.
4390+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4391+       
4392+    @mock.patch('time.time')
4393+    @mock.patch('allmydata.util.fileutil.get_available_space')
4394+    def test_out_of_space(self, mockget_available_space, mocktime):
4395+        mocktime.return_value = 0
4396+       
4397+        def call_get_available_space(dir, reserve):
4398+            return 0
4399+
4400+        mockget_available_space.side_effect = call_get_available_space
4401+       
4402+       
4403+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4404 
4405     @mock.patch('os.path.exists')
4406     @mock.patch('os.path.getsize')
4407hunk ./src/allmydata/test/test_backends.py 234
4408         bs = self.ss.remote_get_buckets('teststorage_index')
4409 
4410         self.failUnlessEqual(len(bs), 1)
4411-        b = bs['0']
4412+        b = bs[0]
4413         # These should match by definition; the next two cases cover behaviors that are not (completely) unambiguous.
4414         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4415         # If you try to read past the end you get as much data as is there.
4416}
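The lifecycle that `test_write_and_read_share` and `test_handle_incoming` above exercise is: allocate puts the sharenum into the incoming set, and `remote_close()` moves it from incoming to final. A hypothetical in-memory model of just that bookkeeping (class and method names are illustrative, not the Tahoe-LAFS backend API):

```python
class TinyBackend:
    """Hypothetical model of the incoming/final share lifecycle."""
    def __init__(self):
        self._incoming = {}  # storageindex -> set of sharenums being uploaded
        self._final = {}     # storageindex -> {sharenum: data}

    def get_incoming(self, storageindex):
        return set(self._incoming.get(storageindex, set()))

    def allocate(self, storageindex, sharenum):
        # remote_allocate_buckets: the sharenum appears in incoming.
        self._incoming.setdefault(storageindex, set()).add(sharenum)

    def close(self, storageindex, sharenum, data):
        # remote_close: the share leaves incoming and lands in final.
        self._incoming[storageindex].discard(sharenum)
        self._final.setdefault(storageindex, {})[sharenum] = data

b = TinyBackend()
assert b.get_incoming('teststorage_index') == set()
b.allocate('teststorage_index', 0)
assert b.get_incoming('teststorage_index') == {0}
b.close('teststorage_index', 0, b'a')
assert b.get_incoming('teststorage_index') == set()
```

This is the behavior the real tests check end-to-end against the DAS backend: incoming is empty before allocation, contains the sharenum between allocation and close, and is empty again (with the data readable from final) afterwards.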
4417[temporary work-in-progress patch to be unrecorded
4418zooko@zooko.com**20110714003008
4419 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4420 tidy up a few tests, work done in pair-programming with Zancas
4421] {
4422hunk ./src/allmydata/storage/backends/das/core.py 65
4423         self._clean_incomplete()
4424 
4425     def _clean_incomplete(self):
4426-        fileutil.rm_dir(self.incomingdir)
4427+        fileutil.rmtree(self.incomingdir)
4428         fileutil.make_dirs(self.incomingdir)
4429 
4430     def _setup_corruption_advisory(self):
4431hunk ./src/allmydata/storage/immutable.py 1
4432-import os, stat, struct, time
4433+import os, time
4434 
4435 from foolscap.api import Referenceable
4436 
4437hunk ./src/allmydata/storage/server.py 1
4438-import os, re, weakref, struct, time
4439+import os, weakref, struct, time
4440 
4441 from foolscap.api import Referenceable
4442 from twisted.application import service
4443hunk ./src/allmydata/storage/server.py 7
4444 
4445 from zope.interface import implements
4446-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4447+from allmydata.interfaces import RIStorageServer, IStatsProducer
4448 from allmydata.util import fileutil, idlib, log, time_format
4449 import allmydata # for __full_version__
4450 
4451hunk ./src/allmydata/storage/server.py 313
4452         self.add_latency("get", time.time() - start)
4453         return bucketreaders
4454 
4455-    def remote_get_incoming(self, storageindex):
4456-        incoming_share_set = self.backend.get_incoming(storageindex)
4457-        return incoming_share_set
4458-
4459     def get_leases(self, storageindex):
4460         """Provide an iterator that yields all of the leases attached to this
4461         bucket. Each lease is returned as a LeaseInfo instance.
4462hunk ./src/allmydata/test/test_backends.py 3
4463 from twisted.trial import unittest
4464 
4465+from twisted.python.filepath import FilePath
4466+
4467 from StringIO import StringIO
4468 
4469 from allmydata.test.common_util import ReallyEqualMixin
4470hunk ./src/allmydata/test/test_backends.py 38
4471 
4472 
4473 testnodeid = 'testnodeidxxxxxxxxxx'
4474-tempdir = 'teststoredir'
4475-basedir = os.path.join(tempdir, 'shares')
4476+storedir = 'teststoredir'
4477+storedirfp = FilePath(storedir)
4478+basedir = os.path.join(storedir, 'shares')
4479 baseincdir = os.path.join(basedir, 'incoming')
4480 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4481 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4482hunk ./src/allmydata/test/test_backends.py 53
4483                      'cutoff_date' : None,
4484                      'sharetypes' : None}
4485 
4486-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4487+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4488+    """ NullBackend is just for testing and executable documentation, so
4489+    this test is actually a test of StorageServer in which we're using
4490+    NullBackend as helper code for the test, rather than a test of
4491+    NullBackend. """
4492     def setUp(self):
4493         self.ss = StorageServer(testnodeid, backend=NullCore())
4494 
4495hunk ./src/allmydata/test/test_backends.py 62
4496     @mock.patch('os.mkdir')
4497+
4498     @mock.patch('__builtin__.open')
4499     @mock.patch('os.listdir')
4500     @mock.patch('os.path.isdir')
4501hunk ./src/allmydata/test/test_backends.py 69
4502     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4503         """ Write a new share. """
4504 
4505-        # Now begin the test.
4506         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4507         bs[0].remote_write(0, 'a')
4508         self.failIf(mockisdir.called)
4509hunk ./src/allmydata/test/test_backends.py 83
4510     @mock.patch('os.listdir')
4511     @mock.patch('os.path.isdir')
4512     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4513-        """ This tests whether a server instance can be constructed
4514-        with a filesystem backend. To pass the test, it has to use the
4515-        filesystem in only the prescribed ways. """
4516+        """ This tests whether a server instance can be constructed with a
4517+        filesystem backend. To pass the test, it mustn't use the filesystem
4518+        outside of its configured storedir. """
4519 
4520         def call_open(fname, mode):
4521hunk ./src/allmydata/test/test_backends.py 88
4522-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4523-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4524-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4525-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4526-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4527+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4528+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4529+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4530+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4531+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4532                 return StringIO()
4533             else:
4534hunk ./src/allmydata/test/test_backends.py 95
4535-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4536+                fnamefp = FilePath(fname)
4537+                self.failUnless(storedirfp in fnamefp.parents(),
4538+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4539         mockopen.side_effect = call_open
4540 
4541         def call_isdir(fname):
4542hunk ./src/allmydata/test/test_backends.py 101
4543-            if fname == os.path.join(tempdir,'shares'):
4544+            if fname == os.path.join(storedir, 'shares'):
4545                 return True
4546hunk ./src/allmydata/test/test_backends.py 103
4547-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4548+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4549                 return True
4550             else:
4551                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4552hunk ./src/allmydata/test/test_backends.py 109
4553         mockisdir.side_effect = call_isdir
4554 
4555+        mocklistdir.return_value = []
4556+
4557         def call_mkdir(fname, mode):
4558hunk ./src/allmydata/test/test_backends.py 112
4559-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4560             self.failUnlessEqual(0777, mode)
4561hunk ./src/allmydata/test/test_backends.py 113
4562-            if fname == tempdir:
4563-                return None
4564-            elif fname == os.path.join(tempdir,'shares'):
4565-                return None
4566-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4567-                return None
4568-            else:
4569-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4570+            self.failUnlessIn(fname,
4571+                              [storedir,
4572+                               os.path.join(storedir, 'shares'),
4573+                               os.path.join(storedir, 'shares', 'incoming')],
4574+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4575         mockmkdir.side_effect = call_mkdir
4576 
4577         # Now begin the test.
4578hunk ./src/allmydata/test/test_backends.py 121
4579-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4580+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4581 
4582         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4583 
4584hunk ./src/allmydata/test/test_backends.py 126
4585 
4586-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4587+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4588+    """ This tests both the StorageServer xyz """
4589     @mock.patch('__builtin__.open')
4590     def setUp(self, mockopen):
4591         def call_open(fname, mode):
4592hunk ./src/allmydata/test/test_backends.py 131
4593-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4594-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4595-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4596-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4597-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4598+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4599+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4600+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4601+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4602+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4603                 return StringIO()
4604             else:
4605                 _assert(False, "The tester code doesn't recognize this case.") 
4606hunk ./src/allmydata/test/test_backends.py 141
4607 
4608         mockopen.side_effect = call_open
4609-        self.backend = DASCore(tempdir, expiration_policy)
4610+        self.backend = DASCore(storedir, expiration_policy)
4611         self.ss = StorageServer(testnodeid, self.backend)
4612hunk ./src/allmydata/test/test_backends.py 143
4613-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4614+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4615         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4616 
4617     @mock.patch('time.time')
4618hunk ./src/allmydata/test/test_backends.py 147
4619-    def test_write_share(self, mocktime):
4620-        """ Write a new share. """
4621-        # Now begin the test.
4622+    def test_write_and_read_share(self, mocktime):
4623+        """
4624+        Write a new share, read it, and test the server's (and FS backend's)
4625+        handling of simultaneous and successive attempts to write the same
4626+        share.
4627+        """
4628 
4629         mocktime.return_value = 0
4630         # Inspect incoming and fail unless it's empty.
4631hunk ./src/allmydata/test/test_backends.py 159
4632         incomingset = self.ss.backend.get_incoming('teststorage_index')
4633         self.failUnlessReallyEqual(incomingset, set())
4634         
4635-        # Among other things, populate incoming with the sharenum: 0.
4636+        # Populate incoming with the sharenum: 0.
4637         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4638 
4639         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4640hunk ./src/allmydata/test/test_backends.py 163
4641-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4642+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4643         
4644hunk ./src/allmydata/test/test_backends.py 165
4645-        # Attempt to create a second share writer with the same share.
4646+        # Attempt to create a second share writer with the same sharenum.
4647         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4648 
4649         # Show that no sharewriter results from a remote_allocate_buckets
4650hunk ./src/allmydata/test/test_backends.py 169
4651-        # with the same si, until BucketWriter.remote_close() has been called.
4652+        # with the same si and sharenum, until BucketWriter.remote_close()
4653+        # has been called.
4654         self.failIf(bsa)
4655 
4656         # Test allocated size.
4657hunk ./src/allmydata/test/test_backends.py 187
4658         # Postclose: (Omnibus) failUnless written data is in final.
4659         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4660         contents = sharesinfinal[0].read_share_data(0,73)
4661-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4662+        self.failUnlessReallyEqual(contents, client_data)
4663 
4664hunk ./src/allmydata/test/test_backends.py 189
4665-        # Cover interior of for share in get_shares loop.
4666-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4667+        # Exercise the case that the share we're asking to allocate is
4668+        # already (completely) uploaded.
4669+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4670         
4671     @mock.patch('time.time')
4672     @mock.patch('allmydata.util.fileutil.get_available_space')
4673hunk ./src/allmydata/test/test_backends.py 210
4674     @mock.patch('os.path.getsize')
4675     @mock.patch('__builtin__.open')
4676     @mock.patch('os.listdir')
4677-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4678+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4679         """ This tests whether the code correctly finds and reads
4680         shares written out by old (Tahoe-LAFS <= v1.8.2)
4681         servers. There is a similar test in test_download, but that one
4682hunk ./src/allmydata/test/test_backends.py 219
4683         StorageServer object. """
4684 
4685         def call_listdir(dirname):
4686-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4687+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4688             return ['0']
4689 
4690         mocklistdir.side_effect = call_listdir
4691hunk ./src/allmydata/test/test_backends.py 226
4692 
4693         def call_open(fname, mode):
4694             self.failUnlessReallyEqual(fname, sharefname)
4695-            self.failUnless('r' in mode, mode)
4696+            self.failUnlessEqual(mode[0], 'r', mode)
4697             self.failUnless('b' in mode, mode)
4698 
4699             return StringIO(share_data)
4700hunk ./src/allmydata/test/test_backends.py 268
4701         filesystem in only the prescribed ways. """
4702 
4703         def call_open(fname, mode):
4704-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4705-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4706-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4707-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4708-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4709+            if fname == os.path.join(storedir,'bucket_counter.state'):
4710+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4711+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4712+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4713+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4714                 return StringIO()
4715             else:
4716                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4717hunk ./src/allmydata/test/test_backends.py 279
4718         mockopen.side_effect = call_open
4719 
4720         def call_isdir(fname):
4721-            if fname == os.path.join(tempdir,'shares'):
4722+            if fname == os.path.join(storedir,'shares'):
4723                 return True
4724hunk ./src/allmydata/test/test_backends.py 281
4725-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4726+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4727                 return True
4728             else:
4729                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4730hunk ./src/allmydata/test/test_backends.py 290
4731         def call_mkdir(fname, mode):
4732             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4733             self.failUnlessEqual(0777, mode)
4734-            if fname == tempdir:
4735+            if fname == storedir:
4736                 return None
4737hunk ./src/allmydata/test/test_backends.py 292
4738-            elif fname == os.path.join(tempdir,'shares'):
4739+            elif fname == os.path.join(storedir,'shares'):
4740                 return None
4741hunk ./src/allmydata/test/test_backends.py 294
4742-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4743+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4744                 return None
4745             else:
4746                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
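The test_backends hunks above all follow the same pattern: every filesystem call the server could make (`os.listdir`, `open`, `os.path.isdir`, `os.mkdir`, ...) is replaced with a mock that both supplies canned data and fails the test on any unexpected access. A minimal, self-contained sketch of that style (`read_first_share` is a toy stand-in for the server-side share lookup, not the actual Tahoe-LAFS code; the share-directory layout shown is illustrative):

```python
from io import BytesIO
from unittest import mock
import os

def read_first_share(storedir, storage_index):
    """Toy stand-in: list the share directory, open share 0, read its bytes."""
    sharedir = os.path.join(storedir, 'shares', storage_index)
    sharenum = sorted(os.listdir(sharedir))[0]
    with open(os.path.join(sharedir, sharenum), 'rb') as f:
        return f.read()

# Mock every filesystem call, so no real files are touched and the
# storedir path need not exist at all.
with mock.patch('os.listdir', return_value=['0']), \
     mock.patch('builtins.open', return_value=BytesIO(b'share contents')):
    data = read_first_share('/no/such/storedir', 'orsxg5dt')
```

In the real tests the mocks use `side_effect` functions that also assert on the exact path and mode arguments, which is how "uses the filesystem in only the prescribed ways" is enforced.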
4747hunk ./src/allmydata/util/fileutil.py 5
4748 Futz with files like a pro.
4749 """
4750 
4751-import sys, exceptions, os, stat, tempfile, time, binascii
4752+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4753 
4754 from twisted.python import log
4755 
4756hunk ./src/allmydata/util/fileutil.py 186
4757             raise tx
4758         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4759 
4760-def rm_dir(dirname):
4761+def rmtree(dirname):
4762     """
4763     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4764     already gone, do nothing and return without raising an exception.  If this
4765hunk ./src/allmydata/util/fileutil.py 205
4766             else:
4767                 remove(fullname)
4768         os.rmdir(dirname)
4769-    except Exception, le:
4770-        # Ignore "No such file or directory"
4771-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4772+    except EnvironmentError, le:
4773+        # Ignore "No such file or directory"; collect any other exception.
4774+        if le.args[0] != errno.ENOENT:
4775             excs.append(le)
4776hunk ./src/allmydata/util/fileutil.py 209
4777+    except Exception, le:
4778+        excs.append(le)
4779 
4780     # Okay, now we've recursively removed everything, ignoring any "No
4781     # such file or directory" errors, and collecting any other errors.
4782hunk ./src/allmydata/util/fileutil.py 222
4783             raise OSError, "Failed to remove dir for unknown reason."
4784         raise OSError, excs
4785 
4786+def rm_dir(dirname):
4787+    # Renamed to be like shutil.rmtree and unlike rmdir.
4788+    return rmtree(dirname)
4789 
4790 def remove_if_possible(f):
4791     try:
4792}
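The fileutil hunk above makes `rmtree` ignore only "No such file or directory" (errno.ENOENT) while still collecting every other failure. The idempotence that buys can be illustrated outside the patch; a minimal sketch with hypothetical names, using modern `except ... as` syntax rather than the patch's Python 2 form, and deferring the recursion to `shutil.rmtree` instead of the patch's hand-rolled walk:

```python
import errno, os, shutil, tempfile

def rmtree_quiet(dirname):
    """Idempotent recursive delete: a missing directory is not an error."""
    try:
        shutil.rmtree(dirname)
    except EnvironmentError as e:
        # Ignore "No such file or directory" (errno.ENOENT); re-raise anything else.
        if e.errno != errno.ENOENT:
            raise

# Demonstrate idempotence: the second call finds nothing and does nothing.
workdir = tempfile.mkdtemp()
open(os.path.join(workdir, 'share'), 'w').close()
rmtree_quiet(workdir)
rmtree_quiet(workdir)          # no exception on the already-gone directory
removed = not os.path.exists(workdir)
```

Comparing against `errno.ENOENT` rather than the literal `2` is the point of the patch's new `import errno`: the numeric value is platform-defined, the symbolic name is not.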
4793
4794Context:
4795
4796[docs: add missing link in NEWS.rst
4797zooko@zooko.com**20110712153307
4798 Ignore-this: be7b7eb81c03700b739daa1027d72b35
4799]
4800[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
4801zooko@zooko.com**20110712153229
4802 Ignore-this: 723c4f9e2211027c79d711715d972c5
4803 Also remove a couple of vestigial references to figleaf, which is long gone.
4804 fixes #1409 (remove contrib/fuse)
4805]
4806[add Protovis.js-based download-status timeline visualization
4807Brian Warner <warner@lothar.com>**20110629222606
4808 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
4809 
4810 provide status overlap info on the webapi t=json output, add decode/decrypt
4811 rate tooltips, add zoomin/zoomout buttons
4812]
4813[add more download-status data, fix tests
4814Brian Warner <warner@lothar.com>**20110629222555
4815 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
4816]
4817[prepare for viz: improve DownloadStatus events
4818Brian Warner <warner@lothar.com>**20110629222542
4819 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
4820 
4821 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
4822]
4823[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
4824zooko@zooko.com**20110629185711
4825 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
4826]
4827[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
4828david-sarah@jacaranda.org**20110130235809
4829 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
4830]
4831[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
4832david-sarah@jacaranda.org**20110626054124
4833 Ignore-this: abb864427a1b91bd10d5132b4589fd90
4834]
4835[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
4836david-sarah@jacaranda.org**20110623205528
4837 Ignore-this: c63e23146c39195de52fb17c7c49b2da
4838]
4839[Rename test_package_initialization.py to (much shorter) test_import.py .
4840Brian Warner <warner@lothar.com>**20110611190234
4841 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
4842 
4843 The former name was making my 'ls' listings hard to read, by forcing them
4844 down to just two columns.
4845]
4846[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
4847zooko@zooko.com**20110611163741
4848 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
4849 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
4850 fixes #1412
4851]
4852[wui: right-align the size column in the WUI
4853zooko@zooko.com**20110611153758
4854 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
4855 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
4856 fixes #1412
4857]
4858[docs: three minor fixes
4859zooko@zooko.com**20110610121656
4860 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
4861 CREDITS for arc for stats tweak
4862 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
4863 English usage tweak
4864]
4865[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
4866david-sarah@jacaranda.org**20110609223719
4867 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
4868]
4869[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
4870wilcoxjg@gmail.com**20110527120135
4871 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
4872 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
4873 NEWS.rst, stats.py: documentation of change to get_latencies
4874 stats.rst: now documents percentile modification in get_latencies
4875 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
4876 fixes #1392
4877]
4878[corrected "k must never be smaller than N" to "k must never be greater than N"
4879secorp@allmydata.org**20110425010308
4880 Ignore-this: 233129505d6c70860087f22541805eac
4881]
4882[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
4883david-sarah@jacaranda.org**20110517011214
4884 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
4885]
4886[docs: convert NEWS to NEWS.rst and change all references to it.
4887david-sarah@jacaranda.org**20110517010255
4888 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
4889]
4890[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
4891david-sarah@jacaranda.org**20110512140559
4892 Ignore-this: 784548fc5367fac5450df1c46890876d
4893]
4894[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
4895david-sarah@jacaranda.org**20110130164923
4896 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
4897]
4898[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
4899zooko@zooko.com**20110128142006
4900 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
4901 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
4902]
4903[M-x whitespace-cleanup
4904zooko@zooko.com**20110510193653
4905 Ignore-this: dea02f831298c0f65ad096960e7df5c7
4906]
4907[docs: fix typo in running.rst, thanks to arch_o_median
4908zooko@zooko.com**20110510193633
4909 Ignore-this: ca06de166a46abbc61140513918e79e8
4910]
4911[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
4912david-sarah@jacaranda.org**20110204204902
4913 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
4914]
4915[relnotes.txt: forseeable -> foreseeable. refs #1342
4916david-sarah@jacaranda.org**20110204204116
4917 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
4918]
4919[replace remaining .html docs with .rst docs
4920zooko@zooko.com**20110510191650
4921 Ignore-this: d557d960a986d4ac8216d1677d236399
4922 Remove install.html (long since deprecated).
4923 Also replace some obsolete references to install.html with references to quickstart.rst.
4924 Fix some broken internal references within docs/historical/historical_known_issues.txt.
4925 Thanks to Ravi Pinjala and Patrick McDonald.
4926 refs #1227
4927]
4928[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
4929zooko@zooko.com**20110428055232
4930 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
4931]
4932[munin tahoe_files plugin: fix incorrect file count
4933francois@ctrlaltdel.ch**20110428055312
4934 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
4935 fixes #1391
4936]
4937[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
4938david-sarah@jacaranda.org**20110411190738
4939 Ignore-this: 7847d26bc117c328c679f08a7baee519
4940]
4941[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
4942david-sarah@jacaranda.org**20110410155844
4943 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
4944]
4945[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
4946david-sarah@jacaranda.org**20110410155705
4947 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
4948]
4949[remove unused variable detected by pyflakes
4950zooko@zooko.com**20110407172231
4951 Ignore-this: 7344652d5e0720af822070d91f03daf9
4952]
4953[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
4954david-sarah@jacaranda.org**20110401202750
4955 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
4956]
4957[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
4958Brian Warner <warner@lothar.com>**20110325232511
4959 Ignore-this: d5307faa6900f143193bfbe14e0f01a
4960]
4961[control.py: remove all uses of s.get_serverid()
4962warner@lothar.com**20110227011203
4963 Ignore-this: f80a787953bd7fa3d40e828bde00e855
4964]
4965[web: remove some uses of s.get_serverid(), not all
4966warner@lothar.com**20110227011159
4967 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
4968]
4969[immutable/downloader/fetcher.py: remove all get_serverid() calls
4970warner@lothar.com**20110227011156
4971 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
4972]
4973[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
4974warner@lothar.com**20110227011153
4975 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
4976 
4977 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
4978 _shares_from_server dict was being popped incorrectly (using shnum as the
4979 index instead of serverid). I'm still thinking through the consequences of
4980 this bug. It was probably benign and really hard to detect. I think it would
4981 cause us to incorrectly believe that we're pulling too many shares from a
4982 server, and thus prefer a different server rather than asking for a second
4983 share from the first server. The diversity code is intended to spread out the
4984 number of shares simultaneously being requested from each server, but with
4985 this bug, it might be spreading out the total number of shares requested at
4986 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
4987 segment, so the effect doesn't last very long).
4988]
4989[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
4990warner@lothar.com**20110227011150
4991 Ignore-this: d8d56dd8e7b280792b40105e13664554
4992 
4993 test_download.py: create+check MyShare instances better, make sure they share
4994 Server objects, now that finder.py cares
4995]
4996[immutable/downloader/finder.py: reduce use of get_serverid(), one left
4997warner@lothar.com**20110227011146
4998 Ignore-this: 5785be173b491ae8a78faf5142892020
4999]
5000[immutable/offloaded.py: reduce use of get_serverid() a bit more
5001warner@lothar.com**20110227011142
5002 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
5003]
5004[immutable/upload.py: reduce use of get_serverid()
5005warner@lothar.com**20110227011138
5006 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
5007]
5008[immutable/checker.py: remove some uses of s.get_serverid(), not all
5009warner@lothar.com**20110227011134
5010 Ignore-this: e480a37efa9e94e8016d826c492f626e
5011]
5012[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
5013warner@lothar.com**20110227011132
5014 Ignore-this: 6078279ddf42b179996a4b53bee8c421
5015 MockIServer stubs
5016]
5017[upload.py: rearrange _make_trackers a bit, no behavior changes
5018warner@lothar.com**20110227011128
5019 Ignore-this: 296d4819e2af452b107177aef6ebb40f
5020]
5021[happinessutil.py: finally rename merge_peers to merge_servers
5022warner@lothar.com**20110227011124
5023 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
5024]
5025[test_upload.py: factor out FakeServerTracker
5026warner@lothar.com**20110227011120
5027 Ignore-this: 6c182cba90e908221099472cc159325b
5028]
5029[test_upload.py: server-vs-tracker cleanup
5030warner@lothar.com**20110227011115
5031 Ignore-this: 2915133be1a3ba456e8603885437e03
5032]
5033[happinessutil.py: server-vs-tracker cleanup
5034warner@lothar.com**20110227011111
5035 Ignore-this: b856c84033562d7d718cae7cb01085a9
5036]
5037[upload.py: more tracker-vs-server cleanup
5038warner@lothar.com**20110227011107
5039 Ignore-this: bb75ed2afef55e47c085b35def2de315
5040]
5041[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
5042warner@lothar.com**20110227011103
5043 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
5044]
5045[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
5046warner@lothar.com**20110227011100
5047 Ignore-this: 7ea858755cbe5896ac212a925840fe68
5048 
5049 No behavioral changes, just updating variable/method names and log messages.
5050 The effects outside these three files should be minimal: some exception
5051 messages changed (to say "server" instead of "peer"), and some internal class
5052 names were changed. A few things still use "peer" to minimize external
5053 changes, like UploadResults.timings["peer_selection"] and
5054 happinessutil.merge_peers, which can be changed later.
5055]
5056[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
5057warner@lothar.com**20110227011056
5058 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
5059]
5060[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
5061warner@lothar.com**20110227011051
5062 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
5063]
5064[test: increase timeout on a network test because Francois's ARM machine hit that timeout
5065zooko@zooko.com**20110317165909
5066 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
5067 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
5068]
5069[docs/configuration.rst: add a "Frontend Configuration" section
5070Brian Warner <warner@lothar.com>**20110222014323
5071 Ignore-this: 657018aa501fe4f0efef9851628444ca
5072 
5073 this points to docs/frontends/*.rst, which were previously underlinked
5074]
5075[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
5076"Brian Warner <warner@lothar.com>"**20110221061544
5077 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
5078]
5079[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
5080david-sarah@jacaranda.org**20110221015817
5081 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
5082]
5083[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
5084david-sarah@jacaranda.org**20110221020125
5085 Ignore-this: b0744ed58f161bf188e037bad077fc48
5086]
5087[Refactor StorageFarmBroker handling of servers
5088Brian Warner <warner@lothar.com>**20110221015804
5089 Ignore-this: 842144ed92f5717699b8f580eab32a51
5090 
5091 Pass around IServer instance instead of (peerid, rref) tuple. Replace
5092 "descriptor" with "server". Other replacements:
5093 
5094  get_all_servers -> get_connected_servers/get_known_servers
5095  get_servers_for_index -> get_servers_for_psi (now returns IServers)
5096 
5097 This change still needs to be pushed further down: lots of code is now
5098 getting the IServer and then distributing (peerid, rref) internally.
5099 Instead, it ought to distribute the IServer internally and delay
5100 extracting a serverid or rref until the last moment.
5101 
5102 no_network.py was updated to retain parallelism.
5103]
5104[TAG allmydata-tahoe-1.8.2
5105warner@lothar.com**20110131020101]
5106Patch bundle hash:
510765a181a6fb02bdbe757fffe838b288e845ebfa5e