Ticket #999: consistentifysi.darcs.patch

File consistentifysi.darcs.patch, 158.0 KB (added by arch_o_median at 2011-07-10T19:55:45Z)

Renames all storage_index (word tokens) to storageindex in storage/server.py.
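The rename described above operates on whole word tokens (the style of darcs' `replace` primitive), so a longer identifier like `storage_index_to_dir` keeps its name while a bare `storage_index` becomes `storageindex`. A minimal sketch of that token rule in Python, assuming regex word boundaries as the token definition (the `consistentify` helper name is hypothetical):

```python
import re

def consistentify(source):
    # Replace the whole-word token "storage_index" with "storageindex".
    # \b requires a non-word character on each side, so the token inside
    # "storage_index_to_dir" (followed by "_") is left untouched.
    return re.sub(r"\bstorage_index\b", "storageindex", source)
```

For example, `consistentify("storage_index_to_dir(storage_index)")` rewrites only the argument, not the function name.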

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """Handle a report of corruption."""
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
    def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a null backend. The server instance fails the test if it
+        tries to read or write to the file system. """
+
+        # Now begin the test.
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+        # You passed!
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 44
-    def test_create_server(self, mockopen):
-        """ This tests whether a server instance can be constructed. """
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a filesystem backend. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
             if fname == 'testdir/bucket_counter.state':
hunk ./src/allmydata/test/test_backends.py 58
                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
             elif fname == 'testdir/lease_checker.history':
                 return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 63
-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+        self.failIf(mocktime.called)
 
         # You passed!
 
hunk ./src/allmydata/test/test_backends.py 73
-class TestServer(unittest.TestCase, ReallyEqualMixin):
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
741+    def setUp(self):
742+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
743+
744+    @mock.patch('os.mkdir')
745+    @mock.patch('__builtin__.open')
746+    @mock.patch('os.listdir')
747+    @mock.patch('os.path.isdir')
748+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
749+        """ Write a new share. """
750+
751+        # Now begin the test.
752+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
753+        bs[0].remote_write(0, 'a')
754+        self.failIf(mockisdir.called)
755+        self.failIf(mocklistdir.called)
756+        self.failIf(mockopen.called)
757+        self.failIf(mockmkdir.called)
758+
759+    @mock.patch('os.path.exists')
760+    @mock.patch('os.path.getsize')
761+    @mock.patch('__builtin__.open')
762+    @mock.patch('os.listdir')
763+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
764+        """ This tests whether the code correctly finds and reads
765+        shares written out by old (Tahoe-LAFS <= v1.8.2)
766+        servers. There is a similar test in test_download, but that one
767+        is from the perspective of the client and exercises a deeper
768+        stack of code. This one is for exercising just the
769+        StorageServer object. """
770+
771+        # Now begin the test.
772+        bs = self.s.remote_get_buckets('teststorage_index')
773+
774+        self.failUnlessEqual(len(bs), 0)
775+        self.failIf(mocklistdir.called)
776+        self.failIf(mockopen.called)
777+        self.failIf(mockgetsize.called)
778+        self.failIf(mockexists.called)
779+
780+
781+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
782     @mock.patch('__builtin__.open')
783     def setUp(self, mockopen):
784         def call_open(fname, mode):
785hunk ./src/allmydata/test/test_backends.py 126
786                 return StringIO()
787         mockopen.side_effect = call_open
788 
789-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
790-
791+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
792 
793     @mock.patch('time.time')
794     @mock.patch('os.mkdir')
795hunk ./src/allmydata/test/test_backends.py 134
796     @mock.patch('os.listdir')
797     @mock.patch('os.path.isdir')
798     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
799-        """Handle a report of corruption."""
800+        """ Write a new share. """
801 
802         def call_listdir(dirname):
803             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
804hunk ./src/allmydata/test/test_backends.py 173
805         mockopen.side_effect = call_open
806         # Now begin the test.
807         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
808-        print bs
809         bs[0].remote_write(0, 'a')
810         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
811 
812hunk ./src/allmydata/test/test_backends.py 176
813-
814     @mock.patch('os.path.exists')
815     @mock.patch('os.path.getsize')
816     @mock.patch('__builtin__.open')
817hunk ./src/allmydata/test/test_backends.py 218
818 
819         self.failUnlessEqual(len(bs), 1)
820         b = bs[0]
821+        # These should match by definition; the next two cases cover reads whose correct behavior is less obvious.
822         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
823         # If you try to read past the end you get as much data as is there.
824         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
825hunk ./src/allmydata/test/test_backends.py 224
826         # If you start reading past the end of the file you get the empty string.
827         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
828+
829+
830}
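
The tests in the patch above verify that a StorageServer with a null backend never touches the filesystem, by patching `os` and builtin entry points and asserting they were never called. A minimal standalone sketch of that mocking pattern, using the stdlib `unittest.mock` (the `Server` and `NullBackend` classes here are stand-ins for illustration, not the real Tahoe-LAFS classes):

```python
from unittest import mock

class NullBackend:
    """Stand-in backend that stores nothing and never touches disk."""
    def get_shares(self, storage_index):
        return set()

class Server:
    """Stand-in server that delegates all share lookups to its backend."""
    def __init__(self, backend):
        self.backend = backend
    def get_buckets(self, storage_index):
        return list(self.backend.get_shares(storage_index))

with mock.patch('os.listdir') as mocklistdir, \
     mock.patch('os.mkdir') as mockmkdir:
    s = Server(backend=NullBackend())
    buckets = s.get_buckets('teststorageindex')

# The null backend returns no shares and made no filesystem calls.
assert buckets == []
assert not mocklistdir.called
assert not mockmkdir.called
```

The same pattern scales up to the decorator form (`@mock.patch(...)`) used in `test_backends.py`, where each patched name arrives as an extra test-method argument in bottom-up order.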
831[a temp patch used as a snapshot
832wilcoxjg@gmail.com**20110626052732
833 Ignore-this: 95f05e314eaec870afa04c76d979aa44
834] {
835hunk ./docs/configuration.rst 637
836   [storage]
837   enabled = True
838   readonly = True
839-  sizelimit = 10000000000
840 
841 
842   [helper]
843hunk ./docs/garbage-collection.rst 16
844 
845 When a file or directory in the virtual filesystem is no longer referenced,
846 the space that its shares occupied on each storage server can be freed,
847-making room for other shares. Tahoe currently uses a garbage collection
848+making room for other shares. Tahoe uses a garbage collection
849 ("GC") mechanism to implement this space-reclamation process. Each share has
850 one or more "leases", which are managed by clients who want the
851 file/directory to be retained. The storage server accepts each share for a
852hunk ./docs/garbage-collection.rst 34
853 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
854 If lease renewal occurs quickly and with 100% reliability, then any renewal
855 time that is shorter than the lease duration will suffice, but a larger ratio
856-of duration-over-renewal-time will be more robust in the face of occasional
857+of lease duration to renewal time will be more robust in the face of occasional
858 delays or failures.
859 
860 The current recommended values for a small Tahoe grid are to renew the leases
861replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
862hunk ./src/allmydata/client.py 260
863             sharetypes.append("mutable")
864         expiration_sharetypes = tuple(sharetypes)
865 
866+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
867+            xyz
868+        xyz
869         ss = StorageServer(storedir, self.nodeid,
870                            reserved_space=reserved,
871                            discard_storage=discard,
872hunk ./src/allmydata/storage/crawler.py 234
873         f = open(tmpfile, "wb")
874         pickle.dump(self.state, f)
875         f.close()
876-        fileutil.move_into_place(tmpfile, self.statefile)
877+        fileutil.move_into_place(tmpfile, self.statefname)
878 
879     def startService(self):
880         # arrange things to look like we were just sleeping, so
881}
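
The crawler hunk above renames `statefile` to `statefname` in the save path, which writes the pickled state to a temp file and then moves it into place so a crash cannot leave a half-written state file behind. A minimal sketch of that write-then-rename pattern, using `os.replace` as a stand-in for Tahoe's `fileutil.move_into_place`:

```python
import os, pickle, tempfile

def save_state(state, statefname):
    # Write to a sibling temp file first, then atomically replace the
    # target, so readers never observe a partially written state file.
    tmpfile = statefname + ".tmp"
    with open(tmpfile, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmpfile, statefname)  # atomic rename on POSIX

def load_state(statefname):
    with open(statefname, "rb") as f:
        return pickle.load(f)

d = tempfile.mkdtemp()
fname = os.path.join(d, "bucket_counter.state")
save_state({"version": 1, "last-complete-prefix": None}, fname)
assert load_state(fname) == {"version": 1, "last-complete-prefix": None}
assert not os.path.exists(fname + ".tmp")
```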
882[snapshot of progress on backend implementation (not suitable for trunk)
883wilcoxjg@gmail.com**20110626053244
884 Ignore-this: 50c764af791c2b99ada8289546806a0a
885] {
886adddir ./src/allmydata/storage/backends
887adddir ./src/allmydata/storage/backends/das
888move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
889adddir ./src/allmydata/storage/backends/null
890hunk ./src/allmydata/interfaces.py 270
891         store that on disk.
892         """
893 
894+class IStorageBackend(Interface):
895+    """
896+    Objects of this kind live on the server side and are used by the
897+    storage server object.
898+    """
899+    def get_available_space(self, reserved_space):
900+        """ Returns available space for share storage in bytes, or
901+        None if this information is not available or if the available
902+        space is unlimited.
903+
904+        If the backend is configured for read-only mode then this will
905+        return 0.
906+
907+        reserved_space is how many bytes to subtract from the answer, so
908+        you can pass how many bytes you would like to leave unused on this
909+        filesystem as reserved_space. """
910+
911+    def get_bucket_shares(self):
912+        """ Yield (shnum, filename) pairs for the shares stored under the given storage index. """
913+
914+    def get_share(self):
915+        """ Return a single share object for the given storage index and share number. """
916+
917+    def make_bucket_writer(self):
918+        """ Return a BucketWriter through which a new share can be written. """
919+
920+class IStorageBackendShare(Interface):
921+    """
922+    This object contains up to all of the share data.  It is intended
923+    for lazy evaluation such that in many use cases substantially less than
924+    all of the share data will be accessed.
925+    """
926+    def is_complete(self):
927+        """
928+        Returns the share state, or None if the share does not exist.
929+        """
930+
931 class IStorageBucketWriter(Interface):
932     """
933     Objects of this kind live on the client side.
934hunk ./src/allmydata/interfaces.py 2492
935 
936 class EmptyPathnameComponentError(Exception):
937     """The webapi disallows empty pathname components."""
938+
939+class IShareStore(Interface):
940+    pass
941+
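
The `IStorageBackend` contract above says `get_available_space` returns `None` when free space is unknown or unlimited, `0` when the backend is read-only, and otherwise the free space minus `reserved_space`. A hedged sketch of a backend honoring that contract (the `disk_free` attribute is a stand-in for illustration, not Tahoe's `fileutil.get_available_space`):

```python
class SketchBackend:
    def __init__(self, disk_free=None, readonly=False):
        # disk_free=None models "no API to query free space" / unlimited.
        self._disk_free = disk_free
        self._readonly = readonly

    def get_available_space(self, reserved_space):
        if self._readonly:
            return 0
        if self._disk_free is None:
            return None
        # Never report negative space: the reservation may exceed free space.
        return max(0, self._disk_free - reserved_space)

assert SketchBackend(readonly=True).get_available_space(0) == 0
assert SketchBackend().get_available_space(10**9) is None
assert SketchBackend(disk_free=5000).get_available_space(1000) == 4000
assert SketchBackend(disk_free=500).get_available_space(1000) == 0
```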
942addfile ./src/allmydata/storage/backends/__init__.py
943addfile ./src/allmydata/storage/backends/das/__init__.py
944addfile ./src/allmydata/storage/backends/das/core.py
945hunk ./src/allmydata/storage/backends/das/core.py 1
946+from allmydata.interfaces import IStorageBackend
947+from allmydata.storage.backends.base import Backend
948+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir, UnknownImmutableContainerVersionError, DataTooLargeError
949+from allmydata.util.assertutil import precondition
950+
951+import os, re, weakref, struct, time, stat
952+
953+from foolscap.api import Referenceable
954+from twisted.application import service
955+
956+from zope.interface import implements
957+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
958+from allmydata.util import fileutil, idlib, log, time_format
959+import allmydata # for __full_version__
960+
961+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
962+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
963+from allmydata.storage.lease import LeaseInfo
964+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
965+     create_mutable_sharefile
966+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
967+from allmydata.storage.crawler import FSBucketCountingCrawler
968+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
969+
970+from zope.interface import implements
971+
972+class DASCore(Backend):
973+    implements(IStorageBackend)
974+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
975+        Backend.__init__(self)
976+
977+        self._setup_storage(storedir, readonly, reserved_space)
978+        self._setup_corruption_advisory()
979+        self._setup_bucket_counter()
980+        self._setup_lease_checkerf(expiration_policy)
981+
982+    def _setup_storage(self, storedir, readonly, reserved_space):
983+        self.storedir = storedir
984+        self.readonly = readonly
985+        self.reserved_space = int(reserved_space)
986+        if self.reserved_space:
987+            if self.get_available_space() is None:
988+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
989+                        umid="0wZ27w", level=log.UNUSUAL)
990+
991+        self.sharedir = os.path.join(self.storedir, "shares")
992+        fileutil.make_dirs(self.sharedir)
993+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
994+        self._clean_incomplete()
995+
996+    def _clean_incomplete(self):
997+        fileutil.rm_dir(self.incomingdir)
998+        fileutil.make_dirs(self.incomingdir)
999+
1000+    def _setup_corruption_advisory(self):
1001+        # we don't actually create the corruption-advisory dir until necessary
1002+        self.corruption_advisory_dir = os.path.join(self.storedir,
1003+                                                    "corruption-advisories")
1004+
1005+    def _setup_bucket_counter(self):
1006+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1007+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1008+        self.bucket_counter.setServiceParent(self)
1009+
1010+    def _setup_lease_checkerf(self, expiration_policy):
1011+        statefile = os.path.join(self.storedir, "lease_checker.state")
1012+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1013+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1014+        self.lease_checker.setServiceParent(self)
1015+
1016+    def get_available_space(self):
1017+        if self.readonly:
1018+            return 0
1019+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1020+
1021+    def get_shares(self, storage_index):
1022+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1023+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1024+        try:
1025+            for f in os.listdir(finalstoragedir):
1026+                if NUM_RE.match(f):
1027+                    filename = os.path.join(finalstoragedir, f)
1028+                    yield FSBShare(filename, int(f))
1029+        except OSError:
1030+            # Commonly caused by there being no buckets at all.
1031+            pass
1032+       
1033+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1034+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1035+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1036+        return bw
1037+       
1038+
1039+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1040+# and share data. The share data is accessed by RIBucketWriter.write and
1041+# RIBucketReader.read . The lease information is not accessible through these
1042+# interfaces.
1043+
1044+# The share file has the following layout:
1045+#  0x00: share file version number, four bytes, current version is 1
1046+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1047+#  0x08: number of leases, four bytes big-endian
1048+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1049+#  A+0x0c = B: first lease. Lease format is:
1050+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1051+#   B+0x04: renew secret, 32 bytes (SHA256)
1052+#   B+0x24: cancel secret, 32 bytes (SHA256)
1053+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1054+#   B+0x48: next lease, or end of record
1055+
1056+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1057+# but it is still filled in by storage servers in case the storage server
1058+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1059+# share file is moved from one storage server to another. The value stored in
1060+# this field is truncated, so if the actual share data length is >= 2**32,
1061+# then the value stored in this field will be the actual share data length
1062+# modulo 2**32.
1063+
1064+class ImmutableShare:
1065+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1066+    sharetype = "immutable"
1067+
1068+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1069+        """ If max_size is not None then I won't allow more than
1070+        max_size to be written to me. If create=True then max_size
1071+        must not be None. """
1072+        precondition((max_size is not None) or (not create), max_size, create)
1073+        self.shnum = shnum
1074+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1075+        self._max_size = max_size
1076+        if create:
1077+            # touch the file, so later callers will see that we're working on
1078+            # it. Also construct the metadata.
1079+            assert not os.path.exists(self.fname)
1080+            fileutil.make_dirs(os.path.dirname(self.fname))
1081+            f = open(self.fname, 'wb')
1082+            # The second field -- the four-byte share data length -- is no
1083+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1084+            # there in case someone downgrades a storage server from >=
1085+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1086+            # server to another, etc. We do saturation -- a share data length
1087+            # larger than 2**32-1 (what can fit into the field) is marked as
1088+            # the largest length that can fit into the field. That way, even
1089+            # if this does happen, the old < v1.3.0 server will still allow
1090+            # clients to read the first part of the share.
1091+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1092+            f.close()
1093+            self._lease_offset = max_size + 0x0c
1094+            self._num_leases = 0
1095+        else:
1096+            f = open(self.fname, 'rb')
1097+            filesize = os.path.getsize(self.fname)
1098+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1099+            f.close()
1100+            if version != 1:
1101+                msg = "sharefile %s had version %d but we wanted 1" % \
1102+                      (self.fname, version)
1103+                raise UnknownImmutableContainerVersionError(msg)
1104+            self._num_leases = num_leases
1105+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1106+        self._data_offset = 0xc
1107+
1108+    def unlink(self):
1109+        os.unlink(self.fname)
1110+
1111+    def read_share_data(self, offset, length):
1112+        precondition(offset >= 0)
1113+        # Reads beyond the end of the data are truncated. Reads that start
1114+        # beyond the end of the data return an empty string.
1115+        seekpos = self._data_offset+offset
1116+        fsize = os.path.getsize(self.fname)
1117+        actuallength = max(0, min(length, fsize-seekpos))
1118+        if actuallength == 0:
1119+            return ""
1120+        f = open(self.fname, 'rb')
1121+        f.seek(seekpos)
1122+        return f.read(actuallength)
1123+
1124+    def write_share_data(self, offset, data):
1125+        length = len(data)
1126+        precondition(offset >= 0, offset)
1127+        if self._max_size is not None and offset+length > self._max_size:
1128+            raise DataTooLargeError(self._max_size, offset, length)
1129+        f = open(self.fname, 'rb+')
1130+        real_offset = self._data_offset+offset
1131+        f.seek(real_offset)
1132+        assert f.tell() == real_offset
1133+        f.write(data)
1134+        f.close()
1135+
1136+    def _write_lease_record(self, f, lease_number, lease_info):
1137+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1138+        f.seek(offset)
1139+        assert f.tell() == offset
1140+        f.write(lease_info.to_immutable_data())
1141+
1142+    def _read_num_leases(self, f):
1143+        f.seek(0x08)
1144+        (num_leases,) = struct.unpack(">L", f.read(4))
1145+        return num_leases
1146+
1147+    def _write_num_leases(self, f, num_leases):
1148+        f.seek(0x08)
1149+        f.write(struct.pack(">L", num_leases))
1150+
1151+    def _truncate_leases(self, f, num_leases):
1152+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1153+
1154+    def get_leases(self):
1155+        """Yields a LeaseInfo instance for all leases."""
1156+        f = open(self.fname, 'rb')
1157+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1158+        f.seek(self._lease_offset)
1159+        for i in range(num_leases):
1160+            data = f.read(self.LEASE_SIZE)
1161+            if data:
1162+                yield LeaseInfo().from_immutable_data(data)
1163+
1164+    def add_lease(self, lease_info):
1165+        f = open(self.fname, 'rb+')
1166+        num_leases = self._read_num_leases(f)
1167+        self._write_lease_record(f, num_leases, lease_info)
1168+        self._write_num_leases(f, num_leases+1)
1169+        f.close()
1170+
1171+    def renew_lease(self, renew_secret, new_expire_time):
1172+        for i,lease in enumerate(self.get_leases()):
1173+            if constant_time_compare(lease.renew_secret, renew_secret):
1174+                # yup. See if we need to update the owner time.
1175+                if new_expire_time > lease.expiration_time:
1176+                    # yes
1177+                    lease.expiration_time = new_expire_time
1178+                    f = open(self.fname, 'rb+')
1179+                    self._write_lease_record(f, i, lease)
1180+                    f.close()
1181+                return
1182+        raise IndexError("unable to renew non-existent lease")
1183+
1184+    def add_or_renew_lease(self, lease_info):
1185+        try:
1186+            self.renew_lease(lease_info.renew_secret,
1187+                             lease_info.expiration_time)
1188+        except IndexError:
1189+            self.add_lease(lease_info)
1190+
1191+
1192+    def cancel_lease(self, cancel_secret):
1193+        """Remove a lease with the given cancel_secret. If the last lease is
1194+        cancelled, the file will be removed. Return the number of bytes that
1195+        were freed (by truncating the list of leases, and possibly by
1196+        deleting the file). Raise IndexError if there was no lease with the
1197+        given cancel_secret.
1198+        """
1199+
1200+        leases = list(self.get_leases())
1201+        num_leases_removed = 0
1202+        for i,lease in enumerate(leases):
1203+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1204+                leases[i] = None
1205+                num_leases_removed += 1
1206+        if not num_leases_removed:
1207+            raise IndexError("unable to find matching lease to cancel")
1208+        if num_leases_removed:
1209+            # pack and write out the remaining leases. We write these out in
1210+            # the same order as they were added, so that if we crash while
1211+            # doing this, we won't lose any non-cancelled leases.
1212+            leases = [l for l in leases if l] # remove the cancelled leases
1213+            f = open(self.fname, 'rb+')
1214+            for i,lease in enumerate(leases):
1215+                self._write_lease_record(f, i, lease)
1216+            self._write_num_leases(f, len(leases))
1217+            self._truncate_leases(f, len(leases))
1218+            f.close()
1219+        space_freed = self.LEASE_SIZE * num_leases_removed
1220+        if not len(leases):
1221+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1222+            self.unlink()
1223+        return space_freed
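
The `ImmutableShare` code above writes a 12-byte header (version 1, saturated data length, lease count of 0) and computes all later offsets from it. A small worked example of that header encoding, using the same `struct` format strings as the patch:

```python
import struct

HEADER = ">LLL"          # version, data length (saturated), num leases
LEASE = ">L32s32sL"      # owner num, renew secret, cancel secret, expiry
DATA_OFFSET = 0x0c

max_size = 2**33         # larger than fits in the 4-byte length field
header = struct.pack(HEADER, 1, min(2**32 - 1, max_size), 0)
assert len(header) == DATA_OFFSET  # share data begins at offset 0x0c

version, stored_len, num_leases = struct.unpack(HEADER, header)
assert version == 1
assert stored_len == 2**32 - 1     # saturated, as the comment describes
assert num_leases == 0

# Each lease record is a fixed 72 bytes appended after the share data.
assert struct.calcsize(LEASE) == 72
```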
1224hunk ./src/allmydata/storage/backends/das/expirer.py 2
1225 import time, os, pickle, struct
1226-from allmydata.storage.crawler import ShareCrawler
1227-from allmydata.storage.shares import get_share_file
1228+from allmydata.storage.crawler import FSShareCrawler
1229 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1230      UnknownImmutableContainerVersionError
1231 from twisted.python import log as twlog
1232hunk ./src/allmydata/storage/backends/das/expirer.py 7
1233 
1234-class LeaseCheckingCrawler(ShareCrawler):
1235+class FSLeaseCheckingCrawler(FSShareCrawler):
1236     """I examine the leases on all shares, determining which are still valid
1237     and which have expired. I can remove the expired leases (if so
1238     configured), and the share will be deleted when the last lease is
1239hunk ./src/allmydata/storage/backends/das/expirer.py 50
1240     slow_start = 360 # wait 6 minutes after startup
1241     minimum_cycle_time = 12*60*60 # not more than twice per day
1242 
1243-    def __init__(self, statefile, historyfile,
1244-                 expiration_enabled, mode,
1245-                 override_lease_duration, # used if expiration_mode=="age"
1246-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1247-                 sharetypes):
1248+    def __init__(self, statefile, historyfile, expiration_policy):
1249         self.historyfile = historyfile
1250hunk ./src/allmydata/storage/backends/das/expirer.py 52
1251-        self.expiration_enabled = expiration_enabled
1252-        self.mode = mode
1253+        self.expiration_enabled = expiration_policy['enabled']
1254+        self.mode = expiration_policy['mode']
1255         self.override_lease_duration = None
1256         self.cutoff_date = None
1257         if self.mode == "age":
1258hunk ./src/allmydata/storage/backends/das/expirer.py 57
1259-            assert isinstance(override_lease_duration, (int, type(None)))
1260-            self.override_lease_duration = override_lease_duration # seconds
1261+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1262+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1263         elif self.mode == "cutoff-date":
1264hunk ./src/allmydata/storage/backends/das/expirer.py 60
1265-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1266+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1267             assert expiration_policy['cutoff_date'] is not None
1268hunk ./src/allmydata/storage/backends/das/expirer.py 62
1269-            self.cutoff_date = cutoff_date
1270+            self.cutoff_date = expiration_policy['cutoff_date']
1271         else:
1272hunk ./src/allmydata/storage/backends/das/expirer.py 64
1273-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1274-        self.sharetypes_to_expire = sharetypes
1275-        ShareCrawler.__init__(self, statefile)
1276+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1277+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1278+        FSShareCrawler.__init__(self, statefile)
1279 
1280     def add_initial_state(self):
1281         # we fill ["cycle-to-date"] here (even though they will be reset in
1282hunk ./src/allmydata/storage/backends/das/expirer.py 156
1283 
1284     def process_share(self, sharefilename):
1285         # first, find out what kind of a share it is
1286-        sf = get_share_file(sharefilename)
1287+        f = open(sharefilename, "rb")
1288+        prefix = f.read(32)
1289+        f.close()
1290+        if prefix == MutableShareFile.MAGIC:
1291+            sf = MutableShareFile(sharefilename)
1292+        else:
1293+            # otherwise assume it's immutable
1294+            sf = FSBShare(sharefilename)
1295         sharetype = sf.sharetype
1296         now = time.time()
1297         s = self.stat(sharefilename)
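
The expirer refactor above replaces five constructor arguments with a single `expiration_policy` dict; the crawler reads `enabled` and `mode`, then `override_lease_duration` when `mode == "age"` or `cutoff_date` when `mode == "cutoff-date"`, rejecting any other mode. A sketch of that validation logic, mirroring the checks in the hunk (not the real crawler class):

```python
def parse_expiration_policy(policy):
    """Validate a policy dict the way the refactored __init__ does."""
    enabled = policy['enabled']
    mode = policy['mode']
    override, cutoff = None, None
    if mode == "age":
        assert isinstance(policy['override_lease_duration'], (int, type(None)))
        override = policy['override_lease_duration']  # seconds
    elif mode == "cutoff-date":
        assert isinstance(policy['cutoff_date'], int)  # seconds-since-epoch
        cutoff = policy['cutoff_date']
    else:
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
    return enabled, mode, override, cutoff

assert parse_expiration_policy(
    {'enabled': True, 'mode': 'age', 'override_lease_duration': 3600,
     'sharetypes': ('immutable',)}) == (True, 'age', 3600, None)
assert parse_expiration_policy(
    {'enabled': False, 'mode': 'cutoff-date', 'cutoff_date': 1300000000,
     'sharetypes': ()}) == (False, 'cutoff-date', None, 1300000000)
```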
1298addfile ./src/allmydata/storage/backends/null/__init__.py
1299addfile ./src/allmydata/storage/backends/null/core.py
1300hunk ./src/allmydata/storage/backends/null/core.py 1
1301+from allmydata.storage.backends.base import Backend
1302+
1303+class NullCore(Backend):
1304+    def __init__(self):
1305+        Backend.__init__(self)
1306+
1307+    def get_available_space(self):
1308+        return None
1309+
1310+    def get_shares(self, storage_index):
1311+        return set()
1312+
1313+    def get_share(self, storage_index, sharenum):
1314+        return None
1315+
1316+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1317+        return NullBucketWriter()
1318hunk ./src/allmydata/storage/crawler.py 12
1319 class TimeSliceExceeded(Exception):
1320     pass
1321 
1322-class ShareCrawler(service.MultiService):
1323+class FSShareCrawler(service.MultiService):
1324     """A subclass of ShareCrawler is attached to a StorageServer, and
1325     periodically walks all of its shares, processing each one in some
1326     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1327hunk ./src/allmydata/storage/crawler.py 68
1328     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1329     minimum_cycle_time = 300 # don't run a cycle faster than this
1330 
1331-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1332+    def __init__(self, statefname, allowed_cpu_percentage=None):
1333         service.MultiService.__init__(self)
1334         if allowed_cpu_percentage is not None:
1335             self.allowed_cpu_percentage = allowed_cpu_percentage
1336hunk ./src/allmydata/storage/crawler.py 72
1337-        self.backend = backend
1338+        self.statefname = statefname
1339         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1340                          for i in range(2**10)]
1341         self.prefixes.sort()
1342hunk ./src/allmydata/storage/crawler.py 192
1343         #                            of the last bucket to be processed, or
1344         #                            None if we are sleeping between cycles
1345         try:
1346-            f = open(self.statefile, "rb")
1347+            f = open(self.statefname, "rb")
1348             state = pickle.load(f)
1349             f.close()
1350         except EnvironmentError:
1351hunk ./src/allmydata/storage/crawler.py 230
1352         else:
1353             last_complete_prefix = self.prefixes[lcpi]
1354         self.state["last-complete-prefix"] = last_complete_prefix
1355-        tmpfile = self.statefile + ".tmp"
1356+        tmpfile = self.statefname + ".tmp"
1357         f = open(tmpfile, "wb")
1358         pickle.dump(self.state, f)
1359         f.close()
1360hunk ./src/allmydata/storage/crawler.py 433
1361         pass
1362 
1363 
1364-class BucketCountingCrawler(ShareCrawler):
1365+class FSBucketCountingCrawler(FSShareCrawler):
1366     """I keep track of how many buckets are being managed by this server.
1367     This is equivalent to the number of distributed files and directories for
1368     which I am providing storage. The actual number of files+directories in
1369hunk ./src/allmydata/storage/crawler.py 446
1370 
1371     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1372 
1373-    def __init__(self, statefile, num_sample_prefixes=1):
1374-        ShareCrawler.__init__(self, statefile)
1375+    def __init__(self, statefname, num_sample_prefixes=1):
1376+        FSShareCrawler.__init__(self, statefname)
1377         self.num_sample_prefixes = num_sample_prefixes
1378 
1379     def add_initial_state(self):
1380hunk ./src/allmydata/storage/immutable.py 14
1381 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1382      DataTooLargeError
1383 
1384-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1385-# and share data. The share data is accessed by RIBucketWriter.write and
1386-# RIBucketReader.read . The lease information is not accessible through these
1387-# interfaces.
1388-
1389-# The share file has the following layout:
1390-#  0x00: share file version number, four bytes, current version is 1
1391-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1392-#  0x08: number of leases, four bytes big-endian
1393-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1394-#  A+0x0c = B: first lease. Lease format is:
1395-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1396-#   B+0x04: renew secret, 32 bytes (SHA256)
1397-#   B+0x24: cancel secret, 32 bytes (SHA256)
1398-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1399-#   B+0x48: next lease, or end of record
1400-
1401-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1402-# but it is still filled in by storage servers in case the storage server
1403-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1404-# share file is moved from one storage server to another. The value stored in
1405-# this field is truncated, so if the actual share data length is >= 2**32,
1406-# then the value stored in this field will be the actual share data length
1407-# modulo 2**32.
1408-
1409-class ShareFile:
1410-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1411-    sharetype = "immutable"
1412-
1413-    def __init__(self, filename, max_size=None, create=False):
1414-        """ If max_size is not None then I won't allow more than
1415-        max_size to be written to me. If create=True then max_size
1416-        must not be None. """
1417-        precondition((max_size is not None) or (not create), max_size, create)
1418-        self.home = filename
1419-        self._max_size = max_size
1420-        if create:
1421-            # touch the file, so later callers will see that we're working on
1422-            # it. Also construct the metadata.
1423-            assert not os.path.exists(self.home)
1424-            fileutil.make_dirs(os.path.dirname(self.home))
1425-            f = open(self.home, 'wb')
1426-            # The second field -- the four-byte share data length -- is no
1427-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1428-            # there in case someone downgrades a storage server from >=
1429-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1430-            # server to another, etc. We do saturation -- a share data length
1431-            # larger than 2**32-1 (what can fit into the field) is marked as
1432-            # the largest length that can fit into the field. That way, even
1433-            # if this does happen, the old < v1.3.0 server will still allow
1434-            # clients to read the first part of the share.
1435-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1436-            f.close()
1437-            self._lease_offset = max_size + 0x0c
1438-            self._num_leases = 0
1439-        else:
1440-            f = open(self.home, 'rb')
1441-            filesize = os.path.getsize(self.home)
1442-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1443-            f.close()
1444-            if version != 1:
1445-                msg = "sharefile %s had version %d but we wanted 1" % \
1446-                      (filename, version)
1447-                raise UnknownImmutableContainerVersionError(msg)
1448-            self._num_leases = num_leases
1449-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1450-        self._data_offset = 0xc
1451-
1452-    def unlink(self):
1453-        os.unlink(self.home)
1454-
1455-    def read_share_data(self, offset, length):
1456-        precondition(offset >= 0)
1457-        # Reads beyond the end of the data are truncated. Reads that start
1458-        # beyond the end of the data return an empty string.
1459-        seekpos = self._data_offset+offset
1460-        fsize = os.path.getsize(self.home)
1461-        actuallength = max(0, min(length, fsize-seekpos))
1462-        if actuallength == 0:
1463-            return ""
1464-        f = open(self.home, 'rb')
1465-        f.seek(seekpos)
1466-        return f.read(actuallength)
1467-
1468-    def write_share_data(self, offset, data):
1469-        length = len(data)
1470-        precondition(offset >= 0, offset)
1471-        if self._max_size is not None and offset+length > self._max_size:
1472-            raise DataTooLargeError(self._max_size, offset, length)
1473-        f = open(self.home, 'rb+')
1474-        real_offset = self._data_offset+offset
1475-        f.seek(real_offset)
1476-        assert f.tell() == real_offset
1477-        f.write(data)
1478-        f.close()
1479-
1480-    def _write_lease_record(self, f, lease_number, lease_info):
1481-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1482-        f.seek(offset)
1483-        assert f.tell() == offset
1484-        f.write(lease_info.to_immutable_data())
1485-
1486-    def _read_num_leases(self, f):
1487-        f.seek(0x08)
1488-        (num_leases,) = struct.unpack(">L", f.read(4))
1489-        return num_leases
1490-
1491-    def _write_num_leases(self, f, num_leases):
1492-        f.seek(0x08)
1493-        f.write(struct.pack(">L", num_leases))
1494-
1495-    def _truncate_leases(self, f, num_leases):
1496-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1497-
1498-    def get_leases(self):
1499-        """Yields a LeaseInfo instance for all leases."""
1500-        f = open(self.home, 'rb')
1501-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1502-        f.seek(self._lease_offset)
1503-        for i in range(num_leases):
1504-            data = f.read(self.LEASE_SIZE)
1505-            if data:
1506-                yield LeaseInfo().from_immutable_data(data)
1507-
1508-    def add_lease(self, lease_info):
1509-        f = open(self.home, 'rb+')
1510-        num_leases = self._read_num_leases(f)
1511-        self._write_lease_record(f, num_leases, lease_info)
1512-        self._write_num_leases(f, num_leases+1)
1513-        f.close()
1514-
1515-    def renew_lease(self, renew_secret, new_expire_time):
1516-        for i,lease in enumerate(self.get_leases()):
1517-            if constant_time_compare(lease.renew_secret, renew_secret):
1518-                # yup. See if we need to update the owner time.
1519-                if new_expire_time > lease.expiration_time:
1520-                    # yes
1521-                    lease.expiration_time = new_expire_time
1522-                    f = open(self.home, 'rb+')
1523-                    self._write_lease_record(f, i, lease)
1524-                    f.close()
1525-                return
1526-        raise IndexError("unable to renew non-existent lease")
1527-
1528-    def add_or_renew_lease(self, lease_info):
1529-        try:
1530-            self.renew_lease(lease_info.renew_secret,
1531-                             lease_info.expiration_time)
1532-        except IndexError:
1533-            self.add_lease(lease_info)
1534-
1535-
1536-    def cancel_lease(self, cancel_secret):
1537-        """Remove a lease with the given cancel_secret. If the last lease is
1538-        cancelled, the file will be removed. Return the number of bytes that
1539-        were freed (by truncating the list of leases, and possibly by
1540-        deleting the file. Raise IndexError if there was no lease with the
1541-        given cancel_secret.
1542-        """
1543-
1544-        leases = list(self.get_leases())
1545-        num_leases_removed = 0
1546-        for i,lease in enumerate(leases):
1547-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1548-                leases[i] = None
1549-                num_leases_removed += 1
1550-        if not num_leases_removed:
1551-            raise IndexError("unable to find matching lease to cancel")
1552-        if num_leases_removed:
1553-            # pack and write out the remaining leases. We write these out in
1554-            # the same order as they were added, so that if we crash while
1555-            # doing this, we won't lose any non-cancelled leases.
1556-            leases = [l for l in leases if l] # remove the cancelled leases
1557-            f = open(self.home, 'rb+')
1558-            for i,lease in enumerate(leases):
1559-                self._write_lease_record(f, i, lease)
1560-            self._write_num_leases(f, len(leases))
1561-            self._truncate_leases(f, len(leases))
1562-            f.close()
1563-        space_freed = self.LEASE_SIZE * num_leases_removed
1564-        if not len(leases):
1565-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1566-            self.unlink()
1567-        return space_freed
1568-class NullBucketWriter(Referenceable):
1569-    implements(RIBucketWriter)
1570-
1571-    def remote_write(self, offset, data):
1572-        return
1573-
1574 class BucketWriter(Referenceable):
1575     implements(RIBucketWriter)
1576 
1577hunk ./src/allmydata/storage/immutable.py 17
1578-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1579+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1580         self.ss = ss
1581hunk ./src/allmydata/storage/immutable.py 19
1582-        self.incominghome = incominghome
1583-        self.finalhome = finalhome
1584         self._max_size = max_size # don't allow the client to write more than this
1585         self._canary = canary
1586         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1587hunk ./src/allmydata/storage/immutable.py 24
1588         self.closed = False
1589         self.throw_out_all_data = False
1590-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1591+        self._sharefile = immutableshare
1592         # also, add our lease to the file now, so that other ones can be
1593         # added by simultaneous uploaders
1594         self._sharefile.add_lease(lease_info)
1595hunk ./src/allmydata/storage/server.py 16
1596 from allmydata.storage.lease import LeaseInfo
1597 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1598      create_mutable_sharefile
1599-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1600-from allmydata.storage.crawler import BucketCountingCrawler
1601-from allmydata.storage.expirer import LeaseCheckingCrawler
1602 
1603 from zope.interface import implements
1604 
1605hunk ./src/allmydata/storage/server.py 19
1606-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1607-# be started and stopped.
1608-class Backend(service.MultiService):
1609-    implements(IStatsProducer)
1610-    def __init__(self):
1611-        service.MultiService.__init__(self)
1612-
1613-    def get_bucket_shares(self):
1614-        """XXX"""
1615-        raise NotImplementedError
1616-
1617-    def get_share(self):
1618-        """XXX"""
1619-        raise NotImplementedError
1620-
1621-    def make_bucket_writer(self):
1622-        """XXX"""
1623-        raise NotImplementedError
1624-
1625-class NullBackend(Backend):
1626-    def __init__(self):
1627-        Backend.__init__(self)
1628-
1629-    def get_available_space(self):
1630-        return None
1631-
1632-    def get_bucket_shares(self, storage_index):
1633-        return set()
1634-
1635-    def get_share(self, storage_index, sharenum):
1636-        return None
1637-
1638-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1639-        return NullBucketWriter()
1640-
1641-class FSBackend(Backend):
1642-    def __init__(self, storedir, readonly=False, reserved_space=0):
1643-        Backend.__init__(self)
1644-
1645-        self._setup_storage(storedir, readonly, reserved_space)
1646-        self._setup_corruption_advisory()
1647-        self._setup_bucket_counter()
1648-        self._setup_lease_checkerf()
1649-
1650-    def _setup_storage(self, storedir, readonly, reserved_space):
1651-        self.storedir = storedir
1652-        self.readonly = readonly
1653-        self.reserved_space = int(reserved_space)
1654-        if self.reserved_space:
1655-            if self.get_available_space() is None:
1656-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1657-                        umid="0wZ27w", level=log.UNUSUAL)
1658-
1659-        self.sharedir = os.path.join(self.storedir, "shares")
1660-        fileutil.make_dirs(self.sharedir)
1661-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1662-        self._clean_incomplete()
1663-
1664-    def _clean_incomplete(self):
1665-        fileutil.rm_dir(self.incomingdir)
1666-        fileutil.make_dirs(self.incomingdir)
1667-
1668-    def _setup_corruption_advisory(self):
1669-        # we don't actually create the corruption-advisory dir until necessary
1670-        self.corruption_advisory_dir = os.path.join(self.storedir,
1671-                                                    "corruption-advisories")
1672-
1673-    def _setup_bucket_counter(self):
1674-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1675-        self.bucket_counter = BucketCountingCrawler(statefile)
1676-        self.bucket_counter.setServiceParent(self)
1677-
1678-    def _setup_lease_checkerf(self):
1679-        statefile = os.path.join(self.storedir, "lease_checker.state")
1680-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1681-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1682-                                   expiration_enabled, expiration_mode,
1683-                                   expiration_override_lease_duration,
1684-                                   expiration_cutoff_date,
1685-                                   expiration_sharetypes)
1686-        self.lease_checker.setServiceParent(self)
1687-
1688-    def get_available_space(self):
1689-        if self.readonly:
1690-            return 0
1691-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1692-
1693-    def get_bucket_shares(self, storage_index):
1694-        """Return a list of (shnum, pathname) tuples for files that hold
1695-        shares for this storage_index. In each tuple, 'shnum' will always be
1696-        the integer form of the last component of 'pathname'."""
1697-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1698-        try:
1699-            for f in os.listdir(storagedir):
1700-                if NUM_RE.match(f):
1701-                    filename = os.path.join(storagedir, f)
1702-                    yield (int(f), filename)
1703-        except OSError:
1704-            # Commonly caused by there being no buckets at all.
1705-            pass
1706-
1707 # storage/
1708 # storage/shares/incoming
1709 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1710hunk ./src/allmydata/storage/server.py 32
1711 # $SHARENUM matches this regex:
1712 NUM_RE=re.compile("^[0-9]+$")
1713 
1714-
1715-
1716 class StorageServer(service.MultiService, Referenceable):
1717     implements(RIStorageServer, IStatsProducer)
1718     name = 'storage'
1719hunk ./src/allmydata/storage/server.py 35
1720-    LeaseCheckerClass = LeaseCheckingCrawler
1721 
1722     def __init__(self, nodeid, backend, reserved_space=0,
1723                  readonly_storage=False,
1724hunk ./src/allmydata/storage/server.py 38
1725-                 stats_provider=None,
1726-                 expiration_enabled=False,
1727-                 expiration_mode="age",
1728-                 expiration_override_lease_duration=None,
1729-                 expiration_cutoff_date=None,
1730-                 expiration_sharetypes=("mutable", "immutable")):
1731+                 stats_provider=None ):
1732         service.MultiService.__init__(self)
1733         assert isinstance(nodeid, str)
1734         assert len(nodeid) == 20
1735hunk ./src/allmydata/storage/server.py 217
1736         # they asked about: this will save them a lot of work. Add or update
1737         # leases for all of them: if they want us to hold shares for this
1738         # file, they'll want us to hold leases for this file.
1739-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1740-            alreadygot.add(shnum)
1741-            sf = ShareFile(fn)
1742-            sf.add_or_renew_lease(lease_info)
1743-
1744-        for shnum in sharenums:
1745-            share = self.backend.get_share(storage_index, shnum)
1746+        for share in self.backend.get_shares(storage_index):
1747+            alreadygot.add(share.shnum)
1748+            share.add_or_renew_lease(lease_info)
1749 
1750hunk ./src/allmydata/storage/server.py 221
1751-            if not share:
1752-                if (not limited) or (remaining_space >= max_space_per_bucket):
1753-                    # ok! we need to create the new share file.
1754-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1755-                                      max_space_per_bucket, lease_info, canary)
1756-                    bucketwriters[shnum] = bw
1757-                    self._active_writers[bw] = 1
1758-                    if limited:
1759-                        remaining_space -= max_space_per_bucket
1760-                else:
1761-                    # bummer! not enough space to accept this bucket
1762-                    pass
1763+        for shnum in (sharenums - alreadygot):
1764+            if (not limited) or (remaining_space >= max_space_per_bucket):
1765+                #XXX Or should the following line occur in the storage server constructor? OK: we need to create the new share file.
1766+                self.backend.set_storage_server(self)
1767+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1768+                                                     max_space_per_bucket, lease_info, canary)
1769+                bucketwriters[shnum] = bw
1770+                self._active_writers[bw] = 1
1771+                if limited:
1772+                    remaining_space -= max_space_per_bucket
1773 
1774hunk ./src/allmydata/storage/server.py 232
1775-            elif share.is_complete():
1776-                # great! we already have it. easy.
1777-                pass
1778-            elif not share.is_complete():
1779-                # Note that we don't create BucketWriters for shnums that
1780-                # have a partial share (in incoming/), so if a second upload
1781-                # occurs while the first is still in progress, the second
1782-                # uploader will use different storage servers.
1783-                pass
1784+        #XXX We should document this later.
1785 
1786         self.add_latency("allocate", time.time() - start)
1787         return alreadygot, bucketwriters
1788hunk ./src/allmydata/storage/server.py 238
1789 
1790     def _iter_share_files(self, storage_index):
1791-        for shnum, filename in self._get_bucket_shares(storage_index):
1792+        for shnum, filename in self._get_shares(storage_index):
1793             f = open(filename, 'rb')
1794             header = f.read(32)
1795             f.close()
1796hunk ./src/allmydata/storage/server.py 318
1797         si_s = si_b2a(storage_index)
1798         log.msg("storage: get_buckets %s" % si_s)
1799         bucketreaders = {} # k: sharenum, v: BucketReader
1800-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1801+        for shnum, filename in self.backend.get_shares(storage_index):
1802             bucketreaders[shnum] = BucketReader(self, filename,
1803                                                 storage_index, shnum)
1804         self.add_latency("get", time.time() - start)
1805hunk ./src/allmydata/storage/server.py 334
1806         # since all shares get the same lease data, we just grab the leases
1807         # from the first share
1808         try:
1809-            shnum, filename = self._get_bucket_shares(storage_index).next()
1810+            shnum, filename = self._get_shares(storage_index).next()
1811             sf = ShareFile(filename)
1812             return sf.get_leases()
1813         except StopIteration:
1814hunk ./src/allmydata/storage/shares.py 1
1815-#! /usr/bin/python
1816-
1817-from allmydata.storage.mutable import MutableShareFile
1818-from allmydata.storage.immutable import ShareFile
1819-
1820-def get_share_file(filename):
1821-    f = open(filename, "rb")
1822-    prefix = f.read(32)
1823-    f.close()
1824-    if prefix == MutableShareFile.MAGIC:
1825-        return MutableShareFile(filename)
1826-    # otherwise assume it's immutable
1827-    return ShareFile(filename)
1828-
1829rmfile ./src/allmydata/storage/shares.py
1830hunk ./src/allmydata/test/common_util.py 20
1831 
1832 def flip_one_bit(s, offset=0, size=None):
1833     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1834-    than offset+size. """
1835+    than offset+size. Return the new string. """
1836     if size is None:
1837         size=len(s)-offset
1838     i = randrange(offset, offset+size)
1839hunk ./src/allmydata/test/test_backends.py 7
1840 
1841 from allmydata.test.common_util import ReallyEqualMixin
1842 
1843-import mock
1844+import mock, os
1845 
1846 # This is the code that we're going to be testing.
1847hunk ./src/allmydata/test/test_backends.py 10
1848-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1849+from allmydata.storage.server import StorageServer
1850+
1851+from allmydata.storage.backends.das.core import DASCore
1852+from allmydata.storage.backends.null.core import NullCore
1853+
1854 
1855 # The following share file contents was generated with
1856 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1857hunk ./src/allmydata/test/test_backends.py 22
1858 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1859 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1860 
1861-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1862+tempdir = 'teststoredir'
1863+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1864+sharefname = os.path.join(sharedirname, '0')
1865 
1866 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1867     @mock.patch('time.time')
1868hunk ./src/allmydata/test/test_backends.py 58
1869         filesystem in only the prescribed ways. """
1870 
1871         def call_open(fname, mode):
1872-            if fname == 'testdir/bucket_counter.state':
1873-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1874-            elif fname == 'testdir/lease_checker.state':
1875-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1876-            elif fname == 'testdir/lease_checker.history':
1877+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1878+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1879+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1880+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1881+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1882                 return StringIO()
1883             else:
1884                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1885hunk ./src/allmydata/test/test_backends.py 124
1886     @mock.patch('__builtin__.open')
1887     def setUp(self, mockopen):
1888         def call_open(fname, mode):
1889-            if fname == 'testdir/bucket_counter.state':
1890-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1891-            elif fname == 'testdir/lease_checker.state':
1892-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1893-            elif fname == 'testdir/lease_checker.history':
1894+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1895+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1896+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1897+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1898+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1899                 return StringIO()
1900         mockopen.side_effect = call_open
1901hunk ./src/allmydata/test/test_backends.py 131
1902-
1903-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1904+        expiration_policy = {'enabled' : False,
1905+                             'mode' : 'age',
1906+                             'override_lease_duration' : None,
1907+                             'cutoff_date' : None,
1908+                             'sharetypes' : None}
1909+        testbackend = DASCore(tempdir, expiration_policy)
1910+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1911 
1912     @mock.patch('time.time')
1913     @mock.patch('os.mkdir')
1914hunk ./src/allmydata/test/test_backends.py 148
1915         """ Write a new share. """
1916 
1917         def call_listdir(dirname):
1918-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1919-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1920+            self.failUnlessReallyEqual(dirname, sharedirname)
1921+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1922 
1923         mocklistdir.side_effect = call_listdir
1924 
1925hunk ./src/allmydata/test/test_backends.py 178
1926 
1927         sharefile = MockFile()
1928         def call_open(fname, mode):
1929-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1930+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1931             return sharefile
1932 
1933         mockopen.side_effect = call_open
1934hunk ./src/allmydata/test/test_backends.py 200
1935         StorageServer object. """
1936 
1937         def call_listdir(dirname):
1938-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1939+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1940             return ['0']
1941 
1942         mocklistdir.side_effect = call_listdir
1943}
1944[checkpoint patch
1945wilcoxjg@gmail.com**20110626165715
1946 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1947] {
1948hunk ./src/allmydata/storage/backends/das/core.py 21
1949 from allmydata.storage.lease import LeaseInfo
1950 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1951      create_mutable_sharefile
1952-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1953+from allmydata.storage.immutable import BucketWriter, BucketReader
1954 from allmydata.storage.crawler import FSBucketCountingCrawler
1955 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1956 
1957hunk ./src/allmydata/storage/backends/das/core.py 27
1958 from zope.interface import implements
1959 
1960+# $SHARENUM matches this regex:
1961+NUM_RE=re.compile("^[0-9]+$")
1962+
1963 class DASCore(Backend):
1964     implements(IStorageBackend)
1965     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1966hunk ./src/allmydata/storage/backends/das/core.py 80
1967         return fileutil.get_available_space(self.storedir, self.reserved_space)
1968 
1969     def get_shares(self, storage_index):
1970-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1971+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
1972         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1973         try:
1974             for f in os.listdir(finalstoragedir):
1975hunk ./src/allmydata/storage/backends/das/core.py 86
1976                 if NUM_RE.match(f):
1977                     filename = os.path.join(finalstoragedir, f)
1978-                    yield FSBShare(filename, int(f))
1979+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1980         except OSError:
1981             # Commonly caused by there being no buckets at all.
1982             pass
1983hunk ./src/allmydata/storage/backends/das/core.py 95
1984         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1985         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1986         return bw
1987+
1988+    def set_storage_server(self, ss):
1989+        self.ss = ss
1990         
1991 
1992 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1993hunk ./src/allmydata/storage/server.py 29
1994 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1995 # base-32 chars).
1996 
1997-# $SHARENUM matches this regex:
1998-NUM_RE=re.compile("^[0-9]+$")
1999 
2000 class StorageServer(service.MultiService, Referenceable):
2001     implements(RIStorageServer, IStatsProducer)
2002}
2003[checkpoint4
2004wilcoxjg@gmail.com**20110628202202
2005 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2006] {
2007hunk ./src/allmydata/storage/backends/das/core.py 96
2008         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2009         return bw
2010 
2011+    def make_bucket_reader(self, share):
2012+        return BucketReader(self.ss, share)
2013+
2014     def set_storage_server(self, ss):
2015         self.ss = ss
2016         
2017hunk ./src/allmydata/storage/backends/das/core.py 138
2018         must not be None. """
2019         precondition((max_size is not None) or (not create), max_size, create)
2020         self.shnum = shnum
2021+        self.storage_index = storageindex
2022         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2023         self._max_size = max_size
2024         if create:
2025hunk ./src/allmydata/storage/backends/das/core.py 173
2026             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2027         self._data_offset = 0xc
2028 
2029+    def get_shnum(self):
2030+        return self.shnum
2031+
2032     def unlink(self):
2033         os.unlink(self.fname)
2034 
2035hunk ./src/allmydata/storage/backends/null/core.py 2
2036 from allmydata.storage.backends.base import Backend
2037+from allmydata.storage.immutable import BucketWriter, BucketReader
2038 
2039 class NullCore(Backend):
2040     def __init__(self):
2041hunk ./src/allmydata/storage/backends/null/core.py 17
2042     def get_share(self, storage_index, sharenum):
2043         return None
2044 
2045-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2046-        return NullBucketWriter()
2047+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2048+       
2049+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2050+
2051+    def set_storage_server(self, ss):
2052+        self.ss = ss
2053+
2054+class ImmutableShare:
2055+    sharetype = "immutable"
2056+
2057+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2058+        """ If max_size is not None then I won't allow more than
2059+        max_size to be written to me. If create=True then max_size
2060+        must not be None. """
2061+        precondition((max_size is not None) or (not create), max_size, create)
2062+        self.shnum = shnum
2063+        self.storage_index = storageindex
2064+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2065+        self._max_size = max_size
2066+        if create:
2067+            # touch the file, so later callers will see that we're working on
2068+            # it. Also construct the metadata.
2069+            assert not os.path.exists(self.fname)
2070+            fileutil.make_dirs(os.path.dirname(self.fname))
2071+            f = open(self.fname, 'wb')
2072+            # The second field -- the four-byte share data length -- is no
2073+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2074+            # there in case someone downgrades a storage server from >=
2075+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2076+            # server to another, etc. We do saturation -- a share data length
2077+            # larger than 2**32-1 (what can fit into the field) is marked as
2078+            # the largest length that can fit into the field. That way, even
2079+            # if this does happen, the old < v1.3.0 server will still allow
2080+            # clients to read the first part of the share.
2081+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2082+            f.close()
2083+            self._lease_offset = max_size + 0x0c
2084+            self._num_leases = 0
2085+        else:
2086+            f = open(self.fname, 'rb')
2087+            filesize = os.path.getsize(self.fname)
2088+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2089+            f.close()
2090+            if version != 1:
2091+                msg = "sharefile %s had version %d but we wanted 1" % \
2092+                      (self.fname, version)
2093+                raise UnknownImmutableContainerVersionError(msg)
2094+            self._num_leases = num_leases
2095+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2096+        self._data_offset = 0xc
2097+
2098+    def get_shnum(self):
2099+        return self.shnum
2100+
2101+    def unlink(self):
2102+        os.unlink(self.fname)
2103+
2104+    def read_share_data(self, offset, length):
2105+        precondition(offset >= 0)
2106+        # Reads beyond the end of the data are truncated. Reads that start
2107+        # beyond the end of the data return an empty string.
2108+        seekpos = self._data_offset+offset
2109+        fsize = os.path.getsize(self.fname)
2110+        actuallength = max(0, min(length, fsize-seekpos))
2111+        if actuallength == 0:
2112+            return ""
2113+        f = open(self.fname, 'rb')
2114+        f.seek(seekpos)
2115+        return f.read(actuallength)
2116+
2117+    def write_share_data(self, offset, data):
2118+        length = len(data)
2119+        precondition(offset >= 0, offset)
2120+        if self._max_size is not None and offset+length > self._max_size:
2121+            raise DataTooLargeError(self._max_size, offset, length)
2122+        f = open(self.fname, 'rb+')
2123+        real_offset = self._data_offset+offset
2124+        f.seek(real_offset)
2125+        assert f.tell() == real_offset
2126+        f.write(data)
2127+        f.close()
2128+
2129+    def _write_lease_record(self, f, lease_number, lease_info):
2130+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2131+        f.seek(offset)
2132+        assert f.tell() == offset
2133+        f.write(lease_info.to_immutable_data())
2134+
2135+    def _read_num_leases(self, f):
2136+        f.seek(0x08)
2137+        (num_leases,) = struct.unpack(">L", f.read(4))
2138+        return num_leases
2139+
2140+    def _write_num_leases(self, f, num_leases):
2141+        f.seek(0x08)
2142+        f.write(struct.pack(">L", num_leases))
2143+
2144+    def _truncate_leases(self, f, num_leases):
2145+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2146+
2147+    def get_leases(self):
2148+        """Yield a LeaseInfo instance for each lease."""
2149+        f = open(self.fname, 'rb')
2150+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2151+        f.seek(self._lease_offset)
2152+        for i in range(num_leases):
2153+            data = f.read(self.LEASE_SIZE)
2154+            if data:
2155+                yield LeaseInfo().from_immutable_data(data)
2156+
2157+    def add_lease(self, lease_info):
2158+        f = open(self.fname, 'rb+')
2159+        num_leases = self._read_num_leases(f)
2160+        self._write_lease_record(f, num_leases, lease_info)
2161+        self._write_num_leases(f, num_leases+1)
2162+        f.close()
2163+
2164+    def renew_lease(self, renew_secret, new_expire_time):
2165+        for i,lease in enumerate(self.get_leases()):
2166+            if constant_time_compare(lease.renew_secret, renew_secret):
2167+                # yup. See if we need to update the owner time.
2168+                if new_expire_time > lease.expiration_time:
2169+                    # yes
2170+                    lease.expiration_time = new_expire_time
2171+                    f = open(self.fname, 'rb+')
2172+                    self._write_lease_record(f, i, lease)
2173+                    f.close()
2174+                return
2175+        raise IndexError("unable to renew non-existent lease")
2176+
2177+    def add_or_renew_lease(self, lease_info):
2178+        try:
2179+            self.renew_lease(lease_info.renew_secret,
2180+                             lease_info.expiration_time)
2181+        except IndexError:
2182+            self.add_lease(lease_info)
2183+
2184+
2185+    def cancel_lease(self, cancel_secret):
2186+        """Remove a lease with the given cancel_secret. If the last lease is
2187+        cancelled, the file will be removed. Return the number of bytes that
2188+        were freed (by truncating the list of leases, and possibly by
2189+        deleting the file). Raise IndexError if there was no lease with the
2190+        given cancel_secret.
2191+        """
2192+
2193+        leases = list(self.get_leases())
2194+        num_leases_removed = 0
2195+        for i,lease in enumerate(leases):
2196+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2197+                leases[i] = None
2198+                num_leases_removed += 1
2199+        if not num_leases_removed:
2200+            raise IndexError("unable to find matching lease to cancel")
2201+        if num_leases_removed:
2202+            # pack and write out the remaining leases. We write these out in
2203+            # the same order as they were added, so that if we crash while
2204+            # doing this, we won't lose any non-cancelled leases.
2205+            leases = [l for l in leases if l] # remove the cancelled leases
2206+            f = open(self.fname, 'rb+')
2207+            for i,lease in enumerate(leases):
2208+                self._write_lease_record(f, i, lease)
2209+            self._write_num_leases(f, len(leases))
2210+            self._truncate_leases(f, len(leases))
2211+            f.close()
2212+        space_freed = self.LEASE_SIZE * num_leases_removed
2213+        if not len(leases):
2214+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2215+            self.unlink()
2216+        return space_freed
2217hunk ./src/allmydata/storage/immutable.py 114
2218 class BucketReader(Referenceable):
2219     implements(RIBucketReader)
2220 
2221-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2222+    def __init__(self, ss, share):
2223         self.ss = ss
2224hunk ./src/allmydata/storage/immutable.py 116
2225-        self._share_file = ShareFile(sharefname)
2226-        self.storage_index = storage_index
2227-        self.shnum = shnum
2228+        self._share_file = share
2229+        self.storage_index = share.storage_index
2230+        self.shnum = share.shnum
2231 
2232     def __repr__(self):
2233         return "<%s %s %s>" % (self.__class__.__name__,
2234hunk ./src/allmydata/storage/server.py 316
2235         si_s = si_b2a(storage_index)
2236         log.msg("storage: get_buckets %s" % si_s)
2237         bucketreaders = {} # k: sharenum, v: BucketReader
2238-        for shnum, filename in self.backend.get_shares(storage_index):
2239-            bucketreaders[shnum] = BucketReader(self, filename,
2240-                                                storage_index, shnum)
2241+        self.backend.set_storage_server(self)
2242+        for share in self.backend.get_shares(storage_index):
2243+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2244         self.add_latency("get", time.time() - start)
2245         return bucketreaders
2246 
2247hunk ./src/allmydata/test/test_backends.py 25
2248 tempdir = 'teststoredir'
2249 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2250 sharefname = os.path.join(sharedirname, '0')
2251+expiration_policy = {'enabled' : False,
2252+                     'mode' : 'age',
2253+                     'override_lease_duration' : None,
2254+                     'cutoff_date' : None,
2255+                     'sharetypes' : None}
2256 
2257 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2258     @mock.patch('time.time')
2259hunk ./src/allmydata/test/test_backends.py 43
2260         tries to read or write to the file system. """
2261 
2262         # Now begin the test.
2263-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2264+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2265 
2266         self.failIf(mockisdir.called)
2267         self.failIf(mocklistdir.called)
2268hunk ./src/allmydata/test/test_backends.py 74
2269         mockopen.side_effect = call_open
2270 
2271         # Now begin the test.
2272-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2273+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2274 
2275         self.failIf(mockisdir.called)
2276         self.failIf(mocklistdir.called)
2277hunk ./src/allmydata/test/test_backends.py 86
2278 
2279 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2280     def setUp(self):
2281-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2282+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2283 
2284     @mock.patch('os.mkdir')
2285     @mock.patch('__builtin__.open')
2286hunk ./src/allmydata/test/test_backends.py 136
2287             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2288                 return StringIO()
2289         mockopen.side_effect = call_open
2290-        expiration_policy = {'enabled' : False,
2291-                             'mode' : 'age',
2292-                             'override_lease_duration' : None,
2293-                             'cutoff_date' : None,
2294-                             'sharetypes' : None}
2295         testbackend = DASCore(tempdir, expiration_policy)
2296         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2297 
2298}
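checkpoint4 copies the immutable share header logic into null/core.py: a 12-byte big-endian header of (version, saturated data length, lease count), where the data-length field is capped at 2**32-1 for pre-Tahoe-1.3.0 compatibility. A minimal standalone sketch of that header handling (not the patch's exact code):

```python
import struct

HEADER_FMT = ">LLL"                        # version, data length, num_leases
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 0xc bytes, the _data_offset above

def pack_header(max_size, num_leases=0):
    # The data-length field saturates at 2**32-1 so an old (< v1.3.0)
    # server can still serve the start of an oversized share.
    return struct.pack(HEADER_FMT, 1, min(2**32 - 1, max_size), num_leases)

def unpack_header(header):
    version, length, num_leases = struct.unpack(HEADER_FMT, header[:HEADER_SIZE])
    if version != 1:
        raise ValueError("sharefile had version %d but we wanted 1" % version)
    return length, num_leases
```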
2299[checkpoint5
2300wilcoxjg@gmail.com**20110705034626
2301 Ignore-this: 255780bd58299b0aa33c027e9d008262
2302] {
2303addfile ./src/allmydata/storage/backends/base.py
2304hunk ./src/allmydata/storage/backends/base.py 1
2305+from twisted.application import service
2306+
2307+class Backend(service.MultiService):
2308+    def __init__(self):
2309+        service.MultiService.__init__(self)
2310hunk ./src/allmydata/storage/backends/null/core.py 19
2311 
2312     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2313         
2314+        immutableshare = ImmutableShare()
2315         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2316 
2317     def set_storage_server(self, ss):
2318hunk ./src/allmydata/storage/backends/null/core.py 28
2319 class ImmutableShare:
2320     sharetype = "immutable"
2321 
2322-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2323+    def __init__(self):
2324         """ If max_size is not None then I won't allow more than
2325         max_size to be written to me. If create=True then max_size
2326         must not be None. """
2327hunk ./src/allmydata/storage/backends/null/core.py 32
2328-        precondition((max_size is not None) or (not create), max_size, create)
2329-        self.shnum = shnum
2330-        self.storage_index = storageindex
2331-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2332-        self._max_size = max_size
2333-        if create:
2334-            # touch the file, so later callers will see that we're working on
2335-            # it. Also construct the metadata.
2336-            assert not os.path.exists(self.fname)
2337-            fileutil.make_dirs(os.path.dirname(self.fname))
2338-            f = open(self.fname, 'wb')
2339-            # The second field -- the four-byte share data length -- is no
2340-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2341-            # there in case someone downgrades a storage server from >=
2342-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2343-            # server to another, etc. We do saturation -- a share data length
2344-            # larger than 2**32-1 (what can fit into the field) is marked as
2345-            # the largest length that can fit into the field. That way, even
2346-            # if this does happen, the old < v1.3.0 server will still allow
2347-            # clients to read the first part of the share.
2348-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2349-            f.close()
2350-            self._lease_offset = max_size + 0x0c
2351-            self._num_leases = 0
2352-        else:
2353-            f = open(self.fname, 'rb')
2354-            filesize = os.path.getsize(self.fname)
2355-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2356-            f.close()
2357-            if version != 1:
2358-                msg = "sharefile %s had version %d but we wanted 1" % \
2359-                      (self.fname, version)
2360-                raise UnknownImmutableContainerVersionError(msg)
2361-            self._num_leases = num_leases
2362-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2363-        self._data_offset = 0xc
2364+        pass
2365 
2366     def get_shnum(self):
2367         return self.shnum
2368hunk ./src/allmydata/storage/backends/null/core.py 54
2369         return f.read(actuallength)
2370 
2371     def write_share_data(self, offset, data):
2372-        length = len(data)
2373-        precondition(offset >= 0, offset)
2374-        if self._max_size is not None and offset+length > self._max_size:
2375-            raise DataTooLargeError(self._max_size, offset, length)
2376-        f = open(self.fname, 'rb+')
2377-        real_offset = self._data_offset+offset
2378-        f.seek(real_offset)
2379-        assert f.tell() == real_offset
2380-        f.write(data)
2381-        f.close()
2382+        pass
2383 
2384     def _write_lease_record(self, f, lease_number, lease_info):
2385         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2386hunk ./src/allmydata/storage/backends/null/core.py 84
2387             if data:
2388                 yield LeaseInfo().from_immutable_data(data)
2389 
2390-    def add_lease(self, lease_info):
2391-        f = open(self.fname, 'rb+')
2392-        num_leases = self._read_num_leases(f)
2393-        self._write_lease_record(f, num_leases, lease_info)
2394-        self._write_num_leases(f, num_leases+1)
2395-        f.close()
2396+    def add_lease(self, lease):
2397+        pass
2398 
2399     def renew_lease(self, renew_secret, new_expire_time):
2400         for i,lease in enumerate(self.get_leases()):
2401hunk ./src/allmydata/test/test_backends.py 32
2402                      'sharetypes' : None}
2403 
2404 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2405-    @mock.patch('time.time')
2406-    @mock.patch('os.mkdir')
2407-    @mock.patch('__builtin__.open')
2408-    @mock.patch('os.listdir')
2409-    @mock.patch('os.path.isdir')
2410-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2411-        """ This tests whether a server instance can be constructed
2412-        with a null backend. The server instance fails the test if it
2413-        tries to read or write to the file system. """
2414-
2415-        # Now begin the test.
2416-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2417-
2418-        self.failIf(mockisdir.called)
2419-        self.failIf(mocklistdir.called)
2420-        self.failIf(mockopen.called)
2421-        self.failIf(mockmkdir.called)
2422-
2423-        # You passed!
2424-
2425     @mock.patch('time.time')
2426     @mock.patch('os.mkdir')
2427     @mock.patch('__builtin__.open')
2428hunk ./src/allmydata/test/test_backends.py 53
2429                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2430         mockopen.side_effect = call_open
2431 
2432-        # Now begin the test.
2433-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2434-
2435-        self.failIf(mockisdir.called)
2436-        self.failIf(mocklistdir.called)
2437-        self.failIf(mockopen.called)
2438-        self.failIf(mockmkdir.called)
2439-        self.failIf(mocktime.called)
2440-
2441-        # You passed!
2442-
2443-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2444-    def setUp(self):
2445-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2446-
2447-    @mock.patch('os.mkdir')
2448-    @mock.patch('__builtin__.open')
2449-    @mock.patch('os.listdir')
2450-    @mock.patch('os.path.isdir')
2451-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2452-        """ Write a new share. """
2453-
2454-        # Now begin the test.
2455-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2456-        bs[0].remote_write(0, 'a')
2457-        self.failIf(mockisdir.called)
2458-        self.failIf(mocklistdir.called)
2459-        self.failIf(mockopen.called)
2460-        self.failIf(mockmkdir.called)
2461+        def call_isdir(fname):
2462+            if fname == os.path.join(tempdir,'shares'):
2463+                return True
2464+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2465+                return True
2466+            else:
2467+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2468+        mockisdir.side_effect = call_isdir
2469 
2470hunk ./src/allmydata/test/test_backends.py 62
2471-    @mock.patch('os.path.exists')
2472-    @mock.patch('os.path.getsize')
2473-    @mock.patch('__builtin__.open')
2474-    @mock.patch('os.listdir')
2475-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2476-        """ This tests whether the code correctly finds and reads
2477-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2478-        servers. There is a similar test in test_download, but that one
2479-        is from the perspective of the client and exercises a deeper
2480-        stack of code. This one is for exercising just the
2481-        StorageServer object. """
2482+        def call_mkdir(fname, mode):
2483+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2484+            self.failUnlessEqual(0777, mode)
2485+            if fname == tempdir:
2486+                return None
2487+            elif fname == os.path.join(tempdir,'shares'):
2488+                return None
2489+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2490+                return None
2491+            else:
2492+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2493+        mockmkdir.side_effect = call_mkdir
2494 
2495         # Now begin the test.
2496hunk ./src/allmydata/test/test_backends.py 76
2497-        bs = self.s.remote_get_buckets('teststorage_index')
2498+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2499 
2500hunk ./src/allmydata/test/test_backends.py 78
2501-        self.failUnlessEqual(len(bs), 0)
2502-        self.failIf(mocklistdir.called)
2503-        self.failIf(mockopen.called)
2504-        self.failIf(mockgetsize.called)
2505-        self.failIf(mockexists.called)
2506+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2507 
2508 
2509 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2510hunk ./src/allmydata/test/test_backends.py 193
2511         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2512 
2513 
2514+
2515+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2516+    @mock.patch('time.time')
2517+    @mock.patch('os.mkdir')
2518+    @mock.patch('__builtin__.open')
2519+    @mock.patch('os.listdir')
2520+    @mock.patch('os.path.isdir')
2521+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2522+        """ This tests whether a file system backend instance can be
2523+        constructed. To pass the test, it has to use the
2524+        filesystem in only the prescribed ways. """
2525+
2526+        def call_open(fname, mode):
2527+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2528+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2529+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2530+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2531+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2532+                return StringIO()
2533+            else:
2534+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2535+        mockopen.side_effect = call_open
2536+
2537+        def call_isdir(fname):
2538+            if fname == os.path.join(tempdir,'shares'):
2539+                return True
2540+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2541+                return True
2542+            else:
2543+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2544+        mockisdir.side_effect = call_isdir
2545+
2546+        def call_mkdir(fname, mode):
2547+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2548+            self.failUnlessEqual(0777, mode)
2549+            if fname == tempdir:
2550+                return None
2551+            elif fname == os.path.join(tempdir,'shares'):
2552+                return None
2553+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2554+                return None
2555+            else:
2556+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2557+        mockmkdir.side_effect = call_mkdir
2558+
2559+        # Now begin the test.
2560+        DASCore('teststoredir', expiration_policy)
2561+
2562+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2563}
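checkpoint5's tests whitelist filesystem access through mock side_effects (call_open, call_isdir, call_mkdir). The pattern, reduced to one patched call; this sketch uses unittest.mock rather than the external mock package the tests import, but the mechanism is the same:

```python
import os
from unittest import mock

tempdir = 'teststoredir'

def call_isdir(fname):
    # Whitelist the only directories the backend may probe; any other
    # path fails immediately, like the self.fail() calls in the tests.
    allowed = (os.path.join(tempdir, 'shares'),
               os.path.join(tempdir, 'shares', 'incoming'))
    assert fname in allowed, "backend tried to isdir %r" % (fname,)
    return True

with mock.patch('os.path.isdir') as mockisdir:
    mockisdir.side_effect = call_isdir
    ok = os.path.isdir(os.path.join(tempdir, 'shares'))  # permitted probe
```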
2564[checkpoint 6
2565wilcoxjg@gmail.com**20110706190824
2566 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2567] {
2568hunk ./src/allmydata/interfaces.py 100
2569                          renew_secret=LeaseRenewSecret,
2570                          cancel_secret=LeaseCancelSecret,
2571                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2572-                         allocated_size=Offset, canary=Referenceable):
2573+                         allocated_size=Offset,
2574+                         canary=Referenceable):
2575         """
2576hunk ./src/allmydata/interfaces.py 103
2577-        @param storage_index: the index of the bucket to be created or
2578+        @param storage_index: the index of the shares to be created or
2579                               increfed.
2580hunk ./src/allmydata/interfaces.py 105
2581-        @param sharenums: these are the share numbers (probably between 0 and
2582-                          99) that the sender is proposing to store on this
2583-                          server.
2584-        @param renew_secret: This is the secret used to protect bucket refresh
2585+        @param renew_secret: This is the secret used to protect shares refresh
2586                              This secret is generated by the client and
2587                              stored for later comparison by the server. Each
2588                              server is given a different secret.
2589hunk ./src/allmydata/interfaces.py 109
2590-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2591-        @param canary: If the canary is lost before close(), the bucket is
2592+        @param cancel_secret: Like renew_secret, but protects shares decref.
2593+        @param sharenums: these are the share numbers (probably between 0 and
2594+                          99) that the sender is proposing to store on this
2595+                          server.
2596+        @param allocated_size: XXX The size of the shares the client wishes to store.
2597+        @param canary: If the canary is lost before close(), the shares are
2598                        deleted.
2599hunk ./src/allmydata/interfaces.py 116
2600+
2601         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2602                  already have and allocated is what we hereby agree to accept.
2603                  New leases are added for shares in both lists.
2604hunk ./src/allmydata/interfaces.py 128
2605                   renew_secret=LeaseRenewSecret,
2606                   cancel_secret=LeaseCancelSecret):
2607         """
2608-        Add a new lease on the given bucket. If the renew_secret matches an
2609+        Add a new lease on the given shares. If the renew_secret matches an
2610         existing lease, that lease will be renewed instead. If there is no
2611         bucket for the given storage_index, return silently. (note that in
2612         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2613hunk ./src/allmydata/storage/server.py 17
2614 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2615      create_mutable_sharefile
2616 
2617-from zope.interface import implements
2618-
2619 # storage/
2620 # storage/shares/incoming
2621 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2622hunk ./src/allmydata/test/test_backends.py 6
2623 from StringIO import StringIO
2624 
2625 from allmydata.test.common_util import ReallyEqualMixin
2626+from allmydata.util.assertutil import _assert
2627 
2628 import mock, os
2629 
2630hunk ./src/allmydata/test/test_backends.py 92
2631                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2632             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2633                 return StringIO()
2634+            else:
2635+                _assert(False, "The tester code doesn't recognize this case.") 
2636+
2637         mockopen.side_effect = call_open
2638         testbackend = DASCore(tempdir, expiration_policy)
2639         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2640hunk ./src/allmydata/test/test_backends.py 109
2641 
2642         def call_listdir(dirname):
2643             self.failUnlessReallyEqual(dirname, sharedirname)
2644-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2645+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2646 
2647         mocklistdir.side_effect = call_listdir
2648 
2649hunk ./src/allmydata/test/test_backends.py 113
2650+        def call_isdir(dirname):
2651+            self.failUnlessReallyEqual(dirname, sharedirname)
2652+            return True
2653+
2654+        mockisdir.side_effect = call_isdir
2655+
2656+        def call_mkdir(dirname, permissions):
2657+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2658+                self.fail("Server with FS backend tried to mkdir '%s' with permissions '%s'" % (dirname, permissions))
2659+            else:
2660+                return True
2661+
2662+        mockmkdir.side_effect = call_mkdir
2663+
2664         class MockFile:
2665             def __init__(self):
2666                 self.buffer = ''
2667hunk ./src/allmydata/test/test_backends.py 156
2668             return sharefile
2669 
2670         mockopen.side_effect = call_open
2671+
2672         # Now begin the test.
2673         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2674         bs[0].remote_write(0, 'a')
2675hunk ./src/allmydata/test/test_backends.py 161
2676         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2677+       
2678+        # Now test the allocated_size method.
2679+        spaceint = self.s.allocated_size()
2680 
2681     @mock.patch('os.path.exists')
2682     @mock.patch('os.path.getsize')
2683}
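checkpoint 6 rewrites the allocate_buckets docstring in interfaces.py; its return contract, "(alreadygot, allocated), where alreadygot is what we already have and allocated is what we hereby agree to accept", can be sketched as set arithmetic. This illustrates the contract only, not the server's implementation:

```python
def allocate_buckets(existing_shnums, requested_shnums, has_space=True):
    # alreadygot: requested shares the server already holds for this
    # storage index.
    alreadygot = set(requested_shnums) & set(existing_shnums)
    # allocated: the remainder it agrees to accept, space permitting.
    allocated = set(requested_shnums) - alreadygot if has_space else set()
    return alreadygot, allocated
```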
2684[checkpoint 7
2685wilcoxjg@gmail.com**20110706200820
2686 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2687] hunk ./src/allmydata/test/test_backends.py 164
2688         
2689         # Now test the allocated_size method.
2690         spaceint = self.s.allocated_size()
2691+        self.failUnlessReallyEqual(spaceint, 1)
2692 
2693     @mock.patch('os.path.exists')
2694     @mock.patch('os.path.getsize')
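checkpoint 7 asserts that allocated_size() is 1 after allocating a single one-byte bucket. A hypothetical reconstruction of that accounting, summing the space reserved by still-open writers; BucketWriterStub and this allocated_size are stand-ins, not the patch's code:

```python
class BucketWriterStub:
    # Stand-in for a BucketWriter that reserved max_space_per_bucket bytes.
    def __init__(self, max_space_per_bucket):
        self._reserved = max_space_per_bucket
    def allocated_size(self):
        return self._reserved

def allocated_size(active_writers):
    # Total space promised to writers that have not yet closed.
    return sum(bw.allocated_size() for bw in active_writers)
```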
2695[checkpoint8
2696wilcoxjg@gmail.com**20110706223126
2697 Ignore-this: 97336180883cb798b16f15411179f827
2698   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2699] hunk ./src/allmydata/test/test_backends.py 32
2700                      'cutoff_date' : None,
2701                      'sharetypes' : None}
2702 
2703+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2704+    def setUp(self):
2705+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2706+
2707+    @mock.patch('os.mkdir')
2708+    @mock.patch('__builtin__.open')
2709+    @mock.patch('os.listdir')
2710+    @mock.patch('os.path.isdir')
2711+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2712+        """ Write a new share. """
2713+
2714+        # Now begin the test.
2715+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2716+        bs[0].remote_write(0, 'a')
2717+        self.failIf(mockisdir.called)
2718+        self.failIf(mocklistdir.called)
2719+        self.failIf(mockopen.called)
2720+        self.failIf(mockmkdir.called)
2721+
2722 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2723     @mock.patch('time.time')
2724     @mock.patch('os.mkdir')
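As the checkpoint8 message notes, the null backend exists so tests can exercise "unlimited space" without ever touching the filesystem; the test above verifies that by failing if any of the mocked filesystem calls fire. The idea can be sketched as follows (hypothetical names, not the actual `NullCore` implementation):

```python
class NullBucketWriterSketch:
    """Accepts writes and discards them; no file I/O ever happens."""
    def remote_write(self, offset, data):
        pass  # deliberately a no-op

    def remote_close(self):
        pass

class NullBackendSketch:
    """A backend that reports unlimited space and stores nothing."""
    def get_available_space(self):
        return None  # None conventionally means "no limit"

    def make_bucket_writer(self, storage_index, shnum, max_space,
                           lease_info, canary):
        return NullBucketWriterSketch()

backend = NullBackendSketch()
bw = backend.make_bucket_writer('teststorage_index', 0, 2**40, None, None)
bw.remote_write(0, 'a' * 1000)  # succeeds without creating any file
```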
2725[checkpoint 9
2726wilcoxjg@gmail.com**20110707042942
2727 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2728] {
2729hunk ./src/allmydata/storage/backends/das/core.py 88
2730                     filename = os.path.join(finalstoragedir, f)
2731                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2732         except OSError:
2733-            # Commonly caused by there being no buckets at all.
2734+            # Commonly caused by there being no shares at all.
2735             pass
2736         
2737     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2738hunk ./src/allmydata/storage/backends/das/core.py 141
2739         self.storage_index = storageindex
2740         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2741         self._max_size = max_size
2742+        self.incomingdir = os.path.join(sharedir, 'incoming')
2743+        si_dir = storage_index_to_dir(storageindex)
2744+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2745+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2746         if create:
2747             # touch the file, so later callers will see that we're working on
2748             # it. Also construct the metadata.
2749hunk ./src/allmydata/storage/backends/das/core.py 177
2750             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2751         self._data_offset = 0xc
2752 
2753+    def close(self):
2754+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2755+        fileutil.rename(self.incominghome, self.finalhome)
2756+        try:
2757+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2758+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2759+            # these directories lying around forever, but the delete might
2760+            # fail if we're working on another share for the same storage
2761+            # index (like ab/abcde/5). The alternative approach would be to
2762+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2763+            # ShareWriter), each of which is responsible for a single
2764+            # directory on disk, and have them use reference counting of
2765+            # their children to know when they should do the rmdir. This
2766+            # approach is simpler, but relies on os.rmdir refusing to delete
2767+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2768+            os.rmdir(os.path.dirname(self.incominghome))
2769+            # we also delete the grandparent (prefix) directory, .../ab ,
2770+            # again to avoid leaving directories lying around. This might
2771+            # fail if there is another bucket open that shares a prefix (like
2772+            # ab/abfff).
2773+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2774+            # we leave the great-grandparent (incoming/) directory in place.
2775+        except EnvironmentError:
2776+            # ignore the "can't rmdir because the directory is not empty"
2777+            # exceptions, those are normal consequences of the
2778+            # above-mentioned conditions.
2779+            pass
2780+        pass
2781+       
2782+    def stat(self):
2783+        return os.stat(self.finalhome)[stat.ST_SIZE]
2784+
2785     def get_shnum(self):
2786         return self.shnum
2787 
2788hunk ./src/allmydata/storage/immutable.py 7
2789 
2790 from zope.interface import implements
2791 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2792-from allmydata.util import base32, fileutil, log
2793+from allmydata.util import base32, log
2794 from allmydata.util.assertutil import precondition
2795 from allmydata.util.hashutil import constant_time_compare
2796 from allmydata.storage.lease import LeaseInfo
2797hunk ./src/allmydata/storage/immutable.py 44
2798     def remote_close(self):
2799         precondition(not self.closed)
2800         start = time.time()
2801-
2802-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2803-        fileutil.rename(self.incominghome, self.finalhome)
2804-        try:
2805-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2806-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2807-            # these directories lying around forever, but the delete might
2808-            # fail if we're working on another share for the same storage
2809-            # index (like ab/abcde/5). The alternative approach would be to
2810-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2811-            # ShareWriter), each of which is responsible for a single
2812-            # directory on disk, and have them use reference counting of
2813-            # their children to know when they should do the rmdir. This
2814-            # approach is simpler, but relies on os.rmdir refusing to delete
2815-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2816-            os.rmdir(os.path.dirname(self.incominghome))
2817-            # we also delete the grandparent (prefix) directory, .../ab ,
2818-            # again to avoid leaving directories lying around. This might
2819-            # fail if there is another bucket open that shares a prefix (like
2820-            # ab/abfff).
2821-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2822-            # we leave the great-grandparent (incoming/) directory in place.
2823-        except EnvironmentError:
2824-            # ignore the "can't rmdir because the directory is not empty"
2825-            # exceptions, those are normal consequences of the
2826-            # above-mentioned conditions.
2827-            pass
2828+        self._sharefile.close()
2829         self._sharefile = None
2830         self.closed = True
2831         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2832hunk ./src/allmydata/storage/immutable.py 49
2833 
2834-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2835+        filelen = self._sharefile.stat()
2836         self.ss.bucket_writer_closed(self, filelen)
2837         self.ss.add_latency("close", time.time() - start)
2838         self.ss.count("close")
2839hunk ./src/allmydata/storage/server.py 45
2840         self._active_writers = weakref.WeakKeyDictionary()
2841         self.backend = backend
2842         self.backend.setServiceParent(self)
2843+        self.backend.set_storage_server(self)
2844         log.msg("StorageServer created", facility="tahoe.storage")
2845 
2846         self.latencies = {"allocate": [], # immutable
2847hunk ./src/allmydata/storage/server.py 220
2848 
2849         for shnum in (sharenums - alreadygot):
2850             if (not limited) or (remaining_space >= max_space_per_bucket):
2851-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2852-                self.backend.set_storage_server(self)
2853                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2854                                                      max_space_per_bucket, lease_info, canary)
2855                 bucketwriters[shnum] = bw
2856hunk ./src/allmydata/test/test_backends.py 117
2857         mockopen.side_effect = call_open
2858         testbackend = DASCore(tempdir, expiration_policy)
2859         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2860-
2861+   
2862+    @mock.patch('allmydata.util.fileutil.get_available_space')
2863     @mock.patch('time.time')
2864     @mock.patch('os.mkdir')
2865     @mock.patch('__builtin__.open')
2866hunk ./src/allmydata/test/test_backends.py 124
2867     @mock.patch('os.listdir')
2868     @mock.patch('os.path.isdir')
2869-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2870+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2871+                             mockget_available_space):
2872         """ Write a new share. """
2873 
2874         def call_listdir(dirname):
2875hunk ./src/allmydata/test/test_backends.py 148
2876 
2877         mockmkdir.side_effect = call_mkdir
2878 
2879+        def call_get_available_space(storedir, reserved_space):
2880+            self.failUnlessReallyEqual(storedir, tempdir)
2881+            return 1
2882+
2883+        mockget_available_space.side_effect = call_get_available_space
2884+
2885         class MockFile:
2886             def __init__(self):
2887                 self.buffer = ''
2888hunk ./src/allmydata/test/test_backends.py 188
2889         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2890         bs[0].remote_write(0, 'a')
2891         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2892-       
2893+
2894+        # What happens when there's not enough space for the client's request?
2895+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2896+
2897         # Now test the allocated_size method.
2898         spaceint = self.s.allocated_size()
2899         self.failUnlessReallyEqual(spaceint, 1)
2900}
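Checkpoint 9 moves the close-time logic — rename the share from incoming/ to its final home, then opportunistically remove the emptied parent and prefix directories — out of BucketWriter and into the backend's ImmutableShare.close(). The pattern in isolation, sketched with only the standard library (modern Python; the patch itself targets Python 2):

```python
import os, shutil, tempfile

def finalize_share(incominghome, finalhome):
    """Move a finished share from incoming/ to its final location, then try
    to remove the emptied parent and prefix directories, tolerating failure
    when sibling shares still occupy them (mirrors ImmutableShare.close())."""
    os.makedirs(os.path.dirname(finalhome), exist_ok=True)
    os.rename(incominghome, finalhome)
    try:
        # os.rmdir refuses to delete a non-empty directory, which is exactly
        # the safety property the comments in the patch rely on
        os.rmdir(os.path.dirname(incominghome))
        os.rmdir(os.path.dirname(os.path.dirname(incominghome)))
    except EnvironmentError:
        pass  # another share for the same prefix is still in flight

root = tempfile.mkdtemp()
inc = os.path.join(root, 'incoming', 'ab', 'abcde', '4')
fin = os.path.join(root, 'ab', 'abcde', '4')
os.makedirs(os.path.dirname(inc))
open(inc, 'w').close()
finalize_share(inc, fin)
print(os.path.exists(fin))  # True; incoming/ab/abcde was cleaned up too
shutil.rmtree(root)
```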
2901[checkpoint10
2902wilcoxjg@gmail.com**20110707172049
2903 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2904] {
2905hunk ./src/allmydata/test/test_backends.py 20
2906 # The following share file contents was generated with
2907 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2908 # with share data == 'a'.
2909-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2910+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2911+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2912+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2913 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2914 
2915hunk ./src/allmydata/test/test_backends.py 25
2916+testnodeid = 'testnodeidxxxxxxxxxx'
2917 tempdir = 'teststoredir'
2918 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2919 sharefname = os.path.join(sharedirname, '0')
2920hunk ./src/allmydata/test/test_backends.py 37
2921 
2922 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2923     def setUp(self):
2924-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2925+        self.s = StorageServer(testnodeid, backend=NullCore())
2926 
2927     @mock.patch('os.mkdir')
2928     @mock.patch('__builtin__.open')
2929hunk ./src/allmydata/test/test_backends.py 99
2930         mockmkdir.side_effect = call_mkdir
2931 
2932         # Now begin the test.
2933-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2934+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2935 
2936         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2937 
2938hunk ./src/allmydata/test/test_backends.py 119
2939 
2940         mockopen.side_effect = call_open
2941         testbackend = DASCore(tempdir, expiration_policy)
2942-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2943-   
2944+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2945+       
2946+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2947     @mock.patch('allmydata.util.fileutil.get_available_space')
2948     @mock.patch('time.time')
2949     @mock.patch('os.mkdir')
2950hunk ./src/allmydata/test/test_backends.py 129
2951     @mock.patch('os.listdir')
2952     @mock.patch('os.path.isdir')
2953     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2954-                             mockget_available_space):
2955+                             mockget_available_space, mockget_shares):
2956         """ Write a new share. """
2957 
2958         def call_listdir(dirname):
2959hunk ./src/allmydata/test/test_backends.py 139
2960         mocklistdir.side_effect = call_listdir
2961 
2962         def call_isdir(dirname):
2963+            #XXX Should there be any other tests here?
2964             self.failUnlessReallyEqual(dirname, sharedirname)
2965             return True
2966 
2967hunk ./src/allmydata/test/test_backends.py 159
2968 
2969         mockget_available_space.side_effect = call_get_available_space
2970 
2971+        mocktime.return_value = 0
2972+        class MockShare:
2973+            def __init__(self):
2974+                self.shnum = 1
2975+               
2976+            def add_or_renew_lease(elf, lease_info):
2977+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
2978+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
2979+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
2980+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
2981+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
2982+               
2983+
2984+        share = MockShare()
2985+        def call_get_shares(storageindex):
2986+            return [share]
2987+
2988+        mockget_shares.side_effect = call_get_shares
2989+
2990         class MockFile:
2991             def __init__(self):
2992                 self.buffer = ''
2993hunk ./src/allmydata/test/test_backends.py 199
2994             def tell(self):
2995                 return self.pos
2996 
2997-        mocktime.return_value = 0
2998 
2999         sharefile = MockFile()
3000         def call_open(fname, mode):
3001}
3002[jacp 11
3003wilcoxjg@gmail.com**20110708213919
3004 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3005] {
3006hunk ./src/allmydata/storage/backends/das/core.py 144
3007         self.incomingdir = os.path.join(sharedir, 'incoming')
3008         si_dir = storage_index_to_dir(storageindex)
3009         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3010+        #XXX  self.fname and self.finalhome need to be resolved/merged.
3011         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3012         if create:
3013             # touch the file, so later callers will see that we're working on
3014hunk ./src/allmydata/storage/backends/das/core.py 208
3015         pass
3016         
3017     def stat(self):
3018-        return os.stat(self.finalhome)[stat.ST_SIZE]
3019+        return os.stat(self.finalhome).st_size
3020 
3021     def get_shnum(self):
3022         return self.shnum
3023hunk ./src/allmydata/storage/immutable.py 44
3024     def remote_close(self):
3025         precondition(not self.closed)
3026         start = time.time()
3027+
3028         self._sharefile.close()
3029hunk ./src/allmydata/storage/immutable.py 46
3030+        filelen = self._sharefile.stat()
3031         self._sharefile = None
3032hunk ./src/allmydata/storage/immutable.py 48
3033+
3034         self.closed = True
3035         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3036 
3037hunk ./src/allmydata/storage/immutable.py 52
3038-        filelen = self._sharefile.stat()
3039         self.ss.bucket_writer_closed(self, filelen)
3040         self.ss.add_latency("close", time.time() - start)
3041         self.ss.count("close")
3042hunk ./src/allmydata/storage/server.py 220
3043 
3044         for shnum in (sharenums - alreadygot):
3045             if (not limited) or (remaining_space >= max_space_per_bucket):
3046-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3047-                                                     max_space_per_bucket, lease_info, canary)
3048+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3049                 bucketwriters[shnum] = bw
3050                 self._active_writers[bw] = 1
3051                 if limited:
3052hunk ./src/allmydata/test/test_backends.py 20
3053 # The following share file contents was generated with
3054 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3055 # with share data == 'a'.
3056-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3057-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3058+renew_secret  = 'x'*32
3059+cancel_secret = 'y'*32
3060 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3061 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3062 
3063hunk ./src/allmydata/test/test_backends.py 27
3064 testnodeid = 'testnodeidxxxxxxxxxx'
3065 tempdir = 'teststoredir'
3066-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3067-sharefname = os.path.join(sharedirname, '0')
3068+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3069+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3070+shareincomingname = os.path.join(sharedirincomingname, '0')
3071+sharefname = os.path.join(sharedirfinalname, '0')
3072+
3073 expiration_policy = {'enabled' : False,
3074                      'mode' : 'age',
3075                      'override_lease_duration' : None,
3076hunk ./src/allmydata/test/test_backends.py 123
3077         mockopen.side_effect = call_open
3078         testbackend = DASCore(tempdir, expiration_policy)
3079         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3080-       
3081+
3082+    @mock.patch('allmydata.util.fileutil.rename')
3083+    @mock.patch('allmydata.util.fileutil.make_dirs')
3084+    @mock.patch('os.path.exists')
3085+    @mock.patch('os.stat')
3086     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3087     @mock.patch('allmydata.util.fileutil.get_available_space')
3088     @mock.patch('time.time')
3089hunk ./src/allmydata/test/test_backends.py 136
3090     @mock.patch('os.listdir')
3091     @mock.patch('os.path.isdir')
3092     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3093-                             mockget_available_space, mockget_shares):
3094+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3095+                             mockmake_dirs, mockrename):
3096         """ Write a new share. """
3097 
3098         def call_listdir(dirname):
3099hunk ./src/allmydata/test/test_backends.py 141
3100-            self.failUnlessReallyEqual(dirname, sharedirname)
3101+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3102             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3103 
3104         mocklistdir.side_effect = call_listdir
3105hunk ./src/allmydata/test/test_backends.py 148
3106 
3107         def call_isdir(dirname):
3108             #XXX Should there be any other tests here?
3109-            self.failUnlessReallyEqual(dirname, sharedirname)
3110+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3111             return True
3112 
3113         mockisdir.side_effect = call_isdir
3114hunk ./src/allmydata/test/test_backends.py 154
3115 
3116         def call_mkdir(dirname, permissions):
3117-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3118+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3119                 self.Fail
3120             else:
3121                 return True
3122hunk ./src/allmydata/test/test_backends.py 208
3123                 return self.pos
3124 
3125 
3126-        sharefile = MockFile()
3127+        fobj = MockFile()
3128         def call_open(fname, mode):
3129             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3130hunk ./src/allmydata/test/test_backends.py 211
3131-            return sharefile
3132+            return fobj
3133 
3134         mockopen.side_effect = call_open
3135 
3136hunk ./src/allmydata/test/test_backends.py 215
3137+        def call_make_dirs(dname):
3138+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3139+           
3140+        mockmake_dirs.side_effect = call_make_dirs
3141+
3142+        def call_rename(src, dst):
3143+           self.failUnlessReallyEqual(src, shareincomingname)
3144+           self.failUnlessReallyEqual(dst, sharefname)
3145+           
3146+        mockrename.side_effect = call_rename
3147+
3148+        def call_exists(fname):
3149+            self.failUnlessReallyEqual(fname, sharefname)
3150+
3151+        mockexists.side_effect = call_exists
3152+
3153         # Now begin the test.
3154         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3155         bs[0].remote_write(0, 'a')
3156hunk ./src/allmydata/test/test_backends.py 234
3157-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3158+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3159+        spaceint = self.s.allocated_size()
3160+        self.failUnlessReallyEqual(spaceint, 1)
3161+
3162+        bs[0].remote_close()
3163 
3164         # What happens when there's not enough space for the client's request?
3165hunk ./src/allmydata/test/test_backends.py 241
3166-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3167+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3168 
3169         # Now test the allocated_size method.
3170hunk ./src/allmydata/test/test_backends.py 244
3171-        spaceint = self.s.allocated_size()
3172-        self.failUnlessReallyEqual(spaceint, 1)
3173+        #self.failIf(mockexists.called, mockexists.call_args_list)
3174+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3175+        #self.failIf(mockrename.called, mockrename.call_args_list)
3176+        #self.failIf(mockstat.called, mockstat.call_args_list)
3177 
3178     @mock.patch('os.path.exists')
3179     @mock.patch('os.path.getsize')
3180}
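The test above now stacks ten `@mock.patch` decorators, and the argument list must mirror them bottom-up: decorators apply innermost-first, so the decorator closest to the function supplies the first mock argument. A stripped-down illustration (using `unittest.mock`, the standard-library descendant of the `mock` library the patch imports):

```python
from unittest import mock

@mock.patch('os.path.getsize')   # outermost decorator -> last argument
@mock.patch('os.path.exists')    # innermost decorator -> first argument
def check(mockexists, mockgetsize):
    # each argument is the MagicMock that replaced the named function
    assert mockexists is not mockgetsize
    mockexists.return_value = True
    import os.path
    return os.path.exists('/nonexistent')  # hits the mock, not the real fs

print(check())  # True
```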
3181[checkpoint12 testing correct behavior with regard to incoming and final
3182wilcoxjg@gmail.com**20110710191915
3183 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3184] {
3185hunk ./src/allmydata/storage/backends/das/core.py 74
3186         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3187         self.lease_checker.setServiceParent(self)
3188 
3189+    def get_incoming(self, storageindex):
3190+        return set((1,))
3191+
3192     def get_available_space(self):
3193         if self.readonly:
3194             return 0
3195hunk ./src/allmydata/storage/server.py 77
3196         """Return a dict, indexed by category, that contains a dict of
3197         latency numbers for each category. If there are sufficient samples
3198         for unambiguous interpretation, each dict will contain the
3199-        following keys: mean, 01_0_percentile, 10_0_percentile,
3200+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3201         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3202         99_0_percentile, 99_9_percentile.  If there are insufficient
3203         samples for a given percentile to be interpreted unambiguously
3204hunk ./src/allmydata/storage/server.py 120
3205 
3206     def get_stats(self):
3207         # remember: RIStatsProvider requires that our return dict
3208-        # contains numeric values.
3209+        # contains numeric, or None values.
3210         stats = { 'storage_server.allocated': self.allocated_size(), }
3211         stats['storage_server.reserved_space'] = self.reserved_space
3212         for category,ld in self.get_latencies().items():
3213hunk ./src/allmydata/storage/server.py 185
3214         start = time.time()
3215         self.count("allocate")
3216         alreadygot = set()
3217+        incoming = set()
3218         bucketwriters = {} # k: shnum, v: BucketWriter
3219 
3220         si_s = si_b2a(storage_index)
3221hunk ./src/allmydata/storage/server.py 219
3222             alreadygot.add(share.shnum)
3223             share.add_or_renew_lease(lease_info)
3224 
3225-        for shnum in (sharenums - alreadygot):
3226+        # fill incoming with all shares that are incoming; use a set operation since there's no need to operate on individual pieces
3227+        incoming = self.backend.get_incoming(storageindex)
3228+
3229+        for shnum in ((sharenums - alreadygot) - incoming):
3230             if (not limited) or (remaining_space >= max_space_per_bucket):
3231                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3232                 bucketwriters[shnum] = bw
3233hunk ./src/allmydata/storage/server.py 229
3234                 self._active_writers[bw] = 1
3235                 if limited:
3236                     remaining_space -= max_space_per_bucket
3237-
3238-        #XXX We SHOULD DOCUMENT LATER.
3239+            else:
3240+                # Bummer: not enough space to accept this share.
3241+                pass
3242 
3243         self.add_latency("allocate", time.time() - start)
3244         return alreadygot, bucketwriters
3245hunk ./src/allmydata/storage/server.py 323
3246         self.add_latency("get", time.time() - start)
3247         return bucketreaders
3248 
3249-    def get_leases(self, storage_index):
3250+    def remote_get_incoming(self, storageindex):
3251+        incoming_share_set = self.backend.get_incoming(storageindex)
3252+        return incoming_share_set
3253+
3254+    def get_leases(self, storageindex):
3255         """Provide an iterator that yields all of the leases attached to this
3256         bucket. Each lease is returned as a LeaseInfo instance.
3257 
3258hunk ./src/allmydata/storage/server.py 337
3259         # since all shares get the same lease data, we just grab the leases
3260         # from the first share
3261         try:
3262-            shnum, filename = self._get_shares(storage_index).next()
3263+            shnum, filename = self._get_shares(storageindex).next()
3264             sf = ShareFile(filename)
3265             return sf.get_leases()
3266         except StopIteration:
3267hunk ./src/allmydata/test/test_backends.py 182
3268 
3269         share = MockShare()
3270         def call_get_shares(storageindex):
3271-            return [share]
3272+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3273+            return []#share]
3274 
3275         mockget_shares.side_effect = call_get_shares
3276 
3277hunk ./src/allmydata/test/test_backends.py 222
3278         mockmake_dirs.side_effect = call_make_dirs
3279 
3280         def call_rename(src, dst):
3281-           self.failUnlessReallyEqual(src, shareincomingname)
3282-           self.failUnlessReallyEqual(dst, sharefname)
3283+            self.failUnlessReallyEqual(src, shareincomingname)
3284+            self.failUnlessReallyEqual(dst, sharefname)
3285             
3286         mockrename.side_effect = call_rename
3287 
3288hunk ./src/allmydata/test/test_backends.py 233
3289         mockexists.side_effect = call_exists
3290 
3291         # Now begin the test.
3292+
3293+        # XXX (0) ???  Fail unless something is not properly set-up?
3294         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3295hunk ./src/allmydata/test/test_backends.py 236
3296+
3297+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3298+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3299+
3300+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3301+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3302+        # with the same si, until BucketWriter.remote_close() has been called.
3303+        # self.failIf(bsa)
3304+
3305+        # XXX (3) Inspect final and fail unless there's nothing there.
3306         bs[0].remote_write(0, 'a')
3307hunk ./src/allmydata/test/test_backends.py 247
3308+        # XXX (4a) Inspect final and fail unless share 0 is there.
3309+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3310         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3311         spaceint = self.s.allocated_size()
3312         self.failUnlessReallyEqual(spaceint, 1)
3313hunk ./src/allmydata/test/test_backends.py 253
3314 
3315+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3316         bs[0].remote_close()
3317 
3318         # What happens when there's not enough space for the client's request?
3319hunk ./src/allmydata/test/test_backends.py 260
3320         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3321 
3322         # Now test the allocated_size method.
3323-        #self.failIf(mockexists.called, mockexists.call_args_list)
3324+        # self.failIf(mockexists.called, mockexists.call_args_list)
3325         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3326         #self.failIf(mockrename.called, mockrename.call_args_list)
3327         #self.failIf(mockstat.called, mockstat.call_args_list)
3328}
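The allocation change in checkpoint12 boils down to simple set arithmetic: a share gets a BucketWriter only if it is neither already in final storage nor still being uploaded. In isolation:

```python
sharenums  = {0, 1, 2, 3}   # shares the client wants to upload
alreadygot = {1}            # shares already present in final storage
incoming   = {2}            # shares another upload is still writing

# only shares that are neither finished nor in flight get a BucketWriter
to_write = (sharenums - alreadygot) - incoming
print(sorted(to_write))  # [0, 3]
```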
3329[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3330wilcoxjg@gmail.com**20110710195139
3331 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3332] {
3333hunk ./src/allmydata/storage/server.py 220
3334             share.add_or_renew_lease(lease_info)
3335 
3336         # fill incoming with all shares that are incoming; use a set operation since there's no need to operate on individual pieces
3337-        incoming = self.backend.get_incoming(storageindex)
3338+        incoming = self.backend.get_incoming(storage_index)
3339 
3340         for shnum in ((sharenums - alreadygot) - incoming):
3341             if (not limited) or (remaining_space >= max_space_per_bucket):
3342hunk ./src/allmydata/storage/server.py 323
3343         self.add_latency("get", time.time() - start)
3344         return bucketreaders
3345 
3346-    def remote_get_incoming(self, storageindex):
3347-        incoming_share_set = self.backend.get_incoming(storageindex)
3348+    def remote_get_incoming(self, storage_index):
3349+        incoming_share_set = self.backend.get_incoming(storage_index)
3350         return incoming_share_set
3351 
3352hunk ./src/allmydata/storage/server.py 327
3353-    def get_leases(self, storageindex):
3354+    def get_leases(self, storage_index):
3355         """Provide an iterator that yields all of the leases attached to this
3356         bucket. Each lease is returned as a LeaseInfo instance.
3357 
3358hunk ./src/allmydata/storage/server.py 337
3359         # since all shares get the same lease data, we just grab the leases
3360         # from the first share
3361         try:
3362-            shnum, filename = self._get_shares(storageindex).next()
3363+            shnum, filename = self._get_shares(storage_index).next()
3364             sf = ShareFile(filename)
3365             return sf.get_leases()
3366         except StopIteration:
3367replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3368}

Context:

[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
 
 provide status overlap info on the webapi t=json output, add decode/decrypt
 rate tooltips, add zoomin/zoomout buttons
]
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
]
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
 
 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
]
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
]
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
]
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
 Ignore-this: abb864427a1b91bd10d5132b4589fd90
]
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
 Ignore-this: c63e23146c39195de52fb17c7c49b2da
]
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
]
[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
]
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
]
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
]
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
]
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
]
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
]
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
]
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
]
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
]
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
]
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
]
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
]
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
]
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
]
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
]
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
]
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
]
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
]
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
]
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
]
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
]
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
]
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
]
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
]
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
]
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
]
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
]
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might be spreading out the total number of shares requested at
 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
 segment, so the effect doesn't last very long).
]
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
]
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
]
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
]
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
]
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
]
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
]
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
]
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
]
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
]
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
]
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
]
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
]
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
]
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
]
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
]
[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
]
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
]
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
]
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
]
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
]
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
]
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
]
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101]
Patch bundle hash:
d96c31d3e49569ffbba58de050b35f4088087d6f