Ticket #999: jacp15.darcs.patch

File jacp15.darcs.patch, 205.9 KB (added by arch_o_median, at 2011-07-13T06:06:01Z)
1Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
2  * storage: new mocking tests of storage server read and write
3  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
4
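The mock-all-filesystem-calls approach this entry describes can be sketched in a few lines. This is a minimal illustration using Python 3's `unittest.mock` and `builtins.open`; the patch itself targets Python 2 and patches `__builtin__.open` via the external `mock` package, and `read_state` here is a hypothetical stand-in for the code under test:

```python
import io
from unittest import mock

def read_state(path):
    # Hypothetical code under test: reads a state file from disk.
    with open(path) as f:
        return f.read()

# Patch the builtin open so the test never touches a real filesystem.
with mock.patch('builtins.open') as mockopen:
    mockopen.return_value = io.StringIO('cycle: 7')
    assert read_state('testdir/bucket_counter.state') == 'cycle: 7'

# The mock records exactly how the code under test used the filesystem.
assert mockopen.call_args[0][0] == 'testdir/bucket_counter.state'
```

Because the mock records every call, the test can assert not just on return values but on which paths the server touched, which is what the tests below do.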
5Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
6  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
7  sloppy not for production
8
9Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
10  * a temp patch used as a snapshot
11
12Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
13  * snapshot of progress on backend implementation (not suitable for trunk)
14
15Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
16  * checkpoint patch
17
18Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
19  * checkpoint4
20
21Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
22  * checkpoint5
23
24Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
25  * checkpoint 6
26
27Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
28  * checkpoint 7
29
30Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
31  * checkpoint8
32    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
33
34Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
35  * checkpoint 9
36
37Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
38  * checkpoint10
39
40Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
41  * jacp 11
42
43Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
44  * checkpoint12 testing correct behavior with regard to incoming and final
45
46Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
47  * fix inconsistent naming of storage_index vs storageindex in storage/server.py
48
49Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
50  * adding comments to clarify what I'm about to do.
51
52Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
53  * branching back, no longer attempting to mock inside TestServerFSBackend
54
55Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
56  * checkpoint12 TestServerFSBackend no longer mocks filesystem
57
58Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
59  * JACP
60
61Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
62  * testing get incoming
63
64Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
65  * ImmutableShareFile does not know its StorageIndex
66
67Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
68  * get_incoming correctly reports the 0 share after it has arrived
69
70Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
71  * jacp14
72
73Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
74  * jacp14 or so
75
76New patches:
77
78[storage: new mocking tests of storage server read and write
79wilcoxjg@gmail.com**20110325203514
80 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
82] {
83addfile ./src/allmydata/test/test_server.py
84hunk ./src/allmydata/test/test_server.py 1
85+from twisted.trial import unittest
86+
87+from StringIO import StringIO
88+
89+from allmydata.test.common_util import ReallyEqualMixin
90+
91+import mock
92+
93+# This is the code that we're going to be testing.
94+from allmydata.storage.server import StorageServer
95+
96+# The following share file contents was generated with
97+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
98+# with share data == 'a'.
99+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
100+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
101+
102+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
103+
104+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
105+    @mock.patch('__builtin__.open')
106+    def test_create_server(self, mockopen):
107+        """ This tests whether a server instance can be constructed. """
108+
109+        def call_open(fname, mode):
110+            if fname == 'testdir/bucket_counter.state':
111+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
112+            elif fname == 'testdir/lease_checker.state':
113+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
114+            elif fname == 'testdir/lease_checker.history':
115+                return StringIO()
116+        mockopen.side_effect = call_open
117+
118+        # Now begin the test.
119+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
120+
121+        # You passed!
122+
123+class TestServer(unittest.TestCase, ReallyEqualMixin):
124+    @mock.patch('__builtin__.open')
125+    def setUp(self, mockopen):
126+        def call_open(fname, mode):
127+            if fname == 'testdir/bucket_counter.state':
128+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
129+            elif fname == 'testdir/lease_checker.state':
130+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
131+            elif fname == 'testdir/lease_checker.history':
132+                return StringIO()
133+        mockopen.side_effect = call_open
134+
135+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
136+
137+
138+    @mock.patch('time.time')
139+    @mock.patch('os.mkdir')
140+    @mock.patch('__builtin__.open')
141+    @mock.patch('os.listdir')
142+    @mock.patch('os.path.isdir')
143+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
144+        """Handle a report of corruption."""
145+
146+        def call_listdir(dirname):
147+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
148+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
149+
150+        mocklistdir.side_effect = call_listdir
151+
152+        class MockFile:
153+            def __init__(self):
154+                self.buffer = ''
155+                self.pos = 0
156+            def write(self, instring):
157+                begin = self.pos
158+                padlen = begin - len(self.buffer)
159+                if padlen > 0:
160+                    self.buffer += '\x00' * padlen
161+                end = self.pos + len(instring)
162+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
163+                self.pos = end
164+            def close(self):
165+                pass
166+            def seek(self, pos):
167+                self.pos = pos
168+            def read(self, numberbytes):
169+                return self.buffer[self.pos:self.pos+numberbytes]
170+            def tell(self):
171+                return self.pos
172+
173+        mocktime.return_value = 0
174+
175+        sharefile = MockFile()
176+        def call_open(fname, mode):
177+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
178+            return sharefile
179+
180+        mockopen.side_effect = call_open
181+        # Now begin the test.
182+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
183+        print bs
184+        bs[0].remote_write(0, 'a')
185+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
186+
187+
188+    @mock.patch('os.path.exists')
189+    @mock.patch('os.path.getsize')
190+    @mock.patch('__builtin__.open')
191+    @mock.patch('os.listdir')
192+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
193+        """ This tests whether the code correctly finds and reads
194+        shares written out by old (Tahoe-LAFS <= v1.8.2)
195+        servers. There is a similar test in test_download, but that one
196+        is from the perspective of the client and exercises a deeper
197+        stack of code. This one is for exercising just the
198+        StorageServer object. """
199+
200+        def call_listdir(dirname):
201+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
202+            return ['0']
203+
204+        mocklistdir.side_effect = call_listdir
205+
206+        def call_open(fname, mode):
207+            self.failUnlessReallyEqual(fname, sharefname)
208+            self.failUnless('r' in mode, mode)
209+            self.failUnless('b' in mode, mode)
210+
211+            return StringIO(share_file_data)
212+        mockopen.side_effect = call_open
213+
214+        datalen = len(share_file_data)
215+        def call_getsize(fname):
216+            self.failUnlessReallyEqual(fname, sharefname)
217+            return datalen
218+        mockgetsize.side_effect = call_getsize
219+
220+        def call_exists(fname):
221+            self.failUnlessReallyEqual(fname, sharefname)
222+            return True
223+        mockexists.side_effect = call_exists
224+
225+        # Now begin the test.
226+        bs = self.s.remote_get_buckets('teststorage_index')
227+
228+        self.failUnlessEqual(len(bs), 1)
229+        b = bs[0]
230+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
232+        # If you try to read past the end you get as much data as is there.
232+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
233+        # If you start reading past the end of the file you get the empty string.
234+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
235}
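Both tests in the patch above drive their mocks through `side_effect` functions that dispatch on the file name, so any unexpected filesystem access fails loudly instead of silently returning a `MagicMock`. A condensed sketch of that dispatch pattern (Python 3 spelling with `unittest.mock`; the patch uses the `mock` package against `__builtin__.open`):

```python
import io
from unittest import mock

def call_open(fname, mode='r'):
    # Fresh storage dir: crawler state files don't exist yet, the
    # lease-checker history reads as empty, anything else is a bug.
    if fname.endswith('.state'):
        raise IOError(2, "No such file or directory: %r" % fname)
    elif fname.endswith('lease_checker.history'):
        return io.StringIO()
    raise AssertionError("unexpected open of %r" % (fname,))

with mock.patch('builtins.open', side_effect=call_open):
    try:
        open('testdir/bucket_counter.state')
        raised = False
    except IOError as e:
        raised = (e.errno == 2)
    assert raised
    assert open('testdir/lease_checker.history').read() == ''
```

Raising `IOError` with errno 2 (ENOENT) from the fake `open` is what lets the `StorageServer` constructor take its "no saved state yet" path without a real directory existing.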
236[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
237wilcoxjg@gmail.com**20110624202850
238 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
239 sloppy not for production
240] {
241move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
242hunk ./src/allmydata/storage/crawler.py 13
243     pass
244 
245 class ShareCrawler(service.MultiService):
246-    """A ShareCrawler subclass is attached to a StorageServer, and
247+    """A subclass of ShareCrawler is attached to a StorageServer, and
248     periodically walks all of its shares, processing each one in some
249     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
250     since large servers can easily have a terabyte of shares, in several
251hunk ./src/allmydata/storage/crawler.py 31
252     We assume that the normal upload/download/get_buckets traffic of a tahoe
253     grid will cause the prefixdir contents to be mostly cached in the kernel,
254     or that the number of buckets in each prefixdir will be small enough to
255-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
256+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
257     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
258     prefix. On this server, each prefixdir took 130ms-200ms to list the first
259     time, and 17ms to list the second time.
260hunk ./src/allmydata/storage/crawler.py 68
261     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
262     minimum_cycle_time = 300 # don't run a cycle faster than this
263 
264-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
265+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
266         service.MultiService.__init__(self)
267         if allowed_cpu_percentage is not None:
268             self.allowed_cpu_percentage = allowed_cpu_percentage
269hunk ./src/allmydata/storage/crawler.py 72
270-        self.server = server
271-        self.sharedir = server.sharedir
272-        self.statefile = statefile
273+        self.backend = backend
274         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
275                          for i in range(2**10)]
276         self.prefixes.sort()
277hunk ./src/allmydata/storage/crawler.py 446
278 
279     minimum_cycle_time = 60*60 # we don't need this more than once an hour
280 
281-    def __init__(self, server, statefile, num_sample_prefixes=1):
282-        ShareCrawler.__init__(self, server, statefile)
283+    def __init__(self, statefile, num_sample_prefixes=1):
284+        ShareCrawler.__init__(self, statefile)
285         self.num_sample_prefixes = num_sample_prefixes
286 
287     def add_initial_state(self):
288hunk ./src/allmydata/storage/expirer.py 15
289     removed.
290 
291     I collect statistics on the leases and make these available to a web
292-    status page, including::
293+    status page, including:
294 
295     Space recovered during this cycle-so-far:
296      actual (only if expiration_enabled=True):
297hunk ./src/allmydata/storage/expirer.py 51
298     slow_start = 360 # wait 6 minutes after startup
299     minimum_cycle_time = 12*60*60 # not more than twice per day
300 
301-    def __init__(self, server, statefile, historyfile,
302+    def __init__(self, statefile, historyfile,
303                  expiration_enabled, mode,
304                  override_lease_duration, # used if expiration_mode=="age"
305                  cutoff_date, # used if expiration_mode=="cutoff-date"
306hunk ./src/allmydata/storage/expirer.py 71
307         else:
308             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
309         self.sharetypes_to_expire = sharetypes
310-        ShareCrawler.__init__(self, server, statefile)
311+        ShareCrawler.__init__(self, statefile)
312 
313     def add_initial_state(self):
314         # we fill ["cycle-to-date"] here (even though they will be reset in
315hunk ./src/allmydata/storage/immutable.py 44
316     sharetype = "immutable"
317 
318     def __init__(self, filename, max_size=None, create=False):
319-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
320+        """ If max_size is not None then I won't allow more than
321+        max_size to be written to me. If create=True then max_size
322+        must not be None. """
323         precondition((max_size is not None) or (not create), max_size, create)
324         self.home = filename
325         self._max_size = max_size
326hunk ./src/allmydata/storage/immutable.py 87
327 
328     def read_share_data(self, offset, length):
329         precondition(offset >= 0)
330-        # reads beyond the end of the data are truncated. Reads that start
331-        # beyond the end of the data return an empty string. I wonder why
332-        # Python doesn't do the following computation for me?
333+        # Reads beyond the end of the data are truncated. Reads that start
334+        # beyond the end of the data return an empty string.
335         seekpos = self._data_offset+offset
336         fsize = os.path.getsize(self.home)
337         actuallength = max(0, min(length, fsize-seekpos))
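The clamping line above, `actuallength = max(0, min(length, fsize-seekpos))`, is exactly what produces the truncation semantics checked by `test_read_share` in the first patch: reads overlapping the end of the data are shortened, and reads that start past the end return nothing. A standalone check of the arithmetic, with illustrative offsets rather than the real share-file layout:

```python
def clamped_length(offset, length, data_offset, fsize):
    # Same computation as ShareFile.read_share_data: clamp the read so
    # it never extends past end-of-file and is never negative.
    seekpos = data_offset + offset
    return max(0, min(length, fsize - seekpos))

# Illustrative share file: 1 byte of share data starting at byte 12
# of a 13-byte file.
assert clamped_length(0, 1, 12, 13) == 1       # exact read
assert clamped_length(0, 1 + 20, 12, 13) == 1  # overlaps the end: truncated
assert clamped_length(2, 3, 12, 13) == 0       # starts past the end: empty
```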
338hunk ./src/allmydata/storage/immutable.py 198
339             space_freed += os.stat(self.home)[stat.ST_SIZE]
340             self.unlink()
341         return space_freed
342+class NullBucketWriter(Referenceable):
343+    implements(RIBucketWriter)
344 
345hunk ./src/allmydata/storage/immutable.py 201
346+    def remote_write(self, offset, data):
347+        return
348 
349 class BucketWriter(Referenceable):
350     implements(RIBucketWriter)
351hunk ./src/allmydata/storage/server.py 7
352 from twisted.application import service
353 
354 from zope.interface import implements
355-from allmydata.interfaces import RIStorageServer, IStatsProducer
356+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
357 from allmydata.util import fileutil, idlib, log, time_format
358 import allmydata # for __full_version__
359 
360hunk ./src/allmydata/storage/server.py 16
361 from allmydata.storage.lease import LeaseInfo
362 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
363      create_mutable_sharefile
364-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
365+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
366 from allmydata.storage.crawler import BucketCountingCrawler
367 from allmydata.storage.expirer import LeaseCheckingCrawler
368 
369hunk ./src/allmydata/storage/server.py 20
370+from zope.interface import implements
371+
372+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
373+# be started and stopped.
374+class Backend(service.MultiService):
375+    implements(IStatsProducer)
376+    def __init__(self):
377+        service.MultiService.__init__(self)
378+
379+    def get_bucket_shares(self):
380+        """XXX"""
381+        raise NotImplementedError
382+
383+    def get_share(self):
384+        """XXX"""
385+        raise NotImplementedError
386+
387+    def make_bucket_writer(self):
388+        """XXX"""
389+        raise NotImplementedError
390+
391+class NullBackend(Backend):
392+    def __init__(self):
393+        Backend.__init__(self)
394+
395+    def get_available_space(self):
396+        return None
397+
398+    def get_bucket_shares(self, storage_index):
399+        return set()
400+
401+    def get_share(self, storage_index, sharenum):
402+        return None
403+
404+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
405+        return NullBucketWriter()
406+
407+class FSBackend(Backend):
408+    def __init__(self, storedir, readonly=False, reserved_space=0):
409+        Backend.__init__(self)
410+
411+        self._setup_storage(storedir, readonly, reserved_space)
412+        self._setup_corruption_advisory()
413+        self._setup_bucket_counter()
414+        self._setup_lease_checkerf()
415+
416+    def _setup_storage(self, storedir, readonly, reserved_space):
417+        self.storedir = storedir
418+        self.readonly = readonly
419+        self.reserved_space = int(reserved_space)
420+        if self.reserved_space:
421+            if self.get_available_space() is None:
422+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
423+                        umid="0wZ27w", level=log.UNUSUAL)
424+
425+        self.sharedir = os.path.join(self.storedir, "shares")
426+        fileutil.make_dirs(self.sharedir)
427+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
428+        self._clean_incomplete()
429+
430+    def _clean_incomplete(self):
431+        fileutil.rm_dir(self.incomingdir)
432+        fileutil.make_dirs(self.incomingdir)
433+
434+    def _setup_corruption_advisory(self):
435+        # we don't actually create the corruption-advisory dir until necessary
436+        self.corruption_advisory_dir = os.path.join(self.storedir,
437+                                                    "corruption-advisories")
438+
439+    def _setup_bucket_counter(self):
440+        statefile = os.path.join(self.storedir, "bucket_counter.state")
441+        self.bucket_counter = BucketCountingCrawler(statefile)
442+        self.bucket_counter.setServiceParent(self)
443+
444+    def _setup_lease_checkerf(self):
445+        statefile = os.path.join(self.storedir, "lease_checker.state")
446+        historyfile = os.path.join(self.storedir, "lease_checker.history")
447+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
448+                                   expiration_enabled, expiration_mode,
449+                                   expiration_override_lease_duration,
450+                                   expiration_cutoff_date,
451+                                   expiration_sharetypes)
452+        self.lease_checker.setServiceParent(self)
453+
454+    def get_available_space(self):
455+        if self.readonly:
456+            return 0
457+        return fileutil.get_available_space(self.storedir, self.reserved_space)
458+
459+    def get_bucket_shares(self, storage_index):
460+        """Return a list of (shnum, pathname) tuples for files that hold
461+        shares for this storage_index. In each tuple, 'shnum' will always be
462+        the integer form of the last component of 'pathname'."""
463+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
464+        try:
465+            for f in os.listdir(storagedir):
466+                if NUM_RE.match(f):
467+                    filename = os.path.join(storagedir, f)
468+                    yield (int(f), filename)
469+        except OSError:
470+            # Commonly caused by there being no buckets at all.
471+            pass
472+
473 # storage/
474 # storage/shares/incoming
475 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
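The shape of the plugin interface introduced in the hunk above — an abstract `Backend` whose methods raise `NotImplementedError`, plus a `NullBackend` that stores nothing and reports `None` (meaning "no limit") as its available space — reduces to a small sketch. Class and method names follow the hunk, but this is a simplified illustration, not the patch's actual code:

```python
class Backend(object):
    """Abstract storage backend; concrete backends override these."""
    def get_available_space(self):
        raise NotImplementedError

    def get_bucket_shares(self, storage_index):
        raise NotImplementedError

class NullBackend(Backend):
    """Stores nothing; None available space means 'unlimited'."""
    def get_available_space(self):
        return None

    def get_bucket_shares(self, storage_index):
        return set()

backend = NullBackend()
# remote_get_version treats None the way it treats a platform with no
# disk-stats API: advertise 2**64 bytes remaining.
remaining = backend.get_available_space()
if remaining is None:
    remaining = 2**64
assert remaining == 2**64
assert backend.get_bucket_shares('teststorage_index') == set()
```

This is why the changelog calls the null backend "necessary to test unlimited space": it exercises the `remaining_space is None` branch of the server without any filesystem at all.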
476hunk ./src/allmydata/storage/server.py 143
477     name = 'storage'
478     LeaseCheckerClass = LeaseCheckingCrawler
479 
480-    def __init__(self, storedir, nodeid, reserved_space=0,
481-                 discard_storage=False, readonly_storage=False,
482+    def __init__(self, nodeid, backend, reserved_space=0,
483+                 readonly_storage=False,
484                  stats_provider=None,
485                  expiration_enabled=False,
486                  expiration_mode="age",
487hunk ./src/allmydata/storage/server.py 155
488         assert isinstance(nodeid, str)
489         assert len(nodeid) == 20
490         self.my_nodeid = nodeid
491-        self.storedir = storedir
492-        sharedir = os.path.join(storedir, "shares")
493-        fileutil.make_dirs(sharedir)
494-        self.sharedir = sharedir
495-        # we don't actually create the corruption-advisory dir until necessary
496-        self.corruption_advisory_dir = os.path.join(storedir,
497-                                                    "corruption-advisories")
498-        self.reserved_space = int(reserved_space)
499-        self.no_storage = discard_storage
500-        self.readonly_storage = readonly_storage
501         self.stats_provider = stats_provider
502         if self.stats_provider:
503             self.stats_provider.register_producer(self)
504hunk ./src/allmydata/storage/server.py 158
505-        self.incomingdir = os.path.join(sharedir, 'incoming')
506-        self._clean_incomplete()
507-        fileutil.make_dirs(self.incomingdir)
508         self._active_writers = weakref.WeakKeyDictionary()
509hunk ./src/allmydata/storage/server.py 159
510+        self.backend = backend
511+        self.backend.setServiceParent(self)
512         log.msg("StorageServer created", facility="tahoe.storage")
513 
514hunk ./src/allmydata/storage/server.py 163
515-        if reserved_space:
516-            if self.get_available_space() is None:
517-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
518-                        umin="0wZ27w", level=log.UNUSUAL)
519-
520         self.latencies = {"allocate": [], # immutable
521                           "write": [],
522                           "close": [],
523hunk ./src/allmydata/storage/server.py 174
524                           "renew": [],
525                           "cancel": [],
526                           }
527-        self.add_bucket_counter()
528-
529-        statefile = os.path.join(self.storedir, "lease_checker.state")
530-        historyfile = os.path.join(self.storedir, "lease_checker.history")
531-        klass = self.LeaseCheckerClass
532-        self.lease_checker = klass(self, statefile, historyfile,
533-                                   expiration_enabled, expiration_mode,
534-                                   expiration_override_lease_duration,
535-                                   expiration_cutoff_date,
536-                                   expiration_sharetypes)
537-        self.lease_checker.setServiceParent(self)
538 
539     def __repr__(self):
540         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
541hunk ./src/allmydata/storage/server.py 178
542 
543-    def add_bucket_counter(self):
544-        statefile = os.path.join(self.storedir, "bucket_counter.state")
545-        self.bucket_counter = BucketCountingCrawler(self, statefile)
546-        self.bucket_counter.setServiceParent(self)
547-
548     def count(self, name, delta=1):
549         if self.stats_provider:
550             self.stats_provider.count("storage_server." + name, delta)
551hunk ./src/allmydata/storage/server.py 233
552             kwargs["facility"] = "tahoe.storage"
553         return log.msg(*args, **kwargs)
554 
555-    def _clean_incomplete(self):
556-        fileutil.rm_dir(self.incomingdir)
557-
558     def get_stats(self):
559         # remember: RIStatsProvider requires that our return dict
560         # contains numeric values.
561hunk ./src/allmydata/storage/server.py 269
562             stats['storage_server.total_bucket_count'] = bucket_count
563         return stats
564 
565-    def get_available_space(self):
566-        """Returns available space for share storage in bytes, or None if no
567-        API to get this information is available."""
568-
569-        if self.readonly_storage:
570-            return 0
571-        return fileutil.get_available_space(self.storedir, self.reserved_space)
572-
573     def allocated_size(self):
574         space = 0
575         for bw in self._active_writers:
576hunk ./src/allmydata/storage/server.py 276
577         return space
578 
579     def remote_get_version(self):
580-        remaining_space = self.get_available_space()
581+        remaining_space = self.backend.get_available_space()
582         if remaining_space is None:
583             # We're on a platform that has no API to get disk stats.
584             remaining_space = 2**64
585hunk ./src/allmydata/storage/server.py 301
586         self.count("allocate")
587         alreadygot = set()
588         bucketwriters = {} # k: shnum, v: BucketWriter
589-        si_dir = storage_index_to_dir(storage_index)
590-        si_s = si_b2a(storage_index)
591 
592hunk ./src/allmydata/storage/server.py 302
593+        si_s = si_b2a(storage_index)
594         log.msg("storage: allocate_buckets %s" % si_s)
595 
596         # in this implementation, the lease information (including secrets)
597hunk ./src/allmydata/storage/server.py 316
598 
599         max_space_per_bucket = allocated_size
600 
601-        remaining_space = self.get_available_space()
602+        remaining_space = self.backend.get_available_space()
603         limited = remaining_space is not None
604         if limited:
605             # this is a bit conservative, since some of this allocated_size()
606hunk ./src/allmydata/storage/server.py 329
607         # they asked about: this will save them a lot of work. Add or update
608         # leases for all of them: if they want us to hold shares for this
609         # file, they'll want us to hold leases for this file.
610-        for (shnum, fn) in self._get_bucket_shares(storage_index):
611+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
612             alreadygot.add(shnum)
613             sf = ShareFile(fn)
614             sf.add_or_renew_lease(lease_info)
615hunk ./src/allmydata/storage/server.py 335
616 
617         for shnum in sharenums:
618-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
619-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
620-            if os.path.exists(finalhome):
621+            share = self.backend.get_share(storage_index, shnum)
622+
623+            if not share:
624+                if (not limited) or (remaining_space >= max_space_per_bucket):
625+                    # ok! we need to create the new share file.
626+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
627+                                      max_space_per_bucket, lease_info, canary)
628+                    bucketwriters[shnum] = bw
629+                    self._active_writers[bw] = 1
630+                    if limited:
631+                        remaining_space -= max_space_per_bucket
632+                else:
633+                    # bummer! not enough space to accept this bucket
634+                    pass
635+
636+            elif share.is_complete():
637                 # great! we already have it. easy.
638                 pass
639hunk ./src/allmydata/storage/server.py 353
640-            elif os.path.exists(incominghome):
641+            elif not share.is_complete():
642                 # Note that we don't create BucketWriters for shnums that
643                 # have a partial share (in incoming/), so if a second upload
644                 # occurs while the first is still in progress, the second
645hunk ./src/allmydata/storage/server.py 359
646                 # uploader will use different storage servers.
647                 pass
648-            elif (not limited) or (remaining_space >= max_space_per_bucket):
649-                # ok! we need to create the new share file.
650-                bw = BucketWriter(self, incominghome, finalhome,
651-                                  max_space_per_bucket, lease_info, canary)
652-                if self.no_storage:
653-                    bw.throw_out_all_data = True
654-                bucketwriters[shnum] = bw
655-                self._active_writers[bw] = 1
656-                if limited:
657-                    remaining_space -= max_space_per_bucket
658-            else:
659-                # bummer! not enough space to accept this bucket
660-                pass
661-
662-        if bucketwriters:
663-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
664 
665         self.add_latency("allocate", time.time() - start)
666         return alreadygot, bucketwriters
667hunk ./src/allmydata/storage/server.py 437
668             self.stats_provider.count('storage_server.bytes_added', consumed_size)
669         del self._active_writers[bw]
670 
671-    def _get_bucket_shares(self, storage_index):
672-        """Return a list of (shnum, pathname) tuples for files that hold
673-        shares for this storage_index. In each tuple, 'shnum' will always be
674-        the integer form of the last component of 'pathname'."""
675-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
676-        try:
677-            for f in os.listdir(storagedir):
678-                if NUM_RE.match(f):
679-                    filename = os.path.join(storagedir, f)
680-                    yield (int(f), filename)
681-        except OSError:
682-            # Commonly caused by there being no buckets at all.
683-            pass
684 
685     def remote_get_buckets(self, storage_index):
686         start = time.time()
687hunk ./src/allmydata/storage/server.py 444
688         si_s = si_b2a(storage_index)
689         log.msg("storage: get_buckets %s" % si_s)
690         bucketreaders = {} # k: sharenum, v: BucketReader
691-        for shnum, filename in self._get_bucket_shares(storage_index):
692+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
693             bucketreaders[shnum] = BucketReader(self, filename,
694                                                 storage_index, shnum)
695         self.add_latency("get", time.time() - start)
696hunk ./src/allmydata/test/test_backends.py 10
697 import mock
698 
699 # This is the code that we're going to be testing.
700-from allmydata.storage.server import StorageServer
701+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
702 
703 # The following share file contents was generated with
704 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
705hunk ./src/allmydata/test/test_backends.py 21
706 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
707 
708 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
709+    @mock.patch('time.time')
710+    @mock.patch('os.mkdir')
711+    @mock.patch('__builtin__.open')
712+    @mock.patch('os.listdir')
713+    @mock.patch('os.path.isdir')
714+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
715+        """ This tests whether a server instance can be constructed
716+        with a null backend. The server instance fails the test if it
717+        tries to read or write to the file system. """
718+
719+        # Now begin the test.
720+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
721+
722+        self.failIf(mockisdir.called)
723+        self.failIf(mocklistdir.called)
724+        self.failIf(mockopen.called)
725+        self.failIf(mockmkdir.called)
726+
727+        # You passed!
728+
729+    @mock.patch('time.time')
730+    @mock.patch('os.mkdir')
731     @mock.patch('__builtin__.open')
732hunk ./src/allmydata/test/test_backends.py 44
733-    def test_create_server(self, mockopen):
734-        """ This tests whether a server instance can be constructed. """
735+    @mock.patch('os.listdir')
736+    @mock.patch('os.path.isdir')
737+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
738+        """ This tests whether a server instance can be constructed
739+        with a filesystem backend. To pass the test, it has to use the
740+        filesystem in only the prescribed ways. """
741 
742         def call_open(fname, mode):
743             if fname == 'testdir/bucket_counter.state':
744hunk ./src/allmydata/test/test_backends.py 58
745                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
746             elif fname == 'testdir/lease_checker.history':
747                 return StringIO()
748+            else:
749+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
750         mockopen.side_effect = call_open
751 
752         # Now begin the test.
753hunk ./src/allmydata/test/test_backends.py 63
754-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
755+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
756+
757+        self.failIf(mockisdir.called)
758+        self.failIf(mocklistdir.called)
759+        self.failIf(mockopen.called)
760+        self.failIf(mockmkdir.called)
761+        self.failIf(mocktime.called)
762 
763         # You passed!
764 
765hunk ./src/allmydata/test/test_backends.py 73
766-class TestServer(unittest.TestCase, ReallyEqualMixin):
767+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
768+    def setUp(self):
769+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
770+
771+    @mock.patch('os.mkdir')
772+    @mock.patch('__builtin__.open')
773+    @mock.patch('os.listdir')
774+    @mock.patch('os.path.isdir')
775+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
776+        """ Write a new share. """
777+
778+        # Now begin the test.
779+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
780+        bs[0].remote_write(0, 'a')
781+        self.failIf(mockisdir.called)
782+        self.failIf(mocklistdir.called)
783+        self.failIf(mockopen.called)
784+        self.failIf(mockmkdir.called)
785+
786+    @mock.patch('os.path.exists')
787+    @mock.patch('os.path.getsize')
788+    @mock.patch('__builtin__.open')
789+    @mock.patch('os.listdir')
790+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
791+        """ Try to read a share from a server with the null
792+        backend. The server should report that it has no buckets,
793+        and it must not touch the filesystem while looking. """
797+
798+        # Now begin the test.
799+        bs = self.s.remote_get_buckets('teststorage_index')
800+
801+        self.failUnlessEqual(len(bs), 0)
802+        self.failIf(mocklistdir.called)
803+        self.failIf(mockopen.called)
804+        self.failIf(mockgetsize.called)
805+        self.failIf(mockexists.called)
806+
807+
808+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
809     @mock.patch('__builtin__.open')
810     def setUp(self, mockopen):
811         def call_open(fname, mode):
812hunk ./src/allmydata/test/test_backends.py 126
813                 return StringIO()
814         mockopen.side_effect = call_open
815 
816-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
817-
818+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
819 
820     @mock.patch('time.time')
821     @mock.patch('os.mkdir')
822hunk ./src/allmydata/test/test_backends.py 134
823     @mock.patch('os.listdir')
824     @mock.patch('os.path.isdir')
825     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
826-        """Handle a report of corruption."""
827+        """ Write a new share. """
828 
829         def call_listdir(dirname):
830             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
831hunk ./src/allmydata/test/test_backends.py 173
832         mockopen.side_effect = call_open
833         # Now begin the test.
834         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
835-        print bs
836         bs[0].remote_write(0, 'a')
837         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
838 
839hunk ./src/allmydata/test/test_backends.py 176
840-
841     @mock.patch('os.path.exists')
842     @mock.patch('os.path.getsize')
843     @mock.patch('__builtin__.open')
844hunk ./src/allmydata/test/test_backends.py 218
845 
846         self.failUnlessEqual(len(bs), 1)
847         b = bs[0]
848+        # These should match by definition; the next two cases cover reads whose behavior is not (completely) unambiguous.
849         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
850         # If you try to read past the end you get as much data as is there.
851         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
852hunk ./src/allmydata/test/test_backends.py 224
853         # If you start reading past the end of the file you get the empty string.
854         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
855+
856+
857}
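The tests in the patch above assert that the server performs no filesystem I/O by patching the relevant calls out with `mock` and checking that none were invoked. A minimal standalone sketch of the same technique, using `unittest.mock` (the modern stdlib home of the `mock` library used above); the helper name and its arguments are illustrative, not part of Tahoe-LAFS:

```python
import os
from unittest import mock  # stdlib home of the `mock` library used above


def constructs_without_disk_io(factory):
    """Call factory() with filesystem entry points patched out and report
    whether it stayed off the disk, mirroring the failIf(mockmkdir.called)
    style of assertion in the tests above."""
    with mock.patch('os.mkdir') as mkd, mock.patch('os.listdir') as lsd:
        factory()
        return not (mkd.called or lsd.called)
```

An object that never touches the disk during construction passes; anything that scans a directory is caught by the patched-out call.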
858[a temp patch used as a snapshot
859wilcoxjg@gmail.com**20110626052732
860 Ignore-this: 95f05e314eaec870afa04c76d979aa44
861] {
862hunk ./docs/configuration.rst 637
863   [storage]
864   enabled = True
865   readonly = True
866-  sizelimit = 10000000000
867 
868 
869   [helper]
870hunk ./docs/garbage-collection.rst 16
871 
872 When a file or directory in the virtual filesystem is no longer referenced,
873 the space that its shares occupied on each storage server can be freed,
874-making room for other shares. Tahoe currently uses a garbage collection
875+making room for other shares. Tahoe uses a garbage collection
876 ("GC") mechanism to implement this space-reclamation process. Each share has
877 one or more "leases", which are managed by clients who want the
878 file/directory to be retained. The storage server accepts each share for a
879hunk ./docs/garbage-collection.rst 34
880 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
881 If lease renewal occurs quickly and with 100% reliability, then any renewal
882 time that is shorter than the lease duration will suffice, but a larger ratio
883-of duration-over-renewal-time will be more robust in the face of occasional
884+of lease duration to renewal time will be more robust in the face of occasional
885 delays or failures.
886 
887 The current recommended values for a small Tahoe grid are to renew the leases
888replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
889hunk ./src/allmydata/client.py 260
890             sharetypes.append("mutable")
891         expiration_sharetypes = tuple(sharetypes)
892 
893+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
894+            xyz
895+        xyz
896         ss = StorageServer(storedir, self.nodeid,
897                            reserved_space=reserved,
898                            discard_storage=discard,
899hunk ./src/allmydata/storage/crawler.py 234
900         f = open(tmpfile, "wb")
901         pickle.dump(self.state, f)
902         f.close()
903-        fileutil.move_into_place(tmpfile, self.statefile)
904+        fileutil.move_into_place(tmpfile, self.statefname)
905 
906     def startService(self):
907         # arrange things to look like we were just sleeping, so
908}
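The crawler hunk above persists its state by writing to a temp file and then calling `fileutil.move_into_place`, so a crash mid-write never leaves a truncated state file. A hedged sketch of that tmpfile-then-rename pattern in plain stdlib Python (the helper name is mine; `os.replace` is the Python 3 spelling, atomic on POSIX filesystems):

```python
import os
import pickle


def save_state_atomically(state, statefname):
    """Write `state` so readers never observe a partial file: dump the
    pickle to a sibling temp file, then rename it over the target."""
    tmpfile = statefname + ".tmp"
    with open(tmpfile, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmpfile, statefname)  # atomic rename over the old state
```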
909[snapshot of progress on backend implementation (not suitable for trunk)
910wilcoxjg@gmail.com**20110626053244
911 Ignore-this: 50c764af791c2b99ada8289546806a0a
912] {
913adddir ./src/allmydata/storage/backends
914adddir ./src/allmydata/storage/backends/das
915move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
916adddir ./src/allmydata/storage/backends/null
917hunk ./src/allmydata/interfaces.py 270
918         store that on disk.
919         """
920 
921+class IStorageBackend(Interface):
922+    """
923+    Objects of this kind live on the server side and are used by the
924+    storage server object.
925+    """
926+    def get_available_space(self, reserved_space):
927+        """ Returns available space for share storage in bytes, or
928+        None if this information is not available or if the available
929+        space is unlimited.
930+
931+        If the backend is configured for read-only mode then this will
932+        return 0.
933+
934+        reserved_space is how many bytes to subtract from the answer, so
935+        you can pass how many bytes you would like to leave unused on this
936+        filesystem as reserved_space. """
937+
938+    def get_bucket_shares(self):
939+        """XXX"""
940+
941+    def get_share(self):
942+        """XXX"""
943+
944+    def make_bucket_writer(self):
945+        """XXX"""
946+
947+class IStorageBackendShare(Interface):
948+    """
949+    This object represents up to all of the data of a single share.  It is
950+    intended to be evaluated lazily, so that in many use cases substantially
951+    less than all of the share data will actually be accessed.
952+    """
953+    def is_complete(self):
954+        """
955+        Returns the share state, or None if the share does not exist.
956+        """
957+
958 class IStorageBucketWriter(Interface):
959     """
960     Objects of this kind live on the client side.
961hunk ./src/allmydata/interfaces.py 2492
962 
963 class EmptyPathnameComponentError(Exception):
964     """The webapi disallows empty pathname components."""
965+
966+class IShareStore(Interface):
967+    pass
968+
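The `get_available_space` contract declared above (0 when read-only, None when the amount is unknown or unlimited, otherwise free space minus `reserved_space`) can be sketched as a plain function. The function name and defaults here are illustrative, not Tahoe-LAFS API:

```python
def available_space(disk_free, readonly=False, reserved_space=0):
    """Sketch of the IStorageBackend.get_available_space contract.

    disk_free is the platform-reported free byte count, or None when no
    disk-statistics API (statvfs/GetDiskFreeSpaceEx) is available.
    """
    if readonly:
        return 0               # read-only backends advertise no space
    if disk_free is None:
        return None            # unknown, or unlimited (e.g. a null backend)
    return max(0, disk_free - reserved_space)
```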
969addfile ./src/allmydata/storage/backends/__init__.py
970addfile ./src/allmydata/storage/backends/das/__init__.py
971addfile ./src/allmydata/storage/backends/das/core.py
972hunk ./src/allmydata/storage/backends/das/core.py 1
973+from allmydata.interfaces import IStorageBackend
974+from allmydata.storage.backends.base import Backend
975+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
976+from allmydata.util.assertutil import precondition
977+
978+import os, re, stat, weakref, struct, time
979+
980+from foolscap.api import Referenceable
981+from twisted.application import service
982+
983+from zope.interface import implements
984+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
985+from allmydata.util import fileutil, idlib, log, time_format
986+import allmydata # for __full_version__
987+
988+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
989+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.common import UnknownImmutableContainerVersionError, DataTooLargeError
+from allmydata.util.hashutil import constant_time_compare
+# share files within a bucket directory are named by small integers
+NUM_RE = re.compile("^[0-9]+$")
990+from allmydata.storage.lease import LeaseInfo
991+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
992+     create_mutable_sharefile
993+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
994+from allmydata.storage.crawler import FSBucketCountingCrawler
995+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
996+
999+class DASCore(Backend):
1000+    implements(IStorageBackend)
1001+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1002+        Backend.__init__(self)
1003+
1004+        self._setup_storage(storedir, readonly, reserved_space)
1005+        self._setup_corruption_advisory()
1006+        self._setup_bucket_counter()
1007+        self._setup_lease_checkerf(expiration_policy)
1008+
1009+    def _setup_storage(self, storedir, readonly, reserved_space):
1010+        self.storedir = storedir
1011+        self.readonly = readonly
1012+        self.reserved_space = int(reserved_space)
1013+        if self.reserved_space:
1014+            if self.get_available_space() is None:
1015+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1016+                        umid="0wZ27w", level=log.UNUSUAL)
1017+
1018+        self.sharedir = os.path.join(self.storedir, "shares")
1019+        fileutil.make_dirs(self.sharedir)
1020+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1021+        self._clean_incomplete()
1022+
1023+    def _clean_incomplete(self):
1024+        fileutil.rm_dir(self.incomingdir)
1025+        fileutil.make_dirs(self.incomingdir)
1026+
1027+    def _setup_corruption_advisory(self):
1028+        # we don't actually create the corruption-advisory dir until necessary
1029+        self.corruption_advisory_dir = os.path.join(self.storedir,
1030+                                                    "corruption-advisories")
1031+
1032+    def _setup_bucket_counter(self):
1033+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1034+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1035+        self.bucket_counter.setServiceParent(self)
1036+
1037+    def _setup_lease_checkerf(self, expiration_policy):
1038+        statefile = os.path.join(self.storedir, "lease_checker.state")
1039+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1040+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1041+        self.lease_checker.setServiceParent(self)
1042+
1043+    def get_available_space(self):
1044+        if self.readonly:
1045+            return 0
1046+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1047+
1048+    def get_shares(self, storage_index):
1049+        """Yield an FSBShare object for each share file that corresponds to the passed storage_index."""
1050+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1051+        try:
1052+            for f in os.listdir(finalstoragedir):
1053+                if NUM_RE.match(f):
1054+                    filename = os.path.join(finalstoragedir, f)
1055+                    yield FSBShare(filename, int(f))
1056+        except OSError:
1057+            # Commonly caused by there being no buckets at all.
1058+            pass
1059+       
1060+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1061+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1062+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1063+        return bw
1064+       
1065+
1066+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1067+# and share data. The share data is accessed by RIBucketWriter.write and
1068+# RIBucketReader.read . The lease information is not accessible through these
1069+# interfaces.
1070+
1071+# The share file has the following layout:
1072+#  0x00: share file version number, four bytes, current version is 1
1073+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1074+#  0x08: number of leases, four bytes big-endian
1075+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1076+#  A+0x0c = B: first lease. Lease format is:
1077+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1078+#   B+0x04: renew secret, 32 bytes (SHA256)
1079+#   B+0x24: cancel secret, 32 bytes (SHA256)
1080+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1081+#   B+0x48: next lease, or end of record
1082+
1083+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1084+# but it is still filled in by storage servers in case the storage server
1085+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1086+# share file is moved from one storage server to another. The value stored in
1087+# this field is truncated, so if the actual share data length is >= 2**32,
1088+# then the value stored in this field will be the actual share data length
1089+# modulo 2**32.
1090+
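The layout comment above pins the header down to three big-endian four-byte fields; a minimal round-trip sketch of packing and parsing it (the helper names are mine, but the `>LLL` format string and the length-saturation rule come from the code below):

```python
import struct

HEADER_FMT = ">LLL"                        # version, data length, lease count
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 0x0c bytes


def pack_share_header(data_length, num_leases=0):
    # The length field saturates at 2**32-1, as described in Footnote 1.
    return struct.pack(HEADER_FMT, 1, min(2**32 - 1, data_length), num_leases)


def unpack_share_header(header):
    return struct.unpack(HEADER_FMT, header[:HEADER_SIZE])
```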
1091+class ImmutableShare:
1092+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1093+    sharetype = "immutable"
1094+
1095+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1096+        """ If max_size is not None then I won't allow more than
1097+        max_size to be written to me. If create=True then max_size
1098+        must not be None. """
1099+        precondition((max_size is not None) or (not create), max_size, create)
1100+        self.shnum = shnum
1101+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1102+        self._max_size = max_size
1103+        if create:
1104+            # touch the file, so later callers will see that we're working on
1105+            # it. Also construct the metadata.
1106+            assert not os.path.exists(self.fname)
1107+            fileutil.make_dirs(os.path.dirname(self.fname))
1108+            f = open(self.fname, 'wb')
1109+            # The second field -- the four-byte share data length -- is no
1110+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1111+            # there in case someone downgrades a storage server from >=
1112+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1113+            # server to another, etc. We do saturation -- a share data length
1114+            # larger than 2**32-1 (what can fit into the field) is marked as
1115+            # the largest length that can fit into the field. That way, even
1116+            # if this does happen, the old < v1.3.0 server will still allow
1117+            # clients to read the first part of the share.
1118+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1119+            f.close()
1120+            self._lease_offset = max_size + 0x0c
1121+            self._num_leases = 0
1122+        else:
1123+            f = open(self.fname, 'rb')
1124+            filesize = os.path.getsize(self.fname)
1125+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1126+            f.close()
1127+            if version != 1:
1128+                msg = "sharefile %s had version %d but we wanted 1" % \
1129+                      (self.fname, version)
1130+                raise UnknownImmutableContainerVersionError(msg)
1131+            self._num_leases = num_leases
1132+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1133+        self._data_offset = 0xc
1134+
1135+    def unlink(self):
1136+        os.unlink(self.fname)
1137+
1138+    def read_share_data(self, offset, length):
1139+        precondition(offset >= 0)
1140+        # Reads beyond the end of the data are truncated. Reads that start
1141+        # beyond the end of the data return an empty string.
1142+        seekpos = self._data_offset+offset
1143+        fsize = os.path.getsize(self.fname)
1144+        actuallength = max(0, min(length, fsize-seekpos))
1145+        if actuallength == 0:
1146+            return ""
1147+        f = open(self.fname, 'rb')
1148+        try:
+            f.seek(seekpos)
+            return f.read(actuallength)
+        finally:
+            f.close()
1150+
1151+    def write_share_data(self, offset, data):
1152+        length = len(data)
1153+        precondition(offset >= 0, offset)
1154+        if self._max_size is not None and offset+length > self._max_size:
1155+            raise DataTooLargeError(self._max_size, offset, length)
1156+        f = open(self.fname, 'rb+')
1157+        real_offset = self._data_offset+offset
1158+        f.seek(real_offset)
1159+        assert f.tell() == real_offset
1160+        f.write(data)
1161+        f.close()
1162+
1163+    def _write_lease_record(self, f, lease_number, lease_info):
1164+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1165+        f.seek(offset)
1166+        assert f.tell() == offset
1167+        f.write(lease_info.to_immutable_data())
1168+
1169+    def _read_num_leases(self, f):
1170+        f.seek(0x08)
1171+        (num_leases,) = struct.unpack(">L", f.read(4))
1172+        return num_leases
1173+
1174+    def _write_num_leases(self, f, num_leases):
1175+        f.seek(0x08)
1176+        f.write(struct.pack(">L", num_leases))
1177+
1178+    def _truncate_leases(self, f, num_leases):
1179+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1180+
1181+    def get_leases(self):
1182+        """Yields a LeaseInfo instance for all leases."""
1183+        f = open(self.fname, 'rb')
1184+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1185+        f.seek(self._lease_offset)
1186+        for i in range(num_leases):
1187+            data = f.read(self.LEASE_SIZE)
1188+            if data:
1189+                yield LeaseInfo().from_immutable_data(data)
1190+
1191+    def add_lease(self, lease_info):
1192+        f = open(self.fname, 'rb+')
1193+        num_leases = self._read_num_leases(f)
1194+        self._write_lease_record(f, num_leases, lease_info)
1195+        self._write_num_leases(f, num_leases+1)
1196+        f.close()
1197+
1198+    def renew_lease(self, renew_secret, new_expire_time):
1199+        for i,lease in enumerate(self.get_leases()):
1200+            if constant_time_compare(lease.renew_secret, renew_secret):
1201+                # yup. See if we need to update the owner time.
1202+                if new_expire_time > lease.expiration_time:
1203+                    # yes
1204+                    lease.expiration_time = new_expire_time
1205+                    f = open(self.fname, 'rb+')
1206+                    self._write_lease_record(f, i, lease)
1207+                    f.close()
1208+                return
1209+        raise IndexError("unable to renew non-existent lease")
1210+
1211+    def add_or_renew_lease(self, lease_info):
1212+        try:
1213+            self.renew_lease(lease_info.renew_secret,
1214+                             lease_info.expiration_time)
1215+        except IndexError:
1216+            self.add_lease(lease_info)
1217+
1218+
1219+    def cancel_lease(self, cancel_secret):
1220+        """Remove a lease with the given cancel_secret. If the last lease is
1221+        cancelled, the file will be removed. Return the number of bytes that
1222+        were freed (by truncating the list of leases, and possibly by
1223+        deleting the file). Raise IndexError if there was no lease with the
1224+        given cancel_secret.
1225+        """
1226+
1227+        leases = list(self.get_leases())
1228+        num_leases_removed = 0
1229+        for i,lease in enumerate(leases):
1230+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1231+                leases[i] = None
1232+                num_leases_removed += 1
1233+        if not num_leases_removed:
1234+            raise IndexError("unable to find matching lease to cancel")
1235+        if num_leases_removed:
1236+            # pack and write out the remaining leases. We write these out in
1237+            # the same order as they were added, so that if we crash while
1238+            # doing this, we won't lose any non-cancelled leases.
1239+            leases = [l for l in leases if l] # remove the cancelled leases
1240+            f = open(self.fname, 'rb+')
1241+            for i,lease in enumerate(leases):
1242+                self._write_lease_record(f, i, lease)
1243+            self._write_num_leases(f, len(leases))
1244+            self._truncate_leases(f, len(leases))
1245+            f.close()
1246+        space_freed = self.LEASE_SIZE * num_leases_removed
1247+        if not len(leases):
1248+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1249+            self.unlink()
1250+        return space_freed
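Each lease record appended after the share data is a fixed 72-byte struct, per the `>L32s32sL` format in `LEASE_SIZE` above. A hedged round-trip sketch with helper names of my own:

```python
import struct

LEASE_FMT = ">L32s32sL"   # owner number, renew secret, cancel secret, expiration
LEASE_SIZE = struct.calcsize(LEASE_FMT)


def pack_lease(owner_num, renew_secret, cancel_secret, expiration_time):
    """Serialize one lease record in the on-disk layout described above."""
    return struct.pack(LEASE_FMT, owner_num, renew_secret,
                       cancel_secret, expiration_time)


def unpack_lease(record):
    return struct.unpack(LEASE_FMT, record)
```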
1251hunk ./src/allmydata/storage/backends/das/expirer.py 2
1252 import time, os, pickle, struct
1253-from allmydata.storage.crawler import ShareCrawler
1254-from allmydata.storage.shares import get_share_file
1255+from allmydata.storage.crawler import FSShareCrawler
1256 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1257      UnknownImmutableContainerVersionError
1258 from twisted.python import log as twlog
1259hunk ./src/allmydata/storage/backends/das/expirer.py 7
1260 
1261-class LeaseCheckingCrawler(ShareCrawler):
1262+class FSLeaseCheckingCrawler(FSShareCrawler):
1263     """I examine the leases on all shares, determining which are still valid
1264     and which have expired. I can remove the expired leases (if so
1265     configured), and the share will be deleted when the last lease is
1266hunk ./src/allmydata/storage/backends/das/expirer.py 50
1267     slow_start = 360 # wait 6 minutes after startup
1268     minimum_cycle_time = 12*60*60 # not more than twice per day
1269 
1270-    def __init__(self, statefile, historyfile,
1271-                 expiration_enabled, mode,
1272-                 override_lease_duration, # used if expiration_mode=="age"
1273-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1274-                 sharetypes):
1275+    def __init__(self, statefile, historyfile, expiration_policy):
1276         self.historyfile = historyfile
1277hunk ./src/allmydata/storage/backends/das/expirer.py 52
1278-        self.expiration_enabled = expiration_enabled
1279-        self.mode = mode
1280+        self.expiration_enabled = expiration_policy['enabled']
1281+        self.mode = expiration_policy['mode']
1282         self.override_lease_duration = None
1283         self.cutoff_date = None
1284         if self.mode == "age":
1285hunk ./src/allmydata/storage/backends/das/expirer.py 57
1286-            assert isinstance(override_lease_duration, (int, type(None)))
1287-            self.override_lease_duration = override_lease_duration # seconds
1288+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1289+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1290         elif self.mode == "cutoff-date":
1291hunk ./src/allmydata/storage/backends/das/expirer.py 60
1292-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1293+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1294-            assert cutoff_date is not None
1294+            assert expiration_policy['cutoff_date'] is not None
1295hunk ./src/allmydata/storage/backends/das/expirer.py 62
1296-            self.cutoff_date = cutoff_date
1297+            self.cutoff_date = expiration_policy['cutoff_date']
1298         else:
1299hunk ./src/allmydata/storage/backends/das/expirer.py 64
1300-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1301-        self.sharetypes_to_expire = sharetypes
1302-        ShareCrawler.__init__(self, statefile)
1303+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1304+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1305+        FSShareCrawler.__init__(self, statefile)
1306 
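The refactored constructor above folds five positional arguments into a single `expiration_policy` dict. A sketch of building such a dict — the key names are those the code above reads; the helper and its default values are illustrative, not Tahoe-LAFS's own:

```python
def make_expiration_policy(mode="age", enabled=False):
    """Assemble the keys FSLeaseCheckingCrawler.__init__ consumes."""
    if mode not in ("age", "cutoff-date"):
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
    return {
        'enabled': enabled,
        'mode': mode,
        # consulted only when mode == "age"; None means use each lease's duration
        'override_lease_duration': None,
        # consulted only when mode == "cutoff-date"; seconds-since-epoch
        'cutoff_date': None,
        'sharetypes': ('immutable', 'mutable'),
    }
```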
1307     def add_initial_state(self):
1308         # we fill ["cycle-to-date"] here (even though they will be reset in
1309hunk ./src/allmydata/storage/backends/das/expirer.py 156
1310 
1311     def process_share(self, sharefilename):
1312         # first, find out what kind of a share it is
1313-        sf = get_share_file(sharefilename)
1314+        f = open(sharefilename, "rb")
1315+        prefix = f.read(32)
1316+        f.close()
1317+        if prefix == MutableShareFile.MAGIC:
1318+            sf = MutableShareFile(sharefilename)
1319+        else:
1320+            # otherwise assume it's immutable
1321+            sf = FSBShare(sharefilename)
1322         sharetype = sf.sharetype
1323         now = time.time()
1324         s = self.stat(sharefilename)
1325addfile ./src/allmydata/storage/backends/null/__init__.py
1326addfile ./src/allmydata/storage/backends/null/core.py
1327hunk ./src/allmydata/storage/backends/null/core.py 1
1328+from allmydata.storage.backends.base import Backend
1329+
1330+class NullCore(Backend):
1331+    def __init__(self):
1332+        Backend.__init__(self)
1333+
1334+    def get_available_space(self):
1335+        return None
1336+
1337+    def get_shares(self, storage_index):
1338+        return set()
1339+
1340+    def get_share(self, storage_index, sharenum):
1341+        return None
1342+
1343+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1344+        return NullBucketWriter()
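`NullCore` is a null-object backend: it advertises unlimited space and never yields shares, so a server built on it exercises the allocation code paths without any storage at all (which is what the checkpoint8 note about testing unlimited space refers to). A standalone sketch of the pattern, not the Tahoe-LAFS classes themselves:

```python
class NullBackendSketch(object):
    """Null-object storage backend: accepts everything, keeps nothing."""

    def get_available_space(self):
        return None               # None is read as unlimited/unknown

    def get_shares(self, storage_index):
        return set()              # pretend no shares exist yet

    def get_share(self, storage_index, sharenum):
        return None               # no individual share to hand back
```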
1345hunk ./src/allmydata/storage/crawler.py 12
1346 class TimeSliceExceeded(Exception):
1347     pass
1348 
1349-class ShareCrawler(service.MultiService):
1350+class FSShareCrawler(service.MultiService):
1351     """A subclass of ShareCrawler is attached to a StorageServer, and
1352     periodically walks all of its shares, processing each one in some
1353     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1354hunk ./src/allmydata/storage/crawler.py 68
1355     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1356     minimum_cycle_time = 300 # don't run a cycle faster than this
1357 
1358-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1359+    def __init__(self, statefname, allowed_cpu_percentage=None):
1360         service.MultiService.__init__(self)
1361         if allowed_cpu_percentage is not None:
1362             self.allowed_cpu_percentage = allowed_cpu_percentage
1363hunk ./src/allmydata/storage/crawler.py 72
1364-        self.backend = backend
1365+        self.statefname = statefname
1366         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1367                          for i in range(2**10)]
1368         self.prefixes.sort()
1369hunk ./src/allmydata/storage/crawler.py 192
1370         #                            of the last bucket to be processed, or
1371         #                            None if we are sleeping between cycles
1372         try:
1373-            f = open(self.statefile, "rb")
1374+            f = open(self.statefname, "rb")
1375             state = pickle.load(f)
1376             f.close()
1377         except EnvironmentError:
1378hunk ./src/allmydata/storage/crawler.py 230
1379         else:
1380             last_complete_prefix = self.prefixes[lcpi]
1381         self.state["last-complete-prefix"] = last_complete_prefix
1382-        tmpfile = self.statefile + ".tmp"
1383+        tmpfile = self.statefname + ".tmp"
1384         f = open(tmpfile, "wb")
1385         pickle.dump(self.state, f)
1386         f.close()
1387hunk ./src/allmydata/storage/crawler.py 433
1388         pass
1389 
1390 
1391-class BucketCountingCrawler(ShareCrawler):
1392+class FSBucketCountingCrawler(FSShareCrawler):
1393     """I keep track of how many buckets are being managed by this server.
1394     This is equivalent to the number of distributed files and directories for
1395     which I am providing storage. The actual number of files+directories in
1396hunk ./src/allmydata/storage/crawler.py 446
1397 
1398     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1399 
1400-    def __init__(self, statefile, num_sample_prefixes=1):
1401-        ShareCrawler.__init__(self, statefile)
1402+    def __init__(self, statefname, num_sample_prefixes=1):
1403+        FSShareCrawler.__init__(self, statefname)
1404         self.num_sample_prefixes = num_sample_prefixes
1405 
1406     def add_initial_state(self):
1407hunk ./src/allmydata/storage/immutable.py 14
1408 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1409      DataTooLargeError
1410 
1411-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1412-# and share data. The share data is accessed by RIBucketWriter.write and
1413-# RIBucketReader.read . The lease information is not accessible through these
1414-# interfaces.
1415-
1416-# The share file has the following layout:
1417-#  0x00: share file version number, four bytes, current version is 1
1418-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1419-#  0x08: number of leases, four bytes big-endian
1420-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1421-#  A+0x0c = B: first lease. Lease format is:
1422-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1423-#   B+0x04: renew secret, 32 bytes (SHA256)
1424-#   B+0x24: cancel secret, 32 bytes (SHA256)
1425-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1426-#   B+0x48: next lease, or end of record
1427-
1428-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1429-# but it is still filled in by storage servers in case the storage server
1430-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1431-# share file is moved from one storage server to another. The value stored in
1432-# this field is truncated, so if the actual share data length is >= 2**32,
1433-# then the value stored in this field will be the actual share data length
1434-# modulo 2**32.
1435-
1436-class ShareFile:
1437-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1438-    sharetype = "immutable"
1439-
1440-    def __init__(self, filename, max_size=None, create=False):
1441-        """ If max_size is not None then I won't allow more than
1442-        max_size to be written to me. If create=True then max_size
1443-        must not be None. """
1444-        precondition((max_size is not None) or (not create), max_size, create)
1445-        self.home = filename
1446-        self._max_size = max_size
1447-        if create:
1448-            # touch the file, so later callers will see that we're working on
1449-            # it. Also construct the metadata.
1450-            assert not os.path.exists(self.home)
1451-            fileutil.make_dirs(os.path.dirname(self.home))
1452-            f = open(self.home, 'wb')
1453-            # The second field -- the four-byte share data length -- is no
1454-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1455-            # there in case someone downgrades a storage server from >=
1456-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1457-            # server to another, etc. We do saturation -- a share data length
1458-            # larger than 2**32-1 (what can fit into the field) is marked as
1459-            # the largest length that can fit into the field. That way, even
1460-            # if this does happen, the old < v1.3.0 server will still allow
1461-            # clients to read the first part of the share.
1462-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1463-            f.close()
1464-            self._lease_offset = max_size + 0x0c
1465-            self._num_leases = 0
1466-        else:
1467-            f = open(self.home, 'rb')
1468-            filesize = os.path.getsize(self.home)
1469-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1470-            f.close()
1471-            if version != 1:
1472-                msg = "sharefile %s had version %d but we wanted 1" % \
1473-                      (filename, version)
1474-                raise UnknownImmutableContainerVersionError(msg)
1475-            self._num_leases = num_leases
1476-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1477-        self._data_offset = 0xc
1478-
1479-    def unlink(self):
1480-        os.unlink(self.home)
1481-
1482-    def read_share_data(self, offset, length):
1483-        precondition(offset >= 0)
1484-        # Reads beyond the end of the data are truncated. Reads that start
1485-        # beyond the end of the data return an empty string.
1486-        seekpos = self._data_offset+offset
1487-        fsize = os.path.getsize(self.home)
1488-        actuallength = max(0, min(length, fsize-seekpos))
1489-        if actuallength == 0:
1490-            return ""
1491-        f = open(self.home, 'rb')
1492-        f.seek(seekpos)
1493-        return f.read(actuallength)
1494-
1495-    def write_share_data(self, offset, data):
1496-        length = len(data)
1497-        precondition(offset >= 0, offset)
1498-        if self._max_size is not None and offset+length > self._max_size:
1499-            raise DataTooLargeError(self._max_size, offset, length)
1500-        f = open(self.home, 'rb+')
1501-        real_offset = self._data_offset+offset
1502-        f.seek(real_offset)
1503-        assert f.tell() == real_offset
1504-        f.write(data)
1505-        f.close()
1506-
1507-    def _write_lease_record(self, f, lease_number, lease_info):
1508-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1509-        f.seek(offset)
1510-        assert f.tell() == offset
1511-        f.write(lease_info.to_immutable_data())
1512-
1513-    def _read_num_leases(self, f):
1514-        f.seek(0x08)
1515-        (num_leases,) = struct.unpack(">L", f.read(4))
1516-        return num_leases
1517-
1518-    def _write_num_leases(self, f, num_leases):
1519-        f.seek(0x08)
1520-        f.write(struct.pack(">L", num_leases))
1521-
1522-    def _truncate_leases(self, f, num_leases):
1523-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1524-
1525-    def get_leases(self):
1526-        """Yields a LeaseInfo instance for all leases."""
1527-        f = open(self.home, 'rb')
1528-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1529-        f.seek(self._lease_offset)
1530-        for i in range(num_leases):
1531-            data = f.read(self.LEASE_SIZE)
1532-            if data:
1533-                yield LeaseInfo().from_immutable_data(data)
1534-
1535-    def add_lease(self, lease_info):
1536-        f = open(self.home, 'rb+')
1537-        num_leases = self._read_num_leases(f)
1538-        self._write_lease_record(f, num_leases, lease_info)
1539-        self._write_num_leases(f, num_leases+1)
1540-        f.close()
1541-
1542-    def renew_lease(self, renew_secret, new_expire_time):
1543-        for i,lease in enumerate(self.get_leases()):
1544-            if constant_time_compare(lease.renew_secret, renew_secret):
1545-                # yup. See if we need to update the owner time.
1546-                if new_expire_time > lease.expiration_time:
1547-                    # yes
1548-                    lease.expiration_time = new_expire_time
1549-                    f = open(self.home, 'rb+')
1550-                    self._write_lease_record(f, i, lease)
1551-                    f.close()
1552-                return
1553-        raise IndexError("unable to renew non-existent lease")
1554-
1555-    def add_or_renew_lease(self, lease_info):
1556-        try:
1557-            self.renew_lease(lease_info.renew_secret,
1558-                             lease_info.expiration_time)
1559-        except IndexError:
1560-            self.add_lease(lease_info)
1561-
1562-
1563-    def cancel_lease(self, cancel_secret):
1564-        """Remove a lease with the given cancel_secret. If the last lease is
1565-        cancelled, the file will be removed. Return the number of bytes that
1566-        were freed (by truncating the list of leases, and possibly by
1567-        deleting the file. Raise IndexError if there was no lease with the
1568-        given cancel_secret.
1569-        """
1570-
1571-        leases = list(self.get_leases())
1572-        num_leases_removed = 0
1573-        for i,lease in enumerate(leases):
1574-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1575-                leases[i] = None
1576-                num_leases_removed += 1
1577-        if not num_leases_removed:
1578-            raise IndexError("unable to find matching lease to cancel")
1579-        if num_leases_removed:
1580-            # pack and write out the remaining leases. We write these out in
1581-            # the same order as they were added, so that if we crash while
1582-            # doing this, we won't lose any non-cancelled leases.
1583-            leases = [l for l in leases if l] # remove the cancelled leases
1584-            f = open(self.home, 'rb+')
1585-            for i,lease in enumerate(leases):
1586-                self._write_lease_record(f, i, lease)
1587-            self._write_num_leases(f, len(leases))
1588-            self._truncate_leases(f, len(leases))
1589-            f.close()
1590-        space_freed = self.LEASE_SIZE * num_leases_removed
1591-        if not len(leases):
1592-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1593-            self.unlink()
1594-        return space_freed
1595-class NullBucketWriter(Referenceable):
1596-    implements(RIBucketWriter)
1597-
1598-    def remote_write(self, offset, data):
1599-        return
1600-
1601 class BucketWriter(Referenceable):
1602     implements(RIBucketWriter)
1603 
1604hunk ./src/allmydata/storage/immutable.py 17
1605-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1606+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1607         self.ss = ss
1608hunk ./src/allmydata/storage/immutable.py 19
1609-        self.incominghome = incominghome
1610-        self.finalhome = finalhome
1611         self._max_size = max_size # don't allow the client to write more than this
1612         self._canary = canary
1613         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1614hunk ./src/allmydata/storage/immutable.py 24
1615         self.closed = False
1616         self.throw_out_all_data = False
1617-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1618+        self._sharefile = immutableshare
1619         # also, add our lease to the file now, so that other ones can be
1620         # added by simultaneous uploaders
1621         self._sharefile.add_lease(lease_info)
1622hunk ./src/allmydata/storage/server.py 16
1623 from allmydata.storage.lease import LeaseInfo
1624 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1625      create_mutable_sharefile
1626-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1627-from allmydata.storage.crawler import BucketCountingCrawler
1628-from allmydata.storage.expirer import LeaseCheckingCrawler
1629 
1630 from zope.interface import implements
1631 
1632hunk ./src/allmydata/storage/server.py 19
1633-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1634-# be started and stopped.
1635-class Backend(service.MultiService):
1636-    implements(IStatsProducer)
1637-    def __init__(self):
1638-        service.MultiService.__init__(self)
1639-
1640-    def get_bucket_shares(self):
1641-        """XXX"""
1642-        raise NotImplementedError
1643-
1644-    def get_share(self):
1645-        """XXX"""
1646-        raise NotImplementedError
1647-
1648-    def make_bucket_writer(self):
1649-        """XXX"""
1650-        raise NotImplementedError
1651-
1652-class NullBackend(Backend):
1653-    def __init__(self):
1654-        Backend.__init__(self)
1655-
1656-    def get_available_space(self):
1657-        return None
1658-
1659-    def get_bucket_shares(self, storage_index):
1660-        return set()
1661-
1662-    def get_share(self, storage_index, sharenum):
1663-        return None
1664-
1665-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1666-        return NullBucketWriter()
1667-
1668-class FSBackend(Backend):
1669-    def __init__(self, storedir, readonly=False, reserved_space=0):
1670-        Backend.__init__(self)
1671-
1672-        self._setup_storage(storedir, readonly, reserved_space)
1673-        self._setup_corruption_advisory()
1674-        self._setup_bucket_counter()
1675-        self._setup_lease_checkerf()
1676-
1677-    def _setup_storage(self, storedir, readonly, reserved_space):
1678-        self.storedir = storedir
1679-        self.readonly = readonly
1680-        self.reserved_space = int(reserved_space)
1681-        if self.reserved_space:
1682-            if self.get_available_space() is None:
1683-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1684-                        umid="0wZ27w", level=log.UNUSUAL)
1685-
1686-        self.sharedir = os.path.join(self.storedir, "shares")
1687-        fileutil.make_dirs(self.sharedir)
1688-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1689-        self._clean_incomplete()
1690-
1691-    def _clean_incomplete(self):
1692-        fileutil.rm_dir(self.incomingdir)
1693-        fileutil.make_dirs(self.incomingdir)
1694-
1695-    def _setup_corruption_advisory(self):
1696-        # we don't actually create the corruption-advisory dir until necessary
1697-        self.corruption_advisory_dir = os.path.join(self.storedir,
1698-                                                    "corruption-advisories")
1699-
1700-    def _setup_bucket_counter(self):
1701-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1702-        self.bucket_counter = BucketCountingCrawler(statefile)
1703-        self.bucket_counter.setServiceParent(self)
1704-
1705-    def _setup_lease_checkerf(self):
1706-        statefile = os.path.join(self.storedir, "lease_checker.state")
1707-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1708-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1709-                                   expiration_enabled, expiration_mode,
1710-                                   expiration_override_lease_duration,
1711-                                   expiration_cutoff_date,
1712-                                   expiration_sharetypes)
1713-        self.lease_checker.setServiceParent(self)
1714-
1715-    def get_available_space(self):
1716-        if self.readonly:
1717-            return 0
1718-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1719-
1720-    def get_bucket_shares(self, storage_index):
1721-        """Return a list of (shnum, pathname) tuples for files that hold
1722-        shares for this storage_index. In each tuple, 'shnum' will always be
1723-        the integer form of the last component of 'pathname'."""
1724-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1725-        try:
1726-            for f in os.listdir(storagedir):
1727-                if NUM_RE.match(f):
1728-                    filename = os.path.join(storagedir, f)
1729-                    yield (int(f), filename)
1730-        except OSError:
1731-            # Commonly caused by there being no buckets at all.
1732-            pass
1733-
1734 # storage/
1735 # storage/shares/incoming
1736 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1737hunk ./src/allmydata/storage/server.py 32
1738 # $SHARENUM matches this regex:
1739 NUM_RE=re.compile("^[0-9]+$")
1740 
1741-
1742-
1743 class StorageServer(service.MultiService, Referenceable):
1744     implements(RIStorageServer, IStatsProducer)
1745     name = 'storage'
1746hunk ./src/allmydata/storage/server.py 35
1747-    LeaseCheckerClass = LeaseCheckingCrawler
1748 
1749     def __init__(self, nodeid, backend, reserved_space=0,
1750                  readonly_storage=False,
1751hunk ./src/allmydata/storage/server.py 38
1752-                 stats_provider=None,
1753-                 expiration_enabled=False,
1754-                 expiration_mode="age",
1755-                 expiration_override_lease_duration=None,
1756-                 expiration_cutoff_date=None,
1757-                 expiration_sharetypes=("mutable", "immutable")):
1758+                 stats_provider=None ):
1759         service.MultiService.__init__(self)
1760         assert isinstance(nodeid, str)
1761         assert len(nodeid) == 20
1762hunk ./src/allmydata/storage/server.py 217
1763         # they asked about: this will save them a lot of work. Add or update
1764         # leases for all of them: if they want us to hold shares for this
1765         # file, they'll want us to hold leases for this file.
1766-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1767-            alreadygot.add(shnum)
1768-            sf = ShareFile(fn)
1769-            sf.add_or_renew_lease(lease_info)
1770-
1771-        for shnum in sharenums:
1772-            share = self.backend.get_share(storage_index, shnum)
1773+        for share in self.backend.get_shares(storage_index):
1774+            alreadygot.add(share.shnum)
1775+            share.add_or_renew_lease(lease_info)
1776 
1777hunk ./src/allmydata/storage/server.py 221
1778-            if not share:
1779-                if (not limited) or (remaining_space >= max_space_per_bucket):
1780-                    # ok! we need to create the new share file.
1781-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1782-                                      max_space_per_bucket, lease_info, canary)
1783-                    bucketwriters[shnum] = bw
1784-                    self._active_writers[bw] = 1
1785-                    if limited:
1786-                        remaining_space -= max_space_per_bucket
1787-                else:
1788-                    # bummer! not enough space to accept this bucket
1789-                    pass
1790+        for shnum in (sharenums - alreadygot):
1791+            if (not limited) or (remaining_space >= max_space_per_bucket):
1792+                #XXX Should the following line occur in the storage server constructor instead? OK: we need to create the new share file.
1793+                self.backend.set_storage_server(self)
1794+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1795+                                                     max_space_per_bucket, lease_info, canary)
1796+                bucketwriters[shnum] = bw
1797+                self._active_writers[bw] = 1
1798+                if limited:
1799+                    remaining_space -= max_space_per_bucket
1800 
1801hunk ./src/allmydata/storage/server.py 232
1802-            elif share.is_complete():
1803-                # great! we already have it. easy.
1804-                pass
1805-            elif not share.is_complete():
1806-                # Note that we don't create BucketWriters for shnums that
1807-                # have a partial share (in incoming/), so if a second upload
1808-                # occurs while the first is still in progress, the second
1809-                # uploader will use different storage servers.
1810-                pass
1811+        #XXX We should document this later.
1812 
1813         self.add_latency("allocate", time.time() - start)
1814         return alreadygot, bucketwriters
1815hunk ./src/allmydata/storage/server.py 238
1816 
1817     def _iter_share_files(self, storage_index):
1818-        for shnum, filename in self._get_bucket_shares(storage_index):
1819+        for shnum, filename in self._get_shares(storage_index):
1820             f = open(filename, 'rb')
1821             header = f.read(32)
1822             f.close()
1823hunk ./src/allmydata/storage/server.py 318
1824         si_s = si_b2a(storage_index)
1825         log.msg("storage: get_buckets %s" % si_s)
1826         bucketreaders = {} # k: sharenum, v: BucketReader
1827-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1828+        for shnum, filename in self.backend.get_shares(storage_index):
1829             bucketreaders[shnum] = BucketReader(self, filename,
1830                                                 storage_index, shnum)
1831         self.add_latency("get", time.time() - start)
1832hunk ./src/allmydata/storage/server.py 334
1833         # since all shares get the same lease data, we just grab the leases
1834         # from the first share
1835         try:
1836-            shnum, filename = self._get_bucket_shares(storage_index).next()
1837+            shnum, filename = self._get_shares(storage_index).next()
1838             sf = ShareFile(filename)
1839             return sf.get_leases()
1840         except StopIteration:
1841hunk ./src/allmydata/storage/shares.py 1
1842-#! /usr/bin/python
1843-
1844-from allmydata.storage.mutable import MutableShareFile
1845-from allmydata.storage.immutable import ShareFile
1846-
1847-def get_share_file(filename):
1848-    f = open(filename, "rb")
1849-    prefix = f.read(32)
1850-    f.close()
1851-    if prefix == MutableShareFile.MAGIC:
1852-        return MutableShareFile(filename)
1853-    # otherwise assume it's immutable
1854-    return ShareFile(filename)
1855-
1856rmfile ./src/allmydata/storage/shares.py
1857hunk ./src/allmydata/test/common_util.py 20
1858 
1859 def flip_one_bit(s, offset=0, size=None):
1860     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1861-    than offset+size. """
1862+    than offset+size. Return the new string. """
1863     if size is None:
1864         size=len(s)-offset
1865     i = randrange(offset, offset+size)
1866hunk ./src/allmydata/test/test_backends.py 7
1867 
1868 from allmydata.test.common_util import ReallyEqualMixin
1869 
1870-import mock
1871+import mock, os
1872 
1873 # This is the code that we're going to be testing.
1874hunk ./src/allmydata/test/test_backends.py 10
1875-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1876+from allmydata.storage.server import StorageServer
1877+
1878+from allmydata.storage.backends.das.core import DASCore
1879+from allmydata.storage.backends.null.core import NullCore
1880+
1881 
1882 # The following share file contents was generated with
1883 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1884hunk ./src/allmydata/test/test_backends.py 22
1885 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1886 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1887 
1888-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1889+tempdir = 'teststoredir'
1890+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1891+sharefname = os.path.join(sharedirname, '0')
1892 
1893 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1894     @mock.patch('time.time')
1895hunk ./src/allmydata/test/test_backends.py 58
1896         filesystem in only the prescribed ways. """
1897 
1898         def call_open(fname, mode):
1899-            if fname == 'testdir/bucket_counter.state':
1900-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1901-            elif fname == 'testdir/lease_checker.state':
1902-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1903-            elif fname == 'testdir/lease_checker.history':
1904+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1905+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1906+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1907+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1908+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1909                 return StringIO()
1910             else:
1911                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1912hunk ./src/allmydata/test/test_backends.py 124
1913     @mock.patch('__builtin__.open')
1914     def setUp(self, mockopen):
1915         def call_open(fname, mode):
1916-            if fname == 'testdir/bucket_counter.state':
1917-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1918-            elif fname == 'testdir/lease_checker.state':
1919-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1920-            elif fname == 'testdir/lease_checker.history':
1921+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1922+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1923+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1924+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1925+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1926                 return StringIO()
1927         mockopen.side_effect = call_open
1928hunk ./src/allmydata/test/test_backends.py 131
1929-
1930-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1931+        expiration_policy = {'enabled' : False,
1932+                             'mode' : 'age',
1933+                             'override_lease_duration' : None,
1934+                             'cutoff_date' : None,
1935+                             'sharetypes' : None}
1936+        testbackend = DASCore(tempdir, expiration_policy)
1937+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1938 
1939     @mock.patch('time.time')
1940     @mock.patch('os.mkdir')
1941hunk ./src/allmydata/test/test_backends.py 148
1942         """ Write a new share. """
1943 
1944         def call_listdir(dirname):
1945-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1946-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1947+            self.failUnlessReallyEqual(dirname, sharedirname)
1948+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1949 
1950         mocklistdir.side_effect = call_listdir
1951 
1952hunk ./src/allmydata/test/test_backends.py 178
1953 
1954         sharefile = MockFile()
1955         def call_open(fname, mode):
1956-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1957+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1958             return sharefile
1959 
1960         mockopen.side_effect = call_open
1961hunk ./src/allmydata/test/test_backends.py 200
1962         StorageServer object. """
1963 
1964         def call_listdir(dirname):
1965-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1966+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1967             return ['0']
1968 
1969         mocklistdir.side_effect = call_listdir
1970}
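[Editor's note, not part of the recorded patch: the hunks above move StorageServer from inline filesystem calls to a pluggable backend that owns share enumeration and bucket-writer creation. The sketch below is a minimal, hypothetical illustration of that interface; the `MemoryShare`/`MemoryBackend` names are invented for illustration and do not appear in the patch.]

```python
class Backend(object):
    """Abstract backend; concrete backends (e.g. DASCore, NullCore in the
    patch) override these methods."""
    def get_shares(self, storage_index):
        raise NotImplementedError
    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        raise NotImplementedError
    def set_storage_server(self, ss):
        # called by the server so the backend can hand it to bucket writers
        self.ss = ss

class MemoryShare(object):
    """Illustrative stand-in for the patch's ImmutableShare."""
    def __init__(self, shnum):
        self.shnum = shnum

class MemoryBackend(Backend):
    """Purely illustrative in-memory backend."""
    def __init__(self):
        self.shares = {}  # storage_index -> {shnum: MemoryShare}
    def get_shares(self, storage_index):
        return list(self.shares.get(storage_index, {}).values())
    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        share = MemoryShare(shnum)
        self.shares.setdefault(storage_index, {})[shnum] = share
        return share

backend = MemoryBackend()
backend.make_bucket_writer("si1", 0, 1000, None, None)
shnums = set(s.shnum for s in backend.get_shares("si1"))
```

This mirrors how the rewritten `remote_allocate_buckets` first collects `alreadygot` from `backend.get_shares()` and then calls `backend.make_bucket_writer()` only for `sharenums - alreadygot`.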
1971[checkpoint patch
1972wilcoxjg@gmail.com**20110626165715
1973 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1974] {
1975hunk ./src/allmydata/storage/backends/das/core.py 21
1976 from allmydata.storage.lease import LeaseInfo
1977 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1978      create_mutable_sharefile
1979-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1980+from allmydata.storage.immutable import BucketWriter, BucketReader
1981 from allmydata.storage.crawler import FSBucketCountingCrawler
1982 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1983 
1984hunk ./src/allmydata/storage/backends/das/core.py 27
1985 from zope.interface import implements
1986 
1987+# $SHARENUM matches this regex:
1988+NUM_RE=re.compile("^[0-9]+$")
1989+
1990 class DASCore(Backend):
1991     implements(IStorageBackend)
1992     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1993hunk ./src/allmydata/storage/backends/das/core.py 80
1994         return fileutil.get_available_space(self.storedir, self.reserved_space)
1995 
1996     def get_shares(self, storage_index):
1997-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1998+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
1999         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2000         try:
2001             for f in os.listdir(finalstoragedir):
2002hunk ./src/allmydata/storage/backends/das/core.py 86
2003                 if NUM_RE.match(f):
2004                     filename = os.path.join(finalstoragedir, f)
2005-                    yield FSBShare(filename, int(f))
2006+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2007         except OSError:
2008             # Commonly caused by there being no buckets at all.
2009             pass
2010hunk ./src/allmydata/storage/backends/das/core.py 95
2011         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2012         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2013         return bw
2014+
2015+    def set_storage_server(self, ss):
2016+        self.ss = ss
2017         
2018 
2019 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2020hunk ./src/allmydata/storage/server.py 29
2021 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2022 # base-32 chars).
2023 
2024-# $SHARENUM matches this regex:
2025-NUM_RE=re.compile("^[0-9]+$")
2026 
2027 class StorageServer(service.MultiService, Referenceable):
2028     implements(RIStorageServer, IStatsProducer)
2029}
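[Editor's note, not part of the recorded patch: the ShareFile code relocated by these patches reads and writes a fixed 12-byte header of version, saturated data length, and lease count, as described in the layout comment removed from immutable.py. A hedged sketch of that header handling with `struct`; the helper names are invented for illustration.]

```python
import struct

HEADER = ">LLL"  # version (currently 1), data length (saturated), num leases

def pack_header(max_size, num_leases=0, version=1):
    # Data length saturates at 2**32-1, matching the removed ShareFile code,
    # so pre-v1.3.0 servers can still read the first part of a large share.
    return struct.pack(HEADER, version, min(2**32 - 1, max_size), num_leases)

def unpack_header(header_bytes):
    return struct.unpack(HEADER, header_bytes[:12])

hdr = pack_header(max_size=2**40)  # oversized length saturates
version, length, num_leases = unpack_header(hdr)
```

Leases then follow the share data, each `struct.calcsize(">L32s32sL")` bytes (owner number, renew secret, cancel secret, expiration time), which is the `LEASE_SIZE` used throughout the lease methods above.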
2030[checkpoint4
2031wilcoxjg@gmail.com**20110628202202
2032 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2033] {
2034hunk ./src/allmydata/storage/backends/das/core.py 96
2035         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2036         return bw
2037 
2038+    def make_bucket_reader(self, share):
2039+        return BucketReader(self.ss, share)
2040+
2041     def set_storage_server(self, ss):
2042         self.ss = ss
2043         
2044hunk ./src/allmydata/storage/backends/das/core.py 138
2045         must not be None. """
2046         precondition((max_size is not None) or (not create), max_size, create)
2047         self.shnum = shnum
2048+        self.storage_index = storageindex
2049         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2050         self._max_size = max_size
2051         if create:
2052hunk ./src/allmydata/storage/backends/das/core.py 173
2053             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2054         self._data_offset = 0xc
2055 
2056+    def get_shnum(self):
2057+        return self.shnum
2058+
2059     def unlink(self):
2060         os.unlink(self.fname)
2061 
2062hunk ./src/allmydata/storage/backends/null/core.py 2
2063 from allmydata.storage.backends.base import Backend
2064+from allmydata.storage.immutable import BucketWriter, BucketReader
2065 
2066 class NullCore(Backend):
2067     def __init__(self):
2068hunk ./src/allmydata/storage/backends/null/core.py 17
2069     def get_share(self, storage_index, sharenum):
2070         return None
2071 
2072-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2073-        return NullBucketWriter()
2074+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2075+       
2076+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2077+
2078+    def set_storage_server(self, ss):
2079+        self.ss = ss
2080+
2081+class ImmutableShare:
2082+    sharetype = "immutable"
2083+
2084+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2085+        """ If max_size is not None then I won't allow more than
2086+        max_size to be written to me. If create=True then max_size
2087+        must not be None. """
2088+        precondition((max_size is not None) or (not create), max_size, create)
2089+        self.shnum = shnum
2090+        self.storage_index = storageindex
2091+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2092+        self._max_size = max_size
2093+        if create:
2094+            # touch the file, so later callers will see that we're working on
2095+            # it. Also construct the metadata.
2096+            assert not os.path.exists(self.fname)
2097+            fileutil.make_dirs(os.path.dirname(self.fname))
2098+            f = open(self.fname, 'wb')
2099+            # The second field -- the four-byte share data length -- is no
2100+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2101+            # there in case someone downgrades a storage server from >=
2102+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2103+            # server to another, etc. We do saturation -- a share data length
2104+            # larger than 2**32-1 (what can fit into the field) is marked as
2105+            # the largest length that can fit into the field. That way, even
2106+            # if this does happen, the old < v1.3.0 server will still allow
2107+            # clients to read the first part of the share.
2108+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2109+            f.close()
2110+            self._lease_offset = max_size + 0x0c
2111+            self._num_leases = 0
2112+        else:
2113+            f = open(self.fname, 'rb')
2114+            filesize = os.path.getsize(self.fname)
2115+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2116+            f.close()
2117+            if version != 1:
2118+                msg = "sharefile %s had version %d but we wanted 1" % \
2119+                      (self.fname, version)
2120+                raise UnknownImmutableContainerVersionError(msg)
2121+            self._num_leases = num_leases
2122+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2123+        self._data_offset = 0xc
2124+
2125+    def get_shnum(self):
2126+        return self.shnum
2127+
2128+    def unlink(self):
2129+        os.unlink(self.fname)
2130+
2131+    def read_share_data(self, offset, length):
2132+        precondition(offset >= 0)
2133+        # Reads beyond the end of the data are truncated. Reads that start
2134+        # beyond the end of the data return an empty string.
2135+        seekpos = self._data_offset+offset
2136+        fsize = os.path.getsize(self.fname)
2137+        actuallength = max(0, min(length, fsize-seekpos))
2138+        if actuallength == 0:
2139+            return ""
2140+        f = open(self.fname, 'rb')
2141+        f.seek(seekpos)
2142+        return f.read(actuallength)
2143+
2144+    def write_share_data(self, offset, data):
2145+        length = len(data)
2146+        precondition(offset >= 0, offset)
2147+        if self._max_size is not None and offset+length > self._max_size:
2148+            raise DataTooLargeError(self._max_size, offset, length)
2149+        f = open(self.fname, 'rb+')
2150+        real_offset = self._data_offset+offset
2151+        f.seek(real_offset)
2152+        assert f.tell() == real_offset
2153+        f.write(data)
2154+        f.close()
2155+
2156+    def _write_lease_record(self, f, lease_number, lease_info):
2157+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2158+        f.seek(offset)
2159+        assert f.tell() == offset
2160+        f.write(lease_info.to_immutable_data())
2161+
2162+    def _read_num_leases(self, f):
2163+        f.seek(0x08)
2164+        (num_leases,) = struct.unpack(">L", f.read(4))
2165+        return num_leases
2166+
2167+    def _write_num_leases(self, f, num_leases):
2168+        f.seek(0x08)
2169+        f.write(struct.pack(">L", num_leases))
2170+
2171+    def _truncate_leases(self, f, num_leases):
2172+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2173+
2174+    def get_leases(self):
2175+        """Yields a LeaseInfo instance for all leases."""
2176+        f = open(self.fname, 'rb')
2177+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2178+        f.seek(self._lease_offset)
2179+        for i in range(num_leases):
2180+            data = f.read(self.LEASE_SIZE)
2181+            if data:
2182+                yield LeaseInfo().from_immutable_data(data)
2183+
2184+    def add_lease(self, lease_info):
2185+        f = open(self.fname, 'rb+')
2186+        num_leases = self._read_num_leases(f)
2187+        self._write_lease_record(f, num_leases, lease_info)
2188+        self._write_num_leases(f, num_leases+1)
2189+        f.close()
2190+
2191+    def renew_lease(self, renew_secret, new_expire_time):
2192+        for i,lease in enumerate(self.get_leases()):
2193+            if constant_time_compare(lease.renew_secret, renew_secret):
2194+                # yup. See if we need to update the owner time.
2195+                if new_expire_time > lease.expiration_time:
2196+                    # yes
2197+                    lease.expiration_time = new_expire_time
2198+                    f = open(self.fname, 'rb+')
2199+                    self._write_lease_record(f, i, lease)
2200+                    f.close()
2201+                return
2202+        raise IndexError("unable to renew non-existent lease")
2203+
2204+    def add_or_renew_lease(self, lease_info):
2205+        try:
2206+            self.renew_lease(lease_info.renew_secret,
2207+                             lease_info.expiration_time)
2208+        except IndexError:
2209+            self.add_lease(lease_info)
2210+
2211+
2212+    def cancel_lease(self, cancel_secret):
2213+        """Remove a lease with the given cancel_secret. If the last lease is
2214+        cancelled, the file will be removed. Return the number of bytes that
2215+        were freed (by truncating the list of leases, and possibly by
2216+        deleting the file). Raise IndexError if there was no lease with the
2217+        given cancel_secret.
2218+        """
2219+
2220+        leases = list(self.get_leases())
2221+        num_leases_removed = 0
2222+        for i,lease in enumerate(leases):
2223+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2224+                leases[i] = None
2225+                num_leases_removed += 1
2226+        if not num_leases_removed:
2227+            raise IndexError("unable to find matching lease to cancel")
2228+        if num_leases_removed:
2229+            # pack and write out the remaining leases. We write these out in
2230+            # the same order as they were added, so that if we crash while
2231+            # doing this, we won't lose any non-cancelled leases.
2232+            leases = [l for l in leases if l] # remove the cancelled leases
2233+            f = open(self.fname, 'rb+')
2234+            for i,lease in enumerate(leases):
2235+                self._write_lease_record(f, i, lease)
2236+            self._write_num_leases(f, len(leases))
2237+            self._truncate_leases(f, len(leases))
2238+            f.close()
2239+        space_freed = self.LEASE_SIZE * num_leases_removed
2240+        if not len(leases):
2241+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2242+            self.unlink()
2243+        return space_freed
2244hunk ./src/allmydata/storage/immutable.py 114
2245 class BucketReader(Referenceable):
2246     implements(RIBucketReader)
2247 
2248-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2249+    def __init__(self, ss, share):
2250         self.ss = ss
2251hunk ./src/allmydata/storage/immutable.py 116
2252-        self._share_file = ShareFile(sharefname)
2253-        self.storage_index = storage_index
2254-        self.shnum = shnum
2255+        self._share_file = share
2256+        self.storage_index = share.storage_index
2257+        self.shnum = share.shnum
2258 
2259     def __repr__(self):
2260         return "<%s %s %s>" % (self.__class__.__name__,
2261hunk ./src/allmydata/storage/server.py 316
2262         si_s = si_b2a(storage_index)
2263         log.msg("storage: get_buckets %s" % si_s)
2264         bucketreaders = {} # k: sharenum, v: BucketReader
2265-        for shnum, filename in self.backend.get_shares(storage_index):
2266-            bucketreaders[shnum] = BucketReader(self, filename,
2267-                                                storage_index, shnum)
2268+        self.backend.set_storage_server(self)
2269+        for share in self.backend.get_shares(storage_index):
2270+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2271         self.add_latency("get", time.time() - start)
2272         return bucketreaders
2273 
2274hunk ./src/allmydata/test/test_backends.py 25
2275 tempdir = 'teststoredir'
2276 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2277 sharefname = os.path.join(sharedirname, '0')
2278+expiration_policy = {'enabled' : False,
2279+                     'mode' : 'age',
2280+                     'override_lease_duration' : None,
2281+                     'cutoff_date' : None,
2282+                     'sharetypes' : None}
2283 
2284 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2285     @mock.patch('time.time')
2286hunk ./src/allmydata/test/test_backends.py 43
2287         tries to read or write to the file system. """
2288 
2289         # Now begin the test.
2290-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2291+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2292 
2293         self.failIf(mockisdir.called)
2294         self.failIf(mocklistdir.called)
2295hunk ./src/allmydata/test/test_backends.py 74
2296         mockopen.side_effect = call_open
2297 
2298         # Now begin the test.
2299-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2300+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2301 
2302         self.failIf(mockisdir.called)
2303         self.failIf(mocklistdir.called)
2304hunk ./src/allmydata/test/test_backends.py 86
2305 
2306 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2307     def setUp(self):
2308-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2309+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2310 
2311     @mock.patch('os.mkdir')
2312     @mock.patch('__builtin__.open')
2313hunk ./src/allmydata/test/test_backends.py 136
2314             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2315                 return StringIO()
2316         mockopen.side_effect = call_open
2317-        expiration_policy = {'enabled' : False,
2318-                             'mode' : 'age',
2319-                             'override_lease_duration' : None,
2320-                             'cutoff_date' : None,
2321-                             'sharetypes' : None}
2322         testbackend = DASCore(tempdir, expiration_policy)
2323         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2324 
2325}
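For reference, the share-file header that the ImmutableShare constructor in checkpoint4 writes is three big-endian 32-bit fields (version, data length, lease count), 0xc bytes in total, which is why _data_offset is 0xc. A minimal sketch of the packing and the saturation behaviour the comment describes (make_header and parse_header are illustrative names, not from the patch):

```python
import struct

HEADER_FMT = ">LLL"                        # version, data length, num_leases
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 0xc == 12 bytes, the _data_offset

def make_header(max_size):
    # The data-length field saturates at 2**32-1, as the comment in the
    # hunk explains, so a pre-Tahoe-1.3.0 server can still read the
    # first part of an oversized share.
    return struct.pack(HEADER_FMT, 1, min(2**32 - 1, max_size), 0)

def parse_header(header):
    return struct.unpack(HEADER_FMT, header[:HEADER_SIZE])

print(parse_header(make_header(2**40)))   # (1, 4294967295, 0): length saturated
```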
2326[checkpoint5
2327wilcoxjg@gmail.com**20110705034626
2328 Ignore-this: 255780bd58299b0aa33c027e9d008262
2329] {
2330addfile ./src/allmydata/storage/backends/base.py
2331hunk ./src/allmydata/storage/backends/base.py 1
2332+from twisted.application import service
2333+
2334+class Backend(service.MultiService):
2335+    def __init__(self):
2336+        service.MultiService.__init__(self)
2337hunk ./src/allmydata/storage/backends/null/core.py 19
2338 
2339     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2340         
2341+        immutableshare = ImmutableShare()
2342         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2343 
2344     def set_storage_server(self, ss):
2345hunk ./src/allmydata/storage/backends/null/core.py 28
2346 class ImmutableShare:
2347     sharetype = "immutable"
2348 
2349-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2350+    def __init__(self):
2351         """ If max_size is not None then I won't allow more than
2352         max_size to be written to me. If create=True then max_size
2353         must not be None. """
2354hunk ./src/allmydata/storage/backends/null/core.py 32
2355-        precondition((max_size is not None) or (not create), max_size, create)
2356-        self.shnum = shnum
2357-        self.storage_index = storageindex
2358-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2359-        self._max_size = max_size
2360-        if create:
2361-            # touch the file, so later callers will see that we're working on
2362-            # it. Also construct the metadata.
2363-            assert not os.path.exists(self.fname)
2364-            fileutil.make_dirs(os.path.dirname(self.fname))
2365-            f = open(self.fname, 'wb')
2366-            # The second field -- the four-byte share data length -- is no
2367-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2368-            # there in case someone downgrades a storage server from >=
2369-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2370-            # server to another, etc. We do saturation -- a share data length
2371-            # larger than 2**32-1 (what can fit into the field) is marked as
2372-            # the largest length that can fit into the field. That way, even
2373-            # if this does happen, the old < v1.3.0 server will still allow
2374-            # clients to read the first part of the share.
2375-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2376-            f.close()
2377-            self._lease_offset = max_size + 0x0c
2378-            self._num_leases = 0
2379-        else:
2380-            f = open(self.fname, 'rb')
2381-            filesize = os.path.getsize(self.fname)
2382-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2383-            f.close()
2384-            if version != 1:
2385-                msg = "sharefile %s had version %d but we wanted 1" % \
2386-                      (self.fname, version)
2387-                raise UnknownImmutableContainerVersionError(msg)
2388-            self._num_leases = num_leases
2389-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2390-        self._data_offset = 0xc
2391+        pass
2392 
2393     def get_shnum(self):
2394         return self.shnum
2395hunk ./src/allmydata/storage/backends/null/core.py 54
2396         return f.read(actuallength)
2397 
2398     def write_share_data(self, offset, data):
2399-        length = len(data)
2400-        precondition(offset >= 0, offset)
2401-        if self._max_size is not None and offset+length > self._max_size:
2402-            raise DataTooLargeError(self._max_size, offset, length)
2403-        f = open(self.fname, 'rb+')
2404-        real_offset = self._data_offset+offset
2405-        f.seek(real_offset)
2406-        assert f.tell() == real_offset
2407-        f.write(data)
2408-        f.close()
2409+        pass
2410 
2411     def _write_lease_record(self, f, lease_number, lease_info):
2412         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2413hunk ./src/allmydata/storage/backends/null/core.py 84
2414             if data:
2415                 yield LeaseInfo().from_immutable_data(data)
2416 
2417-    def add_lease(self, lease_info):
2418-        f = open(self.fname, 'rb+')
2419-        num_leases = self._read_num_leases(f)
2420-        self._write_lease_record(f, num_leases, lease_info)
2421-        self._write_num_leases(f, num_leases+1)
2422-        f.close()
2423+    def add_lease(self, lease):
2424+        pass
2425 
2426     def renew_lease(self, renew_secret, new_expire_time):
2427         for i,lease in enumerate(self.get_leases()):
2428hunk ./src/allmydata/test/test_backends.py 32
2429                      'sharetypes' : None}
2430 
2431 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2432-    @mock.patch('time.time')
2433-    @mock.patch('os.mkdir')
2434-    @mock.patch('__builtin__.open')
2435-    @mock.patch('os.listdir')
2436-    @mock.patch('os.path.isdir')
2437-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2438-        """ This tests whether a server instance can be constructed
2439-        with a null backend. The server instance fails the test if it
2440-        tries to read or write to the file system. """
2441-
2442-        # Now begin the test.
2443-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2444-
2445-        self.failIf(mockisdir.called)
2446-        self.failIf(mocklistdir.called)
2447-        self.failIf(mockopen.called)
2448-        self.failIf(mockmkdir.called)
2449-
2450-        # You passed!
2451-
2452     @mock.patch('time.time')
2453     @mock.patch('os.mkdir')
2454     @mock.patch('__builtin__.open')
2455hunk ./src/allmydata/test/test_backends.py 53
2456                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2457         mockopen.side_effect = call_open
2458 
2459-        # Now begin the test.
2460-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2461-
2462-        self.failIf(mockisdir.called)
2463-        self.failIf(mocklistdir.called)
2464-        self.failIf(mockopen.called)
2465-        self.failIf(mockmkdir.called)
2466-        self.failIf(mocktime.called)
2467-
2468-        # You passed!
2469-
2470-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2471-    def setUp(self):
2472-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2473-
2474-    @mock.patch('os.mkdir')
2475-    @mock.patch('__builtin__.open')
2476-    @mock.patch('os.listdir')
2477-    @mock.patch('os.path.isdir')
2478-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2479-        """ Write a new share. """
2480-
2481-        # Now begin the test.
2482-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2483-        bs[0].remote_write(0, 'a')
2484-        self.failIf(mockisdir.called)
2485-        self.failIf(mocklistdir.called)
2486-        self.failIf(mockopen.called)
2487-        self.failIf(mockmkdir.called)
2488+        def call_isdir(fname):
2489+            if fname == os.path.join(tempdir,'shares'):
2490+                return True
2491+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2492+                return True
2493+            else:
2494+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2495+        mockisdir.side_effect = call_isdir
2496 
2497hunk ./src/allmydata/test/test_backends.py 62
2498-    @mock.patch('os.path.exists')
2499-    @mock.patch('os.path.getsize')
2500-    @mock.patch('__builtin__.open')
2501-    @mock.patch('os.listdir')
2502-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2503-        """ This tests whether the code correctly finds and reads
2504-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2505-        servers. There is a similar test in test_download, but that one
2506-        is from the perspective of the client and exercises a deeper
2507-        stack of code. This one is for exercising just the
2508-        StorageServer object. """
2509+        def call_mkdir(fname, mode):
2510+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2511+            self.failUnlessEqual(0777, mode)
2512+            if fname == tempdir:
2513+                return None
2514+            elif fname == os.path.join(tempdir,'shares'):
2515+                return None
2516+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2517+                return None
2518+            else:
2519+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2520+        mockmkdir.side_effect = call_mkdir
2521 
2522         # Now begin the test.
2523hunk ./src/allmydata/test/test_backends.py 76
2524-        bs = self.s.remote_get_buckets('teststorage_index')
2525+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2526 
2527hunk ./src/allmydata/test/test_backends.py 78
2528-        self.failUnlessEqual(len(bs), 0)
2529-        self.failIf(mocklistdir.called)
2530-        self.failIf(mockopen.called)
2531-        self.failIf(mockgetsize.called)
2532-        self.failIf(mockexists.called)
2533+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2534 
2535 
2536 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2537hunk ./src/allmydata/test/test_backends.py 193
2538         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2539 
2540 
2541+
2542+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2543+    @mock.patch('time.time')
2544+    @mock.patch('os.mkdir')
2545+    @mock.patch('__builtin__.open')
2546+    @mock.patch('os.listdir')
2547+    @mock.patch('os.path.isdir')
2548+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2549+        """ This tests whether a file system backend instance can be
2550+        constructed. To pass the test, it has to use the
2551+        filesystem in only the prescribed ways. """
2552+
2553+        def call_open(fname, mode):
2554+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2555+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2556+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2557+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2558+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2559+                return StringIO()
2560+            else:
2561+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2562+        mockopen.side_effect = call_open
2563+
2564+        def call_isdir(fname):
2565+            if fname == os.path.join(tempdir,'shares'):
2566+                return True
2567+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2568+                return True
2569+            else:
2570+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2571+        mockisdir.side_effect = call_isdir
2572+
2573+        def call_mkdir(fname, mode):
2574+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2575+            self.failUnlessEqual(0777, mode)
2576+            if fname == tempdir:
2577+                return None
2578+            elif fname == os.path.join(tempdir,'shares'):
2579+                return None
2580+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2581+                return None
2582+            else:
2583+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2584+        mockmkdir.side_effect = call_mkdir
2585+
2586+        # Now begin the test.
2587+        DASCore('teststoredir', expiration_policy)
2588+
2589+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2590}
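The tests in checkpoint5 all follow one pattern: every filesystem call is patched, and the side_effect either answers from a whitelist of expected paths or fails the test immediately. A standalone sketch of that pattern (the patch itself uses the py2 `mock` package; guarded_isdir is an illustrative helper, not from the patch):

```python
import os
from unittest import mock   # the patch uses the standalone `mock` package

def guarded_isdir(allowed):
    """Build a side_effect that answers True for whitelisted paths and
    raises for anything else, so stray filesystem calls fail fast."""
    def call_isdir(fname):
        if fname in allowed:
            return True
        raise AssertionError("unexpected isdir(%r)" % (fname,))
    return call_isdir

with mock.patch('os.path.isdir') as mockisdir:
    mockisdir.side_effect = guarded_isdir({os.path.join('teststoredir', 'shares')})
    assert os.path.isdir(os.path.join('teststoredir', 'shares'))  # whitelisted
    try:
        os.path.isdir('/somewhere/else')   # not whitelisted: raises
    except AssertionError:
        stray_call_caught = True
assert stray_call_caught
```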
2591[checkpoint 6
2592wilcoxjg@gmail.com**20110706190824
2593 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2594] {
2595hunk ./src/allmydata/interfaces.py 100
2596                          renew_secret=LeaseRenewSecret,
2597                          cancel_secret=LeaseCancelSecret,
2598                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2599-                         allocated_size=Offset, canary=Referenceable):
2600+                         allocated_size=Offset,
2601+                         canary=Referenceable):
2602         """
2603hunk ./src/allmydata/interfaces.py 103
2604-        @param storage_index: the index of the bucket to be created or
2605+        @param storage_index: the index of the shares to be created or
2606                               increfed.
2607hunk ./src/allmydata/interfaces.py 105
2608-        @param sharenums: these are the share numbers (probably between 0 and
2609-                          99) that the sender is proposing to store on this
2610-                          server.
2611-        @param renew_secret: This is the secret used to protect bucket refresh
2612+        @param renew_secret: This is the secret used to protect share refresh
2613                              This secret is generated by the client and
2614                              stored for later comparison by the server. Each
2615                              server is given a different secret.
2616hunk ./src/allmydata/interfaces.py 109
2617-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2618-        @param canary: If the canary is lost before close(), the bucket is
2619+        @param cancel_secret: Like renew_secret, but protects share decref.
2620+        @param sharenums: these are the share numbers (probably between 0 and
2621+                          99) that the sender is proposing to store on this
2622+                          server.
2623+        @param allocated_size: XXX The size of the shares the client wishes to store.
2624+        @param canary: If the canary is lost before close(), the shares are
2625                        deleted.
2626hunk ./src/allmydata/interfaces.py 116
2627+
2628         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2629                  already have and allocated is what we hereby agree to accept.
2630                  New leases are added for shares in both lists.
2631hunk ./src/allmydata/interfaces.py 128
2632                   renew_secret=LeaseRenewSecret,
2633                   cancel_secret=LeaseCancelSecret):
2634         """
2635-        Add a new lease on the given bucket. If the renew_secret matches an
2636+        Add a new lease on the given shares. If the renew_secret matches an
2637         existing lease, that lease will be renewed instead. If there is no
2638         bucket for the given storage_index, return silently. (note that in
2639         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2640hunk ./src/allmydata/storage/server.py 17
2641 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2642      create_mutable_sharefile
2643 
2644-from zope.interface import implements
2645-
2646 # storage/
2647 # storage/shares/incoming
2648 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2649hunk ./src/allmydata/test/test_backends.py 6
2650 from StringIO import StringIO
2651 
2652 from allmydata.test.common_util import ReallyEqualMixin
2653+from allmydata.util.assertutil import _assert
2654 
2655 import mock, os
2656 
2657hunk ./src/allmydata/test/test_backends.py 92
2658                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2659             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2660                 return StringIO()
2661+            else:
2662+                _assert(False, "The tester code doesn't recognize this case.") 
2663+
2664         mockopen.side_effect = call_open
2665         testbackend = DASCore(tempdir, expiration_policy)
2666         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2667hunk ./src/allmydata/test/test_backends.py 109
2668 
2669         def call_listdir(dirname):
2670             self.failUnlessReallyEqual(dirname, sharedirname)
2671-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2672+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2673 
2674         mocklistdir.side_effect = call_listdir
2675 
2676hunk ./src/allmydata/test/test_backends.py 113
2677+        def call_isdir(dirname):
2678+            self.failUnlessReallyEqual(dirname, sharedirname)
2679+            return True
2680+
2681+        mockisdir.side_effect = call_isdir
2682+
2683+        def call_mkdir(dirname, permissions):
2684+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2685+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
2686+            else:
2687+                return True
2688+
2689+        mockmkdir.side_effect = call_mkdir
2690+
2691         class MockFile:
2692             def __init__(self):
2693                 self.buffer = ''
2694hunk ./src/allmydata/test/test_backends.py 156
2695             return sharefile
2696 
2697         mockopen.side_effect = call_open
2698+
2699         # Now begin the test.
2700         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2701         bs[0].remote_write(0, 'a')
2702hunk ./src/allmydata/test/test_backends.py 161
2703         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2704+       
2705+        # Now test the allocated_size method.
2706+        spaceint = self.s.allocated_size()
2707 
2708     @mock.patch('os.path.exists')
2709     @mock.patch('os.path.getsize')
2710}
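checkpoint 6 exercises StorageServer.allocated_size() after allocating a single one-byte bucket, and checkpoint 7 then pins the expected value to 1. A hypothetical model of that bookkeeping, summing the space reserved by outstanding bucket writers (none of these names come from the patch):

```python
class AllocationTracker:
    """Hypothetical model of StorageServer.allocated_size(): the sum of
    max_space_per_bucket across in-progress bucket writers."""
    def __init__(self):
        self._writers = {}   # (storage_index, shnum) -> reserved bytes

    def allocate(self, storage_index, shnum, max_space_per_bucket):
        self._writers[(storage_index, shnum)] = max_space_per_bucket

    def close(self, storage_index, shnum):
        # A closed writer no longer reserves space.
        self._writers.pop((storage_index, shnum), None)

    def allocated_size(self):
        return sum(self._writers.values())

t = AllocationTracker()
t.allocate('teststorage_index', 0, 1)  # mirrors remote_allocate_buckets(..., 1, ...)
assert t.allocated_size() == 1         # the value checkpoint 7 asserts
t.close('teststorage_index', 0)
assert t.allocated_size() == 0
```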
2711[checkpoint 7
2712wilcoxjg@gmail.com**20110706200820
2713 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2714] hunk ./src/allmydata/test/test_backends.py 164
2715         
2716         # Now test the allocated_size method.
2717         spaceint = self.s.allocated_size()
2718+        self.failUnlessReallyEqual(spaceint, 1)
2719 
2720     @mock.patch('os.path.exists')
2721     @mock.patch('os.path.getsize')
2722[checkpoint8
2723wilcoxjg@gmail.com**20110706223126
2724 Ignore-this: 97336180883cb798b16f15411179f827
2725   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2726] hunk ./src/allmydata/test/test_backends.py 32
2727                      'cutoff_date' : None,
2728                      'sharetypes' : None}
2729 
2730+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2731+    def setUp(self):
2732+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2733+
2734+    @mock.patch('os.mkdir')
2735+    @mock.patch('__builtin__.open')
2736+    @mock.patch('os.listdir')
2737+    @mock.patch('os.path.isdir')
2738+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2739+        """ Write a new share. """
2740+
2741+        # Now begin the test.
2742+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2743+        bs[0].remote_write(0, 'a')
2744+        self.failIf(mockisdir.called)
2745+        self.failIf(mocklistdir.called)
2746+        self.failIf(mockopen.called)
2747+        self.failIf(mockmkdir.called)
2748+
2749 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2750     @mock.patch('time.time')
2751     @mock.patch('os.mkdir')
2752[checkpoint 9
2753wilcoxjg@gmail.com**20110707042942
2754 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2755] {
2756hunk ./src/allmydata/storage/backends/das/core.py 88
2757                     filename = os.path.join(finalstoragedir, f)
2758                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2759         except OSError:
2760-            # Commonly caused by there being no buckets at all.
2761+            # Commonly caused by there being no shares at all.
2762             pass
2763         
2764     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2765hunk ./src/allmydata/storage/backends/das/core.py 141
2766         self.storage_index = storageindex
2767         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2768         self._max_size = max_size
2769+        self.incomingdir = os.path.join(sharedir, 'incoming')
2770+        si_dir = storage_index_to_dir(storageindex)
2771+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2772+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2773         if create:
2774             # touch the file, so later callers will see that we're working on
2775             # it. Also construct the metadata.
2776hunk ./src/allmydata/storage/backends/das/core.py 177
2777             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2778         self._data_offset = 0xc
2779 
2780+    def close(self):
2781+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2782+        fileutil.rename(self.incominghome, self.finalhome)
2783+        try:
2784+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2785+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2786+            # these directories lying around forever, but the delete might
2787+            # fail if we're working on another share for the same storage
2788+            # index (like ab/abcde/5). The alternative approach would be to
2789+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2790+            # ShareWriter), each of which is responsible for a single
2791+            # directory on disk, and have them use reference counting of
2792+            # their children to know when they should do the rmdir. This
2793+            # approach is simpler, but relies on os.rmdir refusing to delete
2794+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2795+            os.rmdir(os.path.dirname(self.incominghome))
2796+            # we also delete the grandparent (prefix) directory, .../ab ,
2797+            # again to avoid leaving directories lying around. This might
2798+            # fail if there is another bucket open that shares a prefix (like
2799+            # ab/abfff).
2800+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2801+            # we leave the great-grandparent (incoming/) directory in place.
2802+        except EnvironmentError:
2803+            # ignore the "can't rmdir because the directory is not empty"
2804+            # exceptions, those are normal consequences of the
2805+            # above-mentioned conditions.
2806+            pass
2807+        pass
2808+       
2809+    def stat(self):
2810+        return os.stat(self.finalhome)[stat.ST_SIZE]
2811+
2812     def get_shnum(self):
2813         return self.shnum
2814 
2815hunk ./src/allmydata/storage/immutable.py 7
2816 
2817 from zope.interface import implements
2818 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2819-from allmydata.util import base32, fileutil, log
2820+from allmydata.util import base32, log
2821 from allmydata.util.assertutil import precondition
2822 from allmydata.util.hashutil import constant_time_compare
2823 from allmydata.storage.lease import LeaseInfo
2824hunk ./src/allmydata/storage/immutable.py 44
2825     def remote_close(self):
2826         precondition(not self.closed)
2827         start = time.time()
2828-
2829-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2830-        fileutil.rename(self.incominghome, self.finalhome)
2831-        try:
2832-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2833-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2834-            # these directories lying around forever, but the delete might
2835-            # fail if we're working on another share for the same storage
2836-            # index (like ab/abcde/5). The alternative approach would be to
2837-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2838-            # ShareWriter), each of which is responsible for a single
2839-            # directory on disk, and have them use reference counting of
2840-            # their children to know when they should do the rmdir. This
2841-            # approach is simpler, but relies on os.rmdir refusing to delete
2842-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2843-            os.rmdir(os.path.dirname(self.incominghome))
2844-            # we also delete the grandparent (prefix) directory, .../ab ,
2845-            # again to avoid leaving directories lying around. This might
2846-            # fail if there is another bucket open that shares a prefix (like
2847-            # ab/abfff).
2848-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2849-            # we leave the great-grandparent (incoming/) directory in place.
2850-        except EnvironmentError:
2851-            # ignore the "can't rmdir because the directory is not empty"
2852-            # exceptions, those are normal consequences of the
2853-            # above-mentioned conditions.
2854-            pass
2855+        self._sharefile.close()
2856         self._sharefile = None
2857         self.closed = True
2858         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2859hunk ./src/allmydata/storage/immutable.py 49
2860 
2861-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2862+        filelen = self._sharefile.stat()
2863         self.ss.bucket_writer_closed(self, filelen)
2864         self.ss.add_latency("close", time.time() - start)
2865         self.ss.count("close")
2866hunk ./src/allmydata/storage/server.py 45
2867         self._active_writers = weakref.WeakKeyDictionary()
2868         self.backend = backend
2869         self.backend.setServiceParent(self)
2870+        self.backend.set_storage_server(self)
2871         log.msg("StorageServer created", facility="tahoe.storage")
2872 
2873         self.latencies = {"allocate": [], # immutable
2874hunk ./src/allmydata/storage/server.py 220
2875 
2876         for shnum in (sharenums - alreadygot):
2877             if (not limited) or (remaining_space >= max_space_per_bucket):
2878-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2879-                self.backend.set_storage_server(self)
2880                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2881                                                      max_space_per_bucket, lease_info, canary)
2882                 bucketwriters[shnum] = bw
2883hunk ./src/allmydata/test/test_backends.py 117
2884         mockopen.side_effect = call_open
2885         testbackend = DASCore(tempdir, expiration_policy)
2886         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2887-
2888+   
2889+    @mock.patch('allmydata.util.fileutil.get_available_space')
2890     @mock.patch('time.time')
2891     @mock.patch('os.mkdir')
2892     @mock.patch('__builtin__.open')
2893hunk ./src/allmydata/test/test_backends.py 124
2894     @mock.patch('os.listdir')
2895     @mock.patch('os.path.isdir')
2896-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2897+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2898+                             mockget_available_space):
2899         """ Write a new share. """
2900 
2901         def call_listdir(dirname):
2902hunk ./src/allmydata/test/test_backends.py 148
2903 
2904         mockmkdir.side_effect = call_mkdir
2905 
2906+        def call_get_available_space(storedir, reserved_space):
2907+            self.failUnlessReallyEqual(storedir, tempdir)
2908+            return 1
2909+
2910+        mockget_available_space.side_effect = call_get_available_space
2911+
2912         class MockFile:
2913             def __init__(self):
2914                 self.buffer = ''
2915hunk ./src/allmydata/test/test_backends.py 188
2916         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2917         bs[0].remote_write(0, 'a')
2918         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2919-       
2920+
2921+        # What happens when there's not enough space for the client's request?
2922+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2923+
2924         # Now test the allocated_size method.
2925         spaceint = self.s.allocated_size()
2926         self.failUnlessReallyEqual(spaceint, 1)
2927}
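The directory-cleanup comment moved into `ImmutableShare.close()` above leans on one guarantee: `os.rmdir` refuses to delete a non-empty directory. A self-contained check of that property (the `ab/abcde` layout mirrors the comment's example; paths are throwaway):

```python
import os
import shutil
import tempfile

def rmdir_refuses_nonempty():
    base = tempfile.mkdtemp()
    # Lay out something like storage/shares/incoming/ab/abcde with a
    # sibling share (abcde/5) still being written.
    bucket = os.path.join(base, 'ab', 'abcde')
    os.makedirs(bucket)
    open(os.path.join(bucket, '5'), 'w').close()
    try:
        os.rmdir(bucket)        # must fail: the directory is not empty
        result = 'removed'
    except OSError:
        result = 'refused'      # the behavior close() relies on
    shutil.rmtree(base)         # clean up the scratch tree
    return result
```

This is exactly why the comment warns against `fileutil.rm_dir()` there: a recursive delete would take the sibling share down with the directory.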
2928[checkpoint10
2929wilcoxjg@gmail.com**20110707172049
2930 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2931] {
2932hunk ./src/allmydata/test/test_backends.py 20
2933 # The following share file contents were generated with
2934 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2935 # with share data == 'a'.
2936-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2937+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2938+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2939+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2940 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2941 
2942hunk ./src/allmydata/test/test_backends.py 25
2943+testnodeid = 'testnodeidxxxxxxxxxx'
2944 tempdir = 'teststoredir'
2945 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2946 sharefname = os.path.join(sharedirname, '0')
2947hunk ./src/allmydata/test/test_backends.py 37
2948 
2949 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2950     def setUp(self):
2951-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2952+        self.s = StorageServer(testnodeid, backend=NullCore())
2953 
2954     @mock.patch('os.mkdir')
2955     @mock.patch('__builtin__.open')
2956hunk ./src/allmydata/test/test_backends.py 99
2957         mockmkdir.side_effect = call_mkdir
2958 
2959         # Now begin the test.
2960-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2961+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2962 
2963         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2964 
2965hunk ./src/allmydata/test/test_backends.py 119
2966 
2967         mockopen.side_effect = call_open
2968         testbackend = DASCore(tempdir, expiration_policy)
2969-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2970-   
2971+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2972+       
2973+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2974     @mock.patch('allmydata.util.fileutil.get_available_space')
2975     @mock.patch('time.time')
2976     @mock.patch('os.mkdir')
2977hunk ./src/allmydata/test/test_backends.py 129
2978     @mock.patch('os.listdir')
2979     @mock.patch('os.path.isdir')
2980     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2981-                             mockget_available_space):
2982+                             mockget_available_space, mockget_shares):
2983         """ Write a new share. """
2984 
2985         def call_listdir(dirname):
2986hunk ./src/allmydata/test/test_backends.py 139
2987         mocklistdir.side_effect = call_listdir
2988 
2989         def call_isdir(dirname):
2990+            #XXX Should there be any other tests here?
2991             self.failUnlessReallyEqual(dirname, sharedirname)
2992             return True
2993 
2994hunk ./src/allmydata/test/test_backends.py 159
2995 
2996         mockget_available_space.side_effect = call_get_available_space
2997 
2998+        mocktime.return_value = 0
2999+        class MockShare:
3000+            def __init__(self):
3001+                self.shnum = 1
3002+               
3003+            def add_or_renew_lease(elf, lease_info):
3004+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3005+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3006+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3007+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3008+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3009+               
3010+
3011+        share = MockShare()
3012+        def call_get_shares(storageindex):
3013+            return [share]
3014+
3015+        mockget_shares.side_effect = call_get_shares
3016+
3017         class MockFile:
3018             def __init__(self):
3019                 self.buffer = ''
3020hunk ./src/allmydata/test/test_backends.py 199
3021             def tell(self):
3022                 return self.pos
3023 
3024-        mocktime.return_value = 0
3025 
3026         sharefile = MockFile()
3027         def call_open(fname, mode):
3028}
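The constants introduced above decompose cleanly: the 12-byte prefix of `share_file_data` is three big-endian 32-bit words (which I read as version, share-data size, and lease count in the v1 immutable layout), and the trailing `'\x00(\xde\x80'` is the 31-day lease expiration that the `MockShare` assertions expect with `mocktime()` pinned to 0, packed the same way:

```python
import struct

renew_secret  = b'x' * 32
cancel_secret = b'y' * 32

# 12-byte header: three big-endian uint32s, all 1 for a one-byte share
# with a single lease (field naming is my reading of the v1 layout).
header = struct.pack('>LLL', 1, 1, 1)
assert header == b'\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01'

# Lease record: owner number 0, the two secrets, then the expiration
# time; with mocktime() == 0 that is 31*24*60*60 seconds.
expiration = struct.pack('>L', 31 * 24 * 60 * 60)
share_data = b'a' + b'\x00\x00\x00\x00' + renew_secret + cancel_secret + expiration
share_file_data = header + share_data
```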
3029[jacp 11
3030wilcoxjg@gmail.com**20110708213919
3031 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3032] {
3033hunk ./src/allmydata/storage/backends/das/core.py 144
3034         self.incomingdir = os.path.join(sharedir, 'incoming')
3035         si_dir = storage_index_to_dir(storageindex)
3036         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3037+        #XXX  self.fname and self.finalhome need to be resolve/merged.
3038         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3039         if create:
3040             # touch the file, so later callers will see that we're working on
3041hunk ./src/allmydata/storage/backends/das/core.py 208
3042         pass
3043         
3044     def stat(self):
3045-        return os.stat(self.finalhome)[stat.ST_SIZE]
3046+        return os.stat(self.finalhome).st_size
3047 
3048     def get_shnum(self):
3049         return self.shnum
3050hunk ./src/allmydata/storage/immutable.py 44
3051     def remote_close(self):
3052         precondition(not self.closed)
3053         start = time.time()
3054+
3055         self._sharefile.close()
3056hunk ./src/allmydata/storage/immutable.py 46
3057+        filelen = self._sharefile.stat()
3058         self._sharefile = None
3059hunk ./src/allmydata/storage/immutable.py 48
3060+
3061         self.closed = True
3062         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3063 
3064hunk ./src/allmydata/storage/immutable.py 52
3065-        filelen = self._sharefile.stat()
3066         self.ss.bucket_writer_closed(self, filelen)
3067         self.ss.add_latency("close", time.time() - start)
3068         self.ss.count("close")
3069hunk ./src/allmydata/storage/server.py 220
3070 
3071         for shnum in (sharenums - alreadygot):
3072             if (not limited) or (remaining_space >= max_space_per_bucket):
3073-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3074-                                                     max_space_per_bucket, lease_info, canary)
3075+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3076                 bucketwriters[shnum] = bw
3077                 self._active_writers[bw] = 1
3078                 if limited:
3079hunk ./src/allmydata/test/test_backends.py 20
3080 # The following share file contents were generated with
3081 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3082 # with share data == 'a'.
3083-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3084-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3085+renew_secret  = 'x'*32
3086+cancel_secret = 'y'*32
3087 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3088 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3089 
3090hunk ./src/allmydata/test/test_backends.py 27
3091 testnodeid = 'testnodeidxxxxxxxxxx'
3092 tempdir = 'teststoredir'
3093-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3094-sharefname = os.path.join(sharedirname, '0')
3095+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3096+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3097+shareincomingname = os.path.join(sharedirincomingname, '0')
3098+sharefname = os.path.join(sharedirfinalname, '0')
3099+
3100 expiration_policy = {'enabled' : False,
3101                      'mode' : 'age',
3102                      'override_lease_duration' : None,
3103hunk ./src/allmydata/test/test_backends.py 123
3104         mockopen.side_effect = call_open
3105         testbackend = DASCore(tempdir, expiration_policy)
3106         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3107-       
3108+
3109+    @mock.patch('allmydata.util.fileutil.rename')
3110+    @mock.patch('allmydata.util.fileutil.make_dirs')
3111+    @mock.patch('os.path.exists')
3112+    @mock.patch('os.stat')
3113     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3114     @mock.patch('allmydata.util.fileutil.get_available_space')
3115     @mock.patch('time.time')
3116hunk ./src/allmydata/test/test_backends.py 136
3117     @mock.patch('os.listdir')
3118     @mock.patch('os.path.isdir')
3119     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3120-                             mockget_available_space, mockget_shares):
3121+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3122+                             mockmake_dirs, mockrename):
3123         """ Write a new share. """
3124 
3125         def call_listdir(dirname):
3126hunk ./src/allmydata/test/test_backends.py 141
3127-            self.failUnlessReallyEqual(dirname, sharedirname)
3128+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3129             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3130 
3131         mocklistdir.side_effect = call_listdir
3132hunk ./src/allmydata/test/test_backends.py 148
3133 
3134         def call_isdir(dirname):
3135             #XXX Should there be any other tests here?
3136-            self.failUnlessReallyEqual(dirname, sharedirname)
3137+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3138             return True
3139 
3140         mockisdir.side_effect = call_isdir
3141hunk ./src/allmydata/test/test_backends.py 154
3142 
3143         def call_mkdir(dirname, permissions):
3144-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3145+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3146                 self.fail()
3147             else:
3148                 return True
3149hunk ./src/allmydata/test/test_backends.py 208
3150                 return self.pos
3151 
3152 
3153-        sharefile = MockFile()
3154+        fobj = MockFile()
3155         def call_open(fname, mode):
3156             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3157hunk ./src/allmydata/test/test_backends.py 211
3158-            return sharefile
3159+            return fobj
3160 
3161         mockopen.side_effect = call_open
3162 
3163hunk ./src/allmydata/test/test_backends.py 215
3164+        def call_make_dirs(dname):
3165+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3166+           
3167+        mockmake_dirs.side_effect = call_make_dirs
3168+
3169+        def call_rename(src, dst):
3170+           self.failUnlessReallyEqual(src, shareincomingname)
3171+           self.failUnlessReallyEqual(dst, sharefname)
3172+           
3173+        mockrename.side_effect = call_rename
3174+
3175+        def call_exists(fname):
3176+            self.failUnlessReallyEqual(fname, sharefname)
3177+
3178+        mockexists.side_effect = call_exists
3179+
3180         # Now begin the test.
3181         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3182         bs[0].remote_write(0, 'a')
3183hunk ./src/allmydata/test/test_backends.py 234
3184-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3185+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3186+        spaceint = self.s.allocated_size()
3187+        self.failUnlessReallyEqual(spaceint, 1)
3188+
3189+        bs[0].remote_close()
3190 
3191         # What happens when there's not enough space for the client's request?
3192hunk ./src/allmydata/test/test_backends.py 241
3193-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3194+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3195 
3196         # Now test the allocated_size method.
3197hunk ./src/allmydata/test/test_backends.py 244
3198-        spaceint = self.s.allocated_size()
3199-        self.failUnlessReallyEqual(spaceint, 1)
3200+        #self.failIf(mockexists.called, mockexists.call_args_list)
3201+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3202+        #self.failIf(mockrename.called, mockrename.call_args_list)
3203+        #self.failIf(mockstat.called, mockstat.call_args_list)
3204 
3205     @mock.patch('os.path.exists')
3206     @mock.patch('os.path.getsize')
3207}
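The ever-growing decorator stack on `test_write_share` works because stacked `@mock.patch` decorators hand their mocks to the test bottom-up: the decorator nearest the `def` supplies the first argument, which is why `mockisdir` leads the signature while the newly added `mockrename` trails it. A reduced sketch of the ordering (stdlib `unittest.mock` here):

```python
import os
from unittest import mock

@mock.patch('os.path.exists')   # outermost decorator -> last argument
@mock.patch('os.path.isdir')    # innermost decorator -> first argument
def which_is_which(mockisdir, mockexists):
    os.path.isdir('some/dir')
    # Only the isdir mock records the call, confirming the pairing.
    return (mockisdir.called, mockexists.called)
```

Getting this ordering wrong silently hands each `side_effect` to the wrong target, so the test fails in confusing ways rather than at the patch site.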
3208[checkpoint12 testing correct behavior with regard to incoming and final
3209wilcoxjg@gmail.com**20110710191915
3210 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3211] {
3212hunk ./src/allmydata/storage/backends/das/core.py 74
3213         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3214         self.lease_checker.setServiceParent(self)
3215 
3216+    def get_incoming(self, storageindex):
3217+        return set((1,))
3218+
3219     def get_available_space(self):
3220         if self.readonly:
3221             return 0
3222hunk ./src/allmydata/storage/server.py 77
3223         """Return a dict, indexed by category, that contains a dict of
3224         latency numbers for each category. If there are sufficient samples
3225         for unambiguous interpretation, each dict will contain the
3226-        following keys: mean, 01_0_percentile, 10_0_percentile,
3227+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3228         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3229         99_0_percentile, 99_9_percentile.  If there are insufficient
3230         samples for a given percentile to be interpreted unambiguously
3231hunk ./src/allmydata/storage/server.py 120
3232 
3233     def get_stats(self):
3234         # remember: RIStatsProvider requires that our return dict
3235-        # contains numeric values.
3236+        # contains numeric or None values.
3237         stats = { 'storage_server.allocated': self.allocated_size(), }
3238         stats['storage_server.reserved_space'] = self.reserved_space
3239         for category,ld in self.get_latencies().items():
3240hunk ./src/allmydata/storage/server.py 185
3241         start = time.time()
3242         self.count("allocate")
3243         alreadygot = set()
3244+        incoming = set()
3245         bucketwriters = {} # k: shnum, v: BucketWriter
3246 
3247         si_s = si_b2a(storage_index)
3248hunk ./src/allmydata/storage/server.py 219
3249             alreadygot.add(share.shnum)
3250             share.add_or_renew_lease(lease_info)
3251 
3252-        for shnum in (sharenums - alreadygot):
3253+        # Fill 'incoming' with all shares that are currently incoming; use a set operation since there's no need to operate on individual pieces.
3254+        incoming = self.backend.get_incoming(storageindex)
3255+
3256+        for shnum in ((sharenums - alreadygot) - incoming):
3257             if (not limited) or (remaining_space >= max_space_per_bucket):
3258                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3259                 bucketwriters[shnum] = bw
3260hunk ./src/allmydata/storage/server.py 229
3261                 self._active_writers[bw] = 1
3262                 if limited:
3263                     remaining_space -= max_space_per_bucket
3264-
3265-        #XXX We SHOULD DOCUMENT LATER.
3266+            else:
3267+                # Not enough space to accept this share.
3268+                pass
3269 
3270         self.add_latency("allocate", time.time() - start)
3271         return alreadygot, bucketwriters
3272hunk ./src/allmydata/storage/server.py 323
3273         self.add_latency("get", time.time() - start)
3274         return bucketreaders
3275 
3276-    def get_leases(self, storage_index):
3277+    def remote_get_incoming(self, storageindex):
3278+        incoming_share_set = self.backend.get_incoming(storageindex)
3279+        return incoming_share_set
3280+
3281+    def get_leases(self, storageindex):
3282         """Provide an iterator that yields all of the leases attached to this
3283         bucket. Each lease is returned as a LeaseInfo instance.
3284 
3285hunk ./src/allmydata/storage/server.py 337
3286         # since all shares get the same lease data, we just grab the leases
3287         # from the first share
3288         try:
3289-            shnum, filename = self._get_shares(storage_index).next()
3290+            shnum, filename = self._get_shares(storageindex).next()
3291             sf = ShareFile(filename)
3292             return sf.get_leases()
3293         except StopIteration:
3294hunk ./src/allmydata/test/test_backends.py 182
3295 
3296         share = MockShare()
3297         def call_get_shares(storageindex):
3298-            return [share]
3299+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3300+            return []#share]
3301 
3302         mockget_shares.side_effect = call_get_shares
3303 
3304hunk ./src/allmydata/test/test_backends.py 222
3305         mockmake_dirs.side_effect = call_make_dirs
3306 
3307         def call_rename(src, dst):
3308-           self.failUnlessReallyEqual(src, shareincomingname)
3309-           self.failUnlessReallyEqual(dst, sharefname)
3310+            self.failUnlessReallyEqual(src, shareincomingname)
3311+            self.failUnlessReallyEqual(dst, sharefname)
3312             
3313         mockrename.side_effect = call_rename
3314 
3315hunk ./src/allmydata/test/test_backends.py 233
3316         mockexists.side_effect = call_exists
3317 
3318         # Now begin the test.
3319+
3320+        # XXX (0) Fail unless something is not properly set up?
3321         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3322hunk ./src/allmydata/test/test_backends.py 236
3323+
3324+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3325+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3326+
3327+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3328+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3329+        # with the same si, until BucketWriter.remote_close() has been called.
3330+        # self.failIf(bsa)
3331+
3332+        # XXX (3) Inspect final and fail unless there's nothing there.
3333         bs[0].remote_write(0, 'a')
3334hunk ./src/allmydata/test/test_backends.py 247
3335+        # XXX (4a) Inspect final and fail unless share 0 is there.
3336+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3337         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3338         spaceint = self.s.allocated_size()
3339         self.failUnlessReallyEqual(spaceint, 1)
3340hunk ./src/allmydata/test/test_backends.py 253
3341 
3342+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3343         bs[0].remote_close()
3344 
3345         # What happens when there's not enough space for the client's request?
3346hunk ./src/allmydata/test/test_backends.py 260
3347         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3348 
3349         # Now test the allocated_size method.
3350-        #self.failIf(mockexists.called, mockexists.call_args_list)
3351+        # self.failIf(mockexists.called, mockexists.call_args_list)
3352         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3353         #self.failIf(mockrename.called, mockrename.call_args_list)
3354         #self.failIf(mockstat.called, mockstat.call_args_list)
3355}
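The new allocation rule above is plain set subtraction: a share gets a `BucketWriter` only if it is neither already stored nor already being uploaded. With the stub `get_incoming` returning `set((1,))`, the arithmetic looks like this:

```python
sharenums  = set([0, 1, 2, 3])   # what the client asked for
alreadygot = set([0])            # complete shares found via get_shares()
incoming   = set([1])            # the stub get_incoming() returns set((1,))

# Matches the loop header:
#     for shnum in ((sharenums - alreadygot) - incoming):
to_allocate = (sharenums - alreadygot) - incoming
```

Shares 2 and 3 would be allocated; share 0 is reported back in `alreadygot` and share 1 is left alone until its in-progress upload either finishes or is abandoned.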
3356[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3357wilcoxjg@gmail.com**20110710195139
3358 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3359] {
3360hunk ./src/allmydata/storage/server.py 220
3361             share.add_or_renew_lease(lease_info)
3362 
3363         # Fill 'incoming' with all shares that are currently incoming; use a set operation since there's no need to operate on individual pieces.
3364-        incoming = self.backend.get_incoming(storageindex)
3365+        incoming = self.backend.get_incoming(storage_index)
3366 
3367         for shnum in ((sharenums - alreadygot) - incoming):
3368             if (not limited) or (remaining_space >= max_space_per_bucket):
3369hunk ./src/allmydata/storage/server.py 323
3370         self.add_latency("get", time.time() - start)
3371         return bucketreaders
3372 
3373-    def remote_get_incoming(self, storageindex):
3374-        incoming_share_set = self.backend.get_incoming(storageindex)
3375+    def remote_get_incoming(self, storage_index):
3376+        incoming_share_set = self.backend.get_incoming(storage_index)
3377         return incoming_share_set
3378 
3379hunk ./src/allmydata/storage/server.py 327
3380-    def get_leases(self, storageindex):
3381+    def get_leases(self, storage_index):
3382         """Provide an iterator that yields all of the leases attached to this
3383         bucket. Each lease is returned as a LeaseInfo instance.
3384 
3385hunk ./src/allmydata/storage/server.py 337
3386         # since all shares get the same lease data, we just grab the leases
3387         # from the first share
3388         try:
3389-            shnum, filename = self._get_shares(storageindex).next()
3390+            shnum, filename = self._get_shares(storage_index).next()
3391             sf = ShareFile(filename)
3392             return sf.get_leases()
3393         except StopIteration:
3394replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3395}
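The renaming patch above leaves the allocation loop `for shnum in ((sharenums - alreadygot) - incoming)` intact; that line is the heart of `remote_allocate_buckets`. Reduced to a standalone sketch (function name is mine, not Tahoe's API):

```python
def shares_to_allocate(sharenums, alreadygot, incoming):
    """Share numbers that still need a BucketWriter: the client's
    request, minus shares already stored, minus shares some other
    upload is already writing into incoming/."""
    return (set(sharenums) - set(alreadygot)) - set(incoming)

# client asks for shares 0-2; share 1 is already on disk, share 2 is in flight
assert shares_to_allocate({0, 1, 2}, {1}, {2}) == {0}
```

This is why `get_incoming` has to report in-flight shares accurately: anything missing from `incoming` gets a second, conflicting writer.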
3396[adding comments to clarify what I'm about to do.
3397wilcoxjg@gmail.com**20110710220623
3398 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3399] {
3400hunk ./src/allmydata/storage/backends/das/core.py 8
3401 
3402 import os, re, weakref, struct, time
3403 
3404-from foolscap.api import Referenceable
3405+#from foolscap.api import Referenceable
3406 from twisted.application import service
3407 
3408 from zope.interface import implements
3409hunk ./src/allmydata/storage/backends/das/core.py 12
3410-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3411+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3412 from allmydata.util import fileutil, idlib, log, time_format
3413 import allmydata # for __full_version__
3414 
3415hunk ./src/allmydata/storage/server.py 219
3416             alreadygot.add(share.shnum)
3417             share.add_or_renew_lease(lease_info)
3418 
3419-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3420+        # fill incoming with all shares that are incoming use a set operation
3421+        # since there's no need to operate on individual pieces
3422         incoming = self.backend.get_incoming(storageindex)
3423 
3424         for shnum in ((sharenums - alreadygot) - incoming):
3425hunk ./src/allmydata/test/test_backends.py 245
3426         # with the same si, until BucketWriter.remote_close() has been called.
3427         # self.failIf(bsa)
3428 
3429-        # XXX (3) Inspect final and fail unless there's nothing there.
3430         bs[0].remote_write(0, 'a')
3431hunk ./src/allmydata/test/test_backends.py 246
3432-        # XXX (4a) Inspect final and fail unless share 0 is there.
3433-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3434         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3435         spaceint = self.s.allocated_size()
3436         self.failUnlessReallyEqual(spaceint, 1)
3437hunk ./src/allmydata/test/test_backends.py 250
3438 
3439-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3440+        # XXX (3) Inspect final and fail unless there's nothing there.
3441         bs[0].remote_close()
3442hunk ./src/allmydata/test/test_backends.py 252
3443+        # XXX (4a) Inspect final and fail unless share 0 is there.
3444+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3445 
3446         # What happens when there's not enough space for the client's request?
3447         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3448}
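The reshuffled XXX comments pin down the lifecycle the tests are meant to check: before `remote_close()` the share may exist only under incoming/, and afterwards only under final/. A toy rename-on-close writer illustrating that invariant (class name and paths are illustrative, not Tahoe's real BucketWriter):

```python
import os
import tempfile

class ToyBucketWriter:
    """Write into an incoming path; publish to the final path on close."""
    def __init__(self, incominghome, finalhome):
        self.incominghome = incominghome
        self.finalhome = finalhome
        os.makedirs(os.path.dirname(incominghome), exist_ok=True)
        self._f = open(incominghome, 'wb')

    def remote_write(self, offset, data):
        self._f.seek(offset)
        self._f.write(data)

    def remote_close(self):
        # the atomic rename is what makes a share "final"
        self._f.close()
        os.makedirs(os.path.dirname(self.finalhome), exist_ok=True)
        os.rename(self.incominghome, self.finalhome)

base = tempfile.mkdtemp()
bw = ToyBucketWriter(os.path.join(base, 'incoming', '0'),
                     os.path.join(base, 'final', '0'))
bw.remote_write(0, b'a')
# (3): before close, the share is in incoming/ and final/ is empty
assert os.path.exists(bw.incominghome) and not os.path.exists(bw.finalhome)
bw.remote_close()
# (4a)/(4b): after close, the share is in final/ and gone from incoming/
assert os.path.exists(bw.finalhome) and not os.path.exists(bw.incominghome)
```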
3449[branching back, no longer attempting to mock inside TestServerFSBackend
3450wilcoxjg@gmail.com**20110711190849
3451 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3452] {
3453hunk ./src/allmydata/storage/backends/das/core.py 75
3454         self.lease_checker.setServiceParent(self)
3455 
3456     def get_incoming(self, storageindex):
3457-        return set((1,))
3458-
3459-    def get_available_space(self):
3460-        if self.readonly:
3461-            return 0
3462-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3463+        """Return the set of incoming shnums."""
3464+        return set(os.listdir(self.incomingdir))
3465 
3466     def get_shares(self, storage_index):
3467         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3468hunk ./src/allmydata/storage/backends/das/core.py 90
3469             # Commonly caused by there being no shares at all.
3470             pass
3471         
3472+    def get_available_space(self):
3473+        if self.readonly:
3474+            return 0
3475+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3476+
3477     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3478         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3479         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3480hunk ./src/allmydata/test/test_backends.py 27
3481 
3482 testnodeid = 'testnodeidxxxxxxxxxx'
3483 tempdir = 'teststoredir'
3484-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3485-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3486+basedir = os.path.join(tempdir, 'shares')
3487+baseincdir = os.path.join(basedir, 'incoming')
3488+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3489+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3490 shareincomingname = os.path.join(sharedirincomingname, '0')
3491 sharefname = os.path.join(sharedirfinalname, '0')
3492 
3493hunk ./src/allmydata/test/test_backends.py 142
3494                              mockmake_dirs, mockrename):
3495         """ Write a new share. """
3496 
3497-        def call_listdir(dirname):
3498-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3499-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3500-
3501-        mocklistdir.side_effect = call_listdir
3502-
3503-        def call_isdir(dirname):
3504-            #XXX Should there be any other tests here?
3505-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3506-            return True
3507-
3508-        mockisdir.side_effect = call_isdir
3509-
3510-        def call_mkdir(dirname, permissions):
3511-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3512-                self.Fail
3513-            else:
3514-                return True
3515-
3516-        mockmkdir.side_effect = call_mkdir
3517-
3518-        def call_get_available_space(storedir, reserved_space):
3519-            self.failUnlessReallyEqual(storedir, tempdir)
3520-            return 1
3521-
3522-        mockget_available_space.side_effect = call_get_available_space
3523-
3524-        mocktime.return_value = 0
3525         class MockShare:
3526             def __init__(self):
3527                 self.shnum = 1
3528hunk ./src/allmydata/test/test_backends.py 152
3529                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3530                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3531                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3532-               
3533 
3534         share = MockShare()
3535hunk ./src/allmydata/test/test_backends.py 154
3536-        def call_get_shares(storageindex):
3537-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3538-            return []#share]
3539-
3540-        mockget_shares.side_effect = call_get_shares
3541 
3542         class MockFile:
3543             def __init__(self):
3544hunk ./src/allmydata/test/test_backends.py 176
3545             def tell(self):
3546                 return self.pos
3547 
3548-
3549         fobj = MockFile()
3550hunk ./src/allmydata/test/test_backends.py 177
3551+
3552+        directories = {}
3553+        def call_listdir(dirname):
3554+            if dirname not in directories:
3555+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3556+            else:
3557+                return directories[dirname].get_contents()
3558+
3559+        mocklistdir.side_effect = call_listdir
3560+
3561+        class MockDir:
3562+            def __init__(self, dirname):
3563+                self.name = dirname
3564+                self.contents = []
3565+   
3566+            def get_contents(self):
3567+                return self.contents
3568+
3569+        def call_isdir(dirname):
3570+            #XXX Should there be any other tests here?
3571+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3572+            return True
3573+
3574+        mockisdir.side_effect = call_isdir
3575+
3576+        def call_mkdir(dirname, permissions):
3577+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3578+                self.Fail
3579+            if dirname in directories:
3580+                raise OSError(17, "File exists: '%s'" % dirname)
3581+                self.Fail
3582+            elif dirname not in directories:
3583+                directories[dirname] = MockDir(dirname)
3584+                return True
3585+
3586+        mockmkdir.side_effect = call_mkdir
3587+
3588+        def call_get_available_space(storedir, reserved_space):
3589+            self.failUnlessReallyEqual(storedir, tempdir)
3590+            return 1
3591+
3592+        mockget_available_space.side_effect = call_get_available_space
3593+
3594+        mocktime.return_value = 0
3595+        def call_get_shares(storageindex):
3596+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3597+            return []#share]
3598+
3599+        mockget_shares.side_effect = call_get_shares
3600+
3601         def call_open(fname, mode):
3602             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3603             return fobj
3604}
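This patch grows the mocks into a small in-memory directory model (the `directories` dict plus `MockDir`). Condensed into a self-contained fake with the same errno behavior as the real `os` calls (a sketch; the test above wires these through `mock.patch` side effects instead):

```python
import errno

class FakeFS:
    """Minimal in-memory stand-in for os.mkdir/os.listdir: listdir of a
    missing directory raises ENOENT, duplicate mkdir raises EEXIST."""
    def __init__(self):
        self.directories = {}

    def mkdir(self, dirname, permissions=0o777):
        if dirname in self.directories:
            raise OSError(errno.EEXIST, "File exists: '%s'" % dirname)
        self.directories[dirname] = []

    def listdir(self, dirname):
        if dirname not in self.directories:
            raise OSError(errno.ENOENT,
                          "No such file or directory: '%s'" % dirname)
        return self.directories[dirname]

fs = FakeFS()
fs.mkdir('shares')
assert fs.listdir('shares') == []
try:
    fs.mkdir('shares')       # second mkdir of the same path must fail
except OSError as e:
    assert e.errno == errno.EEXIST
```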
3605[checkpoint12 TestServerFSBackend no longer mocks filesystem
3606wilcoxjg@gmail.com**20110711193357
3607 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3608] {
3609hunk ./src/allmydata/storage/backends/das/core.py 23
3610      create_mutable_sharefile
3611 from allmydata.storage.immutable import BucketWriter, BucketReader
3612 from allmydata.storage.crawler import FSBucketCountingCrawler
3613+from allmydata.util.hashutil import constant_time_compare
3614 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3615 
3616 from zope.interface import implements
3617hunk ./src/allmydata/storage/backends/das/core.py 28
3618 
3619+# storage/
3620+# storage/shares/incoming
3621+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3622+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3623+# storage/shares/$START/$STORAGEINDEX
3624+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3625+
3626+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3627+# base-32 chars).
3628 # $SHARENUM matches this regex:
3629 NUM_RE=re.compile("^[0-9]+$")
3630 
3631hunk ./src/allmydata/test/test_backends.py 126
3632         testbackend = DASCore(tempdir, expiration_policy)
3633         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3634 
3635-    @mock.patch('allmydata.util.fileutil.rename')
3636-    @mock.patch('allmydata.util.fileutil.make_dirs')
3637-    @mock.patch('os.path.exists')
3638-    @mock.patch('os.stat')
3639-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3640-    @mock.patch('allmydata.util.fileutil.get_available_space')
3641     @mock.patch('time.time')
3642hunk ./src/allmydata/test/test_backends.py 127
3643-    @mock.patch('os.mkdir')
3644-    @mock.patch('__builtin__.open')
3645-    @mock.patch('os.listdir')
3646-    @mock.patch('os.path.isdir')
3647-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3648-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3649-                             mockmake_dirs, mockrename):
3650+    def test_write_share(self, mocktime):
3651         """ Write a new share. """
3652 
3653         class MockShare:
3654hunk ./src/allmydata/test/test_backends.py 143
3655 
3656         share = MockShare()
3657 
3658-        class MockFile:
3659-            def __init__(self):
3660-                self.buffer = ''
3661-                self.pos = 0
3662-            def write(self, instring):
3663-                begin = self.pos
3664-                padlen = begin - len(self.buffer)
3665-                if padlen > 0:
3666-                    self.buffer += '\x00' * padlen
3667-                end = self.pos + len(instring)
3668-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3669-                self.pos = end
3670-            def close(self):
3671-                pass
3672-            def seek(self, pos):
3673-                self.pos = pos
3674-            def read(self, numberbytes):
3675-                return self.buffer[self.pos:self.pos+numberbytes]
3676-            def tell(self):
3677-                return self.pos
3678-
3679-        fobj = MockFile()
3680-
3681-        directories = {}
3682-        def call_listdir(dirname):
3683-            if dirname not in directories:
3684-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3685-            else:
3686-                return directories[dirname].get_contents()
3687-
3688-        mocklistdir.side_effect = call_listdir
3689-
3690-        class MockDir:
3691-            def __init__(self, dirname):
3692-                self.name = dirname
3693-                self.contents = []
3694-   
3695-            def get_contents(self):
3696-                return self.contents
3697-
3698-        def call_isdir(dirname):
3699-            #XXX Should there be any other tests here?
3700-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3701-            return True
3702-
3703-        mockisdir.side_effect = call_isdir
3704-
3705-        def call_mkdir(dirname, permissions):
3706-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3707-                self.Fail
3708-            if dirname in directories:
3709-                raise OSError(17, "File exists: '%s'" % dirname)
3710-                self.Fail
3711-            elif dirname not in directories:
3712-                directories[dirname] = MockDir(dirname)
3713-                return True
3714-
3715-        mockmkdir.side_effect = call_mkdir
3716-
3717-        def call_get_available_space(storedir, reserved_space):
3718-            self.failUnlessReallyEqual(storedir, tempdir)
3719-            return 1
3720-
3721-        mockget_available_space.side_effect = call_get_available_space
3722-
3723-        mocktime.return_value = 0
3724-        def call_get_shares(storageindex):
3725-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3726-            return []#share]
3727-
3728-        mockget_shares.side_effect = call_get_shares
3729-
3730-        def call_open(fname, mode):
3731-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3732-            return fobj
3733-
3734-        mockopen.side_effect = call_open
3735-
3736-        def call_make_dirs(dname):
3737-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3738-           
3739-        mockmake_dirs.side_effect = call_make_dirs
3740-
3741-        def call_rename(src, dst):
3742-            self.failUnlessReallyEqual(src, shareincomingname)
3743-            self.failUnlessReallyEqual(dst, sharefname)
3744-           
3745-        mockrename.side_effect = call_rename
3746-
3747-        def call_exists(fname):
3748-            self.failUnlessReallyEqual(fname, sharefname)
3749-
3750-        mockexists.side_effect = call_exists
3751-
3752         # Now begin the test.
3753 
3754         # XXX (0) ???  Fail unless something is not properly set-up?
3755}
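checkpoint12 deletes the per-call filesystem mocks wholesale; the conventional replacement is a real throwaway directory created in `setUp` and removed in `tearDown`. A minimal sketch of that fixture style (not the actual TestServerFSBackend setup):

```python
import os
import shutil
import tempfile
import unittest

class TempDirTestCase(unittest.TestCase):
    """Run each test against a real, disposable store directory
    instead of mocking os.* calls one by one."""
    def setUp(self):
        self.storedir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.storedir, ignore_errors=True)

    def test_fresh_store_is_empty(self):
        # a fresh store directory has no shares yet
        self.assertEqual(os.listdir(self.storedir), [])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TempDirTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The trade-off is the one this ticket keeps circling: mocks can assert exactly which calls were made, but a real temp directory exercises the genuine `OSError`/rename semantics the backend depends on.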
3756[JACP
3757wilcoxjg@gmail.com**20110711194407
3758 Ignore-this: b54745de777c4bb58d68d708f010bbb
3759] {
3760hunk ./src/allmydata/storage/backends/das/core.py 86
3761 
3762     def get_incoming(self, storageindex):
3763         """Return the set of incoming shnums."""
3764-        return set(os.listdir(self.incomingdir))
3765+        try:
3766+            incominglist = os.listdir(self.incomingdir)
3767+            print "incominglist: ", incominglist
3768+            return set(incominglist)
3769+        except OSError:
3770+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3771+            pass
3772 
3773     def get_shares(self, storage_index):
3774         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3775hunk ./src/allmydata/storage/server.py 17
3776 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3777      create_mutable_sharefile
3778 
3779-# storage/
3780-# storage/shares/incoming
3781-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3782-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3783-# storage/shares/$START/$STORAGEINDEX
3784-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3785-
3786-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3787-# base-32 chars).
3788-
3789-
3790 class StorageServer(service.MultiService, Referenceable):
3791     implements(RIStorageServer, IStatsProducer)
3792     name = 'storage'
3793}
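The layout comment removed from server.py here (it now lives in backends/das/core.py) describes paths of the form storage/shares/$START/$STORAGEINDEX/$SHARENUM, where $START is the first two base-32 characters of the encoded storage index. A sketch of how that prefix is derived; Tahoe's real helpers are `si_b2a` and `storage_index_to_dir` in storage/common.py, so these local definitions are illustrative:

```python
import base64
import os

def si_b2a(storageindex):
    # lowercase base-32 without padding: the on-disk encoding of a
    # storage index
    return base64.b32encode(storageindex).decode('ascii').lower().rstrip('=')

def storage_index_to_dir(storageindex):
    sia = si_b2a(storageindex)
    return os.path.join(sia[:2], sia)   # $START/$STORAGEINDEX

# the fixture storage index used throughout test_backends.py maps to the
# 'or/orsxg5dtorxxeylhmvpws3temv4a' paths seen in the hunks above
assert storage_index_to_dir(b'teststorage_index') == \
    os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a')
```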
3794[testing get incoming
3795wilcoxjg@gmail.com**20110711210224
3796 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3797] {
3798hunk ./src/allmydata/storage/backends/das/core.py 87
3799     def get_incoming(self, storageindex):
3800         """Return the set of incoming shnums."""
3801         try:
3802-            incominglist = os.listdir(self.incomingdir)
3803+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3804+            incominglist = os.listdir(incomingsharesdir)
3805             print "incominglist: ", incominglist
3806             return set(incominglist)
3807         except OSError:
3808hunk ./src/allmydata/storage/backends/das/core.py 92
3809-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3810-            pass
3811-
3812+            # XXX I'd like to make this more specific. If there are no shares at all.
3813+            return set()
3814+           
3815     def get_shares(self, storage_index):
3816         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3817         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3818hunk ./src/allmydata/test/test_backends.py 149
3819         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3820 
3821         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3822+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3823         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3824 
3825hunk ./src/allmydata/test/test_backends.py 152
3826-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3827         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3828         # with the same si, until BucketWriter.remote_close() has been called.
3829         # self.failIf(bsa)
3830}
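`get_incoming` now swallows every `OSError`, and the XXX asks to narrow it. One way to be specific is to treat only ENOENT (the incoming directory does not exist yet, i.e. no shares at all) as an empty set, and let anything else propagate. A sketch, folding in the int conversion the next patch adds; the errno handling is my assumption, not yet in the patch:

```python
import errno
import os

def get_incoming(incomingsharesdir):
    """Return the set of incoming share numbers. Only a missing
    directory means 'no incoming shares'; other OSErrors (permissions,
    I/O failures) are real problems and are re-raised."""
    try:
        return set(int(x) for x in os.listdir(incomingsharesdir))
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
        return set()

# a directory that was never created simply has no incoming shares
assert get_incoming('no-such-incoming-dir') == set()
```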
3831[ImmutableShareFile does not know its StorageIndex
3832wilcoxjg@gmail.com**20110711211424
3833 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3834] {
3835hunk ./src/allmydata/storage/backends/das/core.py 112
3836             return 0
3837         return fileutil.get_available_space(self.storedir, self.reserved_space)
3838 
3839-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3840-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3841+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3842+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3843+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3844+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3845         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3846         return bw
3847 
3848hunk ./src/allmydata/storage/backends/das/core.py 155
3849     LEASE_SIZE = struct.calcsize(">L32s32sL")
3850     sharetype = "immutable"
3851 
3852-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3853+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3854         """ If max_size is not None then I won't allow more than
3855         max_size to be written to me. If create=True then max_size
3856         must not be None. """
3857}
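The `ImmutableShare` code in these hunks reads and writes a fixed header and fixed-size lease records via the struct formats `">LLL"` and `">L32s32sL"`. The arithmetic behind `_data_offset` and `_lease_offset`, spelled out (constant and field names here are mine, inferred from the struct calls in the hunks):

```python
import struct

# share file layout: 12-byte header, then share data, then lease records
HEADER = ">LLL"        # version, legacy 4-byte data-length field, num_leases
LEASE = ">L32s32sL"    # owner_num, renew_secret, cancel_secret, expiration

DATA_OFFSET = struct.calcsize(HEADER)   # matches the 0x0c read in __init__
LEASE_SIZE = struct.calcsize(LEASE)     # matches ImmutableShare.LEASE_SIZE
assert DATA_OFFSET == 0x0c
assert LEASE_SIZE == 72

def lease_offset_for_create(max_size):
    # create path: leases start right after the reserved data region
    return max_size + DATA_OFFSET

def lease_offset_for_open(filesize, num_leases):
    # open path: leases sit at the tail of the existing file
    return filesize - num_leases * LEASE_SIZE

# the two computations agree for a full share file
assert lease_offset_for_open(lease_offset_for_create(100) + 2 * LEASE_SIZE,
                             2) == lease_offset_for_create(100)
```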
3858[get_incoming correctly reports the 0 share after it has arrived
3859wilcoxjg@gmail.com**20110712025157
3860 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3861] {
3862hunk ./src/allmydata/storage/backends/das/core.py 1
3863+import os, re, weakref, struct, time, stat
3864+
3865 from allmydata.interfaces import IStorageBackend
3866 from allmydata.storage.backends.base import Backend
3867 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3868hunk ./src/allmydata/storage/backends/das/core.py 8
3869 from allmydata.util.assertutil import precondition
3870 
3871-import os, re, weakref, struct, time
3872-
3873 #from foolscap.api import Referenceable
3874 from twisted.application import service
3875 
3876hunk ./src/allmydata/storage/backends/das/core.py 89
3877         try:
3878             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3879             incominglist = os.listdir(incomingsharesdir)
3880-            print "incominglist: ", incominglist
3881-            return set(incominglist)
3882+            incomingshnums = [int(x) for x in incominglist]
3883+            return set(incomingshnums)
3884         except OSError:
3885             # XXX I'd like to make this more specific. If there are no shares at all.
3886             return set()
3887hunk ./src/allmydata/storage/backends/das/core.py 113
3888         return fileutil.get_available_space(self.storedir, self.reserved_space)
3889 
3890     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3891-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3892-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3893-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3894+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3895+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3896+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3897         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3898         return bw
3899 
3900hunk ./src/allmydata/storage/backends/das/core.py 160
3901         max_size to be written to me. If create=True then max_size
3902         must not be None. """
3903         precondition((max_size is not None) or (not create), max_size, create)
3904-        self.shnum = shnum
3905-        self.storage_index = storageindex
3906-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3907         self._max_size = max_size
3908hunk ./src/allmydata/storage/backends/das/core.py 161
3909-        self.incomingdir = os.path.join(sharedir, 'incoming')
3910-        si_dir = storage_index_to_dir(storageindex)
3911-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3912-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3913-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3914+        self.incominghome = incominghome
3915+        self.finalhome = finalhome
3916         if create:
3917             # touch the file, so later callers will see that we're working on
3918             # it. Also construct the metadata.
3919hunk ./src/allmydata/storage/backends/das/core.py 166
3920-            assert not os.path.exists(self.fname)
3921-            fileutil.make_dirs(os.path.dirname(self.fname))
3922-            f = open(self.fname, 'wb')
3923+            assert not os.path.exists(self.finalhome)
3924+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3925+            f = open(self.incominghome, 'wb')
3926             # The second field -- the four-byte share data length -- is no
3927             # longer used as of Tahoe v1.3.0, but we continue to write it in
3928             # there in case someone downgrades a storage server from >=
3929hunk ./src/allmydata/storage/backends/das/core.py 183
3930             self._lease_offset = max_size + 0x0c
3931             self._num_leases = 0
3932         else:
3933-            f = open(self.fname, 'rb')
3934-            filesize = os.path.getsize(self.fname)
3935+            f = open(self.finalhome, 'rb')
3936+            filesize = os.path.getsize(self.finalhome)
3937             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3938             f.close()
3939             if version != 1:
3940hunk ./src/allmydata/storage/backends/das/core.py 189
3941                 msg = "sharefile %s had version %d but we wanted 1" % \
3942-                      (self.fname, version)
3943+                      (self.finalhome, version)
3944                 raise UnknownImmutableContainerVersionError(msg)
3945             self._num_leases = num_leases
3946             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3947hunk ./src/allmydata/storage/backends/das/core.py 225
3948         pass
3949         
3950     def stat(self):
3951-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3952+        return os.stat(self.finalhome)[stat.ST_SIZE]
3953+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3954 
3955     def get_shnum(self):
3956         return self.shnum
3957hunk ./src/allmydata/storage/backends/das/core.py 232
3958 
3959     def unlink(self):
3960-        os.unlink(self.fname)
3961+        os.unlink(self.finalhome)
3962 
3963     def read_share_data(self, offset, length):
3964         precondition(offset >= 0)
3965hunk ./src/allmydata/storage/backends/das/core.py 239
3966         # Reads beyond the end of the data are truncated. Reads that start
3967         # beyond the end of the data return an empty string.
3968         seekpos = self._data_offset+offset
3969-        fsize = os.path.getsize(self.fname)
3970+        fsize = os.path.getsize(self.finalhome)
3971         actuallength = max(0, min(length, fsize-seekpos))
3972         if actuallength == 0:
3973             return ""
3974hunk ./src/allmydata/storage/backends/das/core.py 243
3975-        f = open(self.fname, 'rb')
3976+        f = open(self.finalhome, 'rb')
3977         f.seek(seekpos)
3978         return f.read(actuallength)
3979 
3980hunk ./src/allmydata/storage/backends/das/core.py 252
3981         precondition(offset >= 0, offset)
3982         if self._max_size is not None and offset+length > self._max_size:
3983             raise DataTooLargeError(self._max_size, offset, length)
3984-        f = open(self.fname, 'rb+')
3985+        f = open(self.incominghome, 'rb+')
3986         real_offset = self._data_offset+offset
3987         f.seek(real_offset)
3988         assert f.tell() == real_offset
3989hunk ./src/allmydata/storage/backends/das/core.py 279
3990 
3991     def get_leases(self):
3992         """Yields a LeaseInfo instance for all leases."""
3993-        f = open(self.fname, 'rb')
3994+        f = open(self.finalhome, 'rb')
3995         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3996         f.seek(self._lease_offset)
3997         for i in range(num_leases):
3998hunk ./src/allmydata/storage/backends/das/core.py 288
3999                 yield LeaseInfo().from_immutable_data(data)
4000 
4001     def add_lease(self, lease_info):
4002-        f = open(self.fname, 'rb+')
4003+        f = open(self.incominghome, 'rb+')
         num_leases = self._read_num_leases(f)
         self._write_lease_record(f, num_leases, lease_info)
         self._write_num_leases(f, num_leases+1)
hunk ./src/allmydata/storage/backends/das/core.py 301
                 if new_expire_time > lease.expiration_time:
                     # yes
                     lease.expiration_time = new_expire_time
-                    f = open(self.fname, 'rb+')
+                    f = open(self.finalhome, 'rb+')
                     self._write_lease_record(f, i, lease)
                     f.close()
                 return
hunk ./src/allmydata/storage/backends/das/core.py 336
             # the same order as they were added, so that if we crash while
             # doing this, we won't lose any non-cancelled leases.
             leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.fname, 'rb+')
+            f = open(self.finalhome, 'rb+')
             for i,lease in enumerate(leases):
                 self._write_lease_record(f, i, lease)
             self._write_num_leases(f, len(leases))
hunk ./src/allmydata/storage/backends/das/core.py 344
             f.close()
         space_freed = self.LEASE_SIZE * num_leases_removed
         if not len(leases):
-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
             self.unlink()
         return space_freed
hunk ./src/allmydata/test/test_backends.py 129
     @mock.patch('time.time')
     def test_write_share(self, mocktime):
         """ Write a new share. """
-
-        class MockShare:
-            def __init__(self):
-                self.shnum = 1
-
-            def add_or_renew_lease(elf, lease_info):
-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
-
-        share = MockShare()
-
         # Now begin the test.
 
         # XXX (0) ???  Fail unless something is not properly set-up?
hunk ./src/allmydata/test/test_backends.py 143
         # self.failIf(bsa)
 
         bs[0].remote_write(0, 'a')
-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
         spaceint = self.s.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
 
hunk ./src/allmydata/test/test_backends.py 161
         #self.failIf(mockrename.called, mockrename.call_args_list)
         #self.failIf(mockstat.called, mockstat.call_args_list)
 
+    def test_handle_incoming(self):
+        incomingset = self.s.backend.get_incoming('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, set())
+
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+
+        incomingset = self.s.backend.get_incoming('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, set((0,)))
+
+        bs[0].remote_close()
+        self.failUnlessReallyEqual(incomingset, set())
+
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 223
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
 
-
 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 271
         DASCore('teststoredir', expiration_policy)
 
         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
+
}
[jacp14
wilcoxjg@gmail.com**20110712061211
 Ignore-this: 57b86958eceeef1442b21cca14798a0f
] {
hunk ./src/allmydata/storage/backends/das/core.py 95
             # XXX I'd like to make this more specific. If there are no shares at all.
             return set()
 
-    def get_shares(self, storage_index):
+    def get_shares(self, storageindex):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/backends/das/core.py 97
-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
         try:
             for f in os.listdir(finalstoragedir):
                 if NUM_RE.match(f):
hunk ./src/allmydata/storage/backends/das/core.py 102
                     filename = os.path.join(finalstoragedir, f)
-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
+                    yield ImmutableShare(filename, storageindex, f)
         except OSError:
             # Commonly caused by there being no shares at all.
             pass
hunk ./src/allmydata/storage/backends/das/core.py 115
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
 
hunk ./src/allmydata/storage/backends/das/core.py 155
     LEASE_SIZE = struct.calcsize(">L32s32sL")
     sharetype = "immutable"
 
-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
         """ If max_size is not None then I won't allow more than
         max_size to be written to me. If create=True then max_size
         must not be None. """
hunk ./src/allmydata/storage/backends/das/core.py 160
         precondition((max_size is not None) or (not create), max_size, create)
+        self.storageindex = storageindex
         self._max_size = max_size
         self.incominghome = incominghome
         self.finalhome = finalhome
hunk ./src/allmydata/storage/backends/das/core.py 164
+        self.shnum = shnum
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/das/core.py 212
             # their children to know when they should do the rmdir. This
             # approach is simpler, but relies on os.rmdir refusing to delete
             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
+            #print "os.path.dirname(self.incominghome): "
+            #print os.path.dirname(self.incominghome)
             os.rmdir(os.path.dirname(self.incominghome))
             # we also delete the grandparent (prefix) directory, .../ab ,
             # again to avoid leaving directories lying around. This might
hunk ./src/allmydata/storage/immutable.py 93
     def __init__(self, ss, share):
         self.ss = ss
         self._share_file = share
-        self.storage_index = share.storage_index
+        self.storageindex = share.storageindex
         self.shnum = share.shnum
 
     def __repr__(self):
hunk ./src/allmydata/storage/immutable.py 98
         return "<%s %s %s>" % (self.__class__.__name__,
-                               base32.b2a_l(self.storage_index[:8], 60),
+                               base32.b2a_l(self.storageindex[:8], 60),
                                self.shnum)
 
     def remote_read(self, offset, length):
hunk ./src/allmydata/storage/immutable.py 110
 
     def remote_advise_corrupt_share(self, reason):
         return self.ss.remote_advise_corrupt_share("immutable",
-                                                   self.storage_index,
+                                                   self.storageindex,
                                                    self.shnum,
                                                    reason)
hunk ./src/allmydata/test/test_backends.py 20
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
 # with share data == 'a'.
-renew_secret  = 'x'*32
-cancel_secret = 'y'*32
-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+shareversionnumber = '\x00\x00\x00\x01'
+sharedatalength = '\x00\x00\x00\x01'
+numberofleases = '\x00\x00\x00\x01'
+shareinputdata = 'a'
+ownernumber = '\x00\x00\x00\x00'
+renewsecret  = 'x'*32
+cancelsecret = 'y'*32
+expirationtime = '\x00(\xde\x80'
+nextlease = ''
+containerdata = shareversionnumber + sharedatalength + numberofleases
+client_data = shareinputdata + ownernumber + renewsecret + \
+    cancelsecret + expirationtime + nextlease
+share_data = containerdata + client_data
+
 
 testnodeid = 'testnodeidxxxxxxxxxx'
 tempdir = 'teststoredir'
hunk ./src/allmydata/test/test_backends.py 52
 
 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
     def setUp(self):
-        self.s = StorageServer(testnodeid, backend=NullCore())
+        self.ss = StorageServer(testnodeid, backend=NullCore())
 
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 62
         """ Write a new share. """
 
         # Now begin the test.
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         bs[0].remote_write(0, 'a')
         self.failIf(mockisdir.called)
         self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 133
                 _assert(False, "The tester code doesn't recognize this case.")
 
         mockopen.side_effect = call_open
-        testbackend = DASCore(tempdir, expiration_policy)
-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
+        self.backend = DASCore(tempdir, expiration_policy)
+        self.ss = StorageServer(testnodeid, self.backend)
+        self.ssinf = StorageServer(testnodeid, self.backend)
 
     @mock.patch('time.time')
     def test_write_share(self, mocktime):
hunk ./src/allmydata/test/test_backends.py 142
         """ Write a new share. """
         # Now begin the test.
 
-        # XXX (0) ???  Fail unless something is not properly set-up?
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        mocktime.return_value = 0
+        # Inspect incoming and fail unless it's empty.
+        incomingset = self.ss.backend.get_incoming('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, set())
+
+        # Among other things, populate incoming with the sharenum: 0.
+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
hunk ./src/allmydata/test/test_backends.py 150
-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
+
+        # Attempt to create a second share writer with the same share.
+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
hunk ./src/allmydata/test/test_backends.py 156
-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
+        # Show that no sharewriter results from a remote_allocate_buckets
         # with the same si, until BucketWriter.remote_close() has been called.
hunk ./src/allmydata/test/test_backends.py 158
-        # self.failIf(bsa)
+        self.failIf(bsa)
 
hunk ./src/allmydata/test/test_backends.py 160
+        # Write 'a' to shnum 0. Only tested together with close and read.
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 162
-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
-        spaceint = self.s.allocated_size()
+
+        # Test allocated size.
+        spaceint = self.ss.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
 
         # XXX (3) Inspect final and fail unless there's nothing there.
hunk ./src/allmydata/test/test_backends.py 168
+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
         bs[0].remote_close()
         # XXX (4a) Inspect final and fail unless share 0 is there.
hunk ./src/allmydata/test/test_backends.py 171
+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
+        #contents = sharesinfinal[0].read_share_data(0,999)
+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
 
         # What happens when there's not enough space for the client's request?
hunk ./src/allmydata/test/test_backends.py 177
-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
 
         # Now test the allocated_size method.
         # self.failIf(mockexists.called, mockexists.call_args_list)
hunk ./src/allmydata/test/test_backends.py 185
         #self.failIf(mockrename.called, mockrename.call_args_list)
         #self.failIf(mockstat.called, mockstat.call_args_list)
 
-    def test_handle_incoming(self):
-        incomingset = self.s.backend.get_incoming('teststorage_index')
-        self.failUnlessReallyEqual(incomingset, set())
-
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
-
-        incomingset = self.s.backend.get_incoming('teststorage_index')
-        self.failUnlessReallyEqual(incomingset, set((0,)))
-
-        bs[0].remote_close()
-        self.failUnlessReallyEqual(incomingset, set())
-
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 208
             self.failUnless('r' in mode, mode)
             self.failUnless('b' in mode, mode)
 
-            return StringIO(share_file_data)
+            return StringIO(share_data)
         mockopen.side_effect = call_open
 
hunk ./src/allmydata/test/test_backends.py 211
-        datalen = len(share_file_data)
+        datalen = len(share_data)
         def call_getsize(fname):
             self.failUnlessReallyEqual(fname, sharefname)
             return datalen
hunk ./src/allmydata/test/test_backends.py 223
         mockexists.side_effect = call_exists
 
         # Now begin the test.
-        bs = self.s.remote_get_buckets('teststorage_index')
+        bs = self.ss.remote_get_buckets('teststorage_index')
 
         self.failUnlessEqual(len(bs), 1)
hunk ./src/allmydata/test/test_backends.py 226
-        b = bs[0]
+        b = bs['0']
         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
hunk ./src/allmydata/test/test_backends.py 228
-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
         # If you try to read past the end you get the as much data as is there.
hunk ./src/allmydata/test/test_backends.py 230
-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
         # If you start reading past the end of the file you get the empty string.
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
}
[jacp14 or so
wilcoxjg@gmail.com**20110713060346
 Ignore-this: 7026810f60879d65b525d450e43ff87a
] {
hunk ./src/allmydata/storage/backends/das/core.py 102
             for f in os.listdir(finalstoragedir):
                 if NUM_RE.match(f):
                     filename = os.path.join(finalstoragedir, f)
-                    yield ImmutableShare(filename, storageindex, f)
+                    yield ImmutableShare(filename, storageindex, int(f))
         except OSError:
             # Commonly caused by there being no shares at all.
             pass
hunk ./src/allmydata/storage/backends/null/core.py 25
     def set_storage_server(self, ss):
         self.ss = ss
 
+    def get_incoming(self, storageindex):
+        return set()
+
 class ImmutableShare:
     sharetype = "immutable"
 
hunk ./src/allmydata/storage/immutable.py 19
 
     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
         self.ss = ss
-        self._max_size = max_size # don't allow the client to write more than this
+        self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
+
         self._canary = canary
         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
         self.closed = False
hunk ./src/allmydata/test/test_backends.py 135
         mockopen.side_effect = call_open
         self.backend = DASCore(tempdir, expiration_policy)
         self.ss = StorageServer(testnodeid, self.backend)
-        self.ssinf = StorageServer(testnodeid, self.backend)
+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
 
     @mock.patch('time.time')
     def test_write_share(self, mocktime):
hunk ./src/allmydata/test/test_backends.py 161
         # with the same si, until BucketWriter.remote_close() has been called.
         self.failIf(bsa)
 
-        # Write 'a' to shnum 0. Only tested together with close and read.
-        bs[0].remote_write(0, 'a')
-
         # Test allocated size.
         spaceint = self.ss.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
hunk ./src/allmydata/test/test_backends.py 165
 
-        # XXX (3) Inspect final and fail unless there's nothing there.
+        # Write 'a' to shnum 0. Only tested together with close and read.
+        bs[0].remote_write(0, 'a')
+
+        # Preclose: Inspect final, failUnless nothing there.
         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
         bs[0].remote_close()
hunk ./src/allmydata/test/test_backends.py 171
-        # XXX (4a) Inspect final and fail unless share 0 is there.
-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
-        #contents = sharesinfinal[0].read_share_data(0,999)
-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
 
hunk ./src/allmydata/test/test_backends.py 172
-        # What happens when there's not enough space for the client's request?
-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+        # Postclose: (Omnibus) failUnless written data is in final.
+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
+        contents = sharesinfinal[0].read_share_data(0,73)
+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
 
hunk ./src/allmydata/test/test_backends.py 177
-        # Now test the allocated_size method.
-        # self.failIf(mockexists.called, mockexists.call_args_list)
-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
-        #self.failIf(mockrename.called, mockrename.call_args_list)
-        #self.failIf(mockstat.called, mockstat.call_args_list)
+        # Cover interior of for share in get_shares loop.
+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+
+    @mock.patch('time.time')
+    @mock.patch('allmydata.util.fileutil.get_available_space')
+    def test_out_of_space(self, mockget_available_space, mocktime):
+        mocktime.return_value = 0
+
+        def call_get_available_space(dir, reserve):
+            return 0
+
+        mockget_available_space.side_effect = call_get_available_space
+
+
+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
hunk ./src/allmydata/test/test_backends.py 234
         bs = self.ss.remote_get_buckets('teststorage_index')
 
         self.failUnlessEqual(len(bs), 1)
-        b = bs['0']
+        b = bs[0]
         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
         # If you try to read past the end you get the as much data as is there.
}

Context:

[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
 
 provide status overlap info on the webapi t=json output, add decode/decrypt
 rate tooltips, add zoomin/zoomout buttons
]
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
]
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
 
 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
]
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
]
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
]
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
 Ignore-this: abb864427a1b91bd10d5132b4589fd90
]
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
 Ignore-this: c63e23146c39195de52fb17c7c49b2da
]
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
]
[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
]
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
]
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
]
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
]
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
]
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
]
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
]
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
]
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
]
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
]
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
]
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
]
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
]
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
]
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
]
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
]
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
]
4595[corrected "k must never be smaller than N" to "k must never be greater than N"
4596secorp@allmydata.org**20110425010308
4597 Ignore-this: 233129505d6c70860087f22541805eac
4598]
4599[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
4600david-sarah@jacaranda.org**20110411190738
4601 Ignore-this: 7847d26bc117c328c679f08a7baee519
4602]
4603[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
4604david-sarah@jacaranda.org**20110410155844
4605 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
4606]
4607[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
4608david-sarah@jacaranda.org**20110410155705
4609 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
4610]
4611[remove unused variable detected by pyflakes
4612zooko@zooko.com**20110407172231
4613 Ignore-this: 7344652d5e0720af822070d91f03daf9
4614]
4615[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
4616david-sarah@jacaranda.org**20110401202750
4617 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
4618]
4619[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
4620Brian Warner <warner@lothar.com>**20110325232511
4621 Ignore-this: d5307faa6900f143193bfbe14e0f01a
4622]
4623[control.py: remove all uses of s.get_serverid()
4624warner@lothar.com**20110227011203
4625 Ignore-this: f80a787953bd7fa3d40e828bde00e855
4626]
4627[web: remove some uses of s.get_serverid(), not all
4628warner@lothar.com**20110227011159
4629 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
4630]
4631[immutable/downloader/fetcher.py: remove all get_serverid() calls
4632warner@lothar.com**20110227011156
4633 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
4634]
4635[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
4636warner@lothar.com**20110227011153
4637 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
4638 
4639 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
4640 _shares_from_server dict was being popped incorrectly (using shnum as the
4641 index instead of serverid). I'm still thinking through the consequences of
4642 this bug. It was probably benign and really hard to detect. I think it would
4643 cause us to incorrectly believe that we're pulling too many shares from a
4644 server, and thus prefer a different server rather than asking for a second
4645 share from the first server. The diversity code is intended to spread out the
4646 number of shares simultaneously being requested from each server, but with
4647 this bug, it might be spreading out the total number of shares requested at
4648 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
4649 segment, so the effect doesn't last very long).
4650]
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
]
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
]
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
]
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
]
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
]
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
]
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
]
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
]
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
]
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
]
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
]
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
]
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
]
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
]
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
]
[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
]
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
]
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
]
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
]
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
]
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
]
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
]
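The direction of the StorageFarmBroker refactoring above can be sketched like this. These are hypothetical stand-in classes, not the real tahoe-lafs implementations: the broker hands out opaque Server objects and callers defer extracting a serverid or rref until the last moment, instead of passing (peerid, rref) tuples around.

```python
class Server:
    """Stand-in for the IServer objects the broker now hands out."""

    def __init__(self, serverid, rref):
        self._serverid = serverid
        self._rref = rref  # remote reference; extracted only when needed

    def get_serverid(self):
        return self._serverid

    def get_rref(self):
        return self._rref


class StorageFarmBroker:
    """Stand-in broker showing the get_known/get_connected split."""

    def __init__(self):
        self._servers = {}       # serverid -> Server
        self._connected = set()  # serverids with a live connection

    def add_server(self, server, connected=True):
        sid = server.get_serverid()
        self._servers[sid] = server
        if connected:
            self._connected.add(sid)

    def get_known_servers(self):
        # every server we have ever heard about
        return frozenset(self._servers.values())

    def get_connected_servers(self):
        # old get_all_servers() returned (peerid, rref) tuples;
        # the new API returns the Server objects themselves
        return frozenset(s for s in self._servers.values()
                         if s.get_serverid() in self._connected)
```

Keeping the Server object intact through the call chain means a later reconnection (a new rref) or a serverid-format change touches only this class, not every caller.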
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101]
Patch bundle hash:
5459d3f12afb68c7d06e49046e1ef08460a5746d