Ticket #999: jacp14.darcs.patch

File jacp14.darcs.patch, 200.7 KB (added by arch_o_median, at 2011-07-12T06:11:10Z)
Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.

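As an aside, the mocking technique these tests rely on can be sketched in isolation (a hedged illustration, not code from this patch: `read_config` and the config contents are invented for the example, and Python 3's `unittest.mock` is used where the patch itself targets Python 2's standalone `mock` package and `__builtin__.open`):

```python
from io import StringIO
from unittest import mock

def read_config(path):
    # Stand-in for code under test that would normally hit the real filesystem.
    with open(path) as f:
        return f.read()

# Patch the builtin open() so the code under test never touches a real file.
with mock.patch('builtins.open') as mockopen:
    mockopen.return_value.__enter__.return_value = StringIO('enabled = true')
    result = read_config('/no/such/file')  # succeeds despite the bogus path
```

Because `open` is replaced for the duration of the `with` block, the test controls exactly what every "file" contains and can assert that only expected paths are opened.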
Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.

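The null-backend idea described above can be sketched as follows (an illustrative reconstruction assuming the backend interface introduced later in this patch; method names follow the patch's `NullBackend`, but this version is simplified and framework-free):

```python
class NullBucketWriter(object):
    """A writer that silently discards everything it is given."""
    def remote_write(self, offset, data):
        return  # data is thrown away, so no disk space is ever consumed

class NullBackend(object):
    """A mock-like backend: holds no shares and reports no space limit."""
    def get_available_space(self):
        return None  # None means "unknown/unlimited" to the storage server

    def get_bucket_shares(self, storage_index):
        return set()  # never holds any shares

    def make_bucket_writer(self, storage_index, shnum, max_space, lease_info, canary):
        return NullBucketWriter()

backend = NullBackend()
bw = backend.make_bucket_writer('si', 0, 2**32, None, None)
bw.remote_write(0, 'x' * 1024)  # accepts any amount of data without any I/O
```

Because it never touches the filesystem and never runs out of room, it lets server logic (allocation, accounting) be exercised under "unlimited space" conditions.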
Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

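The incoming/final distinction being tested above can be sketched like this (a hypothetical standalone illustration, not the patch's code: in Tahoe-LAFS a share is written under shares/incoming/ and only moved to its final location once the upload completes, so a partial share is never visible at the final path):

```python
import os
import shutil
import tempfile

def write_share(storedir, si_dir, shnum, data):
    # While being uploaded, the share lives only under shares/incoming/.
    incoming = os.path.join(storedir, 'shares', 'incoming', si_dir, str(shnum))
    final = os.path.join(storedir, 'shares', si_dir, str(shnum))
    os.makedirs(os.path.dirname(incoming), exist_ok=True)
    with open(incoming, 'wb') as f:
        f.write(data)
    # On completion, the share is moved to its final, reader-visible location.
    os.makedirs(os.path.dirname(final), exist_ok=True)
    shutil.move(incoming, final)
    return final

storedir = tempfile.mkdtemp()
path = write_share(storedir, 'orsxg5dt', 0, b'a')
```

"Correct behavior with regard to incoming and final" then means: readers only ever see shares at the final path, and an in-progress share is reported from incoming/ without being offered for download.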
Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests that writing a share produces the expected share file contents. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
697-from allmydata.storage.server import StorageServer
698+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
699 
700 # The following share file contents was generated with
701 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
702hunk ./src/allmydata/test/test_backends.py 21
703 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
704 
705 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
706+    @mock.patch('time.time')
707+    @mock.patch('os.mkdir')
708+    @mock.patch('__builtin__.open')
709+    @mock.patch('os.listdir')
710+    @mock.patch('os.path.isdir')
711+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
712+        """ This tests whether a server instance can be constructed
713+        with a null backend. The server fails the test if it tries
714+        to read from or write to the file system. """
715+
716+        # Now begin the test.
717+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
718+
719+        self.failIf(mockisdir.called)
720+        self.failIf(mocklistdir.called)
721+        self.failIf(mockopen.called)
722+        self.failIf(mockmkdir.called)
723+
724+        # You passed!
725+
726+    @mock.patch('time.time')
727+    @mock.patch('os.mkdir')
728     @mock.patch('__builtin__.open')
729hunk ./src/allmydata/test/test_backends.py 44
730-    def test_create_server(self, mockopen):
731-        """ This tests whether a server instance can be constructed. """
732+    @mock.patch('os.listdir')
733+    @mock.patch('os.path.isdir')
734+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
735+        """ This tests whether a server instance can be constructed
736+        with a filesystem backend. To pass the test, it has to use the
737+        filesystem in only the prescribed ways. """
738 
739         def call_open(fname, mode):
740             if fname == 'testdir/bucket_counter.state':
741hunk ./src/allmydata/test/test_backends.py 58
742                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
743             elif fname == 'testdir/lease_checker.history':
744                 return StringIO()
745+            else:
746+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
747         mockopen.side_effect = call_open
748 
749         # Now begin the test.
750hunk ./src/allmydata/test/test_backends.py 63
751-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
752+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
753+
754+        self.failIf(mockisdir.called)
755+        self.failIf(mocklistdir.called)
756+        self.failIf(mockopen.called)
757+        self.failIf(mockmkdir.called)
758+        self.failIf(mocktime.called)
759 
760         # You passed!
761 
762hunk ./src/allmydata/test/test_backends.py 73
763-class TestServer(unittest.TestCase, ReallyEqualMixin):
764+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
765+    def setUp(self):
766+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
767+
768+    @mock.patch('os.mkdir')
769+    @mock.patch('__builtin__.open')
770+    @mock.patch('os.listdir')
771+    @mock.patch('os.path.isdir')
772+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
773+        """ Write a new share. """
774+
775+        # Now begin the test.
776+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
777+        bs[0].remote_write(0, 'a')
778+        self.failIf(mockisdir.called)
779+        self.failIf(mocklistdir.called)
780+        self.failIf(mockopen.called)
781+        self.failIf(mockmkdir.called)
782+
783+    @mock.patch('os.path.exists')
784+    @mock.patch('os.path.getsize')
785+    @mock.patch('__builtin__.open')
786+    @mock.patch('os.listdir')
787+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
788+        """ With a null backend, reading buckets for any storage index
789+        returns no shares and never touches the filesystem. """
794+
795+        # Now begin the test.
796+        bs = self.s.remote_get_buckets('teststorage_index')
797+
798+        self.failUnlessEqual(len(bs), 0)
799+        self.failIf(mocklistdir.called)
800+        self.failIf(mockopen.called)
801+        self.failIf(mockgetsize.called)
802+        self.failIf(mockexists.called)
803+
804+
805+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
806     @mock.patch('__builtin__.open')
807     def setUp(self, mockopen):
808         def call_open(fname, mode):
809hunk ./src/allmydata/test/test_backends.py 126
810                 return StringIO()
811         mockopen.side_effect = call_open
812 
813-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
814-
815+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
816 
817     @mock.patch('time.time')
818     @mock.patch('os.mkdir')
819hunk ./src/allmydata/test/test_backends.py 134
820     @mock.patch('os.listdir')
821     @mock.patch('os.path.isdir')
822     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
823-        """Handle a report of corruption."""
824+        """ Write a new share. """
825 
826         def call_listdir(dirname):
827             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
828hunk ./src/allmydata/test/test_backends.py 173
829         mockopen.side_effect = call_open
830         # Now begin the test.
831         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
832-        print bs
833         bs[0].remote_write(0, 'a')
834         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
835 
836hunk ./src/allmydata/test/test_backends.py 176
837-
838     @mock.patch('os.path.exists')
839     @mock.patch('os.path.getsize')
840     @mock.patch('__builtin__.open')
841hunk ./src/allmydata/test/test_backends.py 218
842 
843         self.failUnlessEqual(len(bs), 1)
844         b = bs[0]
845+        # This read should match exactly; the next two cases cover reads whose behavior is less clear-cut.
846         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
847         # If you try to read past the end you get as much data as is there.
848         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
849hunk ./src/allmydata/test/test_backends.py 224
850         # If you start reading past the end of the file you get the empty string.
851         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
852+
853+
854}
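The no-filesystem assertions above all follow the same mock.patch pattern. A minimal, self-contained sketch of that pattern, using a stand-in server class rather than the real StorageServer (the patch itself uses the standalone `mock` package and Python 2; this sketch uses `unittest.mock`):

```python
import unittest
from unittest import mock  # the patch itself uses the standalone 'mock' package

class FakeServer:
    """Illustrative stand-in for StorageServer with a null backend."""
    def __init__(self, backend):
        self.backend = backend  # construction must not touch the filesystem

class TestNoFilesystemAccess(unittest.TestCase):
    # Decorators apply bottom-up, so the bottom-most patch is the first argument.
    @mock.patch('os.mkdir')
    @mock.patch('os.listdir')
    @mock.patch('os.path.isdir')
    def test_construction_touches_no_files(self, mockisdir, mocklistdir, mockmkdir):
        s = FakeServer(backend=object())
        self.assertIsNot(s.backend, None)
        # If construction had hit the filesystem, these mocks would record it.
        self.assertFalse(mockisdir.called)
        self.assertFalse(mocklistdir.called)
        self.assertFalse(mockmkdir.called)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNoFilesystemAccess)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```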
855[a temp patch used as a snapshot
856wilcoxjg@gmail.com**20110626052732
857 Ignore-this: 95f05e314eaec870afa04c76d979aa44
858] {
859hunk ./docs/configuration.rst 637
860   [storage]
861   enabled = True
862   readonly = True
863-  sizelimit = 10000000000
864 
865 
866   [helper]
867hunk ./docs/garbage-collection.rst 16
868 
869 When a file or directory in the virtual filesystem is no longer referenced,
870 the space that its shares occupied on each storage server can be freed,
871-making room for other shares. Tahoe currently uses a garbage collection
872+making room for other shares. Tahoe uses a garbage collection
873 ("GC") mechanism to implement this space-reclamation process. Each share has
874 one or more "leases", which are managed by clients who want the
875 file/directory to be retained. The storage server accepts each share for a
876hunk ./docs/garbage-collection.rst 34
877 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
878 If lease renewal occurs quickly and with 100% reliability, then any renewal
879 time that is shorter than the lease duration will suffice, but a larger ratio
880-of duration-over-renewal-time will be more robust in the face of occasional
881+of lease duration to renewal time will be more robust in the face of occasional
882 delays or failures.
883 
884 The current recommended values for a small Tahoe grid are to renew the leases
885replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
886hunk ./src/allmydata/client.py 260
887             sharetypes.append("mutable")
888         expiration_sharetypes = tuple(sharetypes)
889 
890+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
891+            xyz
892+        xyz
893         ss = StorageServer(storedir, self.nodeid,
894                            reserved_space=reserved,
895                            discard_storage=discard,
896hunk ./src/allmydata/storage/crawler.py 234
897         f = open(tmpfile, "wb")
898         pickle.dump(self.state, f)
899         f.close()
900-        fileutil.move_into_place(tmpfile, self.statefile)
901+        fileutil.move_into_place(tmpfile, self.statefname)
902 
903     def startService(self):
904         # arrange things to look like we were just sleeping, so
905}
906[snapshot of progress on backend implementation (not suitable for trunk)
907wilcoxjg@gmail.com**20110626053244
908 Ignore-this: 50c764af791c2b99ada8289546806a0a
909] {
910adddir ./src/allmydata/storage/backends
911adddir ./src/allmydata/storage/backends/das
912move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
913adddir ./src/allmydata/storage/backends/null
914hunk ./src/allmydata/interfaces.py 270
915         store that on disk.
916         """
917 
918+class IStorageBackend(Interface):
919+    """
920+    Objects of this kind live on the server side and are used by the
921+    storage server object.
922+    """
923+    def get_available_space(self, reserved_space):
924+        """ Returns available space for share storage in bytes, or
925+        None if this information is not available or if the available
926+        space is unlimited.
927+
928+        If the backend is configured for read-only mode then this will
929+        return 0.
930+
931+        reserved_space is how many bytes to subtract from the answer:
932+        pass the number of bytes you would like to leave unused on this
933+        filesystem. """
934+
935+    def get_bucket_shares(self):
936+        """XXX"""
937+
938+    def get_share(self):
939+        """XXX"""
940+
941+    def make_bucket_writer(self):
942+        """XXX"""
943+
944+class IStorageBackendShare(Interface):
945+    """
946+    This object may hold as much as all of the share data.  It is intended
947+    for lazy evaluation such that in many use cases substantially less than
948+    all of the share data will be accessed.
949+    """
950+    def is_complete(self):
951+        """
952+        Returns the share state, or None if the share does not exist.
953+        """
954+
955 class IStorageBucketWriter(Interface):
956     """
957     Objects of this kind live on the client side.
958hunk ./src/allmydata/interfaces.py 2492
959 
960 class EmptyPathnameComponentError(Exception):
961     """The webapi disallows empty pathname components."""
962+
963+class IShareStore(Interface):
964+    pass
965+
966addfile ./src/allmydata/storage/backends/__init__.py
967addfile ./src/allmydata/storage/backends/das/__init__.py
968addfile ./src/allmydata/storage/backends/das/core.py
969hunk ./src/allmydata/storage/backends/das/core.py 1
970+from allmydata.interfaces import IStorageBackend
971+from allmydata.storage.backends.base import Backend
972+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
973+from allmydata.util.assertutil import precondition
974+
975+import os, re, weakref, struct, time
976+
977+from foolscap.api import Referenceable
978+from twisted.application import service
979+
980+from zope.interface import implements
981+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
982+from allmydata.util import fileutil, idlib, log, time_format
983+import allmydata # for __full_version__
984+
985+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
986+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
987+from allmydata.storage.lease import LeaseInfo
988+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
989+     create_mutable_sharefile
990+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
991+from allmydata.storage.crawler import FSBucketCountingCrawler
992+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
993+
995+
996+class DASCore(Backend):
997+    implements(IStorageBackend)
998+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
999+        Backend.__init__(self)
1000+
1001+        self._setup_storage(storedir, readonly, reserved_space)
1002+        self._setup_corruption_advisory()
1003+        self._setup_bucket_counter()
1004+        self._setup_lease_checkerf(expiration_policy)
1005+
1006+    def _setup_storage(self, storedir, readonly, reserved_space):
1007+        self.storedir = storedir
1008+        self.readonly = readonly
1009+        self.reserved_space = int(reserved_space)
1010+        if self.reserved_space:
1011+            if self.get_available_space() is None:
1012+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1013+                        umid="0wZ27w", level=log.UNUSUAL)
1014+
1015+        self.sharedir = os.path.join(self.storedir, "shares")
1016+        fileutil.make_dirs(self.sharedir)
1017+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1018+        self._clean_incomplete()
1019+
1020+    def _clean_incomplete(self):
1021+        fileutil.rm_dir(self.incomingdir)
1022+        fileutil.make_dirs(self.incomingdir)
1023+
1024+    def _setup_corruption_advisory(self):
1025+        # we don't actually create the corruption-advisory dir until necessary
1026+        self.corruption_advisory_dir = os.path.join(self.storedir,
1027+                                                    "corruption-advisories")
1028+
1029+    def _setup_bucket_counter(self):
1030+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1031+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1032+        self.bucket_counter.setServiceParent(self)
1033+
1034+    def _setup_lease_checkerf(self, expiration_policy):
1035+        statefile = os.path.join(self.storedir, "lease_checker.state")
1036+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1037+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1038+        self.lease_checker.setServiceParent(self)
1039+
1040+    def get_available_space(self):
1041+        if self.readonly:
1042+            return 0
1043+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1044+
1045+    def get_shares(self, storage_index):
1046+        """Yield the FSBShare objects that correspond to the passed storage_index."""
1047+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1048+        try:
1049+            for f in os.listdir(finalstoragedir):
1050+                if NUM_RE.match(f):
1051+                    filename = os.path.join(finalstoragedir, f)
1052+                    yield FSBShare(filename, int(f))
1053+        except OSError:
1054+            # Commonly caused by there being no buckets at all.
1055+            pass
1056+       
1057+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1058+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1059+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1060+        return bw
1061+       
1062+
1063+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1064+# and share data. The share data is accessed by RIBucketWriter.write and
1065+# RIBucketReader.read . The lease information is not accessible through these
1066+# interfaces.
1067+
1068+# The share file has the following layout:
1069+#  0x00: share file version number, four bytes, current version is 1
1070+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1071+#  0x08: number of leases, four bytes big-endian
1072+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1073+#  A+0x0c = B: first lease. Lease format is:
1074+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1075+#   B+0x04: renew secret, 32 bytes (SHA256)
1076+#   B+0x24: cancel secret, 32 bytes (SHA256)
1077+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1078+#   B+0x48: next lease, or end of record
1079+
1080+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1081+# but it is still filled in by storage servers in case the storage server
1082+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1083+# share file is moved from one storage server to another. The value stored in
1084+# this field is truncated, so if the actual share data length is >= 2**32,
1085+# then the value stored in this field will be the actual share data length
1086+# modulo 2**32.
1087+
1088+class ImmutableShare:
1089+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1090+    sharetype = "immutable"
1091+
1092+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1093+        """ If max_size is not None then I won't allow more than
1094+        max_size to be written to me. If create=True then max_size
1095+        must not be None. """
1096+        precondition((max_size is not None) or (not create), max_size, create)
1097+        self.shnum = shnum
1098+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1099+        self._max_size = max_size
1100+        if create:
1101+            # touch the file, so later callers will see that we're working on
1102+            # it. Also construct the metadata.
1103+            assert not os.path.exists(self.fname)
1104+            fileutil.make_dirs(os.path.dirname(self.fname))
1105+            f = open(self.fname, 'wb')
1106+            # The second field -- the four-byte share data length -- is no
1107+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1108+            # there in case someone downgrades a storage server from >=
1109+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1110+            # server to another, etc. We do saturation -- a share data length
1111+            # larger than 2**32-1 (what can fit into the field) is marked as
1112+            # the largest length that can fit into the field. That way, even
1113+            # if this does happen, the old < v1.3.0 server will still allow
1114+            # clients to read the first part of the share.
1115+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1116+            f.close()
1117+            self._lease_offset = max_size + 0x0c
1118+            self._num_leases = 0
1119+        else:
1120+            f = open(self.fname, 'rb')
1121+            filesize = os.path.getsize(self.fname)
1122+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1123+            f.close()
1124+            if version != 1:
1125+                msg = "sharefile %s had version %d but we wanted 1" % \
1126+                      (self.fname, version)
1127+                raise UnknownImmutableContainerVersionError(msg)
1128+            self._num_leases = num_leases
1129+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1130+        self._data_offset = 0xc
1131+
1132+    def unlink(self):
1133+        os.unlink(self.fname)
1134+
1135+    def read_share_data(self, offset, length):
1136+        precondition(offset >= 0)
1137+        # Reads beyond the end of the data are truncated. Reads that start
1138+        # beyond the end of the data return an empty string.
1139+        seekpos = self._data_offset+offset
1140+        fsize = os.path.getsize(self.fname)
1141+        actuallength = max(0, min(length, fsize-seekpos))
1142+        if actuallength == 0:
1143+            return ""
1144+        f = open(self.fname, 'rb')
1145+        f.seek(seekpos)
1146+        return f.read(actuallength)
1147+
1148+    def write_share_data(self, offset, data):
1149+        length = len(data)
1150+        precondition(offset >= 0, offset)
1151+        if self._max_size is not None and offset+length > self._max_size:
1152+            raise DataTooLargeError(self._max_size, offset, length)
1153+        f = open(self.fname, 'rb+')
1154+        real_offset = self._data_offset+offset
1155+        f.seek(real_offset)
1156+        assert f.tell() == real_offset
1157+        f.write(data)
1158+        f.close()
1159+
1160+    def _write_lease_record(self, f, lease_number, lease_info):
1161+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1162+        f.seek(offset)
1163+        assert f.tell() == offset
1164+        f.write(lease_info.to_immutable_data())
1165+
1166+    def _read_num_leases(self, f):
1167+        f.seek(0x08)
1168+        (num_leases,) = struct.unpack(">L", f.read(4))
1169+        return num_leases
1170+
1171+    def _write_num_leases(self, f, num_leases):
1172+        f.seek(0x08)
1173+        f.write(struct.pack(">L", num_leases))
1174+
1175+    def _truncate_leases(self, f, num_leases):
1176+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1177+
1178+    def get_leases(self):
1179+        """Yields a LeaseInfo instance for all leases."""
1180+        f = open(self.fname, 'rb')
1181+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1182+        f.seek(self._lease_offset)
1183+        for i in range(num_leases):
1184+            data = f.read(self.LEASE_SIZE)
1185+            if data:
1186+                yield LeaseInfo().from_immutable_data(data)
1187+
1188+    def add_lease(self, lease_info):
1189+        f = open(self.fname, 'rb+')
1190+        num_leases = self._read_num_leases(f)
1191+        self._write_lease_record(f, num_leases, lease_info)
1192+        self._write_num_leases(f, num_leases+1)
1193+        f.close()
1194+
1195+    def renew_lease(self, renew_secret, new_expire_time):
1196+        for i,lease in enumerate(self.get_leases()):
1197+            if constant_time_compare(lease.renew_secret, renew_secret):
1198+                # yup. See if we need to update the owner time.
1199+                if new_expire_time > lease.expiration_time:
1200+                    # yes
1201+                    lease.expiration_time = new_expire_time
1202+                    f = open(self.fname, 'rb+')
1203+                    self._write_lease_record(f, i, lease)
1204+                    f.close()
1205+                return
1206+        raise IndexError("unable to renew non-existent lease")
1207+
1208+    def add_or_renew_lease(self, lease_info):
1209+        try:
1210+            self.renew_lease(lease_info.renew_secret,
1211+                             lease_info.expiration_time)
1212+        except IndexError:
1213+            self.add_lease(lease_info)
1214+
1215+
1216+    def cancel_lease(self, cancel_secret):
1217+        """Remove a lease with the given cancel_secret. If the last lease is
1218+        cancelled, the file will be removed. Return the number of bytes that
1219+        were freed (by truncating the list of leases, and possibly by
1220+        deleting the file. Raise IndexError if there was no lease with the
1221+        given cancel_secret.
1222+        """
1223+
1224+        leases = list(self.get_leases())
1225+        num_leases_removed = 0
1226+        for i,lease in enumerate(leases):
1227+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1228+                leases[i] = None
1229+                num_leases_removed += 1
1230+        if not num_leases_removed:
1231+            raise IndexError("unable to find matching lease to cancel")
1232+        if num_leases_removed:
1233+            # pack and write out the remaining leases. We write these out in
1234+            # the same order as they were added, so that if we crash while
1235+            # doing this, we won't lose any non-cancelled leases.
1236+            leases = [l for l in leases if l] # remove the cancelled leases
1237+            f = open(self.fname, 'rb+')
1238+            for i,lease in enumerate(leases):
1239+                self._write_lease_record(f, i, lease)
1240+            self._write_num_leases(f, len(leases))
1241+            self._truncate_leases(f, len(leases))
1242+            f.close()
1243+        space_freed = self.LEASE_SIZE * num_leases_removed
1244+        if not len(leases):
1245+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1246+            self.unlink()
1247+        return space_freed
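The header described by the layout comment above (version, saturated data length, lease count, then 0x48-byte lease records) can be packed and parsed with `struct`; the helper names below are illustrative, not part of the patch:

```python
import struct

# Three big-endian 4-byte fields, per the layout comment: version (1),
# share data length (saturated at 2**32-1), number of leases.
HEADER = ">LLL"
LEASE_SIZE = struct.calcsize(">L32s32sL")  # owner + renew + cancel + expiry

def pack_header(data_length):
    # Saturate rather than wrap, as described in Footnote 1 above.
    return struct.pack(HEADER, 1, min(2**32 - 1, data_length), 0)

def parse_header(blob):
    version, length, num_leases = struct.unpack(HEADER, blob[:0xc])
    if version != 1:
        raise ValueError("sharefile had version %d but we wanted 1" % version)
    return {"version": version, "data_length": length, "num_leases": num_leases}
```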
1248hunk ./src/allmydata/storage/backends/das/expirer.py 2
1249 import time, os, pickle, struct
1250-from allmydata.storage.crawler import ShareCrawler
1251-from allmydata.storage.shares import get_share_file
1252+from allmydata.storage.crawler import FSShareCrawler
1253 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1254      UnknownImmutableContainerVersionError
1255 from twisted.python import log as twlog
1256hunk ./src/allmydata/storage/backends/das/expirer.py 7
1257 
1258-class LeaseCheckingCrawler(ShareCrawler):
1259+class FSLeaseCheckingCrawler(FSShareCrawler):
1260     """I examine the leases on all shares, determining which are still valid
1261     and which have expired. I can remove the expired leases (if so
1262     configured), and the share will be deleted when the last lease is
1263hunk ./src/allmydata/storage/backends/das/expirer.py 50
1264     slow_start = 360 # wait 6 minutes after startup
1265     minimum_cycle_time = 12*60*60 # not more than twice per day
1266 
1267-    def __init__(self, statefile, historyfile,
1268-                 expiration_enabled, mode,
1269-                 override_lease_duration, # used if expiration_mode=="age"
1270-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1271-                 sharetypes):
1272+    def __init__(self, statefile, historyfile, expiration_policy):
1273         self.historyfile = historyfile
1274hunk ./src/allmydata/storage/backends/das/expirer.py 52
1275-        self.expiration_enabled = expiration_enabled
1276-        self.mode = mode
1277+        self.expiration_enabled = expiration_policy['enabled']
1278+        self.mode = expiration_policy['mode']
1279         self.override_lease_duration = None
1280         self.cutoff_date = None
1281         if self.mode == "age":
1282hunk ./src/allmydata/storage/backends/das/expirer.py 57
1283-            assert isinstance(override_lease_duration, (int, type(None)))
1284-            self.override_lease_duration = override_lease_duration # seconds
1285+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1286+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1287         elif self.mode == "cutoff-date":
1288hunk ./src/allmydata/storage/backends/das/expirer.py 60
1289-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1290+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1291             assert expiration_policy['cutoff_date'] is not None
1292hunk ./src/allmydata/storage/backends/das/expirer.py 62
1293-            self.cutoff_date = cutoff_date
1294+            self.cutoff_date = expiration_policy['cutoff_date']
1295         else:
1296hunk ./src/allmydata/storage/backends/das/expirer.py 64
1297-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1298-        self.sharetypes_to_expire = sharetypes
1299-        ShareCrawler.__init__(self, statefile)
1300+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1301+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1302+        FSShareCrawler.__init__(self, statefile)
1303 
1304     def add_initial_state(self):
1305         # we fill ["cycle-to-date"] here (even though they will be reset in
1306hunk ./src/allmydata/storage/backends/das/expirer.py 156
1307 
1308     def process_share(self, sharefilename):
1309         # first, find out what kind of a share it is
1310-        sf = get_share_file(sharefilename)
1311+        f = open(sharefilename, "rb")
1312+        prefix = f.read(32)
1313+        f.close()
1314+        if prefix == MutableShareFile.MAGIC:
1315+            sf = MutableShareFile(sharefilename)
1316+        else:
1317+            # otherwise assume it's immutable
1318+            sf = FSBShare(sharefilename)
1319         sharetype = sf.sharetype
1320         now = time.time()
1321         s = self.stat(sharefilename)
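The refactor above replaces five positional constructor arguments with a single `expiration_policy` dict. A hypothetical example of such a dict, with a small validator mirroring the mode checks in `__init__` (the keys match those the crawler reads; the helper itself is illustrative):

```python
def validate_policy(policy):
    # Mirrors the 'age' / 'cutoff-date' checks in FSLeaseCheckingCrawler.__init__.
    mode = policy['mode']
    if mode == 'age':
        assert isinstance(policy['override_lease_duration'], (int, type(None)))
    elif mode == 'cutoff-date':
        assert isinstance(policy['cutoff_date'], int)  # seconds-since-epoch
    else:
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
    return policy

policy = validate_policy({
    'enabled': True,
    'mode': 'age',
    'override_lease_duration': 31 * 24 * 60 * 60,  # 31 days, in seconds
    'cutoff_date': None,
    'sharetypes': ('immutable', 'mutable'),
})
```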
1322addfile ./src/allmydata/storage/backends/null/__init__.py
1323addfile ./src/allmydata/storage/backends/null/core.py
1324hunk ./src/allmydata/storage/backends/null/core.py 1
1325+from allmydata.storage.backends.base import Backend
1326+
1327+class NullCore(Backend):
1328+    def __init__(self):
1329+        Backend.__init__(self)
1330+
1331+    def get_available_space(self):
1332+        return None
1333+
1334+    def get_shares(self, storage_index):
1335+        return set()
1336+
1337+    def get_share(self, storage_index, sharenum):
1338+        return None
1339+
1340+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1341+        return NullBucketWriter()
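The null backend shows the shape of the plugin contract: the server delegates space accounting and share lookup to whatever backend it is handed. A minimal sketch of that delegation, with illustrative names rather than the patch's exact API:

```python
class NullBackend:
    """Backend that stores nothing: useful for testing unlimited space."""
    def get_available_space(self):
        return None  # None signals unlimited / unknown space
    def get_shares(self, storage_index):
        return set()  # a null backend never holds shares

class Server:
    """Illustrative server core that delegates all storage to a backend."""
    def __init__(self, backend):
        self.backend = backend
    def get_buckets(self, storage_index):
        # Map share number -> share object, as remote_get_buckets does above.
        return dict((s.shnum, s) for s in self.backend.get_shares(storage_index))

server = Server(NullBackend())
```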
1342hunk ./src/allmydata/storage/crawler.py 12
1343 class TimeSliceExceeded(Exception):
1344     pass
1345 
1346-class ShareCrawler(service.MultiService):
1347+class FSShareCrawler(service.MultiService):
1348     """A subclass of ShareCrawler is attached to a StorageServer, and
1349     periodically walks all of its shares, processing each one in some
1350     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1351hunk ./src/allmydata/storage/crawler.py 68
1352     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1353     minimum_cycle_time = 300 # don't run a cycle faster than this
1354 
1355-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1356+    def __init__(self, statefname, allowed_cpu_percentage=None):
1357         service.MultiService.__init__(self)
1358         if allowed_cpu_percentage is not None:
1359             self.allowed_cpu_percentage = allowed_cpu_percentage
1360hunk ./src/allmydata/storage/crawler.py 72
1361-        self.backend = backend
1362+        self.statefname = statefname
1363         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1364                          for i in range(2**10)]
1365         self.prefixes.sort()
1366hunk ./src/allmydata/storage/crawler.py 192
1367         #                            of the last bucket to be processed, or
1368         #                            None if we are sleeping between cycles
1369         try:
1370-            f = open(self.statefile, "rb")
1371+            f = open(self.statefname, "rb")
1372             state = pickle.load(f)
1373             f.close()
1374         except EnvironmentError:
1375hunk ./src/allmydata/storage/crawler.py 230
1376         else:
1377             last_complete_prefix = self.prefixes[lcpi]
1378         self.state["last-complete-prefix"] = last_complete_prefix
1379-        tmpfile = self.statefile + ".tmp"
1380+        tmpfile = self.statefname + ".tmp"
1381         f = open(tmpfile, "wb")
1382         pickle.dump(self.state, f)
1383         f.close()
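The crawler's state save above writes to a `.tmp` file first; the point of that pattern (a sketch under my own names `save_state`/`load_state`, not the patch's API) is that the state file is replaced atomically, so a crash mid-write never leaves a truncated pickle behind:

```python
import os
import pickle

def save_state(state, statefname):
    # Write the pickled state to a sibling .tmp file, then atomically
    # replace the real state file (os.replace is atomic on POSIX and
    # works even when the destination exists on Windows).
    tmpfile = statefname + ".tmp"
    f = open(tmpfile, "wb")
    pickle.dump(state, f)
    f.close()
    os.replace(tmpfile, statefname)

def load_state(statefname, default):
    # Mirror of the load path: a missing or unreadable state file just
    # means we start from the default state.
    try:
        f = open(statefname, "rb")
        state = pickle.load(f)
        f.close()
        return state
    except EnvironmentError:
        return default
```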
1384hunk ./src/allmydata/storage/crawler.py 433
1385         pass
1386 
1387 
1388-class BucketCountingCrawler(ShareCrawler):
1389+class FSBucketCountingCrawler(FSShareCrawler):
1390     """I keep track of how many buckets are being managed by this server.
1391     This is equivalent to the number of distributed files and directories for
1392     which I am providing storage. The actual number of files+directories in
1393hunk ./src/allmydata/storage/crawler.py 446
1394 
1395     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1396 
1397-    def __init__(self, statefile, num_sample_prefixes=1):
1398-        ShareCrawler.__init__(self, statefile)
1399+    def __init__(self, statefname, num_sample_prefixes=1):
1400+        FSShareCrawler.__init__(self, statefname)
1401         self.num_sample_prefixes = num_sample_prefixes
1402 
1403     def add_initial_state(self):
1404hunk ./src/allmydata/storage/immutable.py 14
1405 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1406      DataTooLargeError
1407 
1408-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1409-# and share data. The share data is accessed by RIBucketWriter.write and
1410-# RIBucketReader.read . The lease information is not accessible through these
1411-# interfaces.
1412-
1413-# The share file has the following layout:
1414-#  0x00: share file version number, four bytes, current version is 1
1415-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1416-#  0x08: number of leases, four bytes big-endian
1417-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1418-#  A+0x0c = B: first lease. Lease format is:
1419-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1420-#   B+0x04: renew secret, 32 bytes (SHA256)
1421-#   B+0x24: cancel secret, 32 bytes (SHA256)
1422-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1423-#   B+0x48: next lease, or end of record
1424-
1425-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1426-# but it is still filled in by storage servers in case the storage server
1427-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1428-# share file is moved from one storage server to another. The value stored in
1429-# this field is truncated, so if the actual share data length is >= 2**32,
1430-# then the value stored in this field will be the actual share data length
1431-# modulo 2**32.
1432-
1433-class ShareFile:
1434-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1435-    sharetype = "immutable"
1436-
1437-    def __init__(self, filename, max_size=None, create=False):
1438-        """ If max_size is not None then I won't allow more than
1439-        max_size to be written to me. If create=True then max_size
1440-        must not be None. """
1441-        precondition((max_size is not None) or (not create), max_size, create)
1442-        self.home = filename
1443-        self._max_size = max_size
1444-        if create:
1445-            # touch the file, so later callers will see that we're working on
1446-            # it. Also construct the metadata.
1447-            assert not os.path.exists(self.home)
1448-            fileutil.make_dirs(os.path.dirname(self.home))
1449-            f = open(self.home, 'wb')
1450-            # The second field -- the four-byte share data length -- is no
1451-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1452-            # there in case someone downgrades a storage server from >=
1453-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1454-            # server to another, etc. We do saturation -- a share data length
1455-            # larger than 2**32-1 (what can fit into the field) is marked as
1456-            # the largest length that can fit into the field. That way, even
1457-            # if this does happen, the old < v1.3.0 server will still allow
1458-            # clients to read the first part of the share.
1459-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1460-            f.close()
1461-            self._lease_offset = max_size + 0x0c
1462-            self._num_leases = 0
1463-        else:
1464-            f = open(self.home, 'rb')
1465-            filesize = os.path.getsize(self.home)
1466-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1467-            f.close()
1468-            if version != 1:
1469-                msg = "sharefile %s had version %d but we wanted 1" % \
1470-                      (filename, version)
1471-                raise UnknownImmutableContainerVersionError(msg)
1472-            self._num_leases = num_leases
1473-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1474-        self._data_offset = 0xc
1475-
1476-    def unlink(self):
1477-        os.unlink(self.home)
1478-
1479-    def read_share_data(self, offset, length):
1480-        precondition(offset >= 0)
1481-        # Reads beyond the end of the data are truncated. Reads that start
1482-        # beyond the end of the data return an empty string.
1483-        seekpos = self._data_offset+offset
1484-        fsize = os.path.getsize(self.home)
1485-        actuallength = max(0, min(length, fsize-seekpos))
1486-        if actuallength == 0:
1487-            return ""
1488-        f = open(self.home, 'rb')
1489-        f.seek(seekpos)
1490-        return f.read(actuallength)
1491-
1492-    def write_share_data(self, offset, data):
1493-        length = len(data)
1494-        precondition(offset >= 0, offset)
1495-        if self._max_size is not None and offset+length > self._max_size:
1496-            raise DataTooLargeError(self._max_size, offset, length)
1497-        f = open(self.home, 'rb+')
1498-        real_offset = self._data_offset+offset
1499-        f.seek(real_offset)
1500-        assert f.tell() == real_offset
1501-        f.write(data)
1502-        f.close()
1503-
1504-    def _write_lease_record(self, f, lease_number, lease_info):
1505-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1506-        f.seek(offset)
1507-        assert f.tell() == offset
1508-        f.write(lease_info.to_immutable_data())
1509-
1510-    def _read_num_leases(self, f):
1511-        f.seek(0x08)
1512-        (num_leases,) = struct.unpack(">L", f.read(4))
1513-        return num_leases
1514-
1515-    def _write_num_leases(self, f, num_leases):
1516-        f.seek(0x08)
1517-        f.write(struct.pack(">L", num_leases))
1518-
1519-    def _truncate_leases(self, f, num_leases):
1520-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1521-
1522-    def get_leases(self):
1523-        """Yields a LeaseInfo instance for all leases."""
1524-        f = open(self.home, 'rb')
1525-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1526-        f.seek(self._lease_offset)
1527-        for i in range(num_leases):
1528-            data = f.read(self.LEASE_SIZE)
1529-            if data:
1530-                yield LeaseInfo().from_immutable_data(data)
1531-
1532-    def add_lease(self, lease_info):
1533-        f = open(self.home, 'rb+')
1534-        num_leases = self._read_num_leases(f)
1535-        self._write_lease_record(f, num_leases, lease_info)
1536-        self._write_num_leases(f, num_leases+1)
1537-        f.close()
1538-
1539-    def renew_lease(self, renew_secret, new_expire_time):
1540-        for i,lease in enumerate(self.get_leases()):
1541-            if constant_time_compare(lease.renew_secret, renew_secret):
1542-                # yup. See if we need to update the owner time.
1543-                if new_expire_time > lease.expiration_time:
1544-                    # yes
1545-                    lease.expiration_time = new_expire_time
1546-                    f = open(self.home, 'rb+')
1547-                    self._write_lease_record(f, i, lease)
1548-                    f.close()
1549-                return
1550-        raise IndexError("unable to renew non-existent lease")
1551-
1552-    def add_or_renew_lease(self, lease_info):
1553-        try:
1554-            self.renew_lease(lease_info.renew_secret,
1555-                             lease_info.expiration_time)
1556-        except IndexError:
1557-            self.add_lease(lease_info)
1558-
1559-
1560-    def cancel_lease(self, cancel_secret):
1561-        """Remove a lease with the given cancel_secret. If the last lease is
1562-        cancelled, the file will be removed. Return the number of bytes that
1563-        were freed (by truncating the list of leases, and possibly by
1564-        deleting the file. Raise IndexError if there was no lease with the
1565-        given cancel_secret.
1566-        """
1567-
1568-        leases = list(self.get_leases())
1569-        num_leases_removed = 0
1570-        for i,lease in enumerate(leases):
1571-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1572-                leases[i] = None
1573-                num_leases_removed += 1
1574-        if not num_leases_removed:
1575-            raise IndexError("unable to find matching lease to cancel")
1576-        if num_leases_removed:
1577-            # pack and write out the remaining leases. We write these out in
1578-            # the same order as they were added, so that if we crash while
1579-            # doing this, we won't lose any non-cancelled leases.
1580-            leases = [l for l in leases if l] # remove the cancelled leases
1581-            f = open(self.home, 'rb+')
1582-            for i,lease in enumerate(leases):
1583-                self._write_lease_record(f, i, lease)
1584-            self._write_num_leases(f, len(leases))
1585-            self._truncate_leases(f, len(leases))
1586-            f.close()
1587-        space_freed = self.LEASE_SIZE * num_leases_removed
1588-        if not len(leases):
1589-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1590-            self.unlink()
1591-        return space_freed
1592-class NullBucketWriter(Referenceable):
1593-    implements(RIBucketWriter)
1594-
1595-    def remote_write(self, offset, data):
1596-        return
1597-
1598 class BucketWriter(Referenceable):
1599     implements(RIBucketWriter)
1600 
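The hunk above deletes `ShareFile` together with the comment documenting its on-disk layout. The 12-byte header it wrote (three big-endian 32-bit words: version, saturated share-data length, lease count) can be sketched standalone (`pack_header`/`unpack_header` are my names, not the patch's); note the saturation at `2**32-1` that the deleted comment explains, and that the packed bytes match the header of `share_file_data` in `test_backends.py`:

```python
import struct

HEADER = ">LLL"  # version, share-data length (saturated), number of leases

def pack_header(max_size, num_leases=0, version=1):
    # The length field saturates at 2**32-1 so that a pre-v1.3.0 server
    # reading an oversized share can still serve its first part.
    return struct.pack(HEADER, version, min(2 ** 32 - 1, max_size), num_leases)

def unpack_header(data):
    return struct.unpack(HEADER, data[:struct.calcsize(HEADER)])
```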
1601hunk ./src/allmydata/storage/immutable.py 17
1602-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1603+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1604         self.ss = ss
1605hunk ./src/allmydata/storage/immutable.py 19
1606-        self.incominghome = incominghome
1607-        self.finalhome = finalhome
1608         self._max_size = max_size # don't allow the client to write more than this
1609         self._canary = canary
1610         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1611hunk ./src/allmydata/storage/immutable.py 24
1612         self.closed = False
1613         self.throw_out_all_data = False
1614-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1615+        self._sharefile = immutableshare
1616         # also, add our lease to the file now, so that other ones can be
1617         # added by simultaneous uploaders
1618         self._sharefile.add_lease(lease_info)
1619hunk ./src/allmydata/storage/server.py 16
1620 from allmydata.storage.lease import LeaseInfo
1621 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1622      create_mutable_sharefile
1623-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1624-from allmydata.storage.crawler import BucketCountingCrawler
1625-from allmydata.storage.expirer import LeaseCheckingCrawler
1626 
1627 from zope.interface import implements
1628 
1629hunk ./src/allmydata/storage/server.py 19
1630-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1631-# be started and stopped.
1632-class Backend(service.MultiService):
1633-    implements(IStatsProducer)
1634-    def __init__(self):
1635-        service.MultiService.__init__(self)
1636-
1637-    def get_bucket_shares(self):
1638-        """XXX"""
1639-        raise NotImplementedError
1640-
1641-    def get_share(self):
1642-        """XXX"""
1643-        raise NotImplementedError
1644-
1645-    def make_bucket_writer(self):
1646-        """XXX"""
1647-        raise NotImplementedError
1648-
1649-class NullBackend(Backend):
1650-    def __init__(self):
1651-        Backend.__init__(self)
1652-
1653-    def get_available_space(self):
1654-        return None
1655-
1656-    def get_bucket_shares(self, storage_index):
1657-        return set()
1658-
1659-    def get_share(self, storage_index, sharenum):
1660-        return None
1661-
1662-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1663-        return NullBucketWriter()
1664-
1665-class FSBackend(Backend):
1666-    def __init__(self, storedir, readonly=False, reserved_space=0):
1667-        Backend.__init__(self)
1668-
1669-        self._setup_storage(storedir, readonly, reserved_space)
1670-        self._setup_corruption_advisory()
1671-        self._setup_bucket_counter()
1672-        self._setup_lease_checkerf()
1673-
1674-    def _setup_storage(self, storedir, readonly, reserved_space):
1675-        self.storedir = storedir
1676-        self.readonly = readonly
1677-        self.reserved_space = int(reserved_space)
1678-        if self.reserved_space:
1679-            if self.get_available_space() is None:
1680-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1681-                        umid="0wZ27w", level=log.UNUSUAL)
1682-
1683-        self.sharedir = os.path.join(self.storedir, "shares")
1684-        fileutil.make_dirs(self.sharedir)
1685-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1686-        self._clean_incomplete()
1687-
1688-    def _clean_incomplete(self):
1689-        fileutil.rm_dir(self.incomingdir)
1690-        fileutil.make_dirs(self.incomingdir)
1691-
1692-    def _setup_corruption_advisory(self):
1693-        # we don't actually create the corruption-advisory dir until necessary
1694-        self.corruption_advisory_dir = os.path.join(self.storedir,
1695-                                                    "corruption-advisories")
1696-
1697-    def _setup_bucket_counter(self):
1698-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1699-        self.bucket_counter = BucketCountingCrawler(statefile)
1700-        self.bucket_counter.setServiceParent(self)
1701-
1702-    def _setup_lease_checkerf(self):
1703-        statefile = os.path.join(self.storedir, "lease_checker.state")
1704-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1705-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1706-                                   expiration_enabled, expiration_mode,
1707-                                   expiration_override_lease_duration,
1708-                                   expiration_cutoff_date,
1709-                                   expiration_sharetypes)
1710-        self.lease_checker.setServiceParent(self)
1711-
1712-    def get_available_space(self):
1713-        if self.readonly:
1714-            return 0
1715-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1716-
1717-    def get_bucket_shares(self, storage_index):
1718-        """Return a list of (shnum, pathname) tuples for files that hold
1719-        shares for this storage_index. In each tuple, 'shnum' will always be
1720-        the integer form of the last component of 'pathname'."""
1721-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1722-        try:
1723-            for f in os.listdir(storagedir):
1724-                if NUM_RE.match(f):
1725-                    filename = os.path.join(storagedir, f)
1726-                    yield (int(f), filename)
1727-        except OSError:
1728-            # Commonly caused by there being no buckets at all.
1729-            pass
1730-
1731 # storage/
1732 # storage/shares/incoming
1733 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1734hunk ./src/allmydata/storage/server.py 32
1735 # $SHARENUM matches this regex:
1736 NUM_RE=re.compile("^[0-9]+$")
1737 
1738-
1739-
1740 class StorageServer(service.MultiService, Referenceable):
1741     implements(RIStorageServer, IStatsProducer)
1742     name = 'storage'
1743hunk ./src/allmydata/storage/server.py 35
1744-    LeaseCheckerClass = LeaseCheckingCrawler
1745 
1746     def __init__(self, nodeid, backend, reserved_space=0,
1747                  readonly_storage=False,
1748hunk ./src/allmydata/storage/server.py 38
1749-                 stats_provider=None,
1750-                 expiration_enabled=False,
1751-                 expiration_mode="age",
1752-                 expiration_override_lease_duration=None,
1753-                 expiration_cutoff_date=None,
1754-                 expiration_sharetypes=("mutable", "immutable")):
1755+                 stats_provider=None ):
1756         service.MultiService.__init__(self)
1757         assert isinstance(nodeid, str)
1758         assert len(nodeid) == 20
1759hunk ./src/allmydata/storage/server.py 217
1760         # they asked about: this will save them a lot of work. Add or update
1761         # leases for all of them: if they want us to hold shares for this
1762         # file, they'll want us to hold leases for this file.
1763-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1764-            alreadygot.add(shnum)
1765-            sf = ShareFile(fn)
1766-            sf.add_or_renew_lease(lease_info)
1767-
1768-        for shnum in sharenums:
1769-            share = self.backend.get_share(storage_index, shnum)
1770+        for share in self.backend.get_shares(storage_index):
1771+            alreadygot.add(share.shnum)
1772+            share.add_or_renew_lease(lease_info)
1773 
1774hunk ./src/allmydata/storage/server.py 221
1775-            if not share:
1776-                if (not limited) or (remaining_space >= max_space_per_bucket):
1777-                    # ok! we need to create the new share file.
1778-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1779-                                      max_space_per_bucket, lease_info, canary)
1780-                    bucketwriters[shnum] = bw
1781-                    self._active_writers[bw] = 1
1782-                    if limited:
1783-                        remaining_space -= max_space_per_bucket
1784-                else:
1785-                    # bummer! not enough space to accept this bucket
1786-                    pass
1787+        for shnum in (sharenums - alreadygot):
1788+            if (not limited) or (remaining_space >= max_space_per_bucket):
1789+                #XXX Should the following line occur in the storage server constructor instead? OK: we need to create the new share file.
1790+                self.backend.set_storage_server(self)
1791+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1792+                                                     max_space_per_bucket, lease_info, canary)
1793+                bucketwriters[shnum] = bw
1794+                self._active_writers[bw] = 1
1795+                if limited:
1796+                    remaining_space -= max_space_per_bucket
1797 
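The rewritten allocation loop above can be sketched in isolation (the function `allocate` and its stand-in writer object are hypothetical; the real code calls `self.backend.make_bucket_writer` and tracks `self._active_writers`): only shnums not already held get a writer, and only while the optional space budget holds out.

```python
def allocate(sharenums, alreadygot, remaining_space, max_space_per_bucket,
             limited=True):
    # Mirror of the loop: iterate the shnums we don't already have, and
    # hand out a writer only while enough reserved space remains.
    bucketwriters = {}
    for shnum in sorted(sharenums - alreadygot):
        if (not limited) or (remaining_space >= max_space_per_bucket):
            bucketwriters[shnum] = object()  # stands in for make_bucket_writer()
            if limited:
                remaining_space -= max_space_per_bucket
    return bucketwriters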
1798hunk ./src/allmydata/storage/server.py 232
1799-            elif share.is_complete():
1800-                # great! we already have it. easy.
1801-                pass
1802-            elif not share.is_complete():
1803-                # Note that we don't create BucketWriters for shnums that
1804-                # have a partial share (in incoming/), so if a second upload
1805-                # occurs while the first is still in progress, the second
1806-                # uploader will use different storage servers.
1807-                pass
1808+        #XXX Document later: shnums that already have a complete or partial share get no BucketWriter here.
1809 
1810         self.add_latency("allocate", time.time() - start)
1811         return alreadygot, bucketwriters
1812hunk ./src/allmydata/storage/server.py 238
1813 
1814     def _iter_share_files(self, storage_index):
1815-        for shnum, filename in self._get_bucket_shares(storage_index):
1816+        for shnum, filename in self._get_shares(storage_index):
1817             f = open(filename, 'rb')
1818             header = f.read(32)
1819             f.close()
1820hunk ./src/allmydata/storage/server.py 318
1821         si_s = si_b2a(storage_index)
1822         log.msg("storage: get_buckets %s" % si_s)
1823         bucketreaders = {} # k: sharenum, v: BucketReader
1824-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1825+        for shnum, filename in self.backend.get_shares(storage_index):
1826             bucketreaders[shnum] = BucketReader(self, filename,
1827                                                 storage_index, shnum)
1828         self.add_latency("get", time.time() - start)
1829hunk ./src/allmydata/storage/server.py 334
1830         # since all shares get the same lease data, we just grab the leases
1831         # from the first share
1832         try:
1833-            shnum, filename = self._get_bucket_shares(storage_index).next()
1834+            shnum, filename = self._get_shares(storage_index).next()
1835             sf = ShareFile(filename)
1836             return sf.get_leases()
1837         except StopIteration:
1838hunk ./src/allmydata/storage/shares.py 1
1839-#! /usr/bin/python
1840-
1841-from allmydata.storage.mutable import MutableShareFile
1842-from allmydata.storage.immutable import ShareFile
1843-
1844-def get_share_file(filename):
1845-    f = open(filename, "rb")
1846-    prefix = f.read(32)
1847-    f.close()
1848-    if prefix == MutableShareFile.MAGIC:
1849-        return MutableShareFile(filename)
1850-    # otherwise assume it's immutable
1851-    return ShareFile(filename)
1852-
1853rmfile ./src/allmydata/storage/shares.py
1854hunk ./src/allmydata/test/common_util.py 20
1855 
1856 def flip_one_bit(s, offset=0, size=None):
1857     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1858-    than offset+size. """
1859+    than offset+size. Return the new string. """
1860     if size is None:
1861         size=len(s)-offset
1862     i = randrange(offset, offset+size)
1863hunk ./src/allmydata/test/test_backends.py 7
1864 
1865 from allmydata.test.common_util import ReallyEqualMixin
1866 
1867-import mock
1868+import mock, os
1869 
1870 # This is the code that we're going to be testing.
1871hunk ./src/allmydata/test/test_backends.py 10
1872-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1873+from allmydata.storage.server import StorageServer
1874+
1875+from allmydata.storage.backends.das.core import DASCore
1876+from allmydata.storage.backends.null.core import NullCore
1877+
1878 
1879 # The following share file contents was generated with
1880 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1881hunk ./src/allmydata/test/test_backends.py 22
1882 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1883 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1884 
1885-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1886+tempdir = 'teststoredir'
1887+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1888+sharefname = os.path.join(sharedirname, '0')
1889 
1890 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1891     @mock.patch('time.time')
1892hunk ./src/allmydata/test/test_backends.py 58
1893         filesystem in only the prescribed ways. """
1894 
1895         def call_open(fname, mode):
1896-            if fname == 'testdir/bucket_counter.state':
1897-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1898-            elif fname == 'testdir/lease_checker.state':
1899-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1900-            elif fname == 'testdir/lease_checker.history':
1901+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1902+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1903+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1904+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1905+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1906                 return StringIO()
1907             else:
1908                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
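The `call_open` side-effect pattern used by these tests can be shown standalone. On Python 3 the patch target would be `builtins.open` rather than `__builtin__.open`; `read_history` here is a hypothetical stand-in for the code under test:

```python
from io import StringIO
from unittest import mock

def read_history(path):
    # Stand-in for code under test that reads a state file.
    f = open(path)
    data = f.read()
    f.close()
    return data

def call_open(fname, mode="r"):
    # Whitelist exactly one file; any other open() by the code under
    # test is a failure.
    if fname == "teststoredir/lease_checker.history":
        return StringIO("{}")
    raise AssertionError("unexpected open(%r, %r)" % (fname, mode))

with mock.patch("builtins.open", side_effect=call_open):
    history = read_history("teststoredir/lease_checker.history")
```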
1909hunk ./src/allmydata/test/test_backends.py 124
1910     @mock.patch('__builtin__.open')
1911     def setUp(self, mockopen):
1912         def call_open(fname, mode):
1913-            if fname == 'testdir/bucket_counter.state':
1914-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1915-            elif fname == 'testdir/lease_checker.state':
1916-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1917-            elif fname == 'testdir/lease_checker.history':
1918+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1919+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1920+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1921+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1922+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1923                 return StringIO()
1924         mockopen.side_effect = call_open
1925hunk ./src/allmydata/test/test_backends.py 131
1926-
1927-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1928+        expiration_policy = {'enabled' : False,
1929+                             'mode' : 'age',
1930+                             'override_lease_duration' : None,
1931+                             'cutoff_date' : None,
1932+                             'sharetypes' : None}
1933+        testbackend = DASCore(tempdir, expiration_policy)
1934+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
1935 
1936     @mock.patch('time.time')
1937     @mock.patch('os.mkdir')
1938hunk ./src/allmydata/test/test_backends.py 148
1939         """ Write a new share. """
1940 
1941         def call_listdir(dirname):
1942-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1943-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1944+            self.failUnlessReallyEqual(dirname, sharedirname)
1945+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1946 
1947         mocklistdir.side_effect = call_listdir
1948 
1949hunk ./src/allmydata/test/test_backends.py 178
1950 
1951         sharefile = MockFile()
1952         def call_open(fname, mode):
1953-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1954+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1955             return sharefile
1956 
1957         mockopen.side_effect = call_open
1958hunk ./src/allmydata/test/test_backends.py 200
1959         StorageServer object. """
1960 
1961         def call_listdir(dirname):
1962-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1963+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1964             return ['0']
1965 
1966         mocklistdir.side_effect = call_listdir
1967}
1968[checkpoint patch
1969wilcoxjg@gmail.com**20110626165715
1970 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1971] {
1972hunk ./src/allmydata/storage/backends/das/core.py 21
1973 from allmydata.storage.lease import LeaseInfo
1974 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1975      create_mutable_sharefile
1976-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1977+from allmydata.storage.immutable import BucketWriter, BucketReader
1978 from allmydata.storage.crawler import FSBucketCountingCrawler
1979 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1980 
1981hunk ./src/allmydata/storage/backends/das/core.py 27
1982 from zope.interface import implements
1983 
1984+# $SHARENUM matches this regex:
1985+NUM_RE=re.compile("^[0-9]+$")
1986+
1987 class DASCore(Backend):
1988     implements(IStorageBackend)
1989     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1990hunk ./src/allmydata/storage/backends/das/core.py 80
1991         return fileutil.get_available_space(self.storedir, self.reserved_space)
1992 
1993     def get_shares(self, storage_index):
1994-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1995+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
1996         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1997         try:
1998             for f in os.listdir(finalstoragedir):
1999hunk ./src/allmydata/storage/backends/das/core.py 86
2000                 if NUM_RE.match(f):
2001                     filename = os.path.join(finalstoragedir, f)
2002-                    yield FSBShare(filename, int(f))
2003+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2004         except OSError:
2005             # Commonly caused by there being no buckets at all.
2006             pass
2007hunk ./src/allmydata/storage/backends/das/core.py 95
2008         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2009         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2010         return bw
2011+
2012+    def set_storage_server(self, ss):
2013+        self.ss = ss
2014         
2015 
2016 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2017hunk ./src/allmydata/storage/server.py 29
2018 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2019 # base-32 chars).
2020 
2021-# $SHARENUM matches this regex:
2022-NUM_RE=re.compile("^[0-9]+$")
2023 
2024 class StorageServer(service.MultiService, Referenceable):
2025     implements(RIStorageServer, IStatsProducer)
2026}
2027[checkpoint4
2028wilcoxjg@gmail.com**20110628202202
2029 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2030] {
2031hunk ./src/allmydata/storage/backends/das/core.py 96
2032         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2033         return bw
2034 
2035+    def make_bucket_reader(self, share):
2036+        return BucketReader(self.ss, share)
2037+
2038     def set_storage_server(self, ss):
2039         self.ss = ss
2040         
2041hunk ./src/allmydata/storage/backends/das/core.py 138
2042         must not be None. """
2043         precondition((max_size is not None) or (not create), max_size, create)
2044         self.shnum = shnum
2045+        self.storage_index = storageindex
2046         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2047         self._max_size = max_size
2048         if create:
2049hunk ./src/allmydata/storage/backends/das/core.py 173
2050             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2051         self._data_offset = 0xc
2052 
2053+    def get_shnum(self):
2054+        return self.shnum
2055+
2056     def unlink(self):
2057         os.unlink(self.fname)
2058 
2059hunk ./src/allmydata/storage/backends/null/core.py 2
2060 from allmydata.storage.backends.base import Backend
2061+from allmydata.storage.immutable import BucketWriter, BucketReader
2062 
2063 class NullCore(Backend):
2064     def __init__(self):
2065hunk ./src/allmydata/storage/backends/null/core.py 17
2066     def get_share(self, storage_index, sharenum):
2067         return None
2068 
2069-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2070-        return NullBucketWriter()
2071+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2072+       
2073+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2074+
2075+    def set_storage_server(self, ss):
2076+        self.ss = ss
2077+
2078+class ImmutableShare:
2079+    sharetype = "immutable"
2080+
2081+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2082+        """ If max_size is not None then I won't allow more than
2083+        max_size to be written to me. If create=True then max_size
2084+        must not be None. """
2085+        precondition((max_size is not None) or (not create), max_size, create)
2086+        self.shnum = shnum
2087+        self.storage_index = storageindex
2088+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2089+        self._max_size = max_size
2090+        if create:
2091+            # touch the file, so later callers will see that we're working on
2092+            # it. Also construct the metadata.
2093+            assert not os.path.exists(self.fname)
2094+            fileutil.make_dirs(os.path.dirname(self.fname))
2095+            f = open(self.fname, 'wb')
2096+            # The second field -- the four-byte share data length -- is no
2097+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2098+            # there in case someone downgrades a storage server from >=
2099+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2100+            # server to another, etc. We do saturation -- a share data length
2101+            # larger than 2**32-1 (what can fit into the field) is marked as
2102+            # the largest length that can fit into the field. That way, even
2103+            # if this does happen, the old < v1.3.0 server will still allow
2104+            # clients to read the first part of the share.
2105+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2106+            f.close()
2107+            self._lease_offset = max_size + 0x0c
2108+            self._num_leases = 0
2109+        else:
2110+            f = open(self.fname, 'rb')
2111+            filesize = os.path.getsize(self.fname)
2112+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2113+            f.close()
2114+            if version != 1:
2115+                msg = "sharefile %s had version %d but we wanted 1" % \
2116+                      (self.fname, version)
2117+                raise UnknownImmutableContainerVersionError(msg)
2118+            self._num_leases = num_leases
2119+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2120+        self._data_offset = 0xc
2121+
2122+    def get_shnum(self):
2123+        return self.shnum
2124+
2125+    def unlink(self):
2126+        os.unlink(self.fname)
2127+
2128+    def read_share_data(self, offset, length):
2129+        precondition(offset >= 0)
2130+        # Reads beyond the end of the data are truncated. Reads that start
2131+        # beyond the end of the data return an empty string.
2132+        seekpos = self._data_offset+offset
2133+        fsize = os.path.getsize(self.fname)
2134+        actuallength = max(0, min(length, fsize-seekpos))
2135+        if actuallength == 0:
2136+            return ""
2137+        f = open(self.fname, 'rb')
2138+        f.seek(seekpos)
2139+        return f.read(actuallength)
2140+
2141+    def write_share_data(self, offset, data):
2142+        length = len(data)
2143+        precondition(offset >= 0, offset)
2144+        if self._max_size is not None and offset+length > self._max_size:
2145+            raise DataTooLargeError(self._max_size, offset, length)
2146+        f = open(self.fname, 'rb+')
2147+        real_offset = self._data_offset+offset
2148+        f.seek(real_offset)
2149+        assert f.tell() == real_offset
2150+        f.write(data)
2151+        f.close()
2152+
2153+    def _write_lease_record(self, f, lease_number, lease_info):
2154+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2155+        f.seek(offset)
2156+        assert f.tell() == offset
2157+        f.write(lease_info.to_immutable_data())
2158+
2159+    def _read_num_leases(self, f):
2160+        f.seek(0x08)
2161+        (num_leases,) = struct.unpack(">L", f.read(4))
2162+        return num_leases
2163+
2164+    def _write_num_leases(self, f, num_leases):
2165+        f.seek(0x08)
2166+        f.write(struct.pack(">L", num_leases))
2167+
2168+    def _truncate_leases(self, f, num_leases):
2169+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2170+
2171+    def get_leases(self):
2172+        """Yields a LeaseInfo instance for all leases."""
2173+        f = open(self.fname, 'rb')
2174+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2175+        f.seek(self._lease_offset)
2176+        for i in range(num_leases):
2177+            data = f.read(self.LEASE_SIZE)
2178+            if data:
2179+                yield LeaseInfo().from_immutable_data(data)
2180+
2181+    def add_lease(self, lease_info):
2182+        f = open(self.fname, 'rb+')
2183+        num_leases = self._read_num_leases(f)
2184+        self._write_lease_record(f, num_leases, lease_info)
2185+        self._write_num_leases(f, num_leases+1)
2186+        f.close()
2187+
2188+    def renew_lease(self, renew_secret, new_expire_time):
2189+        for i,lease in enumerate(self.get_leases()):
2190+            if constant_time_compare(lease.renew_secret, renew_secret):
2191+                # yup. See if we need to update the owner time.
2192+                if new_expire_time > lease.expiration_time:
2193+                    # yes
2194+                    lease.expiration_time = new_expire_time
2195+                    f = open(self.fname, 'rb+')
2196+                    self._write_lease_record(f, i, lease)
2197+                    f.close()
2198+                return
2199+        raise IndexError("unable to renew non-existent lease")
2200+
2201+    def add_or_renew_lease(self, lease_info):
2202+        try:
2203+            self.renew_lease(lease_info.renew_secret,
2204+                             lease_info.expiration_time)
2205+        except IndexError:
2206+            self.add_lease(lease_info)
2207+
2208+
2209+    def cancel_lease(self, cancel_secret):
2210+        """Remove a lease with the given cancel_secret. If the last lease is
2211+        cancelled, the file will be removed. Return the number of bytes that
2212+        were freed (by truncating the list of leases, and possibly by
2213+        deleting the file). Raise IndexError if there was no lease with the
2214+        given cancel_secret.
2215+        """
2216+
2217+        leases = list(self.get_leases())
2218+        num_leases_removed = 0
2219+        for i,lease in enumerate(leases):
2220+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2221+                leases[i] = None
2222+                num_leases_removed += 1
2223+        if not num_leases_removed:
2224+            raise IndexError("unable to find matching lease to cancel")
2225+        if num_leases_removed:
2226+            # pack and write out the remaining leases. We write these out in
2227+            # the same order as they were added, so that if we crash while
2228+            # doing this, we won't lose any non-cancelled leases.
2229+            leases = [l for l in leases if l] # remove the cancelled leases
2230+            f = open(self.fname, 'rb+')
2231+            for i,lease in enumerate(leases):
2232+                self._write_lease_record(f, i, lease)
2233+            self._write_num_leases(f, len(leases))
2234+            self._truncate_leases(f, len(leases))
2235+            f.close()
2236+        space_freed = self.LEASE_SIZE * num_leases_removed
2237+        if not len(leases):
2238+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2239+            self.unlink()
2240+        return space_freed
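The cancel_lease method above removes matching leases, rewrites the survivors in their original order, and reports the bytes freed (plus the whole file size when the last lease goes away). A sketch of just that bookkeeping, outside the patch: space_freed_by_cancel and the dict-shaped leases are hypothetical stand-ins for the real LeaseInfo records (72 bytes each in Tahoe's immutable lease format), and the real code compares secrets with constant_time_compare rather than a plain `!=`:

```python
LEASE_SIZE = 72  # assumption: size of one serialized immutable lease record

def space_freed_by_cancel(leases, cancel_secret, file_size):
    # Keep the non-matching leases in their original order, so a crash
    # mid-rewrite cannot lose a lease that was never cancelled.
    kept = [l for l in leases if l['cancel_secret'] != cancel_secret]
    removed = len(leases) - len(kept)
    if not removed:
        raise IndexError("unable to find matching lease to cancel")
    freed = LEASE_SIZE * removed
    if not kept:
        # Last lease cancelled: the whole share file would be unlinked.
        freed += file_size
    return kept, freed
```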
2241hunk ./src/allmydata/storage/immutable.py 114
2242 class BucketReader(Referenceable):
2243     implements(RIBucketReader)
2244 
2245-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2246+    def __init__(self, ss, share):
2247         self.ss = ss
2248hunk ./src/allmydata/storage/immutable.py 116
2249-        self._share_file = ShareFile(sharefname)
2250-        self.storage_index = storage_index
2251-        self.shnum = shnum
2252+        self._share_file = share
2253+        self.storage_index = share.storage_index
2254+        self.shnum = share.shnum
2255 
2256     def __repr__(self):
2257         return "<%s %s %s>" % (self.__class__.__name__,
2258hunk ./src/allmydata/storage/server.py 316
2259         si_s = si_b2a(storage_index)
2260         log.msg("storage: get_buckets %s" % si_s)
2261         bucketreaders = {} # k: sharenum, v: BucketReader
2262-        for shnum, filename in self.backend.get_shares(storage_index):
2263-            bucketreaders[shnum] = BucketReader(self, filename,
2264-                                                storage_index, shnum)
2265+        self.backend.set_storage_server(self)
2266+        for share in self.backend.get_shares(storage_index):
2267+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2268         self.add_latency("get", time.time() - start)
2269         return bucketreaders
2270 
2271hunk ./src/allmydata/test/test_backends.py 25
2272 tempdir = 'teststoredir'
2273 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2274 sharefname = os.path.join(sharedirname, '0')
2275+expiration_policy = {'enabled' : False,
2276+                     'mode' : 'age',
2277+                     'override_lease_duration' : None,
2278+                     'cutoff_date' : None,
2279+                     'sharetypes' : None}
2280 
2281 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2282     @mock.patch('time.time')
2283hunk ./src/allmydata/test/test_backends.py 43
2284         tries to read or write to the file system. """
2285 
2286         # Now begin the test.
2287-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2288+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2289 
2290         self.failIf(mockisdir.called)
2291         self.failIf(mocklistdir.called)
2292hunk ./src/allmydata/test/test_backends.py 74
2293         mockopen.side_effect = call_open
2294 
2295         # Now begin the test.
2296-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2297+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2298 
2299         self.failIf(mockisdir.called)
2300         self.failIf(mocklistdir.called)
2301hunk ./src/allmydata/test/test_backends.py 86
2302 
2303 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2304     def setUp(self):
2305-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2306+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2307 
2308     @mock.patch('os.mkdir')
2309     @mock.patch('__builtin__.open')
2310hunk ./src/allmydata/test/test_backends.py 136
2311             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2312                 return StringIO()
2313         mockopen.side_effect = call_open
2314-        expiration_policy = {'enabled' : False,
2315-                             'mode' : 'age',
2316-                             'override_lease_duration' : None,
2317-                             'cutoff_date' : None,
2318-                             'sharetypes' : None}
2319         testbackend = DASCore(tempdir, expiration_policy)
2320         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2321 
2322}
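The ImmutableShare constructor added in checkpoint4 writes a 12-byte header with `struct.pack(">LLL", ...)`: version, the legacy share-data-length field (saturated at 2**32-1, unused since Tahoe v1.3.0), and the lease count, with share data starting at offset 0xc. A small sketch of that layout (pack_header/unpack_header are illustrative helper names, not part of the patch):

```python
import struct

# Header: three big-endian unsigned 32-bit fields, as in ImmutableShare.
HEADER = struct.Struct(">LLL")

def pack_header(max_size, num_leases=0):
    # Saturate the legacy data-length field, exactly as the patch's
    # comment describes, so pre-v1.3.0 servers can still read the share.
    return HEADER.pack(1, min(2**32 - 1, max_size), num_leases)

def unpack_header(header_bytes):
    # Returns (version, data_length, num_leases).
    return HEADER.unpack(header_bytes)

hdr = pack_header(2**40)  # a size too large for the 32-bit field
```

The header length (12 bytes) is why `self._data_offset = 0xc` in the constructor.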
2323[checkpoint5
2324wilcoxjg@gmail.com**20110705034626
2325 Ignore-this: 255780bd58299b0aa33c027e9d008262
2326] {
2327addfile ./src/allmydata/storage/backends/base.py
2328hunk ./src/allmydata/storage/backends/base.py 1
2329+from twisted.application import service
2330+
2331+class Backend(service.MultiService):
2332+    def __init__(self):
2333+        service.MultiService.__init__(self)
2334hunk ./src/allmydata/storage/backends/null/core.py 19
2335 
2336     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2337         
2338+        immutableshare = ImmutableShare()
2339         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2340 
2341     def set_storage_server(self, ss):
2342hunk ./src/allmydata/storage/backends/null/core.py 28
2343 class ImmutableShare:
2344     sharetype = "immutable"
2345 
2346-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2347+    def __init__(self):
2348         """ If max_size is not None then I won't allow more than
2349         max_size to be written to me. If create=True then max_size
2350         must not be None. """
2351hunk ./src/allmydata/storage/backends/null/core.py 32
2352-        precondition((max_size is not None) or (not create), max_size, create)
2353-        self.shnum = shnum
2354-        self.storage_index = storageindex
2355-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2356-        self._max_size = max_size
2357-        if create:
2358-            # touch the file, so later callers will see that we're working on
2359-            # it. Also construct the metadata.
2360-            assert not os.path.exists(self.fname)
2361-            fileutil.make_dirs(os.path.dirname(self.fname))
2362-            f = open(self.fname, 'wb')
2363-            # The second field -- the four-byte share data length -- is no
2364-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2365-            # there in case someone downgrades a storage server from >=
2366-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2367-            # server to another, etc. We do saturation -- a share data length
2368-            # larger than 2**32-1 (what can fit into the field) is marked as
2369-            # the largest length that can fit into the field. That way, even
2370-            # if this does happen, the old < v1.3.0 server will still allow
2371-            # clients to read the first part of the share.
2372-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2373-            f.close()
2374-            self._lease_offset = max_size + 0x0c
2375-            self._num_leases = 0
2376-        else:
2377-            f = open(self.fname, 'rb')
2378-            filesize = os.path.getsize(self.fname)
2379-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2380-            f.close()
2381-            if version != 1:
2382-                msg = "sharefile %s had version %d but we wanted 1" % \
2383-                      (self.fname, version)
2384-                raise UnknownImmutableContainerVersionError(msg)
2385-            self._num_leases = num_leases
2386-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2387-        self._data_offset = 0xc
2388+        pass
2389 
2390     def get_shnum(self):
2391         return self.shnum
2392hunk ./src/allmydata/storage/backends/null/core.py 54
2393         return f.read(actuallength)
2394 
2395     def write_share_data(self, offset, data):
2396-        length = len(data)
2397-        precondition(offset >= 0, offset)
2398-        if self._max_size is not None and offset+length > self._max_size:
2399-            raise DataTooLargeError(self._max_size, offset, length)
2400-        f = open(self.fname, 'rb+')
2401-        real_offset = self._data_offset+offset
2402-        f.seek(real_offset)
2403-        assert f.tell() == real_offset
2404-        f.write(data)
2405-        f.close()
2406+        pass
2407 
2408     def _write_lease_record(self, f, lease_number, lease_info):
2409         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2410hunk ./src/allmydata/storage/backends/null/core.py 84
2411             if data:
2412                 yield LeaseInfo().from_immutable_data(data)
2413 
2414-    def add_lease(self, lease_info):
2415-        f = open(self.fname, 'rb+')
2416-        num_leases = self._read_num_leases(f)
2417-        self._write_lease_record(f, num_leases, lease_info)
2418-        self._write_num_leases(f, num_leases+1)
2419-        f.close()
2420+    def add_lease(self, lease):
2421+        pass
2422 
2423     def renew_lease(self, renew_secret, new_expire_time):
2424         for i,lease in enumerate(self.get_leases()):
2425hunk ./src/allmydata/test/test_backends.py 32
2426                      'sharetypes' : None}
2427 
2428 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2429-    @mock.patch('time.time')
2430-    @mock.patch('os.mkdir')
2431-    @mock.patch('__builtin__.open')
2432-    @mock.patch('os.listdir')
2433-    @mock.patch('os.path.isdir')
2434-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2435-        """ This tests whether a server instance can be constructed
2436-        with a null backend. The server instance fails the test if it
2437-        tries to read or write to the file system. """
2438-
2439-        # Now begin the test.
2440-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2441-
2442-        self.failIf(mockisdir.called)
2443-        self.failIf(mocklistdir.called)
2444-        self.failIf(mockopen.called)
2445-        self.failIf(mockmkdir.called)
2446-
2447-        # You passed!
2448-
2449     @mock.patch('time.time')
2450     @mock.patch('os.mkdir')
2451     @mock.patch('__builtin__.open')
2452hunk ./src/allmydata/test/test_backends.py 53
2453                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2454         mockopen.side_effect = call_open
2455 
2456-        # Now begin the test.
2457-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2458-
2459-        self.failIf(mockisdir.called)
2460-        self.failIf(mocklistdir.called)
2461-        self.failIf(mockopen.called)
2462-        self.failIf(mockmkdir.called)
2463-        self.failIf(mocktime.called)
2464-
2465-        # You passed!
2466-
2467-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2468-    def setUp(self):
2469-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2470-
2471-    @mock.patch('os.mkdir')
2472-    @mock.patch('__builtin__.open')
2473-    @mock.patch('os.listdir')
2474-    @mock.patch('os.path.isdir')
2475-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2476-        """ Write a new share. """
2477-
2478-        # Now begin the test.
2479-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2480-        bs[0].remote_write(0, 'a')
2481-        self.failIf(mockisdir.called)
2482-        self.failIf(mocklistdir.called)
2483-        self.failIf(mockopen.called)
2484-        self.failIf(mockmkdir.called)
2485+        def call_isdir(fname):
2486+            if fname == os.path.join(tempdir,'shares'):
2487+                return True
2488+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2489+                return True
2490+            else:
2491+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2492+        mockisdir.side_effect = call_isdir
2493 
2494hunk ./src/allmydata/test/test_backends.py 62
2495-    @mock.patch('os.path.exists')
2496-    @mock.patch('os.path.getsize')
2497-    @mock.patch('__builtin__.open')
2498-    @mock.patch('os.listdir')
2499-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2500-        """ This tests whether the code correctly finds and reads
2501-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2502-        servers. There is a similar test in test_download, but that one
2503-        is from the perspective of the client and exercises a deeper
2504-        stack of code. This one is for exercising just the
2505-        StorageServer object. """
2506+        def call_mkdir(fname, mode):
2507+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2508+            self.failUnlessEqual(0777, mode)
2509+            if fname == tempdir:
2510+                return None
2511+            elif fname == os.path.join(tempdir,'shares'):
2512+                return None
2513+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2514+                return None
2515+            else:
2516+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2517+        mockmkdir.side_effect = call_mkdir
2518 
2519         # Now begin the test.
2520hunk ./src/allmydata/test/test_backends.py 76
2521-        bs = self.s.remote_get_buckets('teststorage_index')
2522+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2523 
2524hunk ./src/allmydata/test/test_backends.py 78
2525-        self.failUnlessEqual(len(bs), 0)
2526-        self.failIf(mocklistdir.called)
2527-        self.failIf(mockopen.called)
2528-        self.failIf(mockgetsize.called)
2529-        self.failIf(mockexists.called)
2530+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2531 
2532 
2533 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2534hunk ./src/allmydata/test/test_backends.py 193
2535         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2536 
2537 
2538+
2539+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2540+    @mock.patch('time.time')
2541+    @mock.patch('os.mkdir')
2542+    @mock.patch('__builtin__.open')
2543+    @mock.patch('os.listdir')
2544+    @mock.patch('os.path.isdir')
2545+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2546+        """ This tests whether a file system backend instance can be
2547+        constructed. To pass the test, it has to use the
2548+        filesystem in only the prescribed ways. """
2549+
2550+        def call_open(fname, mode):
2551+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2552+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2553+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2554+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2555+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2556+                return StringIO()
2557+            else:
2558+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2559+        mockopen.side_effect = call_open
2560+
2561+        def call_isdir(fname):
2562+            if fname == os.path.join(tempdir,'shares'):
2563+                return True
2564+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2565+                return True
2566+            else:
2567+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2568+        mockisdir.side_effect = call_isdir
2569+
2570+        def call_mkdir(fname, mode):
2571+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2572+            self.failUnlessEqual(0777, mode)
2573+            if fname == tempdir:
2574+                return None
2575+            elif fname == os.path.join(tempdir,'shares'):
2576+                return None
2577+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2578+                return None
2579+            else:
2580+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2581+        mockmkdir.side_effect = call_mkdir
2582+
2583+        # Now begin the test.
2584+        DASCore('teststoredir', expiration_policy)
2585+
2586+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2587}
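The checkpoint5 test hunks above all follow one discipline: each patched filesystem call gets a side_effect that answers only for whitelisted paths and fails the test otherwise, so the code under test cannot touch the real disk unnoticed. A runnable sketch of that pattern (guarded_isdir and the paths are illustrative; the patch itself targets Python 2's standalone mock package, for which unittest.mock is the stdlib successor):

```python
import os
try:
    import mock                # standalone package, as the patch uses
except ImportError:
    from unittest import mock  # stdlib equivalent on Python 3

def guarded_isdir(allowed, fail):
    # Build a side_effect: succeed for whitelisted paths, fail loudly
    # for anything else, mirroring the call_isdir helpers in the patch.
    def call_isdir(fname):
        if fname in allowed:
            return True
        fail("Server with FS backend tried to isdir %r" % (fname,))
    return call_isdir

allowed = frozenset([os.path.join('teststoredir', 'shares'),
                     os.path.join('teststoredir', 'shares', 'incoming')])

def boom(msg):
    raise AssertionError(msg)

with mock.patch('os.path.isdir') as mockisdir:
    mockisdir.side_effect = guarded_isdir(allowed, boom)
    assert os.path.isdir(os.path.join('teststoredir', 'shares'))
```

In the real tests, `fail` is `self.fail` from the TestCase, and the same shape is repeated for open, mkdir, and listdir.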
2588[checkpoint 6
2589wilcoxjg@gmail.com**20110706190824
2590 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2591] {
2592hunk ./src/allmydata/interfaces.py 100
2593                          renew_secret=LeaseRenewSecret,
2594                          cancel_secret=LeaseCancelSecret,
2595                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2596-                         allocated_size=Offset, canary=Referenceable):
2597+                         allocated_size=Offset,
2598+                         canary=Referenceable):
2599         """
2600hunk ./src/allmydata/interfaces.py 103
2601-        @param storage_index: the index of the bucket to be created or
2602+        @param storage_index: the index of the shares to be created or
2603                               increfed.
2604hunk ./src/allmydata/interfaces.py 105
2605-        @param sharenums: these are the share numbers (probably between 0 and
2606-                          99) that the sender is proposing to store on this
2607-                          server.
2608-        @param renew_secret: This is the secret used to protect bucket refresh
2609+        @param renew_secret: This is the secret used to protect shares refresh
2610                              This secret is generated by the client and
2611                              stored for later comparison by the server. Each
2612                              server is given a different secret.
2613hunk ./src/allmydata/interfaces.py 109
2614-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2615-        @param canary: If the canary is lost before close(), the bucket is
2616+        @param cancel_secret: Like renew_secret, but protects shares decref.
2617+        @param sharenums: these are the share numbers (probably between 0 and
2618+                          99) that the sender is proposing to store on this
2619+                          server.
2620+        @param allocated_size: XXX The size of the shares the client wishes to store.
2621+        @param canary: If the canary is lost before close(), the shares are
2622                        deleted.
2623hunk ./src/allmydata/interfaces.py 116
2624+
2625         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2626                  already have and allocated is what we hereby agree to accept.
2627                  New leases are added for shares in both lists.
2628hunk ./src/allmydata/interfaces.py 128
2629                   renew_secret=LeaseRenewSecret,
2630                   cancel_secret=LeaseCancelSecret):
2631         """
2632-        Add a new lease on the given bucket. If the renew_secret matches an
2633+        Add a new lease on the given shares. If the renew_secret matches an
2634         existing lease, that lease will be renewed instead. If there is no
2635         bucket for the given storage_index, return silently. (note that in
2636         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2637hunk ./src/allmydata/storage/server.py 17
2638 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2639      create_mutable_sharefile
2640 
2641-from zope.interface import implements
2642-
2643 # storage/
2644 # storage/shares/incoming
2645 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2646hunk ./src/allmydata/test/test_backends.py 6
2647 from StringIO import StringIO
2648 
2649 from allmydata.test.common_util import ReallyEqualMixin
2650+from allmydata.util.assertutil import _assert
2651 
2652 import mock, os
2653 
2654hunk ./src/allmydata/test/test_backends.py 92
2655                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2656             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2657                 return StringIO()
2658+            else:
2659+                _assert(False, "The tester code doesn't recognize this case.") 
2660+
2661         mockopen.side_effect = call_open
2662         testbackend = DASCore(tempdir, expiration_policy)
2663         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2664hunk ./src/allmydata/test/test_backends.py 109
2665 
2666         def call_listdir(dirname):
2667             self.failUnlessReallyEqual(dirname, sharedirname)
2668-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2669+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2670 
2671         mocklistdir.side_effect = call_listdir
2672 
2673hunk ./src/allmydata/test/test_backends.py 113
2674+        def call_isdir(dirname):
2675+            self.failUnlessReallyEqual(dirname, sharedirname)
2676+            return True
2677+
2678+        mockisdir.side_effect = call_isdir
2679+
2680+        def call_mkdir(dirname, permissions):
2681+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2682+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
2683+            else:
2684+                return True
2685+
2686+        mockmkdir.side_effect = call_mkdir
2687+
2688         class MockFile:
2689             def __init__(self):
2690                 self.buffer = ''
2691hunk ./src/allmydata/test/test_backends.py 156
2692             return sharefile
2693 
2694         mockopen.side_effect = call_open
2695+
2696         # Now begin the test.
2697         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2698         bs[0].remote_write(0, 'a')
2699hunk ./src/allmydata/test/test_backends.py 161
2700         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2701+       
2702+        # Now test the allocated_size method.
2703+        spaceint = self.s.allocated_size()
2704 
2705     @mock.patch('os.path.exists')
2706     @mock.patch('os.path.getsize')
2707}
2708[checkpoint 7
2709wilcoxjg@gmail.com**20110706200820
2710 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2711] hunk ./src/allmydata/test/test_backends.py 164
2712         
2713         # Now test the allocated_size method.
2714         spaceint = self.s.allocated_size()
2715+        self.failUnlessReallyEqual(spaceint, 1)
2716 
2717     @mock.patch('os.path.exists')
2718     @mock.patch('os.path.getsize')
2719[checkpoint8
2720wilcoxjg@gmail.com**20110706223126
2721 Ignore-this: 97336180883cb798b16f15411179f827
2722   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2723] hunk ./src/allmydata/test/test_backends.py 32
2724                      'cutoff_date' : None,
2725                      'sharetypes' : None}
2726 
2727+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2728+    def setUp(self):
2729+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2730+
2731+    @mock.patch('os.mkdir')
2732+    @mock.patch('__builtin__.open')
2733+    @mock.patch('os.listdir')
2734+    @mock.patch('os.path.isdir')
2735+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2736+        """ Write a new share. """
2737+
2738+        # Now begin the test.
2739+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2740+        bs[0].remote_write(0, 'a')
2741+        self.failIf(mockisdir.called)
2742+        self.failIf(mocklistdir.called)
2743+        self.failIf(mockopen.called)
2744+        self.failIf(mockmkdir.called)
2745+
2746 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2747     @mock.patch('time.time')
2748     @mock.patch('os.mkdir')
2749[checkpoint 9
2750wilcoxjg@gmail.com**20110707042942
2751 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2752] {
2753hunk ./src/allmydata/storage/backends/das/core.py 88
2754                     filename = os.path.join(finalstoragedir, f)
2755                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2756         except OSError:
2757-            # Commonly caused by there being no buckets at all.
2758+            # Commonly caused by there being no shares at all.
2759             pass
2760         
2761     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2762hunk ./src/allmydata/storage/backends/das/core.py 141
2763         self.storage_index = storageindex
2764         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2765         self._max_size = max_size
2766+        self.incomingdir = os.path.join(sharedir, 'incoming')
2767+        si_dir = storage_index_to_dir(storageindex)
2768+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2769+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2770         if create:
2771             # touch the file, so later callers will see that we're working on
2772             # it. Also construct the metadata.
2773hunk ./src/allmydata/storage/backends/das/core.py 177
2774             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2775         self._data_offset = 0xc
2776 
2777+    def close(self):
2778+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2779+        fileutil.rename(self.incominghome, self.finalhome)
2780+        try:
2781+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2782+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2783+            # these directories lying around forever, but the delete might
2784+            # fail if we're working on another share for the same storage
2785+            # index (like ab/abcde/5). The alternative approach would be to
2786+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2787+            # ShareWriter), each of which is responsible for a single
2788+            # directory on disk, and have them use reference counting of
2789+            # their children to know when they should do the rmdir. This
2790+            # approach is simpler, but relies on os.rmdir refusing to delete
2791+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2792+            os.rmdir(os.path.dirname(self.incominghome))
2793+            # we also delete the grandparent (prefix) directory, .../ab ,
2794+            # again to avoid leaving directories lying around. This might
2795+            # fail if there is another bucket open that shares a prefix (like
2796+            # ab/abfff).
2797+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2798+            # we leave the great-grandparent (incoming/) directory in place.
2799+        except EnvironmentError:
2800+            # ignore the "can't rmdir because the directory is not empty"
2801+            # exceptions, those are normal consequences of the
2802+            # above-mentioned conditions.
2803+            pass
2804+        pass
2805+       
2806+    def stat(self):
2807+        return os.stat(self.finalhome)[stat.ST_SIZE]
2808+
2809     def get_shnum(self):
2810         return self.shnum
2811 
2812hunk ./src/allmydata/storage/immutable.py 7
2813 
2814 from zope.interface import implements
2815 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2816-from allmydata.util import base32, fileutil, log
2817+from allmydata.util import base32, log
2818 from allmydata.util.assertutil import precondition
2819 from allmydata.util.hashutil import constant_time_compare
2820 from allmydata.storage.lease import LeaseInfo
2821hunk ./src/allmydata/storage/immutable.py 44
2822     def remote_close(self):
2823         precondition(not self.closed)
2824         start = time.time()
2825-
2826-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2827-        fileutil.rename(self.incominghome, self.finalhome)
2828-        try:
2829-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2830-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2831-            # these directories lying around forever, but the delete might
2832-            # fail if we're working on another share for the same storage
2833-            # index (like ab/abcde/5). The alternative approach would be to
2834-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2835-            # ShareWriter), each of which is responsible for a single
2836-            # directory on disk, and have them use reference counting of
2837-            # their children to know when they should do the rmdir. This
2838-            # approach is simpler, but relies on os.rmdir refusing to delete
2839-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2840-            os.rmdir(os.path.dirname(self.incominghome))
2841-            # we also delete the grandparent (prefix) directory, .../ab ,
2842-            # again to avoid leaving directories lying around. This might
2843-            # fail if there is another bucket open that shares a prefix (like
2844-            # ab/abfff).
2845-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2846-            # we leave the great-grandparent (incoming/) directory in place.
2847-        except EnvironmentError:
2848-            # ignore the "can't rmdir because the directory is not empty"
2849-            # exceptions, those are normal consequences of the
2850-            # above-mentioned conditions.
2851-            pass
2852+        self._sharefile.close()
2853         self._sharefile = None
2854         self.closed = True
2855         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2856hunk ./src/allmydata/storage/immutable.py 49
2857 
2858-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2859+        filelen = self._sharefile.stat()
2860         self.ss.bucket_writer_closed(self, filelen)
2861         self.ss.add_latency("close", time.time() - start)
2862         self.ss.count("close")
2863hunk ./src/allmydata/storage/server.py 45
2864         self._active_writers = weakref.WeakKeyDictionary()
2865         self.backend = backend
2866         self.backend.setServiceParent(self)
2867+        self.backend.set_storage_server(self)
2868         log.msg("StorageServer created", facility="tahoe.storage")
2869 
2870         self.latencies = {"allocate": [], # immutable
2871hunk ./src/allmydata/storage/server.py 220
2872 
2873         for shnum in (sharenums - alreadygot):
2874             if (not limited) or (remaining_space >= max_space_per_bucket):
2875-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2876-                self.backend.set_storage_server(self)
2877                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2878                                                      max_space_per_bucket, lease_info, canary)
2879                 bucketwriters[shnum] = bw
2880hunk ./src/allmydata/test/test_backends.py 117
2881         mockopen.side_effect = call_open
2882         testbackend = DASCore(tempdir, expiration_policy)
2883         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2884-
2885+   
2886+    @mock.patch('allmydata.util.fileutil.get_available_space')
2887     @mock.patch('time.time')
2888     @mock.patch('os.mkdir')
2889     @mock.patch('__builtin__.open')
2890hunk ./src/allmydata/test/test_backends.py 124
2891     @mock.patch('os.listdir')
2892     @mock.patch('os.path.isdir')
2893-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2894+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2895+                             mockget_available_space):
2896         """ Write a new share. """
2897 
2898         def call_listdir(dirname):
2899hunk ./src/allmydata/test/test_backends.py 148
2900 
2901         mockmkdir.side_effect = call_mkdir
2902 
2903+        def call_get_available_space(storedir, reserved_space):
2904+            self.failUnlessReallyEqual(storedir, tempdir)
2905+            return 1
2906+
2907+        mockget_available_space.side_effect = call_get_available_space
2908+
2909         class MockFile:
2910             def __init__(self):
2911                 self.buffer = ''
2912hunk ./src/allmydata/test/test_backends.py 188
2913         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2914         bs[0].remote_write(0, 'a')
2915         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2916-       
2917+
2918+        # What happens when there's not enough space for the client's request?
2919+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2920+
2921         # Now test the allocated_size method.
2922         spaceint = self.s.allocated_size()
2923         self.failUnlessReallyEqual(spaceint, 1)
2924}
2925[checkpoint10
2926wilcoxjg@gmail.com**20110707172049
2927 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2928] {
2929hunk ./src/allmydata/test/test_backends.py 20
2930 # The following share file contents were generated with
2931 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2932 # with share data == 'a'.
2933-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2934+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2935+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2936+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2937 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2938 
2939hunk ./src/allmydata/test/test_backends.py 25
2940+testnodeid = 'testnodeidxxxxxxxxxx'
2941 tempdir = 'teststoredir'
2942 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2943 sharefname = os.path.join(sharedirname, '0')
2944hunk ./src/allmydata/test/test_backends.py 37
2945 
2946 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2947     def setUp(self):
2948-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2949+        self.s = StorageServer(testnodeid, backend=NullCore())
2950 
2951     @mock.patch('os.mkdir')
2952     @mock.patch('__builtin__.open')
2953hunk ./src/allmydata/test/test_backends.py 99
2954         mockmkdir.side_effect = call_mkdir
2955 
2956         # Now begin the test.
2957-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2958+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2959 
2960         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2961 
2962hunk ./src/allmydata/test/test_backends.py 119
2963 
2964         mockopen.side_effect = call_open
2965         testbackend = DASCore(tempdir, expiration_policy)
2966-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2967-   
2968+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2969+       
2970+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2971     @mock.patch('allmydata.util.fileutil.get_available_space')
2972     @mock.patch('time.time')
2973     @mock.patch('os.mkdir')
2974hunk ./src/allmydata/test/test_backends.py 129
2975     @mock.patch('os.listdir')
2976     @mock.patch('os.path.isdir')
2977     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2978-                             mockget_available_space):
2979+                             mockget_available_space, mockget_shares):
2980         """ Write a new share. """
2981 
2982         def call_listdir(dirname):
2983hunk ./src/allmydata/test/test_backends.py 139
2984         mocklistdir.side_effect = call_listdir
2985 
2986         def call_isdir(dirname):
2987+            #XXX Should there be any other tests here?
2988             self.failUnlessReallyEqual(dirname, sharedirname)
2989             return True
2990 
2991hunk ./src/allmydata/test/test_backends.py 159
2992 
2993         mockget_available_space.side_effect = call_get_available_space
2994 
2995+        mocktime.return_value = 0
2996+        class MockShare:
2997+            def __init__(self):
2998+                self.shnum = 1
2999+               
3000+            def add_or_renew_lease(elf, lease_info):
3001+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3002+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3003+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3004+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3005+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3006+               
3007+
3008+        share = MockShare()
3009+        def call_get_shares(storageindex):
3010+            return [share]
3011+
3012+        mockget_shares.side_effect = call_get_shares
3013+
3014         class MockFile:
3015             def __init__(self):
3016                 self.buffer = ''
3017hunk ./src/allmydata/test/test_backends.py 199
3018             def tell(self):
3019                 return self.pos
3020 
3021-        mocktime.return_value = 0
3022 
3023         sharefile = MockFile()
3024         def call_open(fname, mode):
3025}
3026[jacp 11
3027wilcoxjg@gmail.com**20110708213919
3028 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3029] {
3030hunk ./src/allmydata/storage/backends/das/core.py 144
3031         self.incomingdir = os.path.join(sharedir, 'incoming')
3032         si_dir = storage_index_to_dir(storageindex)
3033         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3034+        #XXX  self.fname and self.finalhome need to be resolve/merged.
3035         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3036         if create:
3037             # touch the file, so later callers will see that we're working on
3038hunk ./src/allmydata/storage/backends/das/core.py 208
3039         pass
3040         
3041     def stat(self):
3042-        return os.stat(self.finalhome)[stat.ST_SIZE]
3043+        return os.stat(self.finalhome).st_size
3044 
3045     def get_shnum(self):
3046         return self.shnum
3047hunk ./src/allmydata/storage/immutable.py 44
3048     def remote_close(self):
3049         precondition(not self.closed)
3050         start = time.time()
3051+
3052         self._sharefile.close()
3053hunk ./src/allmydata/storage/immutable.py 46
3054+        filelen = self._sharefile.stat()
3055         self._sharefile = None
3056hunk ./src/allmydata/storage/immutable.py 48
3057+
3058         self.closed = True
3059         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3060 
3061hunk ./src/allmydata/storage/immutable.py 52
3062-        filelen = self._sharefile.stat()
3063         self.ss.bucket_writer_closed(self, filelen)
3064         self.ss.add_latency("close", time.time() - start)
3065         self.ss.count("close")
3066hunk ./src/allmydata/storage/server.py 220
3067 
3068         for shnum in (sharenums - alreadygot):
3069             if (not limited) or (remaining_space >= max_space_per_bucket):
3070-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3071-                                                     max_space_per_bucket, lease_info, canary)
3072+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3073                 bucketwriters[shnum] = bw
3074                 self._active_writers[bw] = 1
3075                 if limited:
3076hunk ./src/allmydata/test/test_backends.py 20
3077 # The following share file contents were generated with
3078 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3079 # with share data == 'a'.
3080-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3081-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3082+renew_secret  = 'x'*32
3083+cancel_secret = 'y'*32
3084 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3085 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3086 
3087hunk ./src/allmydata/test/test_backends.py 27
3088 testnodeid = 'testnodeidxxxxxxxxxx'
3089 tempdir = 'teststoredir'
3090-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3091-sharefname = os.path.join(sharedirname, '0')
3092+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3093+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3094+shareincomingname = os.path.join(sharedirincomingname, '0')
3095+sharefname = os.path.join(sharedirfinalname, '0')
3096+
3097 expiration_policy = {'enabled' : False,
3098                      'mode' : 'age',
3099                      'override_lease_duration' : None,
3100hunk ./src/allmydata/test/test_backends.py 123
3101         mockopen.side_effect = call_open
3102         testbackend = DASCore(tempdir, expiration_policy)
3103         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3104-       
3105+
3106+    @mock.patch('allmydata.util.fileutil.rename')
3107+    @mock.patch('allmydata.util.fileutil.make_dirs')
3108+    @mock.patch('os.path.exists')
3109+    @mock.patch('os.stat')
3110     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3111     @mock.patch('allmydata.util.fileutil.get_available_space')
3112     @mock.patch('time.time')
3113hunk ./src/allmydata/test/test_backends.py 136
3114     @mock.patch('os.listdir')
3115     @mock.patch('os.path.isdir')
3116     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3117-                             mockget_available_space, mockget_shares):
3118+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3119+                             mockmake_dirs, mockrename):
3120         """ Write a new share. """
3121 
3122         def call_listdir(dirname):
3123hunk ./src/allmydata/test/test_backends.py 141
3124-            self.failUnlessReallyEqual(dirname, sharedirname)
3125+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3126             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3127 
3128         mocklistdir.side_effect = call_listdir
3129hunk ./src/allmydata/test/test_backends.py 148
3130 
3131         def call_isdir(dirname):
3132             #XXX Should there be any other tests here?
3133-            self.failUnlessReallyEqual(dirname, sharedirname)
3134+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3135             return True
3136 
3137         mockisdir.side_effect = call_isdir
3138hunk ./src/allmydata/test/test_backends.py 154
3139 
3140         def call_mkdir(dirname, permissions):
3141-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3142+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3143                 self.fail()
3144             else:
3145                 return True
3146hunk ./src/allmydata/test/test_backends.py 208
3147                 return self.pos
3148 
3149 
3150-        sharefile = MockFile()
3151+        fobj = MockFile()
3152         def call_open(fname, mode):
3153             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3154hunk ./src/allmydata/test/test_backends.py 211
3155-            return sharefile
3156+            return fobj
3157 
3158         mockopen.side_effect = call_open
3159 
3160hunk ./src/allmydata/test/test_backends.py 215
3161+        def call_make_dirs(dname):
3162+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3163+           
3164+        mockmake_dirs.side_effect = call_make_dirs
3165+
3166+        def call_rename(src, dst):
3167+           self.failUnlessReallyEqual(src, shareincomingname)
3168+           self.failUnlessReallyEqual(dst, sharefname)
3169+           
3170+        mockrename.side_effect = call_rename
3171+
3172+        def call_exists(fname):
3173+            self.failUnlessReallyEqual(fname, sharefname)
3174+
3175+        mockexists.side_effect = call_exists
3176+
3177         # Now begin the test.
3178         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3179         bs[0].remote_write(0, 'a')
3180hunk ./src/allmydata/test/test_backends.py 234
3181-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3182+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3183+        spaceint = self.s.allocated_size()
3184+        self.failUnlessReallyEqual(spaceint, 1)
3185+
3186+        bs[0].remote_close()
3187 
3188         # What happens when there's not enough space for the client's request?
3189hunk ./src/allmydata/test/test_backends.py 241
3190-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3191+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3192 
3193         # Now test the allocated_size method.
3194hunk ./src/allmydata/test/test_backends.py 244
3195-        spaceint = self.s.allocated_size()
3196-        self.failUnlessReallyEqual(spaceint, 1)
3197+        #self.failIf(mockexists.called, mockexists.call_args_list)
3198+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3199+        #self.failIf(mockrename.called, mockrename.call_args_list)
3200+        #self.failIf(mockstat.called, mockstat.call_args_list)
3201 
3202     @mock.patch('os.path.exists')
3203     @mock.patch('os.path.getsize')
3204}
3205[checkpoint12 testing correct behavior with regard to incoming and final
3206wilcoxjg@gmail.com**20110710191915
3207 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3208] {
3209hunk ./src/allmydata/storage/backends/das/core.py 74
3210         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3211         self.lease_checker.setServiceParent(self)
3212 
3213+    def get_incoming(self, storageindex):
3214+        return set((1,))
3215+
3216     def get_available_space(self):
3217         if self.readonly:
3218             return 0
3219hunk ./src/allmydata/storage/server.py 77
3220         """Return a dict, indexed by category, that contains a dict of
3221         latency numbers for each category. If there are sufficient samples
3222         for unambiguous interpretation, each dict will contain the
3223-        following keys: mean, 01_0_percentile, 10_0_percentile,
3224+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3225         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3226         99_0_percentile, 99_9_percentile.  If there are insufficient
3227         samples for a given percentile to be interpreted unambiguously
3228hunk ./src/allmydata/storage/server.py 120
3229 
3230     def get_stats(self):
3231         # remember: RIStatsProvider requires that our return dict
3232-        # contains numeric values.
3233+        # contains numeric, or None values.
3234         stats = { 'storage_server.allocated': self.allocated_size(), }
3235         stats['storage_server.reserved_space'] = self.reserved_space
3236         for category,ld in self.get_latencies().items():
3237hunk ./src/allmydata/storage/server.py 185
3238         start = time.time()
3239         self.count("allocate")
3240         alreadygot = set()
3241+        incoming = set()
3242         bucketwriters = {} # k: shnum, v: BucketWriter
3243 
3244         si_s = si_b2a(storage_index)
3245hunk ./src/allmydata/storage/server.py 219
3246             alreadygot.add(share.shnum)
3247             share.add_or_renew_lease(lease_info)
3248 
3249-        for shnum in (sharenums - alreadygot):
3250+        # Fill incoming with all of the shares that are incoming; use a set operation since there's no need to operate on individual shares.
3251+        incoming = self.backend.get_incoming(storageindex)
3252+
3253+        for shnum in ((sharenums - alreadygot) - incoming):
3254             if (not limited) or (remaining_space >= max_space_per_bucket):
3255                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3256                 bucketwriters[shnum] = bw
3257hunk ./src/allmydata/storage/server.py 229
3258                 self._active_writers[bw] = 1
3259                 if limited:
3260                     remaining_space -= max_space_per_bucket
3261-
3262-        #XXX We SHOULD DOCUMENT LATER.
3263+            else:
3264+                # Bummer: not enough space to accept this share.
3265+                pass
3266 
3267         self.add_latency("allocate", time.time() - start)
3268         return alreadygot, bucketwriters
3269hunk ./src/allmydata/storage/server.py 323
3270         self.add_latency("get", time.time() - start)
3271         return bucketreaders
3272 
3273-    def get_leases(self, storage_index):
3274+    def remote_get_incoming(self, storageindex):
3275+        incoming_share_set = self.backend.get_incoming(storageindex)
3276+        return incoming_share_set
3277+
3278+    def get_leases(self, storageindex):
3279         """Provide an iterator that yields all of the leases attached to this
3280         bucket. Each lease is returned as a LeaseInfo instance.
3281 
3282hunk ./src/allmydata/storage/server.py 337
3283         # since all shares get the same lease data, we just grab the leases
3284         # from the first share
3285         try:
3286-            shnum, filename = self._get_shares(storage_index).next()
3287+            shnum, filename = self._get_shares(storageindex).next()
3288             sf = ShareFile(filename)
3289             return sf.get_leases()
3290         except StopIteration:
3291hunk ./src/allmydata/test/test_backends.py 182
3292 
3293         share = MockShare()
3294         def call_get_shares(storageindex):
3295-            return [share]
3296+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3297+            return []#share]
3298 
3299         mockget_shares.side_effect = call_get_shares
3300 
3301hunk ./src/allmydata/test/test_backends.py 222
3302         mockmake_dirs.side_effect = call_make_dirs
3303 
3304         def call_rename(src, dst):
3305-           self.failUnlessReallyEqual(src, shareincomingname)
3306-           self.failUnlessReallyEqual(dst, sharefname)
3307+            self.failUnlessReallyEqual(src, shareincomingname)
3308+            self.failUnlessReallyEqual(dst, sharefname)
3309             
3310         mockrename.side_effect = call_rename
3311 
3312hunk ./src/allmydata/test/test_backends.py 233
3313         mockexists.side_effect = call_exists
3314 
3315         # Now begin the test.
3316+
3317+        # XXX (0) ???  Fail unless something is not properly set-up?
3318         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3319hunk ./src/allmydata/test/test_backends.py 236
3320+
3321+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3322+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3323+
3324+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3325+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3326+        # with the same si, until BucketWriter.remote_close() has been called.
3327+        # self.failIf(bsa)
3328+
3329+        # XXX (3) Inspect final and fail unless there's nothing there.
3330         bs[0].remote_write(0, 'a')
3331hunk ./src/allmydata/test/test_backends.py 247
3332+        # XXX (4a) Inspect final and fail unless share 0 is there.
3333+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3334         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3335         spaceint = self.s.allocated_size()
3336         self.failUnlessReallyEqual(spaceint, 1)
3337hunk ./src/allmydata/test/test_backends.py 253
3338 
3339+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3340         bs[0].remote_close()
3341 
3342         # What happens when there's not enough space for the client's request?
3343hunk ./src/allmydata/test/test_backends.py 260
3344         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3345 
3346         # Now test the allocated_size method.
3347-        #self.failIf(mockexists.called, mockexists.call_args_list)
3348+        # self.failIf(mockexists.called, mockexists.call_args_list)
3349         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3350         #self.failIf(mockrename.called, mockrename.call_args_list)
3351         #self.failIf(mockstat.called, mockstat.call_args_list)
3352}
3353[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3354wilcoxjg@gmail.com**20110710195139
3355 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3356] {
3357hunk ./src/allmydata/storage/server.py 220
3358             share.add_or_renew_lease(lease_info)
3359 
3360         # Fill incoming with all of the shares that are incoming; use a set operation since there's no need to operate on individual shares.
3361-        incoming = self.backend.get_incoming(storageindex)
3362+        incoming = self.backend.get_incoming(storage_index)
3363 
3364         for shnum in ((sharenums - alreadygot) - incoming):
3365             if (not limited) or (remaining_space >= max_space_per_bucket):
3366hunk ./src/allmydata/storage/server.py 323
3367         self.add_latency("get", time.time() - start)
3368         return bucketreaders
3369 
3370-    def remote_get_incoming(self, storageindex):
3371-        incoming_share_set = self.backend.get_incoming(storageindex)
3372+    def remote_get_incoming(self, storage_index):
3373+        incoming_share_set = self.backend.get_incoming(storage_index)
3374         return incoming_share_set
3375 
3376hunk ./src/allmydata/storage/server.py 327
3377-    def get_leases(self, storageindex):
3378+    def get_leases(self, storage_index):
3379         """Provide an iterator that yields all of the leases attached to this
3380         bucket. Each lease is returned as a LeaseInfo instance.
3381 
3382hunk ./src/allmydata/storage/server.py 337
3383         # since all shares get the same lease data, we just grab the leases
3384         # from the first share
3385         try:
3386-            shnum, filename = self._get_shares(storageindex).next()
3387+            shnum, filename = self._get_shares(storage_index).next()
3388             sf = ShareFile(filename)
3389             return sf.get_leases()
3390         except StopIteration:
3391replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3392}
3393[adding comments to clarify what I'm about to do.
3394wilcoxjg@gmail.com**20110710220623
3395 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3396] {
3397hunk ./src/allmydata/storage/backends/das/core.py 8
3398 
3399 import os, re, weakref, struct, time
3400 
3401-from foolscap.api import Referenceable
3402+#from foolscap.api import Referenceable
3403 from twisted.application import service
3404 
3405 from zope.interface import implements
3406hunk ./src/allmydata/storage/backends/das/core.py 12
3407-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3408+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3409 from allmydata.util import fileutil, idlib, log, time_format
3410 import allmydata # for __full_version__
3411 
3412hunk ./src/allmydata/storage/server.py 219
3413             alreadygot.add(share.shnum)
3414             share.add_or_renew_lease(lease_info)
3415 
3416-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3417+        # fill incoming with all shares that are incoming use a set operation
3418+        # since there's no need to operate on individual pieces
3419         incoming = self.backend.get_incoming(storageindex)
3420 
3421         for shnum in ((sharenums - alreadygot) - incoming):
3422hunk ./src/allmydata/test/test_backends.py 245
3423         # with the same si, until BucketWriter.remote_close() has been called.
3424         # self.failIf(bsa)
3425 
3426-        # XXX (3) Inspect final and fail unless there's nothing there.
3427         bs[0].remote_write(0, 'a')
3428hunk ./src/allmydata/test/test_backends.py 246
3429-        # XXX (4a) Inspect final and fail unless share 0 is there.
3430-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3431         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3432         spaceint = self.s.allocated_size()
3433         self.failUnlessReallyEqual(spaceint, 1)
3434hunk ./src/allmydata/test/test_backends.py 250
3435 
3436-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3437+        # XXX (3) Inspect final and fail unless there's nothing there.
3438         bs[0].remote_close()
3439hunk ./src/allmydata/test/test_backends.py 252
3440+        # XXX (4a) Inspect final and fail unless share 0 is there.
3441+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3442 
3443         # What happens when there's not enough space for the client's request?
3444         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3445}
3446[branching back, no longer attempting to mock inside TestServerFSBackend
3447wilcoxjg@gmail.com**20110711190849
3448 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3449] {
3450hunk ./src/allmydata/storage/backends/das/core.py 75
3451         self.lease_checker.setServiceParent(self)
3452 
3453     def get_incoming(self, storageindex):
3454-        return set((1,))
3455-
3456-    def get_available_space(self):
3457-        if self.readonly:
3458-            return 0
3459-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3460+        """Return the set of incoming shnums."""
3461+        return set(os.listdir(self.incomingdir))
3462 
3463     def get_shares(self, storage_index):
3464         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3465hunk ./src/allmydata/storage/backends/das/core.py 90
3466             # Commonly caused by there being no shares at all.
3467             pass
3468         
3469+    def get_available_space(self):
3470+        if self.readonly:
3471+            return 0
3472+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3473+
3474     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3475         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3476         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3477hunk ./src/allmydata/test/test_backends.py 27
3478 
3479 testnodeid = 'testnodeidxxxxxxxxxx'
3480 tempdir = 'teststoredir'
3481-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3482-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3483+basedir = os.path.join(tempdir, 'shares')
3484+baseincdir = os.path.join(basedir, 'incoming')
3485+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3486+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3487 shareincomingname = os.path.join(sharedirincomingname, '0')
3488 sharefname = os.path.join(sharedirfinalname, '0')
3489 
3490hunk ./src/allmydata/test/test_backends.py 142
3491                              mockmake_dirs, mockrename):
3492         """ Write a new share. """
3493 
3494-        def call_listdir(dirname):
3495-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3496-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3497-
3498-        mocklistdir.side_effect = call_listdir
3499-
3500-        def call_isdir(dirname):
3501-            #XXX Should there be any other tests here?
3502-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3503-            return True
3504-
3505-        mockisdir.side_effect = call_isdir
3506-
3507-        def call_mkdir(dirname, permissions):
3508-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3509-                self.Fail
3510-            else:
3511-                return True
3512-
3513-        mockmkdir.side_effect = call_mkdir
3514-
3515-        def call_get_available_space(storedir, reserved_space):
3516-            self.failUnlessReallyEqual(storedir, tempdir)
3517-            return 1
3518-
3519-        mockget_available_space.side_effect = call_get_available_space
3520-
3521-        mocktime.return_value = 0
3522         class MockShare:
3523             def __init__(self):
3524                 self.shnum = 1
3525hunk ./src/allmydata/test/test_backends.py 152
3526                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3527                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3528                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3529-               
3530 
3531         share = MockShare()
3532hunk ./src/allmydata/test/test_backends.py 154
3533-        def call_get_shares(storageindex):
3534-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3535-            return []#share]
3536-
3537-        mockget_shares.side_effect = call_get_shares
3538 
3539         class MockFile:
3540             def __init__(self):
3541hunk ./src/allmydata/test/test_backends.py 176
3542             def tell(self):
3543                 return self.pos
3544 
3545-
3546         fobj = MockFile()
3547hunk ./src/allmydata/test/test_backends.py 177
3548+
3549+        directories = {}
3550+        def call_listdir(dirname):
3551+            if dirname not in directories:
3552+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3553+            else:
3554+                return directories[dirname].get_contents()
3555+
3556+        mocklistdir.side_effect = call_listdir
3557+
3558+        class MockDir:
3559+            def __init__(self, dirname):
3560+                self.name = dirname
3561+                self.contents = []
3562+   
3563+            def get_contents(self):
3564+                return self.contents
3565+
3566+        def call_isdir(dirname):
3567+            #XXX Should there be any other tests here?
3568+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3569+            return True
3570+
3571+        mockisdir.side_effect = call_isdir
3572+
3573+        def call_mkdir(dirname, permissions):
3574+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3575+                self.Fail
3576+            if dirname in directories:
3577+                raise OSError(17, "File exists: '%s'" % dirname)
3578+                self.Fail
3579+            elif dirname not in directories:
3580+                directories[dirname] = MockDir(dirname)
3581+                return True
3582+
3583+        mockmkdir.side_effect = call_mkdir
3584+
3585+        def call_get_available_space(storedir, reserved_space):
3586+            self.failUnlessReallyEqual(storedir, tempdir)
3587+            return 1
3588+
3589+        mockget_available_space.side_effect = call_get_available_space
3590+
3591+        mocktime.return_value = 0
3592+        def call_get_shares(storageindex):
3593+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3594+            return []#share]
3595+
3596+        mockget_shares.side_effect = call_get_shares
3597+
3598         def call_open(fname, mode):
3599             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3600             return fobj
3601}
3602[checkpoint12 TestServerFSBackend no longer mocks filesystem
3603wilcoxjg@gmail.com**20110711193357
3604 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3605] {
3606hunk ./src/allmydata/storage/backends/das/core.py 23
3607      create_mutable_sharefile
3608 from allmydata.storage.immutable import BucketWriter, BucketReader
3609 from allmydata.storage.crawler import FSBucketCountingCrawler
3610+from allmydata.util.hashutil import constant_time_compare
3611 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3612 
3613 from zope.interface import implements
3614hunk ./src/allmydata/storage/backends/das/core.py 28
3615 
3616+# storage/
3617+# storage/shares/incoming
3618+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3619+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3620+# storage/shares/$START/$STORAGEINDEX
3621+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3622+
3623+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3624+# base-32 chars).
3625 # $SHARENUM matches this regex:
3626 NUM_RE=re.compile("^[0-9]+$")
3627 
3628hunk ./src/allmydata/test/test_backends.py 126
3629         testbackend = DASCore(tempdir, expiration_policy)
3630         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3631 
3632-    @mock.patch('allmydata.util.fileutil.rename')
3633-    @mock.patch('allmydata.util.fileutil.make_dirs')
3634-    @mock.patch('os.path.exists')
3635-    @mock.patch('os.stat')
3636-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3637-    @mock.patch('allmydata.util.fileutil.get_available_space')
3638     @mock.patch('time.time')
3639hunk ./src/allmydata/test/test_backends.py 127
3640-    @mock.patch('os.mkdir')
3641-    @mock.patch('__builtin__.open')
3642-    @mock.patch('os.listdir')
3643-    @mock.patch('os.path.isdir')
3644-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3645-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3646-                             mockmake_dirs, mockrename):
3647+    def test_write_share(self, mocktime):
3648         """ Write a new share. """
3649 
3650         class MockShare:
3651hunk ./src/allmydata/test/test_backends.py 143
3652 
3653         share = MockShare()
3654 
3655-        class MockFile:
3656-            def __init__(self):
3657-                self.buffer = ''
3658-                self.pos = 0
3659-            def write(self, instring):
3660-                begin = self.pos
3661-                padlen = begin - len(self.buffer)
3662-                if padlen > 0:
3663-                    self.buffer += '\x00' * padlen
3664-                end = self.pos + len(instring)
3665-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3666-                self.pos = end
3667-            def close(self):
3668-                pass
3669-            def seek(self, pos):
3670-                self.pos = pos
3671-            def read(self, numberbytes):
3672-                return self.buffer[self.pos:self.pos+numberbytes]
3673-            def tell(self):
3674-                return self.pos
3675-
3676-        fobj = MockFile()
3677-
3678-        directories = {}
3679-        def call_listdir(dirname):
3680-            if dirname not in directories:
3681-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3682-            else:
3683-                return directories[dirname].get_contents()
3684-
3685-        mocklistdir.side_effect = call_listdir
3686-
3687-        class MockDir:
3688-            def __init__(self, dirname):
3689-                self.name = dirname
3690-                self.contents = []
3691-   
3692-            def get_contents(self):
3693-                return self.contents
3694-
3695-        def call_isdir(dirname):
3696-            #XXX Should there be any other tests here?
3697-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3698-            return True
3699-
3700-        mockisdir.side_effect = call_isdir
3701-
3702-        def call_mkdir(dirname, permissions):
3703-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3704-                self.Fail
3705-            if dirname in directories:
3706-                raise OSError(17, "File exists: '%s'" % dirname)
3707-                self.Fail
3708-            elif dirname not in directories:
3709-                directories[dirname] = MockDir(dirname)
3710-                return True
3711-
3712-        mockmkdir.side_effect = call_mkdir
3713-
3714-        def call_get_available_space(storedir, reserved_space):
3715-            self.failUnlessReallyEqual(storedir, tempdir)
3716-            return 1
3717-
3718-        mockget_available_space.side_effect = call_get_available_space
3719-
3720-        mocktime.return_value = 0
3721-        def call_get_shares(storageindex):
3722-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3723-            return []#share]
3724-
3725-        mockget_shares.side_effect = call_get_shares
3726-
3727-        def call_open(fname, mode):
3728-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3729-            return fobj
3730-
3731-        mockopen.side_effect = call_open
3732-
3733-        def call_make_dirs(dname):
3734-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3735-           
3736-        mockmake_dirs.side_effect = call_make_dirs
3737-
3738-        def call_rename(src, dst):
3739-            self.failUnlessReallyEqual(src, shareincomingname)
3740-            self.failUnlessReallyEqual(dst, sharefname)
3741-           
3742-        mockrename.side_effect = call_rename
3743-
3744-        def call_exists(fname):
3745-            self.failUnlessReallyEqual(fname, sharefname)
3746-
3747-        mockexists.side_effect = call_exists
3748-
3749         # Now begin the test.
3750 
3751         # XXX (0) ???  Fail unless something is not properly set-up?
3752}
3753[JACP
3754wilcoxjg@gmail.com**20110711194407
3755 Ignore-this: b54745de777c4bb58d68d708f010bbb
3756] {
3757hunk ./src/allmydata/storage/backends/das/core.py 86
3758 
3759     def get_incoming(self, storageindex):
3760         """Return the set of incoming shnums."""
3761-        return set(os.listdir(self.incomingdir))
3762+        try:
3763+            incominglist = os.listdir(self.incomingdir)
3764+            print "incominglist: ", incominglist
3765+            return set(incominglist)
3766+        except OSError:
3767+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3768+            pass
3769 
3770     def get_shares(self, storage_index):
3771         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3772hunk ./src/allmydata/storage/server.py 17
3773 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3774      create_mutable_sharefile
3775 
3776-# storage/
3777-# storage/shares/incoming
3778-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3779-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3780-# storage/shares/$START/$STORAGEINDEX
3781-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3782-
3783-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3784-# base-32 chars).
3785-
3786-
3787 class StorageServer(service.MultiService, Referenceable):
3788     implements(RIStorageServer, IStatsProducer)
3789     name = 'storage'
3790}
3791[testing get incoming
3792wilcoxjg@gmail.com**20110711210224
3793 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3794] {
3795hunk ./src/allmydata/storage/backends/das/core.py 87
3796     def get_incoming(self, storageindex):
3797         """Return the set of incoming shnums."""
3798         try:
3799-            incominglist = os.listdir(self.incomingdir)
3800+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3801+            incominglist = os.listdir(incomingsharesdir)
3802             print "incominglist: ", incominglist
3803             return set(incominglist)
3804         except OSError:
3805hunk ./src/allmydata/storage/backends/das/core.py 92
3806-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3807-            pass
3808-
3809+            # XXX I'd like to make this more specific. If there are no shares at all.
3810+            return set()
3811+           
3812     def get_shares(self, storage_index):
3813         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3814         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3815hunk ./src/allmydata/test/test_backends.py 149
3816         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3817 
3818         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3819+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3820         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3821 
3822hunk ./src/allmydata/test/test_backends.py 152
3823-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3824         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3825         # with the same si, until BucketWriter.remote_close() has been called.
3826         # self.failIf(bsa)
3827}
3828[ImmutableShareFile does not know its StorageIndex
3829wilcoxjg@gmail.com**20110711211424
3830 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3831] {
3832hunk ./src/allmydata/storage/backends/das/core.py 112
3833             return 0
3834         return fileutil.get_available_space(self.storedir, self.reserved_space)
3835 
3836-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3837-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3838+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3839+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3840+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3841+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3842         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3843         return bw
3844 
3845hunk ./src/allmydata/storage/backends/das/core.py 155
3846     LEASE_SIZE = struct.calcsize(">L32s32sL")
3847     sharetype = "immutable"
3848 
3849-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3850+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3851         """ If max_size is not None then I won't allow more than
3852         max_size to be written to me. If create=True then max_size
3853         must not be None. """
3854}
3855[get_incoming correctly reports the 0 share after it has arrived
3856wilcoxjg@gmail.com**20110712025157
3857 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3858] {
3859hunk ./src/allmydata/storage/backends/das/core.py 1
3860+import os, re, weakref, struct, time, stat
3861+
3862 from allmydata.interfaces import IStorageBackend
3863 from allmydata.storage.backends.base import Backend
3864 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3865hunk ./src/allmydata/storage/backends/das/core.py 8
3866 from allmydata.util.assertutil import precondition
3867 
3868-import os, re, weakref, struct, time
3869-
3870 #from foolscap.api import Referenceable
3871 from twisted.application import service
3872 
3873hunk ./src/allmydata/storage/backends/das/core.py 89
3874         try:
3875             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3876             incominglist = os.listdir(incomingsharesdir)
3877-            print "incominglist: ", incominglist
3878-            return set(incominglist)
3879+            incomingshnums = [int(x) for x in incominglist]
3880+            return set(incomingshnums)
3881         except OSError:
3882             # XXX I'd like to make this more specific. If there are no shares at all.
3883             return set()
3884hunk ./src/allmydata/storage/backends/das/core.py 113
3885         return fileutil.get_available_space(self.storedir, self.reserved_space)
3886 
3887     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3888-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3889-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3890-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3891+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3892+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3893+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3894         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3895         return bw
3896 
3897hunk ./src/allmydata/storage/backends/das/core.py 160
3898         max_size to be written to me. If create=True then max_size
3899         must not be None. """
3900         precondition((max_size is not None) or (not create), max_size, create)
3901-        self.shnum = shnum
3902-        self.storage_index = storageindex
3903-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3904         self._max_size = max_size
3905hunk ./src/allmydata/storage/backends/das/core.py 161
3906-        self.incomingdir = os.path.join(sharedir, 'incoming')
3907-        si_dir = storage_index_to_dir(storageindex)
3908-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3909-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3910-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3911+        self.incominghome = incominghome
3912+        self.finalhome = finalhome
3913         if create:
3914             # touch the file, so later callers will see that we're working on
3915             # it. Also construct the metadata.
3916hunk ./src/allmydata/storage/backends/das/core.py 166
3917-            assert not os.path.exists(self.fname)
3918-            fileutil.make_dirs(os.path.dirname(self.fname))
3919-            f = open(self.fname, 'wb')
3920+            assert not os.path.exists(self.finalhome)
3921+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3922+            f = open(self.incominghome, 'wb')
3923             # The second field -- the four-byte share data length -- is no
3924             # longer used as of Tahoe v1.3.0, but we continue to write it in
3925             # there in case someone downgrades a storage server from >=
3926hunk ./src/allmydata/storage/backends/das/core.py 183
3927             self._lease_offset = max_size + 0x0c
3928             self._num_leases = 0
3929         else:
3930-            f = open(self.fname, 'rb')
3931-            filesize = os.path.getsize(self.fname)
3932+            f = open(self.finalhome, 'rb')
3933+            filesize = os.path.getsize(self.finalhome)
3934             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3935             f.close()
3936             if version != 1:
3937hunk ./src/allmydata/storage/backends/das/core.py 189
3938                 msg = "sharefile %s had version %d but we wanted 1" % \
3939-                      (self.fname, version)
3940+                      (self.finalhome, version)
3941                 raise UnknownImmutableContainerVersionError(msg)
3942             self._num_leases = num_leases
3943             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3944hunk ./src/allmydata/storage/backends/das/core.py 225
3945         pass
3946         
3947     def stat(self):
3948-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3949+        return os.stat(self.finalhome)[stat.ST_SIZE]
3950+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3951 
3952     def get_shnum(self):
3953         return self.shnum
3954hunk ./src/allmydata/storage/backends/das/core.py 232
3955 
3956     def unlink(self):
3957-        os.unlink(self.fname)
3958+        os.unlink(self.finalhome)
3959 
3960     def read_share_data(self, offset, length):
3961         precondition(offset >= 0)
3962hunk ./src/allmydata/storage/backends/das/core.py 239
3963         # Reads beyond the end of the data are truncated. Reads that start
3964         # beyond the end of the data return an empty string.
3965         seekpos = self._data_offset+offset
3966-        fsize = os.path.getsize(self.fname)
3967+        fsize = os.path.getsize(self.finalhome)
3968         actuallength = max(0, min(length, fsize-seekpos))
3969         if actuallength == 0:
3970             return ""
3971hunk ./src/allmydata/storage/backends/das/core.py 243
3972-        f = open(self.fname, 'rb')
3973+        f = open(self.finalhome, 'rb')
3974         f.seek(seekpos)
3975         return f.read(actuallength)
3976 
3977hunk ./src/allmydata/storage/backends/das/core.py 252
3978         precondition(offset >= 0, offset)
3979         if self._max_size is not None and offset+length > self._max_size:
3980             raise DataTooLargeError(self._max_size, offset, length)
3981-        f = open(self.fname, 'rb+')
3982+        f = open(self.incominghome, 'rb+')
3983         real_offset = self._data_offset+offset
3984         f.seek(real_offset)
3985         assert f.tell() == real_offset
3986hunk ./src/allmydata/storage/backends/das/core.py 279
3987 
3988     def get_leases(self):
3989         """Yields a LeaseInfo instance for all leases."""
3990-        f = open(self.fname, 'rb')
3991+        f = open(self.finalhome, 'rb')
3992         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3993         f.seek(self._lease_offset)
3994         for i in range(num_leases):
3995hunk ./src/allmydata/storage/backends/das/core.py 288
3996                 yield LeaseInfo().from_immutable_data(data)
3997 
3998     def add_lease(self, lease_info):
3999-        f = open(self.fname, 'rb+')
4000+        f = open(self.incominghome, 'rb+')
4001         num_leases = self._read_num_leases(f)
4002         self._write_lease_record(f, num_leases, lease_info)
4003         self._write_num_leases(f, num_leases+1)
4004hunk ./src/allmydata/storage/backends/das/core.py 301
4005                 if new_expire_time > lease.expiration_time:
4006                     # yes
4007                     lease.expiration_time = new_expire_time
4008-                    f = open(self.fname, 'rb+')
4009+                    f = open(self.finalhome, 'rb+')
4010                     self._write_lease_record(f, i, lease)
4011                     f.close()
4012                 return
4013hunk ./src/allmydata/storage/backends/das/core.py 336
4014             # the same order as they were added, so that if we crash while
4015             # doing this, we won't lose any non-cancelled leases.
4016             leases = [l for l in leases if l] # remove the cancelled leases
4017-            f = open(self.fname, 'rb+')
4018+            f = open(self.finalhome, 'rb+')
4019             for i,lease in enumerate(leases):
4020                 self._write_lease_record(f, i, lease)
4021             self._write_num_leases(f, len(leases))
4022hunk ./src/allmydata/storage/backends/das/core.py 344
4023             f.close()
4024         space_freed = self.LEASE_SIZE * num_leases_removed
4025         if not len(leases):
4026-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4027+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4028             self.unlink()
4029         return space_freed
4030hunk ./src/allmydata/test/test_backends.py 129
4031     @mock.patch('time.time')
4032     def test_write_share(self, mocktime):
4033         """ Write a new share. """
4034-
4035-        class MockShare:
4036-            def __init__(self):
4037-                self.shnum = 1
4038-               
4039-            def add_or_renew_lease(elf, lease_info):
4040-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4041-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4042-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4043-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4044-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4045-
4046-        share = MockShare()
4047-
4048         # Now begin the test.
4049 
4050         # XXX (0) ???  Fail unless something is not properly set-up?
4051hunk ./src/allmydata/test/test_backends.py 143
4052         # self.failIf(bsa)
4053 
4054         bs[0].remote_write(0, 'a')
4055-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4056+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4057         spaceint = self.s.allocated_size()
4058         self.failUnlessReallyEqual(spaceint, 1)
4059 
4060hunk ./src/allmydata/test/test_backends.py 161
4061         #self.failIf(mockrename.called, mockrename.call_args_list)
4062         #self.failIf(mockstat.called, mockstat.call_args_list)
4063 
4064+    def test_handle_incoming(self):
4065+        incomingset = self.s.backend.get_incoming('teststorage_index')
4066+        self.failUnlessReallyEqual(incomingset, set())
4067+
4068+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4069+       
4070+        incomingset = self.s.backend.get_incoming('teststorage_index')
4071+        self.failUnlessReallyEqual(incomingset, set((0,)))
4072+
4073+        bs[0].remote_close()
4074+        self.failUnlessReallyEqual(incomingset, set())
4075+
4076     @mock.patch('os.path.exists')
4077     @mock.patch('os.path.getsize')
4078     @mock.patch('__builtin__.open')
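The `@mock.patch` decorators restored here are the mechanism the whole test module relies on: intercepting filesystem calls so the code under test never touches a real disk. A minimal Python 3 sketch of the same idea (the patch itself targets Python 2's `__builtin__.open`; on Python 3 the target is `builtins.open`, and `read_config` is a hypothetical function for illustration):

```python
import io
from unittest import mock

def read_config(path):
    # stand-in for code under test that opens a file
    with open(path, 'rb') as f:
        return f.read()

# Replace the builtin open() with a mock that hands back an in-memory file,
# so no real filesystem access occurs.
with mock.patch('builtins.open',
                return_value=io.BytesIO(b'fake contents')) as mo:
    assert read_config('/no/such/file') == b'fake contents'
    mo.assert_called_once_with('/no/such/file', 'rb')
```

The same pattern extends to `os.path.exists`, `os.path.getsize`, and the other decorators used in these tests.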
4079hunk ./src/allmydata/test/test_backends.py 223
4080         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4081 
4082 
4083-
4084 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4085     @mock.patch('time.time')
4086     @mock.patch('os.mkdir')
4087hunk ./src/allmydata/test/test_backends.py 271
4088         DASCore('teststoredir', expiration_policy)
4089 
4090         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4091+
4092}
4093[jacp14
4094wilcoxjg@gmail.com**20110712061211
4095 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4096] {
4097hunk ./src/allmydata/storage/backends/das/core.py 95
4098             # XXX I'd like to make this more specific: the case where there are no shares at all.
4099             return set()
4100             
4101-    def get_shares(self, storage_index):
4102+    def get_shares(self, storageindex):
4103         """Yield the ImmutableShare objects that correspond to the passed storageindex."""
4104hunk ./src/allmydata/storage/backends/das/core.py 97
4105-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4106+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4107         try:
4108             for f in os.listdir(finalstoragedir):
4109                 if NUM_RE.match(f):
4110hunk ./src/allmydata/storage/backends/das/core.py 102
4111                     filename = os.path.join(finalstoragedir, f)
4112-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4113+                    yield ImmutableShare(filename, storageindex, f)
4114         except OSError:
4115             # Commonly caused by there being no shares at all.
4116             pass
4117hunk ./src/allmydata/storage/backends/das/core.py 115
4118     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4119         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4120         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4121-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4122+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4123         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4124         return bw
4125 
4126hunk ./src/allmydata/storage/backends/das/core.py 155
4127     LEASE_SIZE = struct.calcsize(">L32s32sL")
4128     sharetype = "immutable"
4129 
4130-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4131+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4132         """ If max_size is not None then I won't allow more than
4133         max_size to be written to me. If create=True then max_size
4134         must not be None. """
4135hunk ./src/allmydata/storage/backends/das/core.py 160
4136         precondition((max_size is not None) or (not create), max_size, create)
4137+        self.storageindex = storageindex
4138         self._max_size = max_size
4139         self.incominghome = incominghome
4140         self.finalhome = finalhome
4141hunk ./src/allmydata/storage/backends/das/core.py 164
4142+        self.shnum = shnum
4143         if create:
4144             # touch the file, so later callers will see that we're working on
4145             # it. Also construct the metadata.
4146hunk ./src/allmydata/storage/backends/das/core.py 212
4147             # their children to know when they should do the rmdir. This
4148             # approach is simpler, but relies on os.rmdir refusing to delete
4149             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4150+            #print "os.path.dirname(self.incominghome): "
4151+            #print os.path.dirname(self.incominghome)
4152             os.rmdir(os.path.dirname(self.incominghome))
4153             # we also delete the grandparent (prefix) directory, .../ab ,
4154             # again to avoid leaving directories lying around. This might
4155hunk ./src/allmydata/storage/immutable.py 93
4156     def __init__(self, ss, share):
4157         self.ss = ss
4158         self._share_file = share
4159-        self.storage_index = share.storage_index
4160+        self.storageindex = share.storageindex
4161         self.shnum = share.shnum
4162 
4163     def __repr__(self):
4164hunk ./src/allmydata/storage/immutable.py 98
4165         return "<%s %s %s>" % (self.__class__.__name__,
4166-                               base32.b2a_l(self.storage_index[:8], 60),
4167+                               base32.b2a_l(self.storageindex[:8], 60),
4168                                self.shnum)
4169 
4170     def remote_read(self, offset, length):
4171hunk ./src/allmydata/storage/immutable.py 110
4172 
4173     def remote_advise_corrupt_share(self, reason):
4174         return self.ss.remote_advise_corrupt_share("immutable",
4175-                                                   self.storage_index,
4176+                                                   self.storageindex,
4177                                                    self.shnum,
4178                                                    reason)
4179hunk ./src/allmydata/test/test_backends.py 20
4180 # The following share file contents were generated with
4181 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4182 # with share data == 'a'.
4183-renew_secret  = 'x'*32
4184-cancel_secret = 'y'*32
4185-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4186-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4187+shareversionnumber = '\x00\x00\x00\x01'
4188+sharedatalength = '\x00\x00\x00\x01'
4189+numberofleases = '\x00\x00\x00\x01'
4190+shareinputdata = 'a'
4191+ownernumber = '\x00\x00\x00\x00'
4192+renewsecret  = 'x'*32
4193+cancelsecret = 'y'*32
4194+expirationtime = '\x00(\xde\x80'
4195+nextlease = ''
4196+containerdata = shareversionnumber + sharedatalength + numberofleases
4197+client_data = shareinputdata + ownernumber + renewsecret + \
4198+    cancelsecret + expirationtime + nextlease
4199+share_data = containerdata + client_data
4200+
4201 
4202 testnodeid = 'testnodeidxxxxxxxxxx'
4203 tempdir = 'teststoredir'
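The named constants above decompose the v1 immutable share file that the old `share_file_data` string packed by hand: a 12-byte container header (version, data length, lease count) followed by the share data and one lease record. A sketch reconstructing the same 85-byte file (bytes literals, Python 3 style for illustration):

```python
import struct

# v1 immutable share container header
shareversionnumber = struct.pack(">L", 1)
sharedatalength = struct.pack(">L", 1)    # share data is the single byte 'a'
numberofleases = struct.pack(">L", 1)
containerdata = shareversionnumber + sharedatalength + numberofleases

# share data followed by one lease record
shareinputdata = b'a'
ownernumber = struct.pack(">L", 0)
renewsecret = b'x' * 32
cancelsecret = b'y' * 32
expirationtime = struct.pack(">L", 0x0028de80)

client_data = (shareinputdata + ownernumber + renewsecret +
               cancelsecret + expirationtime)
share_data = containerdata + client_data

assert len(containerdata) == 12          # three uint32 header fields
assert len(client_data) == 73            # 1 + 4 + 32 + 32 + 4
```

The 73-byte `client_data` length is what the later `read_share_data(0, 73)` assertion in this patch is checking against.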
4204hunk ./src/allmydata/test/test_backends.py 52
4205 
4206 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4207     def setUp(self):
4208-        self.s = StorageServer(testnodeid, backend=NullCore())
4209+        self.ss = StorageServer(testnodeid, backend=NullCore())
4210 
4211     @mock.patch('os.mkdir')
4212     @mock.patch('__builtin__.open')
4213hunk ./src/allmydata/test/test_backends.py 62
4214         """ Write a new share. """
4215 
4216         # Now begin the test.
4217-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4218+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4219         bs[0].remote_write(0, 'a')
4220         self.failIf(mockisdir.called)
4221         self.failIf(mocklistdir.called)
4222hunk ./src/allmydata/test/test_backends.py 133
4223                 _assert(False, "The tester code doesn't recognize this case.") 
4224 
4225         mockopen.side_effect = call_open
4226-        testbackend = DASCore(tempdir, expiration_policy)
4227-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4228+        self.backend = DASCore(tempdir, expiration_policy)
4229+        self.ss = StorageServer(testnodeid, self.backend)
4230+        self.ssinf = StorageServer(testnodeid, self.backend)
4231 
4232     @mock.patch('time.time')
4233     def test_write_share(self, mocktime):
4234hunk ./src/allmydata/test/test_backends.py 142
4235         """ Write a new share. """
4236         # Now begin the test.
4237 
4238-        # XXX (0) ???  Fail unless something is not properly set-up?
4239-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4240+        mocktime.return_value = 0
4241+        # Inspect incoming and fail unless it's empty.
4242+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4243+        self.failUnlessReallyEqual(incomingset, set())
4244+       
4245+        # Among other things, populate incoming with sharenum 0.
4246+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4247 
4248hunk ./src/allmydata/test/test_backends.py 150
4249-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4250-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4251-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4252+        # Inspect incoming and fail unless sharenum 0 is listed there.
4253+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4254+       
4255+        # Attempt to create a second share writer with the same share.
4256+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4257 
4258hunk ./src/allmydata/test/test_backends.py 156
4259-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4260+        # Show that no sharewriter results from a remote_allocate_buckets
4261         # with the same si, until BucketWriter.remote_close() has been called.
4262hunk ./src/allmydata/test/test_backends.py 158
4263-        # self.failIf(bsa)
4264+        self.failIf(bsa)
4265 
4266hunk ./src/allmydata/test/test_backends.py 160
4267+        # Write 'a' to shnum 0. Only tested together with close and read.
4268         bs[0].remote_write(0, 'a')
4269hunk ./src/allmydata/test/test_backends.py 162
4270-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4271-        spaceint = self.s.allocated_size()
4272+
4273+        # Test allocated size.
4274+        spaceint = self.ss.allocated_size()
4275         self.failUnlessReallyEqual(spaceint, 1)
4276 
4277         # XXX (3) Inspect final and fail unless there's nothing there.
4278hunk ./src/allmydata/test/test_backends.py 168
4279+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4280         bs[0].remote_close()
4281         # XXX (4a) Inspect final and fail unless share 0 is there.
4282hunk ./src/allmydata/test/test_backends.py 171
4283+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4284+        #contents = sharesinfinal[0].read_share_data(0,999)
4285+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4286         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4287 
4288         # What happens when there's not enough space for the client's request?
4289hunk ./src/allmydata/test/test_backends.py 177
4290-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4291+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4292 
4293         # Now test the allocated_size method.
4294         # self.failIf(mockexists.called, mockexists.call_args_list)
4295hunk ./src/allmydata/test/test_backends.py 185
4296         #self.failIf(mockrename.called, mockrename.call_args_list)
4297         #self.failIf(mockstat.called, mockstat.call_args_list)
4298 
4299-    def test_handle_incoming(self):
4300-        incomingset = self.s.backend.get_incoming('teststorage_index')
4301-        self.failUnlessReallyEqual(incomingset, set())
4302-
4303-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4304-       
4305-        incomingset = self.s.backend.get_incoming('teststorage_index')
4306-        self.failUnlessReallyEqual(incomingset, set((0,)))
4307-
4308-        bs[0].remote_close()
4309-        self.failUnlessReallyEqual(incomingset, set())
4310-
4311     @mock.patch('os.path.exists')
4312     @mock.patch('os.path.getsize')
4313     @mock.patch('__builtin__.open')
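The `test_handle_incoming` body removed here (its checks were folded into `test_write_share`) exercises a simple invariant: allocating a bucket registers its share number in the backend's incoming set, and closing the bucket removes it. A toy model of that bookkeeping, assuming a dict-of-sets backend (`IncomingTracker` is hypothetical, for illustration only):

```python
class IncomingTracker:
    """Toy model of the backend's incoming-share bookkeeping."""
    def __init__(self):
        self._incoming = {}  # storage index -> set of share numbers

    def get_incoming(self, storageindex):
        return self._incoming.get(storageindex, set())

    def allocate(self, storageindex, shnum):
        # remote_allocate_buckets adds the sharenum to incoming
        self._incoming.setdefault(storageindex, set()).add(shnum)

    def close(self, storageindex, shnum):
        # BucketWriter.remote_close moves the share to final storage
        self._incoming.get(storageindex, set()).discard(shnum)

t = IncomingTracker()
assert t.get_incoming('teststorage_index') == set()
t.allocate('teststorage_index', 0)
assert t.get_incoming('teststorage_index') == {0}
t.close('teststorage_index', 0)
assert t.get_incoming('teststorage_index') == set()
```

This is also the invariant behind the "no second sharewriter until remote_close" check earlier in the patch.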
4314hunk ./src/allmydata/test/test_backends.py 208
4315             self.failUnless('r' in mode, mode)
4316             self.failUnless('b' in mode, mode)
4317 
4318-            return StringIO(share_file_data)
4319+            return StringIO(share_data)
4320         mockopen.side_effect = call_open
4321 
4322hunk ./src/allmydata/test/test_backends.py 211
4323-        datalen = len(share_file_data)
4324+        datalen = len(share_data)
4325         def call_getsize(fname):
4326             self.failUnlessReallyEqual(fname, sharefname)
4327             return datalen
4328hunk ./src/allmydata/test/test_backends.py 223
4329         mockexists.side_effect = call_exists
4330 
4331         # Now begin the test.
4332-        bs = self.s.remote_get_buckets('teststorage_index')
4333+        bs = self.ss.remote_get_buckets('teststorage_index')
4334 
4335         self.failUnlessEqual(len(bs), 1)
4336hunk ./src/allmydata/test/test_backends.py 226
4337-        b = bs[0]
4338+        b = bs['0']
4339         # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
4340hunk ./src/allmydata/test/test_backends.py 228
4341-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4342+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4344         # If you try to read past the end you get as much data as is there.
4344hunk ./src/allmydata/test/test_backends.py 230
4345-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4346+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4347         # If you start reading past the end of the file you get the empty string.
4348         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4349 
4350}
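The final `remote_read` assertions in the patch rely on ordinary file-like clamping semantics: a read that extends past the end returns whatever data is there, and a read that starts beyond the end returns the empty string. A minimal illustration with `io.BytesIO` (a sketch of the behavior, not the storage server's implementation):

```python
import io

data = b'a' * 10
f = io.BytesIO(data)

def read_range(f, offset, length):
    # mirrors the clamping behavior the tests assert for remote_read
    f.seek(offset)
    return f.read(length)

assert read_range(f, 0, 10) == data    # exact read
assert read_range(f, 0, 30) == data    # past the end: as much data as is there
assert read_range(f, 11, 3) == b''     # start beyond the end: empty string
```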
4351
4352Context:
4353
4354[add Protovis.js-based download-status timeline visualization
4355Brian Warner <warner@lothar.com>**20110629222606
4356 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
4357 
4358 provide status overlap info on the webapi t=json output, add decode/decrypt
4359 rate tooltips, add zoomin/zoomout buttons
4360]
4361[add more download-status data, fix tests
4362Brian Warner <warner@lothar.com>**20110629222555
4363 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
4364]
4365[prepare for viz: improve DownloadStatus events
4366Brian Warner <warner@lothar.com>**20110629222542
4367 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
4368 
4369 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
4370]
4371[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
4372zooko@zooko.com**20110629185711
4373 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
4374]
4375[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
4376david-sarah@jacaranda.org**20110130235809
4377 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
4378]
4379[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
4380david-sarah@jacaranda.org**20110626054124
4381 Ignore-this: abb864427a1b91bd10d5132b4589fd90
4382]
4383[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
4384david-sarah@jacaranda.org**20110623205528
4385 Ignore-this: c63e23146c39195de52fb17c7c49b2da
4386]
4387[Rename test_package_initialization.py to (much shorter) test_import.py .
4388Brian Warner <warner@lothar.com>**20110611190234
4389 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
4390 
4391 The former name was making my 'ls' listings hard to read, by forcing them
4392 down to just two columns.
4393]
4394[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
4395zooko@zooko.com**20110611163741
4396 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
4397 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
4398 fixes #1412
4399]
4400[wui: right-align the size column in the WUI
4401zooko@zooko.com**20110611153758
4402 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
4403 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
4404 fixes #1412
4405]
4406[docs: three minor fixes
4407zooko@zooko.com**20110610121656
4408 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
4409 CREDITS for arc for stats tweak
4410 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
4411 English usage tweak
4412]
4413[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
4414david-sarah@jacaranda.org**20110609223719
4415 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
4416]
4417[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
4418wilcoxjg@gmail.com**20110527120135
4419 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
4420 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
4421 NEWS.rst, stats.py: documentation of change to get_latencies
4422 stats.rst: now documents percentile modification in get_latencies
4423 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
4424 fixes #1392
4425]
4426[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
4427david-sarah@jacaranda.org**20110517011214
4428 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
4429]
4430[docs: convert NEWS to NEWS.rst and change all references to it.
4431david-sarah@jacaranda.org**20110517010255
4432 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
4433]
4434[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
4435david-sarah@jacaranda.org**20110512140559
4436 Ignore-this: 784548fc5367fac5450df1c46890876d
4437]
4438[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
4439david-sarah@jacaranda.org**20110130164923
4440 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
4441]
4442[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
4443zooko@zooko.com**20110128142006
4444 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
4445 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
4446]
4447[M-x whitespace-cleanup
4448zooko@zooko.com**20110510193653
4449 Ignore-this: dea02f831298c0f65ad096960e7df5c7
4450]
4451[docs: fix typo in running.rst, thanks to arch_o_median
4452zooko@zooko.com**20110510193633
4453 Ignore-this: ca06de166a46abbc61140513918e79e8
4454]
4455[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
4456david-sarah@jacaranda.org**20110204204902
4457 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
4458]
4459[relnotes.txt: forseeable -> foreseeable. refs #1342
4460david-sarah@jacaranda.org**20110204204116
4461 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
4462]
4463[replace remaining .html docs with .rst docs
4464zooko@zooko.com**20110510191650
4465 Ignore-this: d557d960a986d4ac8216d1677d236399
4466 Remove install.html (long since deprecated).
4467 Also replace some obsolete references to install.html with references to quickstart.rst.
4468 Fix some broken internal references within docs/historical/historical_known_issues.txt.
4469 Thanks to Ravi Pinjala and Patrick McDonald.
4470 refs #1227
4471]
4472[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
4473zooko@zooko.com**20110428055232
4474 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
4475]
4476[munin tahoe_files plugin: fix incorrect file count
4477francois@ctrlaltdel.ch**20110428055312
4478 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
4479 fixes #1391
4480]
4481[corrected "k must never be smaller than N" to "k must never be greater than N"
4482secorp@allmydata.org**20110425010308
4483 Ignore-this: 233129505d6c70860087f22541805eac
4484]
4485[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
4486david-sarah@jacaranda.org**20110411190738
4487 Ignore-this: 7847d26bc117c328c679f08a7baee519
4488]
4489[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
4490david-sarah@jacaranda.org**20110410155844
4491 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
4492]
4493[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
4494david-sarah@jacaranda.org**20110410155705
4495 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
4496]
4497[remove unused variable detected by pyflakes
4498zooko@zooko.com**20110407172231
4499 Ignore-this: 7344652d5e0720af822070d91f03daf9
4500]
4501[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
4502david-sarah@jacaranda.org**20110401202750
4503 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
4504]
4505[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
4506Brian Warner <warner@lothar.com>**20110325232511
4507 Ignore-this: d5307faa6900f143193bfbe14e0f01a
4508]
4509[control.py: remove all uses of s.get_serverid()
4510warner@lothar.com**20110227011203
4511 Ignore-this: f80a787953bd7fa3d40e828bde00e855
4512]
4513[web: remove some uses of s.get_serverid(), not all
4514warner@lothar.com**20110227011159
4515 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
4516]
4517[immutable/downloader/fetcher.py: remove all get_serverid() calls
4518warner@lothar.com**20110227011156
4519 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
4520]
4521[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
4522warner@lothar.com**20110227011153
4523 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
4524 
4525 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
4526 _shares_from_server dict was being popped incorrectly (using shnum as the
4527 index instead of serverid). I'm still thinking through the consequences of
4528 this bug. It was probably benign and really hard to detect. I think it would
4529 cause us to incorrectly believe that we're pulling too many shares from a
4530 server, and thus prefer a different server rather than asking for a second
4531 share from the first server. The diversity code is intended to spread out the
4532 number of shares simultaneously being requested from each server, but with
4533 this bug, it might be spreading out the total number of shares requested at
4534 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
4535 segment, so the effect doesn't last very long).
4536]
4537[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
4538warner@lothar.com**20110227011150
4539 Ignore-this: d8d56dd8e7b280792b40105e13664554
4540 
4541 test_download.py: create+check MyShare instances better, make sure they share
4542 Server objects, now that finder.py cares
4543]
4544[immutable/downloader/finder.py: reduce use of get_serverid(), one left
4545warner@lothar.com**20110227011146
4546 Ignore-this: 5785be173b491ae8a78faf5142892020
4547]
4548[immutable/offloaded.py: reduce use of get_serverid() a bit more
4549warner@lothar.com**20110227011142
4550 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
4551]
4552[immutable/upload.py: reduce use of get_serverid()
4553warner@lothar.com**20110227011138
4554 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
4555]
4556[immutable/checker.py: remove some uses of s.get_serverid(), not all
4557warner@lothar.com**20110227011134
4558 Ignore-this: e480a37efa9e94e8016d826c492f626e
4559]
4560[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
4561warner@lothar.com**20110227011132
4562 Ignore-this: 6078279ddf42b179996a4b53bee8c421
4563 MockIServer stubs
4564]
4565[upload.py: rearrange _make_trackers a bit, no behavior changes
4566warner@lothar.com**20110227011128
4567 Ignore-this: 296d4819e2af452b107177aef6ebb40f
4568]
4569[happinessutil.py: finally rename merge_peers to merge_servers
4570warner@lothar.com**20110227011124
4571 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
4572]
4573[test_upload.py: factor out FakeServerTracker
4574warner@lothar.com**20110227011120
4575 Ignore-this: 6c182cba90e908221099472cc159325b
4576]
4577[test_upload.py: server-vs-tracker cleanup
4578warner@lothar.com**20110227011115
4579 Ignore-this: 2915133be1a3ba456e8603885437e03
4580]
4581[happinessutil.py: server-vs-tracker cleanup
4582warner@lothar.com**20110227011111
4583 Ignore-this: b856c84033562d7d718cae7cb01085a9
4584]
4585[upload.py: more tracker-vs-server cleanup
4586warner@lothar.com**20110227011107
4587 Ignore-this: bb75ed2afef55e47c085b35def2de315
4588]
4589[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
4590warner@lothar.com**20110227011103
4591 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
4592]
4593[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
4594warner@lothar.com**20110227011100
4595 Ignore-this: 7ea858755cbe5896ac212a925840fe68
4596 
4597 No behavioral changes, just updating variable/method names and log messages.
4598 The effects outside these three files should be minimal: some exception
4599 messages changed (to say "server" instead of "peer"), and some internal class
4600 names were changed. A few things still use "peer" to minimize external
4601 changes, like UploadResults.timings["peer_selection"] and
4602 happinessutil.merge_peers, which can be changed later.
4603]
4604[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
4605warner@lothar.com**20110227011056
4606 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
4607]
4608[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
4609warner@lothar.com**20110227011051
4610 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
4611]
4612[test: increase timeout on a network test because Francois's ARM machine hit that timeout
4613zooko@zooko.com**20110317165909
4614 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
4615 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
4616]
4617[docs/configuration.rst: add a "Frontend Configuration" section
4618Brian Warner <warner@lothar.com>**20110222014323
4619 Ignore-this: 657018aa501fe4f0efef9851628444ca
4620 
4621 this points to docs/frontends/*.rst, which were previously underlinked
4622]
4623[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
4624"Brian Warner <warner@lothar.com>"**20110221061544
4625 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
4626]
4627[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
4628david-sarah@jacaranda.org**20110221015817
4629 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
4630]
4631[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
4632david-sarah@jacaranda.org**20110221020125
4633 Ignore-this: b0744ed58f161bf188e037bad077fc48
4634]
4635[Refactor StorageFarmBroker handling of servers
4636Brian Warner <warner@lothar.com>**20110221015804
4637 Ignore-this: 842144ed92f5717699b8f580eab32a51
4638 
4639 Pass around IServer instance instead of (peerid, rref) tuple. Replace
4640 "descriptor" with "server". Other replacements:
4641 
4642  get_all_servers -> get_connected_servers/get_known_servers
4643  get_servers_for_index -> get_servers_for_psi (now returns IServers)
4644 
4645 This change still needs to be pushed further down: lots of code is now
4646 getting the IServer and then distributing (peerid, rref) internally.
4647 Instead, it ought to distribute the IServer internally and delay
4648 extracting a serverid or rref until the last moment.
4649 
4650 no_network.py was updated to retain parallelism.
4651]
4652[TAG allmydata-tahoe-1.8.2
4653warner@lothar.com**20110131020101]
4654Patch bundle hash:
4655b29669cf1b70dd67326f23be1e8e3229af721bf1