Ticket #1465: addresseszookocomment01_20110809.darcs.patch

File addresseszookocomment01_20110809.darcs.patch, 137.2 KB (added by Zancas, at 2011-08-10T05:50:57Z)

Changes the name of a patch to something more descriptive, per Zooko's comment.

1Tue Aug  9 13:39:10 MDT 2011  wilcoxjg@gmail.com
2  * storage: add tests of the new feature of having the storage backend in a separate object from the server
3
4Tue Aug  9 14:09:29 MDT 2011  wilcoxjg@gmail.com
5  * Added directories and new modules for the null backend
6
7Tue Aug  9 14:12:49 MDT 2011  wilcoxjg@gmail.com
8  * changes to null/core.py and storage/common.py necessary for test with null backend to pass
9
10Tue Aug  9 14:16:47 MDT 2011  wilcoxjg@gmail.com
11  * change storage/server.py to new "backend pluggable" version
12
13Tue Aug  9 14:18:22 MDT 2011  wilcoxjg@gmail.com
14  * modify null/core.py such that the correct interfaces are implemented
15
16Tue Aug  9 14:22:32 MDT 2011  wilcoxjg@gmail.com
17  * make changes to storage/immutable.py most changes are part of movement to DAS specific backend.
18
19Tue Aug  9 14:26:20 MDT 2011  wilcoxjg@gmail.com
20  * creates backends/das/core.py
21
22Tue Aug  9 14:31:23 MDT 2011  wilcoxjg@gmail.com
23  * change backends/das/core.py to correct which interfaces are implemented
24
25Tue Aug  9 14:33:21 MDT 2011  wilcoxjg@gmail.com
26  * util/fileutil.py now expects and manipulates twisted.python.filepath.FilePath objects
27
28Tue Aug  9 14:35:19 MDT 2011  wilcoxjg@gmail.com
29  * add expirer.py
30
31Tue Aug  9 14:38:11 MDT 2011  wilcoxjg@gmail.com
32  * Changes I have made that aren't necessary for the test_backends.py suite to pass.
33
34Tue Aug  9 21:37:51 MDT 2011  wilcoxjg@gmail.com
35  * add __init__.py to backend and core and null
36
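The net effect of the series is that StorageServer no longer owns a storedir; it is constructed around a backend object that encapsulates all filesystem access. Based on the patched code and tests below, construction looks roughly like this (a sketch; nodeid stands for the 20-byte node ID string the server asserts on):

    # A sketch of the patched construction API, per test_backends.py below.
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.null.core import NullCore
    from allmydata.storage.backends.das.core import DASCore

    ss = StorageServer(nodeid, backend=NullCore())   # no-op backend, for tests
    # Or, with the disk-backed (DAS) backend and an expiration-policy dict:
    # ss = StorageServer(nodeid, backend=DASCore(storedir, expiration_policy))
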
37New patches:
38
39[storage: add tests of the new feature of having the storage backend in a separate object from the server
40wilcoxjg@gmail.com**20110809193910
41 Ignore-this: 72b64dab1a9ce668607a4ece4429e29a
42] {
43addfile ./src/allmydata/test/test_backends.py
44hunk ./src/allmydata/test/test_backends.py 1
45+import os, stat
46+from twisted.trial import unittest
47+from allmydata.util.log import msg
48+from allmydata.test.common_util import ReallyEqualMixin
49+import mock
50+# This is the code that we're going to be testing.
51+from allmydata.storage.server import StorageServer
52+from allmydata.storage.backends.das.core import DASCore
53+from allmydata.storage.backends.null.core import NullCore
54+from allmydata.storage.common import si_si2dir
55+# The following share file content was generated with
56+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
57+# with share data == 'a'. The total size of this input
58+# is 85 bytes.
59+shareversionnumber = '\x00\x00\x00\x01'
60+sharedatalength = '\x00\x00\x00\x01'
61+numberofleases = '\x00\x00\x00\x01'
62+shareinputdata = 'a'
63+ownernumber = '\x00\x00\x00\x00'
64+renewsecret  = 'x'*32
65+cancelsecret = 'y'*32
66+expirationtime = '\x00(\xde\x80'
67+nextlease = ''
68+containerdata = shareversionnumber + sharedatalength + numberofleases
69+client_data = shareinputdata + ownernumber + renewsecret + \
70+    cancelsecret + expirationtime + nextlease
71+share_data = containerdata + client_data
72+testnodeid = 'testnodeidxxxxxxxxxx'
73+expiration_policy = {'enabled' : False,
74+                     'mode' : 'age',
75+                     'override_lease_duration' : None,
76+                     'cutoff_date' : None,
77+                     'sharetypes' : None}
78+
79+
80+class MockFileSystem(unittest.TestCase):
81+    """ I simulate a filesystem that the code under test can use. I simulate
82+    just the parts of the filesystem that the current implementation of the
83+    DAS backend needs. """
84+    def setUp(self):
85+        # Make patchers, patch, and provide side effects for the fake fs using functions.
86+        msg( "%s.setUp()" % (self,))
87+        self.mockedfilepaths = {}
88+        # Keys are pathnames, values are MockFilePath objects. This is necessary because
89+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
90+        # self.mockedfilepaths has the relevant info.
91+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
92+        self.basedir = self.storedir.child('shares')
93+        self.baseincdir = self.basedir.child('incoming')
94+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
95+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
96+        self.shareincomingname = self.sharedirincomingname.child('0')
97+        self.sharefinalname = self.sharedirfinalname.child('0')
98+
99+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
100+        FakePath = self.FilePathFake.__enter__()
101+
102+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
103+        FakeBCC = self.BCountingCrawler.__enter__()
104+        FakeBCC.side_effect = self.call_FakeBCC
105+
106+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
107+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
108+        FakeLCC.side_effect = self.call_FakeLCC
109+
110+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
111+        GetSpace = self.get_available_space.__enter__()
112+        GetSpace.side_effect = self.call_get_available_space
113+
114+        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
115+        getsize = self.statforsize.__enter__()
116+        getsize.side_effect = self.call_statforsize
117+
118+    def call_FakeBCC(self, StateFile):
119+        return MockBCC()
120+
121+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
122+        return MockLCC()
123+
124+    def call_get_available_space(self, storedir, reservedspace):
125+        # The test input vector is 85 bytes in size.
126+        return 85 - reservedspace
127+
128+    def call_statforsize(self, fakefpname):
129+        return self.mockedfilepaths[fakefpname].fileobject.size()
130+
131+    def tearDown(self):
132+        msg( "%s.tearDown()" % (self,))
133+        FakePath = self.FilePathFake.__exit__()       
134+        self.mockedfilepaths = {}
135+
136+
137+class MockFilePath:
138+    def __init__(self, pathstring, ffpathsenvironment, existance=False):
139+        #  I can't just make the values MockFileObjects because they may be directories.
140+        self.mockedfilepaths = ffpathsenvironment
141+        self.path = pathstring
142+        self.existance = existance
143+        if not self.mockedfilepaths.has_key(self.path):
144+            #  The first MockFilePath object is special
145+            self.mockedfilepaths[self.path] = self
146+            self.fileobject = None
147+        else:
148+            self.fileobject = self.mockedfilepaths[self.path].fileobject
149+        self.spawn = {}
150+        self.antecedent = os.path.dirname(self.path)
151+
152+    def setContent(self, contentstring):
153+        # This method rewrites the data in the file that corresponds to its path
154+        # name, whether or not the file preexisted.
155+        self.fileobject = MockFileObject(contentstring)
156+        self.existance = True
157+        self.mockedfilepaths[self.path].fileobject = self.fileobject
158+        self.mockedfilepaths[self.path].existance = self.existance
159+        self.setparents()
160+       
161+    def create(self):
162+        # This method chokes if there's a pre-existing file!
163+        if self.mockedfilepaths[self.path].fileobject:
164+            raise OSError
165+        else:
166+            self.fileobject = MockFileObject()  # a new, empty file
167+            self.existance = True
168+            self.mockedfilepaths[self.path].fileobject = self.fileobject
169+            self.mockedfilepaths[self.path].existance = self.existance
170+            self.setparents()       
171+
172+    def open(self, mode='r'):
173+        # XXX Makes no use of mode.
174+        if not self.mockedfilepaths[self.path].fileobject:
175+            # If there's no fileobject there already then make one and put it there.
176+            self.fileobject = MockFileObject()
177+            self.existance = True
178+            self.mockedfilepaths[self.path].fileobject = self.fileobject
179+            self.mockedfilepaths[self.path].existance = self.existance
180+        else:
181+            # Otherwise get a ref to it.
182+            self.fileobject = self.mockedfilepaths[self.path].fileobject
183+            self.existance = self.mockedfilepaths[self.path].existance
184+        return self.fileobject.open(mode)
185+
186+    def child(self, childstring):
187+        arg2child = os.path.join(self.path, childstring)
188+        child = MockFilePath(arg2child, self.mockedfilepaths)
189+        return child
190+
191+    def children(self):
192+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
193+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
194+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
195+        self.spawn = frozenset(childrenfromffs)
196+        return self.spawn 
197+
198+    def parent(self):
199+        if self.mockedfilepaths.has_key(self.antecedent):
200+            parent = self.mockedfilepaths[self.antecedent]
201+        else:
202+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
203+        return parent
204+
205+    def parents(self):
206+        antecedents = []
207+        def f(fps, antecedents):
208+            newfps = os.path.split(fps)[0]
209+            if newfps:
210+                antecedents.append(newfps)
211+                f(newfps, antecedents)
212+        f(self.path, antecedents)
213+        return antecedents
214+
215+    def setparents(self):
216+        for fps in self.parents():
217+            if not self.mockedfilepaths.has_key(fps):
218+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existance=True)
219+
220+    def basename(self):
221+        return os.path.split(self.path)[1]
222+
223+    def moveTo(self, newffp):
224+    #  XXX Makes no distinction between file and directory arguments; this is a deviation from filepath.moveTo.
225+        if self.mockedfilepaths[newffp.path].exists():
226+            raise OSError
227+        else:
228+            self.mockedfilepaths[newffp.path] = self
229+            self.path = newffp.path
230+
231+    def getsize(self):
232+        return self.fileobject.getsize()
233+
234+    def exists(self):
235+        return self.existance
236+
237+    def isdir(self):
238+        return True
239+
240+    def makedirs(self):
241+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
242+        pass
243+
244+    def remove(self):
245+        pass
246+
247+
248+class MockFileObject:
249+    def __init__(self, contentstring=''):
250+        self.buffer = contentstring
251+        self.pos = 0
252+    def open(self, mode='r'):
253+        return self
254+    def write(self, instring):
255+        begin = self.pos
256+        padlen = begin - len(self.buffer)
257+        if padlen > 0:
258+            self.buffer += '\x00' * padlen
259+        end = self.pos + len(instring)
260+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
261+        self.pos = end
262+    def close(self):
263+        self.pos = 0
264+    def seek(self, pos):
265+        self.pos = pos
266+    def read(self, numberbytes):
267+        return self.buffer[self.pos:self.pos+numberbytes]
268+    def tell(self):
269+        return self.pos
270+    def size(self):
271+        # XXX This method (a) is not found on a real file object, and (b) is part of a wild mung-up of filepath.stat!
272+        # XXX We shall hopefully use a getsize method soon, but must consult first.
273+        # Hmmm... perhaps we need to sometimes stat the path when there's no MockFileObject present?
274+        return {stat.ST_SIZE:len(self.buffer)}
275+    def getsize(self):
276+        return len(self.buffer)
277+
278+class MockBCC:
279+    def setServiceParent(self, Parent):
280+        pass
281+
282+
283+class MockLCC:
284+    def setServiceParent(self, Parent):
285+        pass
286+
287+
288+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
289+    """ NullBackend is just for testing and executable documentation, so
290+    this test is actually a test of StorageServer in which we're using
291+    NullBackend as helper code for the test, rather than a test of
292+    NullBackend. """
293+    def setUp(self):
294+        self.ss = StorageServer(testnodeid, backend=NullCore())
295+
296+    @mock.patch('os.mkdir')
297+    @mock.patch('__builtin__.open')
298+    @mock.patch('os.listdir')
299+    @mock.patch('os.path.isdir')
300+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
301+        """ Write a new share. """
302+
303+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
304+        bs[0].remote_write(0, 'a')
305+        self.failIf(mockisdir.called)
306+        self.failIf(mocklistdir.called)
307+        self.failIf(mockopen.called)
308+        self.failIf(mockmkdir.called)
309+
310+
311+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
312+    def test_create_server_fs_backend(self):
313+        """ This tests whether a server instance can be constructed with a
314+        filesystem backend. To pass the test, it mustn't use the filesystem
315+        outside of its configured storedir. """
316+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
317+
318+
319+class TestServerAndFSBackend(MockFileSystem, ReallyEqualMixin):
320+    """ This tests both the StorageServer and the DAS backend together. """   
321+    def setUp(self):
322+        MockFileSystem.setUp(self)
323+        try:
324+            self.backend = DASCore(self.storedir, expiration_policy)
325+            self.ss = StorageServer(testnodeid, self.backend)
326+
327+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
328+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
329+        except:
330+            MockFileSystem.tearDown(self)
331+            raise
332+
333+    @mock.patch('time.time')
334+    @mock.patch('allmydata.util.fileutil.get_available_space')
335+    def test_out_of_space(self, mockget_available_space, mocktime):
336+        mocktime.return_value = 0
337+       
338+        def call_get_available_space(dir, reserve):
339+            return 0
340+
341+        mockget_available_space.side_effect = call_get_available_space
342+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
343+        self.failUnlessReallyEqual(bsc, {})
344+
345+    @mock.patch('time.time')
346+    def test_write_and_read_share(self, mocktime):
347+        """
348+        Write a new share, read it, and test the server's (and FS backend's)
349+        handling of simultaneous and successive attempts to write the same
350+        share.
351+        """
352+        mocktime.return_value = 0
353+        # Inspect incoming and fail unless it's empty.
354+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
355+       
356+        self.failUnlessReallyEqual(incomingset, frozenset())
357+       
358+        # Populate incoming with the sharenum: 0.
359+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
360+
361+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
362+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
363+
364+
365+
366+        # Attempt to create a second share writer with the same sharenum.
367+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
368+
369+        # Show that no sharewriter results from a remote_allocate_buckets
370+        # with the same si and sharenum, until BucketWriter.remote_close()
371+        # has been called.
372+        self.failIf(bsa)
373+
374+        # Test allocated size.
375+        spaceint = self.ss.allocated_size()
376+        self.failUnlessReallyEqual(spaceint, 1)
377+
378+        # Write 'a' to shnum 0. Only tested together with close and read.
379+        bs[0].remote_write(0, 'a')
380+       
381+        # Preclose: Inspect final, failUnless nothing there.
382+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
383+        bs[0].remote_close()
384+
385+        # Postclose: (Omnibus) failUnless written data is in final.
386+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
387+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
388+        contents = sharesinfinal[0].read_share_data(0, 73)
389+        self.failUnlessReallyEqual(contents, client_data)
390+
391+        # Exercise the case that the share we're asking to allocate is
392+        # already (completely) uploaded.
393+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
394+       
395+
396+    def test_read_old_share(self):
397+        """ This tests whether the code correctly finds and reads
398+        shares written out by old (Tahoe-LAFS <= v1.8.2)
399+        servers. There is a similar test in test_download, but that one
400+        is from the perspective of the client and exercises a deeper
401+        stack of code. This one is for exercising just the
402+        StorageServer object. """
403+        # Construct a file with the appropriate contents in the mock filesystem.
404+        datalen = len(share_data)
405+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
406+        finalhome.setContent(share_data)
407+
408+        # Now begin the test.
409+        bs = self.ss.remote_get_buckets('teststorage_index')
410+
411+        self.failUnlessEqual(len(bs), 1)
412+        b = bs['0']
413+        # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
414+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
415+        # If you try to read past the end you get as much data as is there.
416+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
417+        # If you start reading past the end of the file you get the empty string.
418+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
419}
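For reference, the 85-byte test vector assembled at the top of test_backends.py decomposes exactly as its comment describes: a 12-byte container header (version number, share data length, number of leases), 1 byte of share data, and one 72-byte lease record, of which the trailing 73 bytes are the client_data that test_read_old_share reads back. A worked check (a sketch; the struct calls are an illustration producing the same byte strings, not part of the patch):

    # Reassembling the 85-byte v1.8.2 share vector with struct, for comparison.
    import struct

    header = struct.pack(">LLL", 1, 1, 1)       # version=1, data length=1, leases=1
    data   = 'a'                                # the single byte of share data
    lease  = (struct.pack(">L", 0)              # owner number 0
              + 'x'*32 + 'y'*32                 # renew and cancel secrets
              + struct.pack(">L", 31*24*60*60)) # expiration '\x00(\xde\x80' (31 days)

    share = header + data + lease
    assert len(share) == 85                     # 12 + 1 + 72
    assert len(data + lease) == 73              # the client_data portion
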
420[Added directories and new modules for the null backend
421wilcoxjg@gmail.com**20110809200929
422 Ignore-this: f5dfa418afced5141eb9247a9908109e
423] {
424hunk ./src/allmydata/interfaces.py 270
425         store that on disk.
426         """
427 
428+class IStorageBackend(Interface):
429+    """
430+    Objects of this kind live on the server side and are used by the
431+    storage server object.
432+    """
433+    def get_available_space(self, reserved_space):
434+        """ Returns available space for share storage in bytes, or
435+        None if this information is not available or if the available
436+        space is unlimited.
437+
438+        If the backend is configured for read-only mode then this will
439+        return 0.
440+
441+        reserved_space is how many bytes to subtract from the answer, so
442+        you can pass how many bytes you would like to leave unused on this
443+        filesystem as reserved_space. """
444+
445+    def get_bucket_shares(self):
446+        """XXX"""
447+
448+    def get_share(self):
449+        """XXX"""
450+
451+    def make_bucket_writer(self):
452+        """XXX"""
453+
454+class IStorageBackendShare(Interface):
455+    """
456+    This object holds up to all of the share data. It is intended for
457+    lazy evaluation, such that in many use cases substantially less than
458+    all of the share data will be accessed.
459+    """
460+    def is_complete(self):
461+        """
462+        Returns the share state, or None if the share does not exist.
463+        """
464+
465 class IStorageBucketWriter(Interface):
466     """
467     Objects of this kind live on the client side.
468adddir ./src/allmydata/storage/backends
469addfile ./src/allmydata/storage/backends/base.py
470hunk ./src/allmydata/storage/backends/base.py 1
471+from twisted.application import service
472+
473+class Backend(service.MultiService):
474+    def __init__(self):
475+        service.MultiService.__init__(self)
476adddir ./src/allmydata/storage/backends/null
477addfile ./src/allmydata/storage/backends/null/core.py
478hunk ./src/allmydata/storage/backends/null/core.py 1
479+from allmydata.storage.backends.base import Backend
480+from allmydata.storage.immutable import BucketWriter, BucketReader
481+
482+class NullCore(Backend):
483+    def __init__(self):
484+        Backend.__init__(self)
485+
486+    def get_available_space(self):
487+        return None
488+
489+    def get_shares(self, storage_index):
490+        return set()
491+
492+    def get_share(self, storage_index, sharenum):
493+        return None
494+
495+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
496+        immutableshare = ImmutableShare()
497+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
498+
499+    def set_storage_server(self, ss):
500+        self.ss = ss
501+
502+    def get_incoming_shnums(self, storageindex):
503+        return frozenset()
504+
505+class ImmutableShare:
506+    sharetype = "immutable"
507+
508+    def __init__(self):
509+        """ If max_size is not None then I won't allow more than
510+        max_size to be written to me. If create=True then max_size
511+        must not be None. """
512+        pass
513+
514+    def get_shnum(self):
515+        return self.shnum
516+
517+    def unlink(self):
518+        os.unlink(self.fname)
519+
520+    def read_share_data(self, offset, length):
521+        precondition(offset >= 0)
522+        # Reads beyond the end of the data are truncated. Reads that start
523+        # beyond the end of the data return an empty string.
524+        seekpos = self._data_offset+offset
525+        fsize = os.path.getsize(self.fname)
526+        actuallength = max(0, min(length, fsize-seekpos))
527+        if actuallength == 0:
528+            return ""
529+        f = open(self.fname, 'rb')
530+        f.seek(seekpos)
531+        return f.read(actuallength)
532+
533+    def write_share_data(self, offset, data):
534+        pass
535+
536+    def _write_lease_record(self, f, lease_number, lease_info):
537+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
538+        f.seek(offset)
539+        assert f.tell() == offset
540+        f.write(lease_info.to_immutable_data())
541+
542+    def _read_num_leases(self, f):
543+        f.seek(0x08)
544+        (num_leases,) = struct.unpack(">L", f.read(4))
545+        return num_leases
546+
547+    def _write_num_leases(self, f, num_leases):
548+        f.seek(0x08)
549+        f.write(struct.pack(">L", num_leases))
550+
551+    def _truncate_leases(self, f, num_leases):
552+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
553+
554+    def get_leases(self):
555+        """Yields a LeaseInfo instance for all leases."""
556+        f = open(self.fname, 'rb')
557+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
558+        f.seek(self._lease_offset)
559+        for i in range(num_leases):
560+            data = f.read(self.LEASE_SIZE)
561+            if data:
562+                yield LeaseInfo().from_immutable_data(data)
563+
564+    def add_lease(self, lease):
565+        pass
566+
567+    def renew_lease(self, renew_secret, new_expire_time):
568+        for i,lease in enumerate(self.get_leases()):
569+            if constant_time_compare(lease.renew_secret, renew_secret):
570+                # yup. See if we need to update the owner time.
571+                if new_expire_time > lease.expiration_time:
572+                    # yes
573+                    lease.expiration_time = new_expire_time
574+                    f = open(self.fname, 'rb+')
575+                    self._write_lease_record(f, i, lease)
576+                    f.close()
577+                return
578+        raise IndexError("unable to renew non-existent lease")
579+
580+    def add_or_renew_lease(self, lease_info):
581+        try:
582+            self.renew_lease(lease_info.renew_secret,
583+                             lease_info.expiration_time)
584+        except IndexError:
585+            self.add_lease(lease_info)
586+
587+
588+    def cancel_lease(self, cancel_secret):
589+        """Remove a lease with the given cancel_secret. If the last lease is
590+        cancelled, the file will be removed. Return the number of bytes that
591+        were freed (by truncating the list of leases, and possibly by
592+        deleting the file). Raise IndexError if there was no lease with the
593+        given cancel_secret.
594+        """
595+
596+        leases = list(self.get_leases())
597+        num_leases_removed = 0
598+        for i,lease in enumerate(leases):
599+            if constant_time_compare(lease.cancel_secret, cancel_secret):
600+                leases[i] = None
601+                num_leases_removed += 1
602+        if not num_leases_removed:
603+            raise IndexError("unable to find matching lease to cancel")
604+        if num_leases_removed:
605+            # pack and write out the remaining leases. We write these out in
606+            # the same order as they were added, so that if we crash while
607+            # doing this, we won't lose any non-cancelled leases.
608+            leases = [l for l in leases if l] # remove the cancelled leases
609+            f = open(self.fname, 'rb+')
610+            for i,lease in enumerate(leases):
611+                self._write_lease_record(f, i, lease)
612+            self._write_num_leases(f, len(leases))
613+            self._truncate_leases(f, len(leases))
614+            f.close()
615+        space_freed = self.LEASE_SIZE * num_leases_removed
616+        if not len(leases):
617+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
618+            self.unlink()
619+        return space_freed
620}
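Taken together, NullCore above is nearly the minimal contract a backend must meet: the patched StorageServer (next patch) calls only set_storage_server, get_available_space, get_shares, get_incoming_shnums, and make_bucket_writer on it. A hypothetical in-memory variant makes the shape of that contract explicit (a sketch only; the dict-backed share store is invented, not part of this patch):

    # A hypothetical in-memory backend, illustrating the contract NullCore meets.
    from allmydata.storage.backends.base import Backend

    class MemoryCore(Backend):
        def __init__(self):
            Backend.__init__(self)
            self.shares = {}                  # (storageindex, shnum) -> share

        def set_storage_server(self, ss):
            self.ss = ss                      # the server, for BucketWriter use

        def get_available_space(self):
            return None                       # None means "unlimited", as in NullCore

        def get_shares(self, storageindex):
            return set(s for (si, shnum), s in self.shares.items()
                       if si == storageindex)

        def get_incoming_shnums(self, storageindex):
            return frozenset()                # nothing is ever partially uploaded

    # make_bucket_writer is omitted here; a real backend must supply it too.
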
621[changes to null/core.py and storage/common.py necessary for test with null backend to pass
622wilcoxjg@gmail.com**20110809201249
623 Ignore-this: 9ddcd79f9962550ed20518ae85b6b6b2
624] {
625hunk ./src/allmydata/storage/backends/null/core.py 3
626 from allmydata.storage.backends.base import Backend
627 from allmydata.storage.immutable import BucketWriter, BucketReader
628+from zope.interface import implements
629 
630 class NullCore(Backend):
631hunk ./src/allmydata/storage/backends/null/core.py 6
632+    implements(IStorageBackend)
633     def __init__(self):
634         Backend.__init__(self)
635 
636hunk ./src/allmydata/storage/backends/null/core.py 30
637         return frozenset()
638 
639 class ImmutableShare:
640+    implements(IStorageBackendShare)
641     sharetype = "immutable"
642 
643     def __init__(self):
644hunk ./src/allmydata/storage/common.py 19
645 def si_a2b(ascii_storageindex):
646     return base32.a2b(ascii_storageindex)
647 
648-def storage_index_to_dir(storageindex):
649+def si_si2dir(startfp, storageindex):
650     sia = si_b2a(storageindex)
651hunk ./src/allmydata/storage/common.py 21
652-    return os.path.join(sia[:2], sia)
653+    newfp = startfp.child(sia[:2])
654+    return newfp.child(sia)
655}
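The renamed si_si2dir keeps the old two-level on-disk layout (the first two base32 characters of the storage index name an intermediate directory) but now takes a starting FilePath and returns a FilePath, instead of joining relative pathname strings. Roughly (an illustration; the storage index string is the test value used above):

    # What si_si2dir computes, spelled out. 'teststorage_index' is the test SI.
    from allmydata.util import base32

    sia = base32.b2a('teststorage_index')   # 'orsxg5dtorxxeylhmvpws3temv4a'
    # si_si2dir(startfp, 'teststorage_index') is then:
    #     startfp.child(sia[:2]).child(sia)
    # i.e. <startfp>/or/orsxg5dtorxxeylhmvpws3temv4a, matching the MockFileSystem paths.
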
656[change storage/server.py to new "backend pluggable" version
657wilcoxjg@gmail.com**20110809201647
658 Ignore-this: 1b0c5f9e831641287992bf45af55246e
659] {
660hunk ./src/allmydata/storage/server.py 1
661-import os, re, weakref, struct, time
662+import os, weakref, struct, time
663 
664 from foolscap.api import Referenceable
665 from twisted.application import service
666hunk ./src/allmydata/storage/server.py 11
667 from allmydata.util import fileutil, idlib, log, time_format
668 import allmydata # for __full_version__
669 
670-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
671-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
672+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
673+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
674 from allmydata.storage.lease import LeaseInfo
675 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
676      create_mutable_sharefile
677hunk ./src/allmydata/storage/server.py 16
678-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
679-from allmydata.storage.crawler import BucketCountingCrawler
680-from allmydata.storage.expirer import LeaseCheckingCrawler
681-
682-# storage/
683-# storage/shares/incoming
684-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
685-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
686-# storage/shares/$START/$STORAGEINDEX
687-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
688-
689-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
690-# base-32 chars).
691-
692-# $SHARENUM matches this regex:
693-NUM_RE=re.compile("^[0-9]+$")
694-
695-
696 
697 class StorageServer(service.MultiService, Referenceable):
698     implements(RIStorageServer, IStatsProducer)
699hunk ./src/allmydata/storage/server.py 20
700     name = 'storage'
701-    LeaseCheckerClass = LeaseCheckingCrawler
702 
703hunk ./src/allmydata/storage/server.py 21
704-    def __init__(self, storedir, nodeid, reserved_space=0,
705-                 discard_storage=False, readonly_storage=False,
706-                 stats_provider=None,
707-                 expiration_enabled=False,
708-                 expiration_mode="age",
709-                 expiration_override_lease_duration=None,
710-                 expiration_cutoff_date=None,
711-                 expiration_sharetypes=("mutable", "immutable")):
712+    def __init__(self, nodeid, backend, reserved_space=0,
713+                 readonly_storage=False,
714+                 stats_provider=None ):
715         service.MultiService.__init__(self)
716         assert isinstance(nodeid, str)
717         assert len(nodeid) == 20
718hunk ./src/allmydata/storage/server.py 28
719         self.my_nodeid = nodeid
720-        self.storedir = storedir
721-        sharedir = os.path.join(storedir, "shares")
722-        fileutil.make_dirs(sharedir)
723-        self.sharedir = sharedir
724-        # we don't actually create the corruption-advisory dir until necessary
725-        self.corruption_advisory_dir = os.path.join(storedir,
726-                                                    "corruption-advisories")
727-        self.reserved_space = int(reserved_space)
728-        self.no_storage = discard_storage
729-        self.readonly_storage = readonly_storage
730         self.stats_provider = stats_provider
731         if self.stats_provider:
732             self.stats_provider.register_producer(self)
733hunk ./src/allmydata/storage/server.py 31
734-        self.incomingdir = os.path.join(sharedir, 'incoming')
735-        self._clean_incomplete()
736-        fileutil.make_dirs(self.incomingdir)
737         self._active_writers = weakref.WeakKeyDictionary()
738hunk ./src/allmydata/storage/server.py 32
739+        self.backend = backend
740+        self.backend.setServiceParent(self)
741+        self.backend.set_storage_server(self)
742         log.msg("StorageServer created", facility="tahoe.storage")
743 
744hunk ./src/allmydata/storage/server.py 37
745-        if reserved_space:
746-            if self.get_available_space() is None:
747-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
748-                        umin="0wZ27w", level=log.UNUSUAL)
749-
750         self.latencies = {"allocate": [], # immutable
751                           "write": [],
752                           "close": [],
753hunk ./src/allmydata/storage/server.py 48
754                           "renew": [],
755                           "cancel": [],
756                           }
757-        self.add_bucket_counter()
758-
759-        statefile = os.path.join(self.storedir, "lease_checker.state")
760-        historyfile = os.path.join(self.storedir, "lease_checker.history")
761-        klass = self.LeaseCheckerClass
762-        self.lease_checker = klass(self, statefile, historyfile,
763-                                   expiration_enabled, expiration_mode,
764-                                   expiration_override_lease_duration,
765-                                   expiration_cutoff_date,
766-                                   expiration_sharetypes)
767-        self.lease_checker.setServiceParent(self)
768 
769     def __repr__(self):
770         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
771hunk ./src/allmydata/storage/server.py 52
772 
773-    def add_bucket_counter(self):
774-        statefile = os.path.join(self.storedir, "bucket_counter.state")
775-        self.bucket_counter = BucketCountingCrawler(self, statefile)
776-        self.bucket_counter.setServiceParent(self)
777-
778     def count(self, name, delta=1):
779         if self.stats_provider:
780             self.stats_provider.count("storage_server." + name, delta)
781hunk ./src/allmydata/storage/server.py 66
782         """Return a dict, indexed by category, that contains a dict of
783         latency numbers for each category. If there are sufficient samples
784         for unambiguous interpretation, each dict will contain the
785-        following keys: mean, 01_0_percentile, 10_0_percentile,
786+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
787         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
788         99_0_percentile, 99_9_percentile.  If there are insufficient
789         samples for a given percentile to be interpreted unambiguously
790hunk ./src/allmydata/storage/server.py 88
791             else:
792                 stats["mean"] = None
793 
794-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
795-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
796-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
797+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
798+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
799+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
800                              (0.999, "99_9_percentile", 1000)]
801 
802             for percentile, percentilestring, minnumtoobserve in orderstatlist:
803hunk ./src/allmydata/storage/server.py 107
804             kwargs["facility"] = "tahoe.storage"
805         return log.msg(*args, **kwargs)
806 
807-    def _clean_incomplete(self):
808-        fileutil.rm_dir(self.incomingdir)
809-
810     def get_stats(self):
811         # remember: RIStatsProvider requires that our return dict
812hunk ./src/allmydata/storage/server.py 109
813-        # contains numeric values.
814+        # contains numeric, or None values.
815         stats = { 'storage_server.allocated': self.allocated_size(), }
816         stats['storage_server.reserved_space'] = self.reserved_space
817         for category,ld in self.get_latencies().items():
818hunk ./src/allmydata/storage/server.py 143
819             stats['storage_server.total_bucket_count'] = bucket_count
820         return stats
821 
822-    def get_available_space(self):
823-        """Returns available space for share storage in bytes, or None if no
824-        API to get this information is available."""
825-
826-        if self.readonly_storage:
827-            return 0
828-        return fileutil.get_available_space(self.storedir, self.reserved_space)
829-
830     def allocated_size(self):
831         space = 0
832         for bw in self._active_writers:
833hunk ./src/allmydata/storage/server.py 150
834         return space
835 
836     def remote_get_version(self):
837-        remaining_space = self.get_available_space()
838+        remaining_space = self.backend.get_available_space()
839         if remaining_space is None:
840             # We're on a platform that has no API to get disk stats.
841             remaining_space = 2**64
842hunk ./src/allmydata/storage/server.py 164
843                     }
844         return version
845 
846-    def remote_allocate_buckets(self, storage_index,
847+    def remote_allocate_buckets(self, storageindex,
848                                 renew_secret, cancel_secret,
849                                 sharenums, allocated_size,
850                                 canary, owner_num=0):
851hunk ./src/allmydata/storage/server.py 173
852         # to a particular owner.
853         start = time.time()
854         self.count("allocate")
855-        alreadygot = set()
856+        incoming = set()
857         bucketwriters = {} # k: shnum, v: BucketWriter
858hunk ./src/allmydata/storage/server.py 175
859-        si_dir = storage_index_to_dir(storage_index)
860-        si_s = si_b2a(storage_index)
861 
862hunk ./src/allmydata/storage/server.py 176
863+        si_s = si_b2a(storageindex)
864         log.msg("storage: allocate_buckets %s" % si_s)
865 
866         # in this implementation, the lease information (including secrets)
867hunk ./src/allmydata/storage/server.py 190
868 
869         max_space_per_bucket = allocated_size
870 
871-        remaining_space = self.get_available_space()
872+        remaining_space = self.backend.get_available_space()
873         limited = remaining_space is not None
874         if limited:
875             # this is a bit conservative, since some of this allocated_size()
876hunk ./src/allmydata/storage/server.py 199
877             remaining_space -= self.allocated_size()
878         # self.readonly_storage causes remaining_space <= 0
879 
880-        # fill alreadygot with all shares that we have, not just the ones
881+        # Fill alreadygot with all shares that we have, not just the ones
882         # they asked about: this will save them a lot of work. Add or update
883         # leases for all of them: if they want us to hold shares for this
884hunk ./src/allmydata/storage/server.py 202
885-        # file, they'll want us to hold leases for this file.
886-        for (shnum, fn) in self._get_bucket_shares(storage_index):
887-            alreadygot.add(shnum)
888-            sf = ShareFile(fn)
889-            sf.add_or_renew_lease(lease_info)
890+        # file, they'll want us to hold leases for all the shares of it.
891+        alreadygot = set()
892+        for share in self.backend.get_shares(storageindex):
893+            share.add_or_renew_lease(lease_info)
894+            alreadygot.add(share.shnum)
895 
896hunk ./src/allmydata/storage/server.py 208
897-        for shnum in sharenums:
898-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
899-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
900-            if os.path.exists(finalhome):
901-                # great! we already have it. easy.
902-                pass
903-            elif os.path.exists(incominghome):
904-                # Note that we don't create BucketWriters for shnums that
905-                # have a partial share (in incoming/), so if a second upload
906-                # occurs while the first is still in progress, the second
907-                # uploader will use different storage servers.
908-                pass
909-            elif (not limited) or (remaining_space >= max_space_per_bucket):
910-                # ok! we need to create the new share file.
911-                bw = BucketWriter(self, incominghome, finalhome,
912-                                  max_space_per_bucket, lease_info, canary)
913-                if self.no_storage:
914-                    bw.throw_out_all_data = True
915+        # all share numbers that are incoming
916+        incoming = self.backend.get_incoming_shnums(storageindex)
917+
918+        for shnum in ((sharenums - alreadygot) - incoming):
919+            if (not limited) or (remaining_space >= max_space_per_bucket):
920+                bw = self.backend.make_bucket_writer(storageindex, shnum, max_space_per_bucket, lease_info, canary)
921                 bucketwriters[shnum] = bw
922                 self._active_writers[bw] = 1
923                 if limited:
924hunk ./src/allmydata/storage/server.py 219
925                     remaining_space -= max_space_per_bucket
926             else:
927-                # bummer! not enough space to accept this bucket
928+                # Bummer: not enough space to accept this share.
929                 pass
930 
931hunk ./src/allmydata/storage/server.py 222
932-        if bucketwriters:
933-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
934-
935         self.add_latency("allocate", time.time() - start)
936         return alreadygot, bucketwriters
937 
938hunk ./src/allmydata/storage/server.py 225
939-    def _iter_share_files(self, storage_index):
940-        for shnum, filename in self._get_bucket_shares(storage_index):
941+    def _iter_share_files(self, storageindex):
942+        for shnum, filename in self._get_shares(storageindex):
943             f = open(filename, 'rb')
944             header = f.read(32)
945             f.close()
946hunk ./src/allmydata/storage/server.py 231
947             if header[:32] == MutableShareFile.MAGIC:
948+                # XXX  Can I exploit this code?
949                 sf = MutableShareFile(filename, self)
950                 # note: if the share has been migrated, the renew_lease()
951                 # call will throw an exception, with information to help the
952hunk ./src/allmydata/storage/server.py 237
953                 # client update the lease.
954             elif header[:4] == struct.pack(">L", 1):
955+                # Check if version number is "1".
956+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
957                 sf = ShareFile(filename)
958             else:
959                 continue # non-sharefile
960hunk ./src/allmydata/storage/server.py 244
961             yield sf
962 
963-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
964+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
965                          owner_num=1):
966         start = time.time()
967         self.count("add-lease")
968hunk ./src/allmydata/storage/server.py 252
969         lease_info = LeaseInfo(owner_num,
970                                renew_secret, cancel_secret,
971                                new_expire_time, self.my_nodeid)
972-        for sf in self._iter_share_files(storage_index):
973+        for sf in self._iter_share_files(storageindex):
974             sf.add_or_renew_lease(lease_info)
975         self.add_latency("add-lease", time.time() - start)
976         return None
977hunk ./src/allmydata/storage/server.py 257
978 
979-    def remote_renew_lease(self, storage_index, renew_secret):
980+    def remote_renew_lease(self, storageindex, renew_secret):
981         start = time.time()
982         self.count("renew")
983         new_expire_time = time.time() + 31*24*60*60
984hunk ./src/allmydata/storage/server.py 262
985         found_buckets = False
986-        for sf in self._iter_share_files(storage_index):
987+        for sf in self._iter_share_files(storageindex):
988             found_buckets = True
989             sf.renew_lease(renew_secret, new_expire_time)
990         self.add_latency("renew", time.time() - start)
991hunk ./src/allmydata/storage/server.py 269
992         if not found_buckets:
993             raise IndexError("no such lease to renew")
994 
995-    def remote_cancel_lease(self, storage_index, cancel_secret):
996+    def remote_cancel_lease(self, storageindex, cancel_secret):
997         start = time.time()
998         self.count("cancel")
999 
1000hunk ./src/allmydata/storage/server.py 275
1001         total_space_freed = 0
1002         found_buckets = False
1003-        for sf in self._iter_share_files(storage_index):
1004+        for sf in self._iter_share_files(storageindex):
1005             # note: if we can't find a lease on one share, we won't bother
1006             # looking in the others. Unless something broke internally
1007             # (perhaps we ran out of disk space while adding a lease), the
1008hunk ./src/allmydata/storage/server.py 285
1009             total_space_freed += sf.cancel_lease(cancel_secret)
1010 
1011         if found_buckets:
1012-            storagedir = os.path.join(self.sharedir,
1013-                                      storage_index_to_dir(storage_index))
1014-            if not os.listdir(storagedir):
1015-                os.rmdir(storagedir)
1016+            # XXX  Yikes looks like code that shouldn't be in the server!
1017+            storagedir = si_si2dir(self.sharedir, storageindex)
1018+            fp_rmdir_if_empty(storagedir)
1019 
1020         if self.stats_provider:
1021             self.stats_provider.count('storage_server.bytes_freed',
1022hunk ./src/allmydata/storage/server.py 301
1023             self.stats_provider.count('storage_server.bytes_added', consumed_size)
1024         del self._active_writers[bw]
1025 
1026-    def _get_bucket_shares(self, storage_index):
1027-        """Return a list of (shnum, pathname) tuples for files that hold
1028-        shares for this storage_index. In each tuple, 'shnum' will always be
1029-        the integer form of the last component of 'pathname'."""
1030-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1031-        try:
1032-            for f in os.listdir(storagedir):
1033-                if NUM_RE.match(f):
1034-                    filename = os.path.join(storagedir, f)
1035-                    yield (int(f), filename)
1036-        except OSError:
1037-            # Commonly caused by there being no buckets at all.
1038-            pass
1039-
1040-    def remote_get_buckets(self, storage_index):
1041+    def remote_get_buckets(self, storageindex):
1042         start = time.time()
1043         self.count("get")
1044hunk ./src/allmydata/storage/server.py 304
1045-        si_s = si_b2a(storage_index)
1046+        si_s = si_b2a(storageindex)
1047         log.msg("storage: get_buckets %s" % si_s)
1048         bucketreaders = {} # k: sharenum, v: BucketReader
1049hunk ./src/allmydata/storage/server.py 307
1050-        for shnum, filename in self._get_bucket_shares(storage_index):
1051-            bucketreaders[shnum] = BucketReader(self, filename,
1052-                                                storage_index, shnum)
1053+        self.backend.set_storage_server(self)
1054+        for share in self.backend.get_shares(storageindex):
1055+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
1056         self.add_latency("get", time.time() - start)
1057         return bucketreaders
1058 
1059hunk ./src/allmydata/storage/server.py 313
1060-    def get_leases(self, storage_index):
1061+    def get_leases(self, storageindex):
1062         """Provide an iterator that yields all of the leases attached to this
1063         bucket. Each lease is returned as a LeaseInfo instance.
1064 
1065hunk ./src/allmydata/storage/server.py 323
1066         # since all shares get the same lease data, we just grab the leases
1067         # from the first share
1068         try:
1069-            shnum, filename = self._get_bucket_shares(storage_index).next()
1070+            shnum, filename = self._get_shares(storageindex).next()
1071             sf = ShareFile(filename)
1072             return sf.get_leases()
1073         except StopIteration:
1074hunk ./src/allmydata/storage/server.py 329
1075             return iter([])
1076 
1077-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
1078+    #  XXX  As far as Zancas' grockery has gotten.
1079+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
1080                                                secrets,
1081                                                test_and_write_vectors,
1082                                                read_vector):
1083hunk ./src/allmydata/storage/server.py 336
1084         start = time.time()
1085         self.count("writev")
1086-        si_s = si_b2a(storage_index)
1087+        si_s = si_b2a(storageindex)
1088         log.msg("storage: slot_writev %s" % si_s)
1089hunk ./src/allmydata/storage/server.py 338
1090-        si_dir = storage_index_to_dir(storage_index)
1091+       
1092         (write_enabler, renew_secret, cancel_secret) = secrets
1093         # shares exist if there is a file for them
1094hunk ./src/allmydata/storage/server.py 341
1095-        bucketdir = os.path.join(self.sharedir, si_dir)
1096+        bucketdir = si_si2dir(self.sharedir, storageindex)
1097         shares = {}
1098         if os.path.isdir(bucketdir):
1099             for sharenum_s in os.listdir(bucketdir):
1100hunk ./src/allmydata/storage/server.py 424
1101                                          self)
1102         return share
1103 
1104-    def remote_slot_readv(self, storage_index, shares, readv):
1105+    def remote_slot_readv(self, storageindex, shares, readv):
1106         start = time.time()
1107         self.count("readv")
1108hunk ./src/allmydata/storage/server.py 427
1109-        si_s = si_b2a(storage_index)
1110+        si_s = si_b2a(storageindex)
1111         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
1112                      facility="tahoe.storage", level=log.OPERATIONAL)
1113hunk ./src/allmydata/storage/server.py 430
1114-        si_dir = storage_index_to_dir(storage_index)
1115         # shares exist if there is a file for them
1116hunk ./src/allmydata/storage/server.py 431
1117-        bucketdir = os.path.join(self.sharedir, si_dir)
1118+        bucketdir = si_si2dir(self.sharedir, storageindex)
1119         if not os.path.isdir(bucketdir):
1120             self.add_latency("readv", time.time() - start)
1121             return {}
1122hunk ./src/allmydata/storage/server.py 450
1123         self.add_latency("readv", time.time() - start)
1124         return datavs
1125 
1126-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
1127+    def remote_advise_corrupt_share(self, share_type, storageindex, shnum,
1128                                     reason):
1129         fileutil.make_dirs(self.corruption_advisory_dir)
1130         now = time_format.iso_utc(sep="T")
1131hunk ./src/allmydata/storage/server.py 454
1132-        si_s = si_b2a(storage_index)
1133+        si_s = si_b2a(storageindex)
1134         # windows can't handle colons in the filename
1135         fn = os.path.join(self.corruption_advisory_dir,
1136                           "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
1137hunk ./src/allmydata/storage/server.py 461
1138         f = open(fn, "w")
1139         f.write("report: Share Corruption\n")
1140         f.write("type: %s\n" % share_type)
1141-        f.write("storage_index: %s\n" % si_s)
1142+        f.write("storageindex: %s\n" % si_s)
1143         f.write("share_number: %d\n" % shnum)
1144         f.write("\n")
1145         f.write(reason)
1146}
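The heart of the rewritten remote_allocate_buckets is the set expression (sharenums - alreadygot) - incoming: a BucketWriter is created only for requested share numbers that are neither already stored (those get their leases renewed and are reported back in alreadygot) nor currently being uploaded by someone else. A worked example with invented share numbers:

    # The allocation set arithmetic from remote_allocate_buckets, worked through.
    sharenums  = set([0, 1, 2, 3])   # shares the client asked to upload
    alreadygot = set([1])            # already stored; lease renewed, reported back
    incoming   = set([2])            # another upload is still writing this one

    to_write = (sharenums - alreadygot) - incoming
    assert to_write == set([0, 3])   # only these receive a BucketWriter
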
1147[modify null/core.py such that the correct interfaces are implemented
1148wilcoxjg@gmail.com**20110809201822
1149 Ignore-this: 3c64580592474f71633287d1b6beeb6b
1150] hunk ./src/allmydata/storage/backends/null/core.py 4
1151 from allmydata.storage.backends.base import Backend
1152 from allmydata.storage.immutable import BucketWriter, BucketReader
1153 from zope.interface import implements
1154+from allmydata.interfaces import IStorageBackend, IStorageBackendShare
1155 
1156 class NullCore(Backend):
1157     implements(IStorageBackend)
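With this patch NullCore both subclasses Backend and declares implements(IStorageBackend). One way to check such declarations is zope.interface's verifyClass, sketched below; note that against the interface as defined in the second patch above (which still declares get_bucket_shares and get_share), it would flag NullCore's get_shares as a mismatch, so this is a diagnostic, not a passing test:

    # A diagnostic sketch, not part of this patch.
    from zope.interface.verify import verifyClass
    from allmydata.interfaces import IStorageBackend
    from allmydata.storage.backends.null.core import NullCore

    verifyClass(IStorageBackend, NullCore)   # raises BrokenImplementation on any gap
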
1158[make changes to storage/immutable.py most changes are part of movement to DAS specific backend.
1159wilcoxjg@gmail.com**20110809202232
1160 Ignore-this: 70c7c6ea6be2418d70556718a050714
1161] {
1162hunk ./src/allmydata/storage/immutable.py 1
1163-import os, stat, struct, time
1164+import os, time
1165 
1166 from foolscap.api import Referenceable
1167 
1168hunk ./src/allmydata/storage/immutable.py 7
1169 from zope.interface import implements
1170 from allmydata.interfaces import RIBucketWriter, RIBucketReader
1171-from allmydata.util import base32, fileutil, log
1172+from allmydata.util import base32, log
1173 from allmydata.util.assertutil import precondition
1174 from allmydata.util.hashutil import constant_time_compare
1175 from allmydata.storage.lease import LeaseInfo
1176hunk ./src/allmydata/storage/immutable.py 14
1177 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1178      DataTooLargeError
1179 
1180-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1181-# and share data. The share data is accessed by RIBucketWriter.write and
1182-# RIBucketReader.read . The lease information is not accessible through these
1183-# interfaces.
1184-
1185-# The share file has the following layout:
1186-#  0x00: share file version number, four bytes, current version is 1
1187-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1188-#  0x08: number of leases, four bytes big-endian
1189-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1190-#  A+0x0c = B: first lease. Lease format is:
1191-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1192-#   B+0x04: renew secret, 32 bytes (SHA256)
1193-#   B+0x24: cancel secret, 32 bytes (SHA256)
1194-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1195-#   B+0x48: next lease, or end of record
1196-
1197-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1198-# but it is still filled in by storage servers in case the storage server
1199-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1200-# share file is moved from one storage server to another. The value stored in
1201-# this field is truncated, so if the actual share data length is >= 2**32,
1202-# then the value stored in this field will be the actual share data length
1203-# modulo 2**32.
1204-
1205-class ShareFile:
1206-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1207-    sharetype = "immutable"
1208-
1209-    def __init__(self, filename, max_size=None, create=False):
1210-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
1211-        precondition((max_size is not None) or (not create), max_size, create)
1212-        self.home = filename
1213-        self._max_size = max_size
1214-        if create:
1215-            # touch the file, so later callers will see that we're working on
1216-            # it. Also construct the metadata.
1217-            assert not os.path.exists(self.home)
1218-            fileutil.make_dirs(os.path.dirname(self.home))
1219-            f = open(self.home, 'wb')
1220-            # The second field -- the four-byte share data length -- is no
1221-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1222-            # there in case someone downgrades a storage server from >=
1223-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1224-            # server to another, etc. We do saturation -- a share data length
1225-            # larger than 2**32-1 (what can fit into the field) is marked as
1226-            # the largest length that can fit into the field. That way, even
1227-            # if this does happen, the old < v1.3.0 server will still allow
1228-            # clients to read the first part of the share.
1229-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1230-            f.close()
1231-            self._lease_offset = max_size + 0x0c
1232-            self._num_leases = 0
1233-        else:
1234-            f = open(self.home, 'rb')
1235-            filesize = os.path.getsize(self.home)
1236-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1237-            f.close()
1238-            if version != 1:
1239-                msg = "sharefile %s had version %d but we wanted 1" % \
1240-                      (filename, version)
1241-                raise UnknownImmutableContainerVersionError(msg)
1242-            self._num_leases = num_leases
1243-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1244-        self._data_offset = 0xc
1245-
1246-    def unlink(self):
1247-        os.unlink(self.home)
1248-
1249-    def read_share_data(self, offset, length):
1250-        precondition(offset >= 0)
1251-        # reads beyond the end of the data are truncated. Reads that start
1252-        # beyond the end of the data return an empty string. I wonder why
1253-        # Python doesn't do the following computation for me?
1254-        seekpos = self._data_offset+offset
1255-        fsize = os.path.getsize(self.home)
1256-        actuallength = max(0, min(length, fsize-seekpos))
1257-        if actuallength == 0:
1258-            return ""
1259-        f = open(self.home, 'rb')
1260-        f.seek(seekpos)
1261-        return f.read(actuallength)
1262-
1263-    def write_share_data(self, offset, data):
1264-        length = len(data)
1265-        precondition(offset >= 0, offset)
1266-        if self._max_size is not None and offset+length > self._max_size:
1267-            raise DataTooLargeError(self._max_size, offset, length)
1268-        f = open(self.home, 'rb+')
1269-        real_offset = self._data_offset+offset
1270-        f.seek(real_offset)
1271-        assert f.tell() == real_offset
1272-        f.write(data)
1273-        f.close()
1274-
1275-    def _write_lease_record(self, f, lease_number, lease_info):
1276-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1277-        f.seek(offset)
1278-        assert f.tell() == offset
1279-        f.write(lease_info.to_immutable_data())
1280-
1281-    def _read_num_leases(self, f):
1282-        f.seek(0x08)
1283-        (num_leases,) = struct.unpack(">L", f.read(4))
1284-        return num_leases
1285-
1286-    def _write_num_leases(self, f, num_leases):
1287-        f.seek(0x08)
1288-        f.write(struct.pack(">L", num_leases))
1289-
1290-    def _truncate_leases(self, f, num_leases):
1291-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1292-
1293-    def get_leases(self):
1294-        """Yields a LeaseInfo instance for all leases."""
1295-        f = open(self.home, 'rb')
1296-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1297-        f.seek(self._lease_offset)
1298-        for i in range(num_leases):
1299-            data = f.read(self.LEASE_SIZE)
1300-            if data:
1301-                yield LeaseInfo().from_immutable_data(data)
1302-
1303-    def add_lease(self, lease_info):
1304-        f = open(self.home, 'rb+')
1305-        num_leases = self._read_num_leases(f)
1306-        self._write_lease_record(f, num_leases, lease_info)
1307-        self._write_num_leases(f, num_leases+1)
1308-        f.close()
1309-
1310-    def renew_lease(self, renew_secret, new_expire_time):
1311-        for i,lease in enumerate(self.get_leases()):
1312-            if constant_time_compare(lease.renew_secret, renew_secret):
1313-                # yup. See if we need to update the owner time.
1314-                if new_expire_time > lease.expiration_time:
1315-                    # yes
1316-                    lease.expiration_time = new_expire_time
1317-                    f = open(self.home, 'rb+')
1318-                    self._write_lease_record(f, i, lease)
1319-                    f.close()
1320-                return
1321-        raise IndexError("unable to renew non-existent lease")
1322-
1323-    def add_or_renew_lease(self, lease_info):
1324-        try:
1325-            self.renew_lease(lease_info.renew_secret,
1326-                             lease_info.expiration_time)
1327-        except IndexError:
1328-            self.add_lease(lease_info)
1329-
1330-
1331-    def cancel_lease(self, cancel_secret):
1332-        """Remove a lease with the given cancel_secret. If the last lease is
1333-        cancelled, the file will be removed. Return the number of bytes that
1334-        were freed (by truncating the list of leases, and possibly by
1335-        deleting the file. Raise IndexError if there was no lease with the
1336-        given cancel_secret.
1337-        """
1338-
1339-        leases = list(self.get_leases())
1340-        num_leases_removed = 0
1341-        for i,lease in enumerate(leases):
1342-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1343-                leases[i] = None
1344-                num_leases_removed += 1
1345-        if not num_leases_removed:
1346-            raise IndexError("unable to find matching lease to cancel")
1347-        if num_leases_removed:
1348-            # pack and write out the remaining leases. We write these out in
1349-            # the same order as they were added, so that if we crash while
1350-            # doing this, we won't lose any non-cancelled leases.
1351-            leases = [l for l in leases if l] # remove the cancelled leases
1352-            f = open(self.home, 'rb+')
1353-            for i,lease in enumerate(leases):
1354-                self._write_lease_record(f, i, lease)
1355-            self._write_num_leases(f, len(leases))
1356-            self._truncate_leases(f, len(leases))
1357-            f.close()
1358-        space_freed = self.LEASE_SIZE * num_leases_removed
1359-        if not len(leases):
1360-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1361-            self.unlink()
1362-        return space_freed
1363-
1364-
1365 class BucketWriter(Referenceable):
1366     implements(RIBucketWriter)
1367 
1368hunk ./src/allmydata/storage/immutable.py 17
1369-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1370+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1371         self.ss = ss
1372hunk ./src/allmydata/storage/immutable.py 19
1373-        self.incominghome = incominghome
1374-        self.finalhome = finalhome
1375-        self._max_size = max_size # don't allow the client to write more than this
1376+        self._max_size = max_size # don't allow the client to write more than this
1377         self._canary = canary
1378         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1379         self.closed = False
1380hunk ./src/allmydata/storage/immutable.py 24
1381         self.throw_out_all_data = False
1382-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1383+        self._sharefile = immutableshare
1384         # also, add our lease to the file now, so that other ones can be
1385         # added by simultaneous uploaders
1386         self._sharefile.add_lease(lease_info)
1387hunk ./src/allmydata/storage/immutable.py 45
1388         precondition(not self.closed)
1389         start = time.time()
1390 
1391-        fileutil.make_dirs(os.path.dirname(self.finalhome))
1392-        fileutil.rename(self.incominghome, self.finalhome)
1393-        try:
1394-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
1395-            # We try to delete the parent (.../ab/abcde) to avoid leaving
1396-            # these directories lying around forever, but the delete might
1397-            # fail if we're working on another share for the same storage
1398-            # index (like ab/abcde/5). The alternative approach would be to
1399-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
1400-            # ShareWriter), each of which is responsible for a single
1401-            # directory on disk, and have them use reference counting of
1402-            # their children to know when they should do the rmdir. This
1403-            # approach is simpler, but relies on os.rmdir refusing to delete
1404-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
1405-            os.rmdir(os.path.dirname(self.incominghome))
1406-            # we also delete the grandparent (prefix) directory, .../ab ,
1407-            # again to avoid leaving directories lying around. This might
1408-            # fail if there is another bucket open that shares a prefix (like
1409-            # ab/abfff).
1410-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
1411-            # we leave the great-grandparent (incoming/) directory in place.
1412-        except EnvironmentError:
1413-            # ignore the "can't rmdir because the directory is not empty"
1414-            # exceptions, those are normal consequences of the
1415-            # above-mentioned conditions.
1416-            pass
1417+        self._sharefile.close()
1418+        filelen = self._sharefile.stat()
1419         self._sharefile = None
1420hunk ./src/allmydata/storage/immutable.py 48
1421+
1422         self.closed = True
1423         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1424 
1425hunk ./src/allmydata/storage/immutable.py 52
1426-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
1427         self.ss.bucket_writer_closed(self, filelen)
1428         self.ss.add_latency("close", time.time() - start)
1429         self.ss.count("close")
1430hunk ./src/allmydata/storage/immutable.py 90
1431 class BucketReader(Referenceable):
1432     implements(RIBucketReader)
1433 
1434-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
1435+    def __init__(self, ss, share):
1436         self.ss = ss
1437hunk ./src/allmydata/storage/immutable.py 92
1438-        self._share_file = ShareFile(sharefname)
1439-        self.storage_index = storage_index
1440-        self.shnum = shnum
1441+        self._share_file = share
1442+        self.storageindex = share.storageindex
1443+        self.shnum = share.shnum
1444 
1445     def __repr__(self):
1446         return "<%s %s %s>" % (self.__class__.__name__,
1447hunk ./src/allmydata/storage/immutable.py 98
1448-                               base32.b2a_l(self.storage_index[:8], 60),
1449+                               base32.b2a_l(self.storageindex[:8], 60),
1450                                self.shnum)
1451 
1452     def remote_read(self, offset, length):
1453hunk ./src/allmydata/storage/immutable.py 110
1454 
1455     def remote_advise_corrupt_share(self, reason):
1456         return self.ss.remote_advise_corrupt_share("immutable",
1457-                                                   self.storage_index,
1458+                                                   self.storageindex,
1459                                                    self.shnum,
1460                                                    reason)
1461}
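
With this change BucketWriter and BucketReader no longer open share files by path: the backend hands them a share object that provides write_share_data(), close(), stat(), and add_lease(). A minimal sketch of the resulting call pattern, assuming a backend object like the DASCore defined in the next patch (storageindex, lease_info, and canary are supplied by the caller):

    # The backend builds the share object and wraps it in a BucketWriter:
    bw = backend.make_bucket_writer(storageindex, shnum,
                                    max_space_per_bucket, lease_info, canary)
    bw.remote_write(0, sharedata)  # delegates to the share object's write_share_data()
    bw.remote_close()              # share.close() finalizes it; share.stat() reports the size
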
1462[creates backends/das/core.py
1463wilcoxjg@gmail.com**20110809202620
1464 Ignore-this: 2ea937f8d02aa85396135903be91ed67
1465] {
1466adddir ./src/allmydata/storage/backends/das
1467addfile ./src/allmydata/storage/backends/das/core.py
1468hunk ./src/allmydata/storage/backends/das/core.py 1
1469+import re, weakref, struct, time, stat
1470+from twisted.application import service
1471+from twisted.python.filepath import UnlistableError
1472+from twisted.python import filepath
1473+from twisted.python.filepath import FilePath
1474+from zope.interface import implements
1475+
1476+import allmydata # for __full_version__
1477+from allmydata.interfaces import IStorageBackend
1478+from allmydata.storage.backends.base import Backend
1479+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir, UnknownImmutableContainerVersionError, DataTooLargeError
1480+from allmydata.util.assertutil import precondition
1481+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
1482+from allmydata.util import fileutil, idlib, log, time_format
1483+from allmydata.util.fileutil import fp_make_dirs
1484+from allmydata.storage.lease import LeaseInfo
1485+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1486+     create_mutable_sharefile
1487+from allmydata.storage.immutable import BucketWriter, BucketReader
1488+from allmydata.storage.crawler import BucketCountingCrawler
1489+from allmydata.util.hashutil import constant_time_compare
1490+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
1491+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
1492+
1493+# storage/
1494+# storage/shares/incoming
1495+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1496+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
1497+# storage/shares/$START/$STORAGEINDEX
1498+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
1499+
1500+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1501+# base-32 chars).
1502+# $SHARENUM matches this regex:
1503+NUM_RE=re.compile("^[0-9]+$")
1504+
1505+class DASCore(Backend):
1506+    implements(IStorageBackend)
1507+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1508+        Backend.__init__(self)
1509+        self._setup_storage(storedir, readonly, reserved_space)
1510+        self._setup_corruption_advisory()
1511+        self._setup_bucket_counter()
1512+        self._setup_lease_checker(expiration_policy)
1513+
1514+    def _setup_storage(self, storedir, readonly, reserved_space):
1515+        precondition(isinstance(storedir, FilePath), storedir, FilePath) 
1516+        self.storedir = storedir
1517+        self.readonly = readonly
1518+        self.reserved_space = int(reserved_space)
1519+        self.sharedir = self.storedir.child("shares")
1520+        fileutil.fp_make_dirs(self.sharedir)
1521+        self.incomingdir = self.sharedir.child('incoming')
1522+        self._clean_incomplete()
1523+        if self.reserved_space and (self.get_available_space() is None):
1524+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1525+                    umid="0wZ27w", level=log.UNUSUAL)
1526+
1527+
1528+    def _clean_incomplete(self):
1529+        fileutil.fp_remove(self.incomingdir)
1530+        fileutil.fp_make_dirs(self.incomingdir)
1531+
1532+    def _setup_corruption_advisory(self):
1533+        # we don't actually create the corruption-advisory dir until necessary
1534+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
1535+
1536+    def _setup_bucket_counter(self):
1537+        statefname = self.storedir.child("bucket_counter.state")
1538+        self.bucket_counter = BucketCountingCrawler(statefname)
1539+        self.bucket_counter.setServiceParent(self)
1540+
1541+    def _setup_lease_checker(self, expiration_policy):
1542+        statefile = self.storedir.child("lease_checker.state")
1543+        historyfile = self.storedir.child("lease_checker.history")
1544+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1545+        self.lease_checker.setServiceParent(self)
1546+
1547+    def get_incoming_shnums(self, storageindex):
1548+        """ Return a frozenset of the shnum (as ints) of incoming shares. """
1549+        incomingthissi = si_si2dir(self.incomingdir, storageindex)
1550+        try:
1551+            childfps = [ fp for fp in incomingthissi.children() if NUM_RE.match(fp.basename()) ]
1552+            shnums = [ int(fp.basename()) for fp in childfps]
1553+            return frozenset(shnums)
1554+        except UnlistableError:
1555+            # There is no shares directory at all.
1556+            return frozenset()
1557+           
1558+    def get_shares(self, storageindex):
1559+        """ Generate ImmutableShare objects for shares we have for this
1560+        storageindex. ("Shares we have" means completed ones, excluding
1561+        incoming ones.)"""
1562+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
1563+        try:
1564+            for fp in finalstoragedir.children():
1565+                fpshnumstr = fp.basename()
1566+                if NUM_RE.match(fpshnumstr):
1567+                    finalhome = finalstoragedir.child(fpshnumstr)
1568+                    yield ImmutableShare(storageindex, int(fpshnumstr), finalhome)
1569+        except UnlistableError:
1570+            # There is no shares directory at all.
1571+            pass
1572+       
1573+    def get_available_space(self):
1574+        if self.readonly:
1575+            return 0
1576+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1577+
1578+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
1579+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
1580+        incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
1581+        immsh = ImmutableShare(storageindex, shnum, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
1582+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1583+        return bw
1584+
1585+    def make_bucket_reader(self, share):
1586+        return BucketReader(self.ss, share)
1587+
1588+    def set_storage_server(self, ss):
1589+        self.ss = ss
1590+       
1591+
1592+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1593+# and share data. The share data is accessed by RIBucketWriter.write and
1594+# RIBucketReader.read . The lease information is not accessible through these
1595+# interfaces.
1596+
1597+# The share file has the following layout:
1598+#  0x00: share file version number, four bytes, current version is 1
1599+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1600+#  0x08: number of leases, four bytes big-endian
1601+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1602+#  A+0x0c = B: first lease. Lease format is:
1603+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1604+#   B+0x04: renew secret, 32 bytes (SHA256)
1605+#   B+0x24: cancel secret, 32 bytes (SHA256)
1606+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1607+#   B+0x48: next lease, or end of record
1608+
1609+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1610+# but it is still filled in by storage servers in case the storage server
1611+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1612+# share file is moved from one storage server to another. The value stored in
1613+# this field is truncated, so if the actual share data length is >= 2**32,
1614+# then the value stored in this field will be the actual share data length
1615+# modulo 2**32.
1616+
1617+class ImmutableShare(object):
1618+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1619+    sharetype = "immutable"
1620+
1621+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
1622+        """ If max_size is not None then I won't allow more than
1623+        max_size to be written to me. If create=True then max_size
1624+        must not be None. """
1625+        precondition((max_size is not None) or (not create), max_size, create)
1626+        self.storageindex = storageindex
1627+        self._max_size = max_size
1628+        self.incominghome = incominghome
1629+        self.finalhome = finalhome
1630+        self.shnum = shnum
1631+        if create:
1632+            # touch the file, so later callers will see that we're working on
1633+            # it. Also construct the metadata.
1634+            assert not finalhome.exists()
1635+            fp_make_dirs(self.incominghome.parent())
1636+            # The second field -- the four-byte share data length -- is no
1637+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1638+            # there in case someone downgrades a storage server from >=
1639+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1640+            # server to another, etc. We do saturation -- a share data length
1641+            # larger than 2**32-1 (what can fit into the field) is marked as
1642+            # the largest length that can fit into the field. That way, even
1643+            # if this does happen, the old < v1.3.0 server will still allow
1644+            # clients to read the first part of the share.
1645+            self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1646+            self._lease_offset = max_size + 0x0c
1647+            self._num_leases = 0
1648+        else:
1649+            fh = self.finalhome.open(mode='rb')
1650+            try:
1651+                (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
1652+            finally:
1653+                fh.close()
1654+            filesize = self.finalhome.getsize()
1655+            if version != 1:
1656+                msg = "sharefile %s had version %d but we wanted 1" % \
1657+                      (self.finalhome, version)
1658+                raise UnknownImmutableContainerVersionError(msg)
1659+            self._num_leases = num_leases
1660+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1661+        self._data_offset = 0xc
1662+
1663+    def close(self):
1664+        fileutil.fp_make_dirs(self.finalhome.parent())
1665+        self.incominghome.moveTo(self.finalhome)
1666+        try:
1667+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
1668+            # We try to delete the parent (.../ab/abcde) to avoid leaving
1669+            # these directories lying around forever, but the delete might
1670+            # fail if we're working on another share for the same storage
1671+            # index (like ab/abcde/5). The alternative approach would be to
1672+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
1673+            # ShareWriter), each of which is responsible for a single
1674+            # directory on disk, and have them use reference counting of
1675+            # their children to know when they should do the rmdir. This
1676+            # approach is simpler, but relies on os.rmdir refusing to delete
1677+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
1678+            fileutil.fp_rmdir_if_empty(self.incominghome.parent())
1679+            # we also delete the grandparent (prefix) directory, .../ab ,
1680+            # again to avoid leaving directories lying around. This might
1681+            # fail if there is another bucket open that shares a prefix (like
1682+            # ab/abfff).
1683+            fileutil.fp_rmdir_if_empty(self.incominghome.parent().parent())
1684+            # we leave the great-grandparent (incoming/) directory in place.
1685+        except EnvironmentError:
1686+            # ignore the "can't rmdir because the directory is not empty"
1687+            # exceptions, those are normal consequences of the
1688+            # above-mentioned conditions.
1689+            pass
1691+
1692+    def stat(self):
1693+        return self.finalhome.getsize()
1694+
1695+    def get_shnum(self):
1696+        return self.shnum
1697+
1698+    def unlink(self):
1699+        self.finalhome.remove()
1700+
1701+    def read_share_data(self, offset, length):
1702+        precondition(offset >= 0)
1703+        # Reads beyond the end of the data are truncated. Reads that start
1704+        # beyond the end of the data return an empty string.
1705+        seekpos = self._data_offset+offset
1706+        fsize = self.finalhome.getsize()
1707+        actuallength = max(0, min(length, fsize-seekpos))
1708+        if actuallength == 0:
1709+            return ""
1710+        fh = self.finalhome.open(mode='rb')
1711+        try:
1712+            fh.seek(seekpos)
1713+            sharedata = fh.read(actuallength)
1714+        finally:
1715+            fh.close()
1716+        return sharedata
1717+
1718+    def write_share_data(self, offset, data):
1719+        length = len(data)
1720+        precondition(offset >= 0, offset)
1721+        if self._max_size is not None and offset+length > self._max_size:
1722+            raise DataTooLargeError(self._max_size, offset, length)
1723+        fh = self.incominghome.open(mode='rb+')
1724+        try:
1725+            real_offset = self._data_offset+offset
1726+            fh.seek(real_offset)
1727+            assert fh.tell() == real_offset
1728+            fh.write(data)
1729+        finally:
1730+            fh.close()
1731+
1732+    def _write_lease_record(self, f, lease_number, lease_info):
1733+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1734+        fh = f.open(mode='rb+')
1735+        try:
1736+            fh.seek(offset)
1737+            assert fh.tell() == offset
1738+            fh.write(lease_info.to_immutable_data())
1739+        finally:
1740+            fh.close()
1741+
1742+    def _read_num_leases(self, f):
1743+        fh = f.open() #XXX  Should be mocking FilePath.open()
1744+        try:
1745+            fh.seek(0x08)
1746+            ro = fh.read(4)
1747+            (num_leases,) = struct.unpack(">L", ro)
1748+        finally:
1749+            fh.close()
1750+        return num_leases
1751+
1752+    def _write_num_leases(self, f, num_leases):
1753+        fh = f.open(mode='rb+')
1754+        try:
1755+            fh.seek(0x08)
1756+            fh.write(struct.pack(">L", num_leases))
1757+        finally:
1758+            fh.close()
1759+
1760+    def _truncate_leases(self, f, num_leases):
1761+        f.open(mode='rb+').truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1762+
1763+    def get_leases(self):
1764+        """Yields a LeaseInfo instance for all leases."""
1765+        fh = self.finalhome.open(mode='rb')
1766+        (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
1767+        fh.seek(self._lease_offset)
1768+        for i in range(num_leases):
1769+            data = fh.read(self.LEASE_SIZE)
1770+            if data:
1771+                yield LeaseInfo().from_immutable_data(data)
1772+
1773+    def add_lease(self, lease_info):
1774+        num_leases = self._read_num_leases(self.incominghome)
1775+        self._write_lease_record(self.incominghome, num_leases, lease_info)
1776+        self._write_num_leases(self.incominghome, num_leases+1)
1777+       
1778+    def renew_lease(self, renew_secret, new_expire_time):
1779+        for i,lease in enumerate(self.get_leases()):
1780+            if constant_time_compare(lease.renew_secret, renew_secret):
1781+                # yup. See if we need to update the owner time.
1782+                if new_expire_time > lease.expiration_time:
1783+                    # yes
1784+                    lease.expiration_time = new_expire_time
1785+                    # _write_lease_record() takes a FilePath and opens it itself
1786+                    self._write_lease_record(self.finalhome, i, lease)
1788+                return
1789+        raise IndexError("unable to renew non-existent lease")
1790+
1791+    def add_or_renew_lease(self, lease_info):
1792+        try:
1793+            self.renew_lease(lease_info.renew_secret,
1794+                             lease_info.expiration_time)
1795+        except IndexError:
1796+            self.add_lease(lease_info)
1797+
1798+    def cancel_lease(self, cancel_secret):
1799+        """Remove a lease with the given cancel_secret. If the last lease is
1800+        cancelled, the file will be removed. Return the number of bytes that
1801+        were freed (by truncating the list of leases, and possibly by
1802+        deleting the file). Raise IndexError if there was no lease with the
1803+        given cancel_secret.
1804+        """
1805+
1806+        leases = list(self.get_leases())
1807+        num_leases_removed = 0
1808+        for i,lease in enumerate(leases):
1809+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1810+                leases[i] = None
1811+                num_leases_removed += 1
1812+        if not num_leases_removed:
1813+            raise IndexError("unable to find matching lease to cancel")
1814+        if num_leases_removed:
1815+            # pack and write out the remaining leases. We write these out in
1816+            # the same order as they were added, so that if we crash while
1817+            # doing this, we won't lose any non-cancelled leases.
1818+            leases = [l for l in leases if l] # remove the cancelled leases
1819+            for i,lease in enumerate(leases):
1820+                self._write_lease_record(self.finalhome, i, lease)
1821+            self._write_num_leases(self.finalhome, len(leases))
1822+            self._truncate_leases(self.finalhome, len(leases))
1825+        space_freed = self.LEASE_SIZE * num_leases_removed
1826+        if not len(leases):
1827+            space_freed += self.finalhome.getsize()
1828+            self.unlink()
1829+        return space_freed
1830}
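
The share-file layout described in the comments above can be checked directly with struct: the header is twelve bytes (three big-endian longs) and each lease record is 72 bytes. A small worked example of the offsets, for a share whose data length is A:

    import struct
    header = struct.pack(">LLL", 1, 1, 1)      # version, data length, number of leases
    assert len(header) == 0x0c                 # share data therefore begins at offset 0x0c
    assert struct.calcsize(">L32s32sL") == 72  # one lease record, i.e. LEASE_SIZE
    # the first lease record starts at offset B = A + 0x0c
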
1831[change backends/das/core.py to correct which interfaces are implemented
1832wilcoxjg@gmail.com**20110809203123
1833 Ignore-this: 7f9331a04b55f7feee4335abee011e14
1834] hunk ./src/allmydata/storage/backends/das/core.py 13
1835 from allmydata.storage.backends.base import Backend
1836 from allmydata.storage.common import si_b2a, si_a2b, si_si2dir, UnknownImmutableContainerVersionError, DataTooLargeError
1837 from allmydata.util.assertutil import precondition
1838-from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
1839+from allmydata.interfaces import IStorageBackend
1840 from allmydata.util import fileutil, idlib, log, time_format
1841 from allmydata.util.fileutil import fp_make_dirs
1842 from allmydata.storage.lease import LeaseInfo
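
Both backends declare the interface they provide using zope.interface's Python 2 class-advice style; a minimal sketch of the pattern, assuming only that IStorageBackend is the interface defined in allmydata.interfaces:

    from zope.interface import implements
    from allmydata.interfaces import IStorageBackend

    class MyBackend(object):
        implements(IStorageBackend)  # class advice, as used by DASCore above
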
1843[util/fileutil.py now expects and manipulates twisted.python.filepath.FilePath objects
1844wilcoxjg@gmail.com**20110809203321
1845 Ignore-this: 12c8aa13424ed51a5df09b92a454627
1846] {
1847hunk ./src/allmydata/util/fileutil.py 5
1848 Futz with files like a pro.
1849 """
1850 
1851-import sys, exceptions, os, stat, tempfile, time, binascii
1852+import errno, sys, exceptions, os, stat, tempfile, time, binascii
1853+
1854+from allmydata.util.assertutil import precondition
1855 
1856 from twisted.python import log
1857hunk ./src/allmydata/util/fileutil.py 10
1858+from twisted.python.filepath import FilePath, UnlistableError
1859 
1860 from pycryptopp.cipher.aes import AES
1861 
1862hunk ./src/allmydata/util/fileutil.py 189
1863             raise tx
1864         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
1865 
1866-def rm_dir(dirname):
1867+def fp_make_dirs(dirfp):
1868+    """
1869+    An idempotent version of FilePath.makedirs().  If the dir already
1870+    exists, do nothing and return without raising an exception.  If this
1871+    call creates the dir, return without raising an exception.  If there is
1872+    an error that prevents creation or if the directory gets deleted after
1873+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
1874+    exists, raise an exception.
1875+    """
1876+    log.msg( "xxx 0 %s" % (dirfp,))
1877+    tx = None
1878+    try:
1879+        dirfp.makedirs()
1880+    except OSError, x:
1881+        tx = x
1882+
1883+    if not dirfp.isdir():
1884+        if tx:
1885+            raise tx
1886+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
1887+
1888+def fp_rmdir_if_empty(dirfp):
1889+    """ Remove the directory if it is empty. """
1890+    try:
1891+        os.rmdir(dirfp.path)
1892+    except OSError, e:
1893+        if e.errno != errno.ENOTEMPTY:
1894+            raise
1895+    else:
1896+        dirfp.changed()
1897+
1898+def rmtree(dirname):
1899     """
1900     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
1901     already gone, do nothing and return without raising an exception.  If this
1902hunk ./src/allmydata/util/fileutil.py 239
1903             else:
1904                 remove(fullname)
1905         os.rmdir(dirname)
1906-    except Exception, le:
1907-        # Ignore "No such file or directory"
1908-        if (not isinstance(le, OSError)) or le.args[0] != 2:
1909+    except EnvironmentError, le:
1910+        # Ignore "No such file or directory", collect any other exception.
1911+        if le.args[0] not in (errno.ENOENT, 3):
1912             excs.append(le)
1913hunk ./src/allmydata/util/fileutil.py 243
1914+    except Exception, le:
1915+        excs.append(le)
1916 
1917     # Okay, now we've recursively removed everything, ignoring any "No
1918     # such file or directory" errors, and collecting any other errors.
1919hunk ./src/allmydata/util/fileutil.py 256
1920             raise OSError, "Failed to remove dir for unknown reason."
1921         raise OSError, excs
1922 
1923+def fp_remove(dirfp):
1924+    """
1925+    An idempotent version of shutil.rmtree().  If the dir is already gone,
1926+    do nothing and return without raising an exception.  If this call
1927+    removes the dir, return without raising an exception.  If there is an
1928+    error that prevents removal or if the directory gets created again by
1929+    someone else after this deletes it and before this checks that it is
1930+    gone, raise an exception.
1931+    """
1932+    try:
1933+        dirfp.remove()
1934+    except UnlistableError, e:
1935+        if e.originalException.errno != errno.ENOENT:
1936+            raise
1937+    except OSError, e:
1938+        if e.errno != errno.ENOENT:
1939+            raise
1940+
1941+def rm_dir(dirname):
1942+    # Renamed to be like shutil.rmtree and unlike rmdir.
1943+    return rmtree(dirname)
1944 
1945 def remove_if_possible(f):
1946     try:
1947hunk ./src/allmydata/util/fileutil.py 387
1948         import traceback
1949         traceback.print_exc()
1950 
1951-def get_disk_stats(whichdir, reserved_space=0):
1952+def get_disk_stats(whichdirfp, reserved_space=0):
1953     """Return disk statistics for the storage disk, in the form of a dict
1954     with the following fields.
1955       total:            total bytes on disk
1956hunk ./src/allmydata/util/fileutil.py 408
1957     you can pass how many bytes you would like to leave unused on this
1958     filesystem as reserved_space.
1959     """
1960+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
1961 
1962     if have_GetDiskFreeSpaceExW:
1963         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
1964hunk ./src/allmydata/util/fileutil.py 419
1965         n_free_for_nonroot = c_ulonglong(0)
1966         n_total            = c_ulonglong(0)
1967         n_free_for_root    = c_ulonglong(0)
1968-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
1969+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
1970                                                byref(n_total),
1971                                                byref(n_free_for_root))
1972         if retval == 0:
1973hunk ./src/allmydata/util/fileutil.py 424
1974             raise OSError("Windows error %d attempting to get disk statistics for %r"
1975-                          % (GetLastError(), whichdir))
1976+                          % (GetLastError(), whichdirfp.path))
1977         free_for_nonroot = n_free_for_nonroot.value
1978         total            = n_total.value
1979         free_for_root    = n_free_for_root.value
1980hunk ./src/allmydata/util/fileutil.py 433
1981         # <http://docs.python.org/library/os.html#os.statvfs>
1982         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
1983         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
1984-        s = os.statvfs(whichdir)
1985+        s = os.statvfs(whichdirfp.path)
1986 
1987         # on my mac laptop:
1988         #  statvfs(2) is a wrapper around statfs(2).
1989hunk ./src/allmydata/util/fileutil.py 460
1990              'avail': avail,
1991            }
1992 
1993-def get_available_space(whichdir, reserved_space):
1994+def get_available_space(whichdirfp, reserved_space):
1995     """Returns available space for share storage in bytes, or None if no
1996     API to get this information is available.
1997 
1998hunk ./src/allmydata/util/fileutil.py 472
1999     you can pass how many bytes you would like to leave unused on this
2000     filesystem as reserved_space.
2001     """
2002+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
2003     try:
2004hunk ./src/allmydata/util/fileutil.py 474
2005-        return get_disk_stats(whichdir, reserved_space)['avail']
2006+        return get_disk_stats(whichdirfp, reserved_space)['avail']
2007     except AttributeError:
2008         return None
2009hunk ./src/allmydata/util/fileutil.py 477
2010-    except EnvironmentError:
2011-        log.msg("OS call to get disk statistics failed")
2012-        return 0
2013}
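
The new fp_* helpers mirror the old path-string functions but take twisted.python.filepath.FilePath objects throughout. A minimal usage sketch (the directory location is hypothetical):

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    storedir = FilePath("/tmp/teststore")  # hypothetical location
    fileutil.fp_make_dirs(storedir)        # idempotent, like the old make_dirs()
    print fileutil.get_available_space(storedir, reserved_space=0)
    fileutil.fp_rmdir_if_empty(storedir)   # quietly declines if the dir is non-empty
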
2014[add expirer.py
2015wilcoxjg@gmail.com**20110809203519
2016 Ignore-this: b09d460593f0e0aa065e867d5159455b
2017] {
2018addfile ./src/allmydata/storage/backends/das/expirer.py
2019hunk ./src/allmydata/storage/backends/das/expirer.py 1
2020+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
2021+from allmydata.storage.crawler import ShareCrawler
2022+from allmydata.storage.common import UnknownMutableContainerVersionError, \
2023+     UnknownImmutableContainerVersionError
2024+from twisted.python import log as twlog
2025+
2026+class LeaseCheckingCrawler(ShareCrawler):
2027+    """I examine the leases on all shares, determining which are still valid
2028+    and which have expired. I can remove the expired leases (if so
2029+    configured), and the share will be deleted when the last lease is
2030+    removed.
2031+
2032+    I collect statistics on the leases and make these available to a web
2033+    status page, including:
2034+
2035+    Space recovered during this cycle-so-far:
2036+     actual (only if expiration_enabled=True):
2037+      num-buckets, num-shares, sum of share sizes, real disk usage
2038+      ('real disk usage' means we use stat(fn).st_blocks*512 and include any
2039+       space used by the directory)
2040+     what it would have been with the original lease expiration time
2041+     what it would have been with our configured expiration time
2042+
2043+    Prediction of space that will be recovered during the rest of this cycle
2044+    Prediction of space that will be recovered by the entire current cycle.
2045+
2046+    Space recovered during the last 10 cycles  <-- saved in separate pickle
2047+
2048+    Shares/buckets examined:
2049+     this cycle-so-far
2050+     prediction of rest of cycle
2051+     during last 10 cycles <-- separate pickle
2052+    start/finish time of last 10 cycles  <-- separate pickle
2053+    expiration time used for last 10 cycles <-- separate pickle
2054+
2055+    Histogram of leases-per-share:
2056+     this-cycle-to-date
2057+     last 10 cycles <-- separate pickle
2058+    Histogram of lease ages, buckets = 1day
2059+     cycle-to-date
2060+     last 10 cycles <-- separate pickle
2061+
2062+    All cycle-to-date values remain valid until the start of the next cycle.
2063+
2064+    """
2065+
2066+    slow_start = 360 # wait 6 minutes after startup
2067+    minimum_cycle_time = 12*60*60 # not more than twice per day
2068+
2069+    def __init__(self, statefile, historyfp, expiration_policy):
2070+        self.historyfp = historyfp
2071+        self.expiration_enabled = expiration_policy['enabled']
2072+        self.mode = expiration_policy['mode']
2073+        self.override_lease_duration = None
2074+        self.cutoff_date = None
2075+        if self.mode == "age":
2076+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
2077+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
2078+        elif self.mode == "cutoff-date":
2079+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
2081+            self.cutoff_date = expiration_policy['cutoff_date']
2082+        else:
2083+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
2084+        self.sharetypes_to_expire = expiration_policy['sharetypes']
2085+        ShareCrawler.__init__(self, statefile)
2086+
2087+    def add_initial_state(self):
2088+        # we fill ["cycle-to-date"] here (even though they will be reset in
2089+        # self.started_cycle) just in case someone grabs our state before we
2090+        # get started: unit tests do this
2091+        so_far = self.create_empty_cycle_dict()
2092+        self.state.setdefault("cycle-to-date", so_far)
2093+        # in case we upgrade the code while a cycle is in progress, update
2094+        # the keys individually
2095+        for k in so_far:
2096+            self.state["cycle-to-date"].setdefault(k, so_far[k])
2097+
2098+        # initialize history
2099+        if not self.historyfp.exists():
2100+            history = {} # cyclenum -> dict
2101+            self.historyfp.setContent(pickle.dumps(history))
2102+
2103+    def create_empty_cycle_dict(self):
2104+        recovered = self.create_empty_recovered_dict()
2105+        so_far = {"corrupt-shares": [],
2106+                  "space-recovered": recovered,
2107+                  "lease-age-histogram": {}, # (minage,maxage)->count
2108+                  "leases-per-share-histogram": {}, # leasecount->numshares
2109+                  }
2110+        return so_far
2111+
2112+    def create_empty_recovered_dict(self):
2113+        recovered = {}
2114+        for a in ("actual", "original", "configured", "examined"):
2115+            for b in ("buckets", "shares", "sharebytes", "diskbytes"):
2116+                recovered[a+"-"+b] = 0
2117+                recovered[a+"-"+b+"-mutable"] = 0
2118+                recovered[a+"-"+b+"-immutable"] = 0
2119+        return recovered
2120+
2121+    def started_cycle(self, cycle):
2122+        self.state["cycle-to-date"] = self.create_empty_cycle_dict()
2123+
2124+    def stat(self, fn):
2125+        return os.stat(fn)
2126+
2127+    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
2128+        bucketdir = os.path.join(prefixdir, storage_index_b32)
2129+        s = self.stat(bucketdir)
2130+        would_keep_shares = []
2131+        wks = None
2132+
2133+        for fn in os.listdir(bucketdir):
2134+            try:
2135+                shnum = int(fn)
2136+            except ValueError:
2137+                continue # non-numeric means not a sharefile
2138+            sharefile = os.path.join(bucketdir, fn)
2139+            try:
2140+                wks = self.process_share(sharefile)
2141+            except (UnknownMutableContainerVersionError,
2142+                    UnknownImmutableContainerVersionError,
2143+                    struct.error):
2144+                twlog.msg("lease-checker error processing %s" % sharefile)
2145+                twlog.err()
2146+                which = (storage_index_b32, shnum)
2147+                self.state["cycle-to-date"]["corrupt-shares"].append(which)
2148+                wks = (1, 1, 1, "unknown")
2149+            would_keep_shares.append(wks)
2150+
2151+        sharetype = None
2152+        if wks:
2153+            # use the last share's sharetype as the buckettype
2154+            sharetype = wks[3]
2155+        rec = self.state["cycle-to-date"]["space-recovered"]
2156+        self.increment(rec, "examined-buckets", 1)
2157+        if sharetype:
2158+            self.increment(rec, "examined-buckets-"+sharetype, 1)
2159+
2160+        try:
2161+            bucket_diskbytes = s.st_blocks * 512
2162+        except AttributeError:
2163+            bucket_diskbytes = 0 # no stat().st_blocks on windows
2164+        if sum([wks[0] for wks in would_keep_shares]) == 0:
2165+            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
2166+        if sum([wks[1] for wks in would_keep_shares]) == 0:
2167+            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
2168+        if sum([wks[2] for wks in would_keep_shares]) == 0:
2169+            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
2170+
2171+    def process_share(self, sharefilename):
2172+        # first, find out what kind of a share it is
2173+        f = open(sharefilename, "rb")
2174+        prefix = f.read(32)
2175+        f.close()
2176+        if prefix == MutableShareFile.MAGIC:
2177+            sf = MutableShareFile(sharefilename)
2178+        else:
2179+            # otherwise assume it's immutable
2180+            sf = FSBShare(sharefilename) # XXX FSBShare is not defined anywhere in this patch
2181+        sharetype = sf.sharetype
2182+        now = time.time()
2183+        s = self.stat(sharefilename)
2184+
2185+        num_leases = 0
2186+        num_valid_leases_original = 0
2187+        num_valid_leases_configured = 0
2188+        expired_leases_configured = []
2189+
2190+        for li in sf.get_leases():
2191+            num_leases += 1
2192+            original_expiration_time = li.get_expiration_time()
2193+            grant_renew_time = li.get_grant_renew_time_time()
2194+            age = li.get_age()
2195+            self.add_lease_age_to_histogram(age)
2196+
2197+            #  expired-or-not according to original expiration time
2198+            if original_expiration_time > now:
2199+                num_valid_leases_original += 1
2200+
2201+            #  expired-or-not according to our configured age limit
2202+            expired = False
2203+            if self.mode == "age":
2204+                age_limit = original_expiration_time
2205+                if self.override_lease_duration is not None:
2206+                    age_limit = self.override_lease_duration
2207+                if age > age_limit:
2208+                    expired = True
2209+            else:
2210+                assert self.mode == "cutoff-date"
2211+                if grant_renew_time < self.cutoff_date:
2212+                    expired = True
2213+            if sharetype not in self.sharetypes_to_expire:
2214+                expired = False
2215+
2216+            if expired:
2217+                expired_leases_configured.append(li)
2218+            else:
2219+                num_valid_leases_configured += 1
2220+
2221+        so_far = self.state["cycle-to-date"]
2222+        self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
2223+        self.increment_space("examined", s, sharetype)
2224+
2225+        would_keep_share = [1, 1, 1, sharetype]
2226+
2227+        if self.expiration_enabled:
2228+            for li in expired_leases_configured:
2229+                sf.cancel_lease(li.cancel_secret)
2230+
2231+        if num_valid_leases_original == 0:
2232+            would_keep_share[0] = 0
2233+            self.increment_space("original", s, sharetype)
2234+
2235+        if num_valid_leases_configured == 0:
2236+            would_keep_share[1] = 0
2237+            self.increment_space("configured", s, sharetype)
2238+            if self.expiration_enabled:
2239+                would_keep_share[2] = 0
2240+                self.increment_space("actual", s, sharetype)
2241+
2242+        return would_keep_share
2243+
2244+    def increment_space(self, a, s, sharetype):
2245+        sharebytes = s.st_size
2246+        try:
2247+            # note that stat(2) says that st_blocks is 512 bytes, and that
2248+            # st_blksize is "optimal file sys I/O ops blocksize", which is
2249+            # independent of the block-size that st_blocks uses.
2250+            diskbytes = s.st_blocks * 512
2251+        except AttributeError:
2252+            # the docs say that st_blocks is only on linux. I also see it on
2253+            # MacOS. But it isn't available on windows.
2254+            diskbytes = sharebytes
2255+        so_far_sr = self.state["cycle-to-date"]["space-recovered"]
2256+        self.increment(so_far_sr, a+"-shares", 1)
2257+        self.increment(so_far_sr, a+"-sharebytes", sharebytes)
2258+        self.increment(so_far_sr, a+"-diskbytes", diskbytes)
2259+        if sharetype:
2260+            self.increment(so_far_sr, a+"-shares-"+sharetype, 1)
2261+            self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
2262+            self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
2263+
2264+    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
2265+        rec = self.state["cycle-to-date"]["space-recovered"]
2266+        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
2267+        self.increment(rec, a+"-buckets", 1)
2268+        if sharetype:
2269+            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
2270+            self.increment(rec, a+"-buckets-"+sharetype, 1)
2271+
2272+    def increment(self, d, k, delta=1):
2273+        if k not in d:
2274+            d[k] = 0
2275+        d[k] += delta
2276+
2277+    def add_lease_age_to_histogram(self, age):
2278+        bucket_interval = 24*60*60
2279+        bucket_number = int(age/bucket_interval)
2280+        bucket_start = bucket_number * bucket_interval
2281+        bucket_end = bucket_start + bucket_interval
2282+        k = (bucket_start, bucket_end)
2283+        self.increment(self.state["cycle-to-date"]["lease-age-histogram"], k, 1)
2284+
2285+    def convert_lease_age_histogram(self, lah):
2286+        # convert { (minage,maxage) : count } into [ (minage,maxage,count) ]
2287+        # since the former is not JSON-safe (JSON dictionaries must have
2288+        # string keys).
2289+        json_safe_lah = []
2290+        for k in sorted(lah):
2291+            (minage,maxage) = k
2292+            json_safe_lah.append( (minage, maxage, lah[k]) )
2293+        return json_safe_lah
2294+
2295+    def finished_cycle(self, cycle):
2296+        # add to our history state, prune old history
2297+        h = {}
2298+
2299+        start = self.state["current-cycle-start-time"]
2300+        now = time.time()
2301+        h["cycle-start-finish-times"] = (start, now)
2302+        h["expiration-enabled"] = self.expiration_enabled
2303+        h["configured-expiration-mode"] = (self.mode,
2304+                                           self.override_lease_duration,
2305+                                           self.cutoff_date,
2306+                                           self.sharetypes_to_expire)
2307+
2308+        s = self.state["cycle-to-date"]
2309+
2310+        # state["lease-age-histogram"] is a dictionary (mapping
2311+        # (minage,maxage) tuple to a sharecount), but we report
2312+        # self.get_state()["lease-age-histogram"] as a list of
2313+        # (min,max,sharecount) tuples, because JSON can handle that better.
2314+        # We record the list-of-tuples form into the history for the same
2315+        # reason.
2316+        lah = self.convert_lease_age_histogram(s["lease-age-histogram"])
2317+        h["lease-age-histogram"] = lah
2318+        h["leases-per-share-histogram"] = s["leases-per-share-histogram"].copy()
2319+        h["corrupt-shares"] = s["corrupt-shares"][:]
2320+        # note: if ["shares-recovered"] ever acquires an internal dict, this
2321+        # copy() needs to become a deepcopy
2322+        h["space-recovered"] = s["space-recovered"].copy()
2323+
2324+        history = pickle.loads(self.historyfp.getContent())
2325+        history[cycle] = h
2326+        while len(history) > 10:
2327+            oldcycles = sorted(history.keys())
2328+            del history[oldcycles[0]]
2329+        self.historyfp.setContent(pickle.dumps(history))
2330+
2331+    def get_state(self):
2332+        """In addition to the crawler state described in
2333+        ShareCrawler.get_state(), I return the following keys which are
2334+        specific to the lease-checker/expirer. Note that the non-history keys
2335+        (with 'cycle' in their names) are only present if a cycle is
2336+        currently running. If the crawler is between cycles, it is appropriate
2337+        to show the latest item in the 'history' key instead. Also note that
2338+        each history item has all the data in the 'cycle-to-date' value, plus
2339+        cycle-start-finish-times.
2340+
2341+         cycle-to-date:
2342+          expiration-enabled
2343+          configured-expiration-mode
2344+          lease-age-histogram (list of (minage,maxage,sharecount) tuples)
2345+          leases-per-share-histogram
2346+          corrupt-shares (list of (si_b32,shnum) tuples, minimal verification)
2347+          space-recovered
2348+
2349+         estimated-remaining-cycle:
2350+          # Values may be None if not enough data has been gathered to
2351+          # produce an estimate.
2352+          space-recovered
2353+
2354+         estimated-current-cycle:
2355+          # cycle-to-date plus estimated-remaining. Values may be None if
2356+          # not enough data has been gathered to produce an estimate.
2357+          space-recovered
2358+
2359+         history: maps cyclenum to a dict with the following keys:
2360+          cycle-start-finish-times
2361+          expiration-enabled
2362+          configured-expiration-mode
2363+          lease-age-histogram
2364+          leases-per-share-histogram
2365+          corrupt-shares
2366+          space-recovered
2367+
2368+         The 'space-recovered' structure is a dictionary with the following
2369+         keys:
2370+          # 'examined' is what was looked at
2371+          examined-buckets, examined-buckets-mutable, examined-buckets-immutable
2372+          examined-shares, -mutable, -immutable
2373+          examined-sharebytes, -mutable, -immutable
2374+          examined-diskbytes, -mutable, -immutable
2375+
2376+          # 'actual' is what was actually deleted
2377+          actual-buckets, -mutable, -immutable
2378+          actual-shares, -mutable, -immutable
2379+          actual-sharebytes, -mutable, -immutable
2380+          actual-diskbytes, -mutable, -immutable
2381+
2382+          # would have been deleted, if the original lease timer was used
2383+          original-buckets, -mutable, -immutable
2384+          original-shares, -mutable, -immutable
2385+          original-sharebytes, -mutable, -immutable
2386+          original-diskbytes, -mutable, -immutable
2387+
2388+          # would have been deleted, if our configured max_age was used
2389+          configured-buckets, -mutable, -immutable
2390+          configured-shares, -mutable, -immutable
2391+          configured-sharebytes, -mutable, -immutable
2392+          configured-diskbytes, -mutable, -immutable
2393+
2394+        """
2395+        progress = self.get_progress()
2396+
2397+        state = ShareCrawler.get_state(self) # does a shallow copy
2398+        history = pickle.loads(self.historyfp.getContent())
2399+        state["history"] = history
2400+
2401+        if not progress["cycle-in-progress"]:
2402+            del state["cycle-to-date"]
2403+            return state
2404+
2405+        so_far = state["cycle-to-date"].copy()
2406+        state["cycle-to-date"] = so_far
2407+
2408+        lah = so_far["lease-age-histogram"]
2409+        so_far["lease-age-histogram"] = self.convert_lease_age_histogram(lah)
2410+        so_far["expiration-enabled"] = self.expiration_enabled
2411+        so_far["configured-expiration-mode"] = (self.mode,
2412+                                                self.override_lease_duration,
2413+                                                self.cutoff_date,
2414+                                                self.sharetypes_to_expire)
2415+
2416+        so_far_sr = so_far["space-recovered"]
2417+        remaining_sr = {}
2418+        remaining = {"space-recovered": remaining_sr}
2419+        cycle_sr = {}
2420+        cycle = {"space-recovered": cycle_sr}
2421+
2422+        if progress["cycle-complete-percentage"] > 0.0:
2423+            pc = progress["cycle-complete-percentage"] / 100.0
2424+            m = (1-pc)/pc
2425+            for a in ("actual", "original", "configured", "examined"):
2426+                for b in ("buckets", "shares", "sharebytes", "diskbytes"):
2427+                    for c in ("", "-mutable", "-immutable"):
2428+                        k = a+"-"+b+c
2429+                        remaining_sr[k] = m * so_far_sr[k]
2430+                        cycle_sr[k] = so_far_sr[k] + remaining_sr[k]
2431+        else:
2432+            for a in ("actual", "original", "configured", "examined"):
2433+                for b in ("buckets", "shares", "sharebytes", "diskbytes"):
2434+                    for c in ("", "-mutable", "-immutable"):
2435+                        k = a+"-"+b+c
2436+                        remaining_sr[k] = None
2437+                        cycle_sr[k] = None
2438+
2439+        state["estimated-remaining-cycle"] = remaining
2440+        state["estimated-current-cycle"] = cycle
2441+        return state
2442}
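
A minimal sketch (not part of the patch) of the extrapolation get_state() performs above: once a fraction pc of the cycle is complete, each space-recovered counter gathered so far is scaled by (1 - pc)/pc to estimate the remainder, and so-far plus remainder gives the full-cycle estimate. The helper name and the sample dict below are hypothetical.

def extrapolate_remaining(so_far_sr, cycle_complete_percentage):
    # Mirror the else branch above: with no progress yet, no estimate
    # is possible, so every key maps to None.
    if cycle_complete_percentage <= 0.0:
        none = dict.fromkeys(so_far_sr, None)
        return none, dict(none)
    pc = cycle_complete_percentage / 100.0
    m = (1 - pc) / pc
    remaining = dict((k, m * v) for (k, v) in so_far_sr.items())
    full_cycle = dict((k, v + remaining[k]) for (k, v) in so_far_sr.items())
    return remaining, full_cycle

# 25% through a cycle with 100 examined-sharebytes so far predicts
# 300 more and 400 for the whole cycle.
rem, tot = extrapolate_remaining({"examined-sharebytes": 100}, 25.0)
assert rem["examined-sharebytes"] == 300.0
assert tot["examined-sharebytes"] == 400.0
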
2443[Changes I have made that aren't necessary for the test_backends.py suite to pass.
2444wilcoxjg@gmail.com**20110809203811
2445 Ignore-this: 117d49047456013f382ffc0559f00c40
2446] {
2447hunk ./src/allmydata/storage/crawler.py 1
2448-
2449 import os, time, struct
2450 import cPickle as pickle
2451 from twisted.internet import reactor
2452hunk ./src/allmydata/storage/crawler.py 6
2453 from twisted.application import service
2454 from allmydata.storage.common import si_b2a
2455-from allmydata.util import fileutil
2456 
2457 class TimeSliceExceeded(Exception):
2458     pass
2459hunk ./src/allmydata/storage/crawler.py 11
2460 
2461 class ShareCrawler(service.MultiService):
2462-    """A ShareCrawler subclass is attached to a StorageServer, and
2463+    """A subclass of ShareCrawler is attached to a StorageServer, and
2464     periodically walks all of its shares, processing each one in some
2465     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
2466     since large servers can easily have a terabyte of shares, in several
2467hunk ./src/allmydata/storage/crawler.py 29
2468     We assume that the normal upload/download/get_buckets traffic of a tahoe
2469     grid will cause the prefixdir contents to be mostly cached in the kernel,
2470     or that the number of buckets in each prefixdir will be small enough to
2471-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
2472+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
2473     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
2474     prefix. On this server, each prefixdir took 130ms-200ms to list the first
2475     time, and 17ms to list the second time.
2476hunk ./src/allmydata/storage/crawler.py 66
2477     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
2478     minimum_cycle_time = 300 # don't run a cycle faster than this
2479 
2480-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
2481+    def __init__(self, statefp, allowed_cpu_percentage=None):
2482         service.MultiService.__init__(self)
2483         if allowed_cpu_percentage is not None:
2484             self.allowed_cpu_percentage = allowed_cpu_percentage
2485hunk ./src/allmydata/storage/crawler.py 70
2486-        self.server = server
2487-        self.sharedir = server.sharedir
2488-        self.statefile = statefile
2489+        self.statefp = statefp
2490         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
2491                          for i in range(2**10)]
2492         self.prefixes.sort()
2493hunk ./src/allmydata/storage/crawler.py 190
2494         #                            of the last bucket to be processed, or
2495         #                            None if we are sleeping between cycles
2496         try:
2497-            f = open(self.statefile, "rb")
2498-            state = pickle.load(f)
2499-            f.close()
2500+            state = pickle.loads(self.statefp.getContent())
2501         except EnvironmentError:
2502             state = {"version": 1,
2503                      "last-cycle-finished": None,
2504hunk ./src/allmydata/storage/crawler.py 226
2505         else:
2506             last_complete_prefix = self.prefixes[lcpi]
2507         self.state["last-complete-prefix"] = last_complete_prefix
2508-        tmpfile = self.statefile + ".tmp"
2509-        f = open(tmpfile, "wb")
2510-        pickle.dump(self.state, f)
2511-        f.close()
2512-        fileutil.move_into_place(tmpfile, self.statefile)
2513+        self.statefp.setContent(pickle.dumps(self.state))
2514 
2515     def startService(self):
2516         # arrange things to look like we were just sleeping, so
2517hunk ./src/allmydata/storage/crawler.py 438
2518 
2519     minimum_cycle_time = 60*60 # we don't need this more than once an hour
2520 
2521-    def __init__(self, server, statefile, num_sample_prefixes=1):
2522-        ShareCrawler.__init__(self, server, statefile)
2523+    def __init__(self, statefp, num_sample_prefixes=1):
2524+        ShareCrawler.__init__(self, statefp)
2525         self.num_sample_prefixes = num_sample_prefixes
2526 
2527     def add_initial_state(self):
2528hunk ./src/allmydata/storage/crawler.py 478
2529             old_cycle,buckets = self.state["storage-index-samples"][prefix]
2530             if old_cycle != cycle:
2531                 del self.state["storage-index-samples"][prefix]
2532-
2533hunk ./src/allmydata/storage/lease.py 17
2534 
2535     def get_expiration_time(self):
2536         return self.expiration_time
2537+
2538     def get_grant_renew_time_time(self):
2539         # hack, based upon fixed 31day expiration period
2540         return self.expiration_time - 31*24*60*60
2541hunk ./src/allmydata/storage/lease.py 21
2542+
2543     def get_age(self):
2544         return time.time() - self.get_grant_renew_time_time()
2545 
2546hunk ./src/allmydata/storage/lease.py 32
2547          self.expiration_time) = struct.unpack(">L32s32sL", data)
2548         self.nodeid = None
2549         return self
2550+
2551     def to_immutable_data(self):
2552         return struct.pack(">L32s32sL",
2553                            self.owner_num,
2554hunk ./src/allmydata/storage/lease.py 45
2555                            int(self.expiration_time),
2556                            self.renew_secret, self.cancel_secret,
2557                            self.nodeid)
2558+
2559     def from_mutable_data(self, data):
2560         (self.owner_num,
2561          self.expiration_time,
2562}
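
The crawler.py hunks above replace the open()/pickle.dump()/fileutil.move_into_place() sequence with a twisted.python.filepath.FilePath held in self.statefp. A sketch of the resulting idiom, assuming statefp points at the crawler's state file (the path below is hypothetical): setContent() writes the new bytes to a sibling file and renames it into place, which is why both the explicit tmpfile handling and the fileutil import can be dropped.

import cPickle as pickle
from twisted.python.filepath import FilePath

statefp = FilePath("storage/crawler.state")  # hypothetical location

def save_state(state):
    # setContent() does the write-to-sibling-then-rename internally
    statefp.setContent(pickle.dumps(state))

def load_state(default_state):
    try:
        # getContent() returns raw bytes, hence pickle.loads(), not load()
        return pickle.loads(statefp.getContent())
    except EnvironmentError:
        # no state file yet; fall back to defaults, as the patch does
        return default_state
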
2563[add __init__.py to backend and core and null
2564wilcoxjg@gmail.com**20110810033751
2565 Ignore-this: 1c72bc54951033ab433c38de58bdc39c
2566] {
2567addfile ./src/allmydata/storage/backends/__init__.py
2568addfile ./src/allmydata/storage/backends/null/__init__.py
2569}
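
The empty __init__.py files added above are what let Python treat the new backend directories as packages; without them, module paths under src/allmydata/storage/backends/ would not resolve. Illustrative usage (not part of the patch):

from allmydata.storage.backends.null import core
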
2570
2571Context:
2572
2573[Work around ref #1472 by having test_drop_upload delete the non-ASCII directories it creates.
2574david-sarah@jacaranda.org**20110809012334
2575 Ignore-this: 5881fd5db419ba8ad12e0b2a82f6c4f0
2576]
2577[Remove all trailing whitespace from .py files.
2578david-sarah@jacaranda.org**20110809001117
2579 Ignore-this: d2658b5ce44af70cc606ae4d3085b7cc
2580]
2581[test_drop_upload.py: fix unused imports. refs #1429
2582david-sarah@jacaranda.org**20110808235422
2583 Ignore-this: 834f6b946bfea699d7d8c743edd66671
2584]
2585[Documentation for drop-upload frontend. refs #1429
2586david-sarah@jacaranda.org**20110808182146
2587 Ignore-this: b33110834e586c0b784d1736c2af5779
2588]
2589[Drop-upload frontend, rerecorded for 1.9 beta (and correcting a minor mistake). Includes some fixes for Windows but not the Windows inotify implementation. fixes #1429
2590david-sarah@jacaranda.org**20110808234049
2591 Ignore-this: 67f824c7f554e9a3a85f9fd2e1123d97
2592]
2593[node.py: ensure that client and introducer nodes record their port number and use that port on the next restart, fixing a regression caused by #1385. fixes #1469.
2594david-sarah@jacaranda.org**20110806221934
2595 Ignore-this: 1aa9d340b6570320ab2f9edc89c9e0a8
2596]
2597[test_runner.py: fix a race condition in the test when NODE_URL_FILE is written before PORTNUM_FILE. refs #1469
2598david-sarah@jacaranda.org**20110806231842
2599 Ignore-this: ab01ae7cec3a073e29eec473e64052a0
2600]
2601[test_runner.py: cleanups of HOTLINE_FILE writing and removal.
2602david-sarah@jacaranda.org**20110806231652
2603 Ignore-this: 25f5c5d6f5d8faebb26a4ce80110a335
2604]
2605[test_runner.py: remove an unused constant.
2606david-sarah@jacaranda.org**20110806221416
2607 Ignore-this: eade2695cbabbea9cafeaa8debe410bb
2608]
2609[node.py: fix the error path for a missing config option so that it works for a Unicode base directory.
2610david-sarah@jacaranda.org**20110806221007
2611 Ignore-this: 4eb9cc04b2ce05182a274a0d69dafaf3
2612]
2613[test_runner.py: test that client and introducer nodes record their port number and use that port on the next restart. This tests for a regression caused by ref #1385.
2614david-sarah@jacaranda.org**20110806220635
2615 Ignore-this: 40a0c040b142dbddd47e69b3c3712f5
2616]
2617[test_runner.py: fix a bug in CreateNode.do_create introduced in changeset [5114] when the tahoe.cfg file has been written with CRLF line endings. refs #1385
2618david-sarah@jacaranda.org**20110804003032
2619 Ignore-this: 7b7afdcf99da6671afac2d42828883eb
2620]
2621[test_client.py: repair Basic.test_error_on_old_config_files. refs #1385
2622david-sarah@jacaranda.org**20110803235036
2623 Ignore-this: 31e2a9c3febe55948de7e144353663e
2624]
2625[test_checker.py: increase timeout for TooParallel.test_immutable again. The ARM buildslave took 38 seconds, so 40 seconds is too close to the edge; make it 80.
2626david-sarah@jacaranda.org**20110803214042
2627 Ignore-this: 2d8026a6b25534e01738f78d6c7495cb
2628]
2629[test_runner.py: fix RunNode.test_introducer to not rely on the mtime of introducer.furl to detect when the node has restarted. Instead we detect when node.url has been written. refs #1385
2630david-sarah@jacaranda.org**20110803180917
2631 Ignore-this: 11ddc43b107beca42cb78af88c5c394c
2632]
2633[Further improve error message about old config files. refs #1385
2634david-sarah@jacaranda.org**20110803174546
2635 Ignore-this: 9d6cc3c288d9863dce58faafb3855917
2636]
2637[Slightly improve error message about old config files (avoid unnecessary Unicode escaping). refs #1385
2638david-sarah@jacaranda.org**20110803163848
2639 Ignore-this: a3e3930fba7ccf90b8db3d2ed5829df4
2640]
2641[test_checker.py: increase timeout for TooParallel.test_immutable (was consistently failing on ARM buildslave).
2642david-sarah@jacaranda.org**20110803163213
2643 Ignore-this: d0efceaf12628e8791862b80c85b5d56
2644]
2645[Fix the bug that prevents an introducer from starting when introducer.furl already exists. Also remove some dead code that used to read old config files, and rename 'warn_about_old_config_files' to reflect that it's not a warning. refs #1385
2646david-sarah@jacaranda.org**20110803013212
2647 Ignore-this: 2d6cd14bd06a7493b26f2027aff78f4d
2648]
2649[test_runner.py: modify RunNode.test_introducer to test that starting an introducer works when the introducer.furl file already exists. refs #1385
2650david-sarah@jacaranda.org**20110803012704
2651 Ignore-this: 8cf7f27ac4bfbb5ad8ca4a974106d437
2652]
2653[verifier: correct a bug introduced in changeset [5106] that caused us to only verify the first block of a file. refs #1395
2654david-sarah@jacaranda.org**20110802172437
2655 Ignore-this: 87fb77854a839ff217dce73544775b11
2656]
2657[test_repairer: add a deterministic test of share data corruption that always flips the bits of the last byte of the share data. refs #1395
2658david-sarah@jacaranda.org**20110802175841
2659 Ignore-this: 72f54603785007e88220c8d979e08be7
2660]
2661[verifier: serialize the fetching of blocks within a share so that we don't use too much RAM
2662zooko@zooko.com**20110802063703
2663 Ignore-this: debd9bac07dcbb6803f835a9e2eabaa1
2664 
2665 Shares are still verified in parallel, but within a share, don't request a
2666 block until the previous block has been verified and the memory we used to hold
2667 it has been freed up.
2668 
2669 Patch originally due to Brian. This version has a mockery-patchery-style test
2670 which is "low tech" (it implements the patching inline in the test code instead
2671 of using an extension of the mock.patch() function from the mock library) and
2672 which unpatches in case of exception.
2673 
2674 fixes #1395
2675]
2676[add docs about timing-channel attacks
2677Brian Warner <warner@lothar.com>**20110802044541
2678 Ignore-this: 73114d5f5ed9ce252597b707dba3a194
2679]
2680['test-coverage' now needs PYTHONPATH=. to find TOP/twisted/plugins/
2681Brian Warner <warner@lothar.com>**20110802041952
2682 Ignore-this: d40f1f4cb426ea1c362fc961baedde2
2683]
2684[remove nodeid from WriteBucketProxy classes and customers
2685warner@lothar.com**20110801224317
2686 Ignore-this: e55334bb0095de11711eeb3af827e8e8
2687 refs #1363
2688]
2689[remove get_serverid() from ReadBucketProxy and customers, including Checker
2690warner@lothar.com**20110801224307
2691 Ignore-this: 837aba457bc853e4fd413ab1a94519cb
2692 and debug.py dump-share commands
2693 refs #1363
2694]
2695[reject old-style (pre-Tahoe-LAFS-v1.3) configuration files
2696zooko@zooko.com**20110801232423
2697 Ignore-this: b58218fcc064cc75ad8f05ed0c38902b
2698 Check for the existence of any of them and if any are found raise exception which will abort the startup of the node.
2699 This is a backwards-incompatible change for anyone who is still using old-style configuration files.
2700 fixes #1385
2701]
2702[whitespace-cleanup
2703zooko@zooko.com**20110725015546
2704 Ignore-this: 442970d0545183b97adc7bd66657876c
2705]
2706[tests: use fileutil.write() instead of open() to ensure timely close even without CPython-style reference counting
2707zooko@zooko.com**20110331145427
2708 Ignore-this: 75aae4ab8e5fa0ad698f998aaa1888ce
2709 Some of these already had an explicit close() but I went ahead and replaced them with fileutil.write() as well for the sake of uniformity.
2710]
2711[Address Kevan's comment in #776 about Options classes missed when adding 'self.command_name'. refs #776, #1359
2712david-sarah@jacaranda.org**20110801221317
2713 Ignore-this: 8881d42cf7e6a1d15468291b0cb8fab9
2714]
2715[docs/frontends/webapi.rst: change some more instances of 'delete' or 'remove' to 'unlink', change some section titles, and use two blank lines between all sections. refs #776, #1104
2716david-sarah@jacaranda.org**20110801220919
2717 Ignore-this: 572327591137bb05c24c44812d4b163f
2718]
2719[cleanup: implement rm as a synonym for unlink rather than vice-versa. refs #776
2720david-sarah@jacaranda.org**20110801220108
2721 Ignore-this: 598dcbed870f4f6bb9df62de9111b343
2722]
2723[docs/webapi.rst: address Kevan's comments about use of 'delete' on ref #1104
2724david-sarah@jacaranda.org**20110801205356
2725 Ignore-this: 4fbf03864934753c951ddeff64392491
2726]
2727[docs: some changes of 'delete' or 'rm' to 'unlink'. refs #1104
2728david-sarah@jacaranda.org**20110713002722
2729 Ignore-this: 304d2a330d5e6e77d5f1feed7814b21c
2730]
2731[WUI: change the label of the button to unlink a file from 'del' to 'unlink'. Also change some internal names to 'unlink', and allow 't=unlink' as a synonym for 't=delete' in the web-API interface. Incidentally, improve a test to check for the rename button as well as the unlink button. fixes #1104
2732david-sarah@jacaranda.org**20110713001218
2733 Ignore-this: 3eef6b3f81b94a9c0020a38eb20aa069
2734]
2735[src/allmydata/web/filenode.py: delete a stale comment that was made incorrect by changeset [3133].
2736david-sarah@jacaranda.org**20110801203009
2737 Ignore-this: b3912e95a874647027efdc97822dd10e
2738]
2739[fix typo introduced during rebasing of 'remove get_serverid from
2740Brian Warner <warner@lothar.com>**20110801200341
2741 Ignore-this: 4235b0f585c0533892193941dbbd89a8
2742 DownloadStatus.add_dyhb_request and customers' patch, to fix test failure.
2743]
2744[remove get_serverid from DownloadStatus.add_dyhb_request and customers
2745zooko@zooko.com**20110801185401
2746 Ignore-this: db188c18566d2d0ab39a80c9dc8f6be6
2747 This patch is a rebase of a patch originally written by Brian. I didn't change any of the intent of Brian's patch, just ported it to current trunk.
2748 refs #1363
2749]
2750[remove get_serverid from DownloadStatus.add_block_request and customers
2751zooko@zooko.com**20110801185344
2752 Ignore-this: 8bfa8201d6147f69b0fbe31beea9c1e
2753 This is a rebase of a patch Brian originally wrote. I haven't changed the intent of that patch, just ported it to trunk.
2754 refs #1363
2755]
2756[apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts
2757warner@lothar.com**20110801174452
2758 Ignore-this: 2aa13ea6cbed4e9084bd604bf8633692
2759 refs #1363
2760]
2761[test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s
2762warner@lothar.com**20110801174444
2763 Ignore-this: 54f30b5d7461d2b3514e2a0172f3a98c
2764 remove now-unused ShareManglingMixin
2765 refs #1363
2766]
2767[DownloadStatus.add_known_share wants to be used by Finder, web.status
2768warner@lothar.com**20110801174436
2769 Ignore-this: 1433bcd73099a579abe449f697f35f9
2770 refs #1363
2771]
2772[replace IServer.name() with get_name(), and get_longname()
2773warner@lothar.com**20110801174428
2774 Ignore-this: e5a6f7f6687fd7732ddf41cfdd7c491b
2775 
2776 This patch was originally written by Brian, but was re-recorded by Zooko to use
2777 darcs replace instead of hunks for any file in which it would result in fewer
2778 total hunks.
2779 refs #1363
2780]
2781[upload.py: apply David-Sarah's advice rename (un)contacted(2) trackers to first_pass/second_pass/next_pass
2782zooko@zooko.com**20110801174143
2783 Ignore-this: e36e1420bba0620a0107bd90032a5198
2784 This patch was written by Brian but was re-recorded by Zooko (with David-Sarah looking on) to use darcs replace instead of editing to rename the three variables to their new names.
2785 refs #1363
2786]
2787[Coalesce multiple Share.loop() calls, make downloads faster. Closes #1268.
2788Brian Warner <warner@lothar.com>**20110801151834
2789 Ignore-this: 48530fce36c01c0ff708f61c2de7e67a
2790]
2791[src/allmydata/_auto_deps.py: 'i686' is another way of spelling x86.
2792david-sarah@jacaranda.org**20110801034035
2793 Ignore-this: 6971e0621db2fba794d86395b4d51038
2794]
2795[tahoe_rm.py: better error message when there is no path. refs #1292
2796david-sarah@jacaranda.org**20110122064212
2797 Ignore-this: ff3bb2c9f376250e5fd77eb009e09018
2798]
2799[test_cli.py: Test for error message when 'tahoe rm' is invoked without a path. refs #1292
2800david-sarah@jacaranda.org**20110104105108
2801 Ignore-this: 29ec2f2e0251e446db96db002ad5dd7d
2802]
2803[src/allmydata/__init__.py: suppress a spurious warning from 'bin/tahoe --version[-and-path]' about twisted-web and twisted-core packages.
2804david-sarah@jacaranda.org**20110801005209
2805 Ignore-this: 50e7cd53cca57b1870d9df0361c7c709
2806]
2807[test_cli.py: use to_str on fields loaded using simplejson.loads in new tests. refs #1304
2808david-sarah@jacaranda.org**20110730032521
2809 Ignore-this: d1d6dfaefd1b4e733181bf127c79c00b
2810]
2811[cli: make 'tahoe cp' overwrite mutable files in-place
2812Kevan Carstensen <kevan@isnotajoke.com>**20110729202039
2813 Ignore-this: b2ad21a19439722f05c49bfd35b01855
2814]
2815[SFTP: write an error message to standard error for unrecognized shell commands. Change the existing message for shell sessions to be written to standard error, and refactor some duplicated code. Also change the lines of the error messages to end in CRLF, and take into account Kevan's review comments. fixes #1442, #1446
2816david-sarah@jacaranda.org**20110729233102
2817 Ignore-this: d2f2bb4664f25007d1602bf7333e2cdd
2818]
2819[src/allmydata/scripts/cli.py: fix pyflakes warning.
2820david-sarah@jacaranda.org**20110728021402
2821 Ignore-this: 94050140ddb99865295973f49927c509
2822]
2823[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
2824david-sarah@jacaranda.org**20110724225440
2825 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
2826]
2827[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
2828david-sarah@jacaranda.org**20110629185356
2829 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
2830]
2831[docs/man/tahoe.1: add man page. fixes #1420
2832david-sarah@jacaranda.org**20110724171728
2833 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
2834]
2835[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
2836david-sarah@jacaranda.org**20110721234941
2837 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
2838]
2839[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
2840david-sarah@jacaranda.org**20110722000320
2841 Ignore-this: 55cd558b791526113db3f83c00ec328a
2842]
2843[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
2844david-sarah@jacaranda.org**20110721233658
2845 Ignore-this: 81b41745477163c9b39c0b59db91cc62
2846]
2847[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
2848david-sarah@jacaranda.org**20110722035402
2849 Ignore-this: 5d03f544c4154f088e26c7107494bf39
2850]
2851[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
2852david-sarah@jacaranda.org**20110722024907
2853 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
2854]
2855[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
2856david-sarah@jacaranda.org**20110718005949
2857 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
2858]
2859[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
2860david-sarah@jacaranda.org**20110717194315
2861 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
2862]
2863[README.txt: say that quickstart.rst is in the docs directory.
2864david-sarah@jacaranda.org**20110717192400
2865 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
2866]
2867[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
2868zooko@zooko.com**20110717114226
2869 Ignore-this: df222120d41447ce4102616921626c82
2870 fixes #1383
2871]
2872[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
2873david-sarah@jacaranda.org**20110716181813
2874 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
2875]
2876[docs: add missing link in NEWS.rst
2877zooko@zooko.com**20110712153307
2878 Ignore-this: be7b7eb81c03700b739daa1027d72b35
2879]
2880[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
2881zooko@zooko.com**20110712153229
2882 Ignore-this: 723c4f9e2211027c79d711715d972c5
2883 Also remove a couple of vestigial references to figleaf, which is long gone.
2884 fixes #1409 (remove contrib/fuse)
2885]
2886[add Protovis.js-based download-status timeline visualization
2887Brian Warner <warner@lothar.com>**20110629222606
2888 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
2889 
2890 provide status overlap info on the webapi t=json output, add decode/decrypt
2891 rate tooltips, add zoomin/zoomout buttons
2892]
2893[add more download-status data, fix tests
2894Brian Warner <warner@lothar.com>**20110629222555
2895 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
2896]
2897[prepare for viz: improve DownloadStatus events
2898Brian Warner <warner@lothar.com>**20110629222542
2899 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
2900 
2901 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
2902]
2903[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
2904zooko@zooko.com**20110629185711
2905 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
2906]
2907[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
2908david-sarah@jacaranda.org**20110130235809
2909 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
2910]
2911[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
2912david-sarah@jacaranda.org**20110626054124
2913 Ignore-this: abb864427a1b91bd10d5132b4589fd90
2914]
2915[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
2916david-sarah@jacaranda.org**20110623205528
2917 Ignore-this: c63e23146c39195de52fb17c7c49b2da
2918]
2919[Rename test_package_initialization.py to (much shorter) test_import.py .
2920Brian Warner <warner@lothar.com>**20110611190234
2921 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
2922 
2923 The former name was making my 'ls' listings hard to read, by forcing them
2924 down to just two columns.
2925]
2926[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
2927zooko@zooko.com**20110611163741
2928 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
2929 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
2930 fixes #1412
2931]
2932[wui: right-align the size column in the WUI
2933zooko@zooko.com**20110611153758
2934 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
2935 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
2936 fixes #1412
2937]
2938[docs: three minor fixes
2939zooko@zooko.com**20110610121656
2940 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
2941 CREDITS for arc for stats tweak
2942 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
2943 English usage tweak
2944]
2945[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
2946david-sarah@jacaranda.org**20110609223719
2947 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
2948]
2949[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
2950wilcoxjg@gmail.com**20110527120135
2951 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
2952 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
2953 NEWS.rst, stats.py: documentation of change to get_latencies
2954 stats.rst: now documents percentile modification in get_latencies
2955 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
2956 fixes #1392
2957]
2958[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
2959david-sarah@jacaranda.org**20110517011214
2960 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
2961]
2962[docs: convert NEWS to NEWS.rst and change all references to it.
2963david-sarah@jacaranda.org**20110517010255
2964 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
2965]
2966[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
2967david-sarah@jacaranda.org**20110512140559
2968 Ignore-this: 784548fc5367fac5450df1c46890876d
2969]
2970[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
2971david-sarah@jacaranda.org**20110130164923
2972 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
2973]
2974[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
2975zooko@zooko.com**20110128142006
2976 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
2977 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
2978]
2979[M-x whitespace-cleanup
2980zooko@zooko.com**20110510193653
2981 Ignore-this: dea02f831298c0f65ad096960e7df5c7
2982]
2983[docs: fix typo in running.rst, thanks to arch_o_median
2984zooko@zooko.com**20110510193633
2985 Ignore-this: ca06de166a46abbc61140513918e79e8
2986]
2987[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
2988david-sarah@jacaranda.org**20110204204902
2989 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
2990]
2991[relnotes.txt: forseeable -> foreseeable. refs #1342
2992david-sarah@jacaranda.org**20110204204116
2993 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
2994]
2995[replace remaining .html docs with .rst docs
2996zooko@zooko.com**20110510191650
2997 Ignore-this: d557d960a986d4ac8216d1677d236399
2998 Remove install.html (long since deprecated).
2999 Also replace some obsolete references to install.html with references to quickstart.rst.
3000 Fix some broken internal references within docs/historical/historical_known_issues.txt.
3001 Thanks to Ravi Pinjala and Patrick McDonald.
3002 refs #1227
3003]
3004[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
3005zooko@zooko.com**20110428055232
3006 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
3007]
3008[munin tahoe_files plugin: fix incorrect file count
3009francois@ctrlaltdel.ch**20110428055312
3010 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
3011 fixes #1391
3012]
3013[corrected "k must never be smaller than N" to "k must never be greater than N"
3014secorp@allmydata.org**20110425010308
3015 Ignore-this: 233129505d6c70860087f22541805eac
3016]
3017[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
3018david-sarah@jacaranda.org**20110411190738
3019 Ignore-this: 7847d26bc117c328c679f08a7baee519
3020]
3021[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
3022david-sarah@jacaranda.org**20110410155844
3023 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
3024]
3025[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
3026david-sarah@jacaranda.org**20110410155705
3027 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
3028]
3029[remove unused variable detected by pyflakes
3030zooko@zooko.com**20110407172231
3031 Ignore-this: 7344652d5e0720af822070d91f03daf9
3032]
3033[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
3034david-sarah@jacaranda.org**20110401202750
3035 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
3036]
3037[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
3038Brian Warner <warner@lothar.com>**20110325232511
3039 Ignore-this: d5307faa6900f143193bfbe14e0f01a
3040]
3041[control.py: remove all uses of s.get_serverid()
3042warner@lothar.com**20110227011203
3043 Ignore-this: f80a787953bd7fa3d40e828bde00e855
3044]
3045[web: remove some uses of s.get_serverid(), not all
3046warner@lothar.com**20110227011159
3047 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
3048]
3049[immutable/downloader/fetcher.py: remove all get_serverid() calls
3050warner@lothar.com**20110227011156
3051 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
3052]
3053[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
3054warner@lothar.com**20110227011153
3055 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
3056 
3057 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
3058 _shares_from_server dict was being popped incorrectly (using shnum as the
3059 index instead of serverid). I'm still thinking through the consequences of
3060 this bug. It was probably benign and really hard to detect. I think it would
3061 cause us to incorrectly believe that we're pulling too many shares from a
3062 server, and thus prefer a different server rather than asking for a second
3063 share from the first server. The diversity code is intended to spread out the
3064 number of shares simultaneously being requested from each server, but with
3065 this bug, it might be spreading out the total number of shares requested at
3066 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
3067 segment, so the effect doesn't last very long).
3068]
3069[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
3070warner@lothar.com**20110227011150
3071 Ignore-this: d8d56dd8e7b280792b40105e13664554
3072 
3073 test_download.py: create+check MyShare instances better, make sure they share
3074 Server objects, now that finder.py cares
3075]
3076[immutable/downloader/finder.py: reduce use of get_serverid(), one left
3077warner@lothar.com**20110227011146
3078 Ignore-this: 5785be173b491ae8a78faf5142892020
3079]
3080[immutable/offloaded.py: reduce use of get_serverid() a bit more
3081warner@lothar.com**20110227011142
3082 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
3083]
3084[immutable/upload.py: reduce use of get_serverid()
3085warner@lothar.com**20110227011138
3086 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
3087]
3088[immutable/checker.py: remove some uses of s.get_serverid(), not all
3089warner@lothar.com**20110227011134
3090 Ignore-this: e480a37efa9e94e8016d826c492f626e
3091]
3092[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
3093warner@lothar.com**20110227011132
3094 Ignore-this: 6078279ddf42b179996a4b53bee8c421
3095 MockIServer stubs
3096]
3097[upload.py: rearrange _make_trackers a bit, no behavior changes
3098warner@lothar.com**20110227011128
3099 Ignore-this: 296d4819e2af452b107177aef6ebb40f
3100]
3101[happinessutil.py: finally rename merge_peers to merge_servers
3102warner@lothar.com**20110227011124
3103 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
3104]
3105[test_upload.py: factor out FakeServerTracker
3106warner@lothar.com**20110227011120
3107 Ignore-this: 6c182cba90e908221099472cc159325b
3108]
3109[test_upload.py: server-vs-tracker cleanup
3110warner@lothar.com**20110227011115
3111 Ignore-this: 2915133be1a3ba456e8603885437e03
3112]
3113[happinessutil.py: server-vs-tracker cleanup
3114warner@lothar.com**20110227011111
3115 Ignore-this: b856c84033562d7d718cae7cb01085a9
3116]
3117[upload.py: more tracker-vs-server cleanup
3118warner@lothar.com**20110227011107
3119 Ignore-this: bb75ed2afef55e47c085b35def2de315
3120]
3121[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
3122warner@lothar.com**20110227011103
3123 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
3124]
3125[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
3126warner@lothar.com**20110227011100
3127 Ignore-this: 7ea858755cbe5896ac212a925840fe68
3128 
3129 No behavioral changes, just updating variable/method names and log messages.
3130 The effects outside these three files should be minimal: some exception
3131 messages changed (to say "server" instead of "peer"), and some internal class
3132 names were changed. A few things still use "peer" to minimize external
3133 changes, like UploadResults.timings["peer_selection"] and
3134 happinessutil.merge_peers, which can be changed later.
3135]
3136[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
3137warner@lothar.com**20110227011056
3138 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
3139]
3140[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
3141warner@lothar.com**20110227011051
3142 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
3143]
3144[test: increase timeout on a network test because Francois's ARM machine hit that timeout
3145zooko@zooko.com**20110317165909
3146 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
3147 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
3148]
3149[docs/configuration.rst: add a "Frontend Configuration" section
3150Brian Warner <warner@lothar.com>**20110222014323
3151 Ignore-this: 657018aa501fe4f0efef9851628444ca
3152 
3153 this points to docs/frontends/*.rst, which were previously underlinked
3154]
3155[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
3156"Brian Warner <warner@lothar.com>"**20110221061544
3157 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
3158]
3159[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
3160david-sarah@jacaranda.org**20110221015817
3161 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
3162]
3163[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
3164david-sarah@jacaranda.org**20110221020125
3165 Ignore-this: b0744ed58f161bf188e037bad077fc48
3166]
3167[Refactor StorageFarmBroker handling of servers
3168Brian Warner <warner@lothar.com>**20110221015804
3169 Ignore-this: 842144ed92f5717699b8f580eab32a51
3170 
3171 Pass around IServer instance instead of (peerid, rref) tuple. Replace
3172 "descriptor" with "server". Other replacements:
3173 
3174  get_all_servers -> get_connected_servers/get_known_servers
3175  get_servers_for_index -> get_servers_for_psi (now returns IServers)
3176 
3177 This change still needs to be pushed further down: lots of code is now
3178 getting the IServer and then distributing (peerid, rref) internally.
3179 Instead, it ought to distribute the IServer internally and delay
3180 extracting a serverid or rref until the last moment.
3181 
3182 no_network.py was updated to retain parallelism.
3183]
3184[TAG allmydata-tahoe-1.8.2
3185warner@lothar.com**20110131020101]
3186Patch bundle hash:
31871489e82dfffd3e06c1e6e54347db9feb4f4a34e3