Ticket #1465: addresseszookocomment02_whitespace_20110810.darcs.patch

File addresseszookocomment02_whitespace_20110810.darcs.patch, 144.3 KB (added by Zancas at 2011-08-10T17:54:11Z)

whitespace-cleanup run on most files touched by patches
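The patches below move the storage server toward a pluggable-backend design: StorageServer no longer manages the filesystem itself, but delegates to an injected backend object (DASCore for on-disk storage, NullCore as executable documentation for tests). As a minimal, self-contained sketch of that delegation pattern — toy classes mirroring the method names in the patch (`set_storage_server`, `get_available_space`, `get_incoming_shnums`), not the real allmydata API:

```python
# Toy sketch of the pluggable-backend pattern these patches introduce.
# The real implementations live in allmydata.storage.server and
# allmydata.storage.backends; this is illustrative only.

class NullBackend:
    """Stores nothing; stands in for NullCore in the tests."""
    def set_storage_server(self, ss):
        # The server hands the backend a reference to itself, as in the patch.
        self.ss = ss

    def get_available_space(self):
        return None  # None means "unknown or unlimited", per IStorageBackend

    def get_incoming_shnums(self, storageindex):
        return frozenset()  # nothing is ever in flight for the null backend


class StorageServer:
    """Delegates all storage decisions to the injected backend."""
    def __init__(self, nodeid, backend):
        assert isinstance(nodeid, str) and len(nodeid) == 20
        self.my_nodeid = nodeid
        self.backend = backend
        self.backend.set_storage_server(self)


ss = StorageServer('testnodeidxxxxxxxxxx', NullBackend())
print(ss.backend.get_incoming_shnums('teststorage_index'))  # frozenset()
```

The point of the refactoring is visible in the constructor: the old version took a `storedir` and built crawlers and share directories itself, while the new version takes only a nodeid and a backend object.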

Line 
1Tue Aug  9 13:39:10 MDT 2011  wilcoxjg@gmail.com
2  * storage: add tests of the new feature of having the storage backend in a separate object from the server
3
4Tue Aug  9 14:09:29 MDT 2011  wilcoxjg@gmail.com
5  * Added directories and new modules for the null backend
6
7Tue Aug  9 14:12:49 MDT 2011  wilcoxjg@gmail.com
8  * changes to null/core.py and storage/common.py necessary for test with null backend to pass
9
10Tue Aug  9 14:16:47 MDT 2011  wilcoxjg@gmail.com
11  * change storage/server.py to new "backend pluggable" version
12
13Tue Aug  9 14:18:22 MDT 2011  wilcoxjg@gmail.com
14  * modify null/core.py such that the correct interfaces are implemented
15
16Tue Aug  9 14:22:32 MDT 2011  wilcoxjg@gmail.com
17  * make changes to storage/immutable.py most changes are part of movement to DAS specific backend.
18
19Tue Aug  9 14:26:20 MDT 2011  wilcoxjg@gmail.com
20  * creates backends/das/core.py
21
22Tue Aug  9 14:31:23 MDT 2011  wilcoxjg@gmail.com
23  * change backends/das/core.py to correct which interfaces are implemented
24
25Tue Aug  9 14:33:21 MDT 2011  wilcoxjg@gmail.com
26  * util/fileutil.py now expects and manipulates twisted.python.filepath.FilePath objects
27
28Tue Aug  9 14:35:19 MDT 2011  wilcoxjg@gmail.com
29  * add expirer.py
30
31Tue Aug  9 14:38:11 MDT 2011  wilcoxjg@gmail.com
32  * Changes I have made that aren't necessary for the test_backends.py suite to pass.
33
34Tue Aug  9 21:37:51 MDT 2011  wilcoxjg@gmail.com
35  * add __init__.py to backend and core and null
36
37Wed Aug 10 11:08:47 MDT 2011  wilcoxjg@gmail.com
38  * whitespace-cleanup
39
40Wed Aug 10 11:38:49 MDT 2011  wilcoxjg@gmail.com
41  * das/__init__.py
42
43New patches:
44
45[storage: add tests of the new feature of having the storage backend in a separate object from the server
46wilcoxjg@gmail.com**20110809193910
47 Ignore-this: 72b64dab1a9ce668607a4ece4429e29a
48] {
49addfile ./src/allmydata/test/test_backends.py
50hunk ./src/allmydata/test/test_backends.py 1
51+import os, stat
52+from twisted.trial import unittest
53+from allmydata.util.log import msg
54+from allmydata.test.common_util import ReallyEqualMixin
55+import mock
56+# This is the code that we're going to be testing.
57+from allmydata.storage.server import StorageServer
58+from allmydata.storage.backends.das.core import DASCore
59+from allmydata.storage.backends.null.core import NullCore
60+from allmydata.storage.common import si_si2dir
61+# The following share file content was generated with
62+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
63+# with share data == 'a'. The total size of this input
64+# is 85 bytes.
65+shareversionnumber = '\x00\x00\x00\x01'
66+sharedatalength = '\x00\x00\x00\x01'
67+numberofleases = '\x00\x00\x00\x01'
68+shareinputdata = 'a'
69+ownernumber = '\x00\x00\x00\x00'
70+renewsecret  = 'x'*32
71+cancelsecret = 'y'*32
72+expirationtime = '\x00(\xde\x80'
73+nextlease = ''
74+containerdata = shareversionnumber + sharedatalength + numberofleases
75+client_data = shareinputdata + ownernumber + renewsecret + \
76+    cancelsecret + expirationtime + nextlease
77+share_data = containerdata + client_data
78+testnodeid = 'testnodeidxxxxxxxxxx'
79+expiration_policy = {'enabled' : False,
80+                     'mode' : 'age',
81+                     'override_lease_duration' : None,
82+                     'cutoff_date' : None,
83+                     'sharetypes' : None}
84+
85+
86+class MockFileSystem(unittest.TestCase):
87+    """ I simulate a filesystem that the code under test can use. I simulate
88+    just the parts of the filesystem that the current implementation of the
89+    DAS backend needs. """
90+    def setUp(self):
91+        # Create the patchers, apply them, and wire up side effects for the mocked filesystem functions.
92+        msg( "%s.setUp()" % (self,))
93+        self.mockedfilepaths = {}
94+        #keys are pathnames, values are MockFilePath objects. This is necessary because
95+        #MockFilePath behavior sometimes depends on the filesystem. Where it does,
96+        #self.mockedfilepaths has the relevant info.
97+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
98+        self.basedir = self.storedir.child('shares')
99+        self.baseincdir = self.basedir.child('incoming')
100+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
101+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
102+        self.shareincomingname = self.sharedirincomingname.child('0')
103+        self.sharefinalname = self.sharedirfinalname.child('0')
104+
105+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
106+        FakePath = self.FilePathFake.__enter__()
107+
108+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
109+        FakeBCC = self.BCountingCrawler.__enter__()
110+        FakeBCC.side_effect = self.call_FakeBCC
111+
112+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
113+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
114+        FakeLCC.side_effect = self.call_FakeLCC
115+
116+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
117+        GetSpace = self.get_available_space.__enter__()
118+        GetSpace.side_effect = self.call_get_available_space
119+
120+        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
121+        getsize = self.statforsize.__enter__()
122+        getsize.side_effect = self.call_statforsize
123+
124+    def call_FakeBCC(self, StateFile):
125+        return MockBCC()
126+
127+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
128+        return MockLCC()
129+
130+    def call_get_available_space(self, storedir, reservedspace):
131+        # The test input share defined at the top of this file is 85 bytes.
132+        return 85 - reservedspace
133+
134+    def call_statforsize(self, fakefpname):
135+        return self.mockedfilepaths[fakefpname].fileobject.size()
136+
137+    def tearDown(self):
138+        msg( "%s.tearDown()" % (self,))
139+        FakePath = self.FilePathFake.__exit__()       
140+        self.mockedfilepaths = {}
141+
142+
143+class MockFilePath:
144+    def __init__(self, pathstring, ffpathsenvironment, existance=False):
145+        #  I can't just make the values MockFileObjects because they may be directories.
146+        self.mockedfilepaths = ffpathsenvironment
147+        self.path = pathstring
148+        self.existance = existance
149+        if not self.mockedfilepaths.has_key(self.path):
150+            #  The first MockFilePath object is special
151+            self.mockedfilepaths[self.path] = self
152+            self.fileobject = None
153+        else:
154+            self.fileobject = self.mockedfilepaths[self.path].fileobject
155+        self.spawn = {}
156+        self.antecedent = os.path.dirname(self.path)
157+
158+    def setContent(self, contentstring):
159+        # This method rewrites the data in the file that corresponds to its path
160+        # name whether it preexisted or not.
161+        self.fileobject = MockFileObject(contentstring)
162+        self.existance = True
163+        self.mockedfilepaths[self.path].fileobject = self.fileobject
164+        self.mockedfilepaths[self.path].existance = self.existance
165+        self.setparents()
166+       
167+    def create(self):
168+        # This method chokes if there's a pre-existing file!
169+        if self.mockedfilepaths[self.path].fileobject:
170+            raise OSError
171+        else:
172+            self.fileobject = MockFileObject()  # create() takes no content argument; start empty
173+            self.existance = True
174+            self.mockedfilepaths[self.path].fileobject = self.fileobject
175+            self.mockedfilepaths[self.path].existance = self.existance
176+            self.setparents()       
177+
178+    def open(self, mode='r'):
179+        # XXX Makes no use of mode.
180+        if not self.mockedfilepaths[self.path].fileobject:
181+            # If there's no fileobject there already then make one and put it there.
182+            self.fileobject = MockFileObject()
183+            self.existance = True
184+            self.mockedfilepaths[self.path].fileobject = self.fileobject
185+            self.mockedfilepaths[self.path].existance = self.existance
186+        else:
187+            # Otherwise get a ref to it.
188+            self.fileobject = self.mockedfilepaths[self.path].fileobject
189+            self.existance = self.mockedfilepaths[self.path].existance
190+        return self.fileobject.open(mode)
191+
192+    def child(self, childstring):
193+        arg2child = os.path.join(self.path, childstring)
194+        child = MockFilePath(arg2child, self.mockedfilepaths)
195+        return child
196+
197+    def children(self):
198+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
199+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
200+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
201+        self.spawn = frozenset(childrenfromffs)
202+        return self.spawn 
203+
204+    def parent(self):
205+        if self.mockedfilepaths.has_key(self.antecedent):
206+            parent = self.mockedfilepaths[self.antecedent]
207+        else:
208+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
209+        return parent
210+
211+    def parents(self):
212+        antecedents = []
213+        def f(fps, antecedents):
214+            newfps = os.path.split(fps)[0]
215+            if newfps:
216+                antecedents.append(newfps)
217+                f(newfps, antecedents)
218+        f(self.path, antecedents)
219+        return antecedents
220+
221+    def setparents(self):
222+        for fps in self.parents():
223+            if not self.mockedfilepaths.has_key(fps):
224+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existance=True)
225+
226+    def basename(self):
227+        return os.path.split(self.path)[1]
228+
229+    def moveTo(self, newffp):
230+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from filepath.moveTo.
231+        if self.mockedfilepaths[newffp.path].exists():
232+            raise OSError
233+        else:
234+            self.mockedfilepaths[newffp.path] = self
235+            self.path = newffp.path
236+
237+    def getsize(self):
238+        return self.fileobject.getsize()
239+
240+    def exists(self):
241+        return self.existance
242+
243+    def isdir(self):
244+        return True
245+
246+    def makedirs(self):
247+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
248+        pass
249+
250+    def remove(self):
251+        pass
252+
253+
254+class MockFileObject:
255+    def __init__(self, contentstring=''):
256+        self.buffer = contentstring
257+        self.pos = 0
258+    def open(self, mode='r'):
259+        return self
260+    def write(self, instring):
261+        begin = self.pos
262+        padlen = begin - len(self.buffer)
263+        if padlen > 0:
264+            self.buffer += '\x00' * padlen
265+        end = self.pos + len(instring)
266+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
267+        self.pos = end
268+    def close(self):
269+        self.pos = 0
270+    def seek(self, pos):
271+        self.pos = pos
272+    def read(self, numberbytes):
273+        return self.buffer[self.pos:self.pos+numberbytes]
274+    def tell(self):
275+        return self.pos
276+    def size(self):
277+        # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat!
278+        # XXX Finally we shall hopefully use a getsize method soon, must consult first though.
279+        # Hmmm...  perhaps we need to sometimes stat the address when there's not a mockfileobject present?
280+        return {stat.ST_SIZE:len(self.buffer)}
281+    def getsize(self):
282+        return len(self.buffer)
283+
284+class MockBCC:
285+    def setServiceParent(self, Parent):
286+        pass
287+
288+
289+class MockLCC:
290+    def setServiceParent(self, Parent):
291+        pass
292+
293+
294+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
295+    """ NullBackend is just for testing and executable documentation, so
296+    this test is actually a test of StorageServer in which we're using
297+    NullBackend as helper code for the test, rather than a test of
298+    NullBackend. """
299+    def setUp(self):
300+        self.ss = StorageServer(testnodeid, backend=NullCore())
301+
302+    @mock.patch('os.mkdir')
303+    @mock.patch('__builtin__.open')
304+    @mock.patch('os.listdir')
305+    @mock.patch('os.path.isdir')
306+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
307+        """ Write a new share. """
308+
309+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
310+        bs[0].remote_write(0, 'a')
311+        self.failIf(mockisdir.called)
312+        self.failIf(mocklistdir.called)
313+        self.failIf(mockopen.called)
314+        self.failIf(mockmkdir.called)
315+
316+
317+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
318+    def test_create_server_fs_backend(self):
319+        """ This tests whether a server instance can be constructed with a
320+        filesystem backend. To pass the test, it mustn't use the filesystem
321+        outside of its configured storedir. """
322+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
323+
324+
325+class TestServerAndFSBackend(MockFileSystem, ReallyEqualMixin):
326+    """ This tests both the StorageServer and the DAS backend together. """   
327+    def setUp(self):
328+        MockFileSystem.setUp(self)
329+        try:
330+            self.backend = DASCore(self.storedir, expiration_policy)
331+            self.ss = StorageServer(testnodeid, self.backend)
332+
333+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
334+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
335+        except:
336+            MockFileSystem.tearDown(self)
337+            raise
338+
339+    @mock.patch('time.time')
340+    @mock.patch('allmydata.util.fileutil.get_available_space')
341+    def test_out_of_space(self, mockget_available_space, mocktime):
342+        mocktime.return_value = 0
343+       
344+        def call_get_available_space(dir, reserve):
345+            return 0
346+
347+        mockget_available_space.side_effect = call_get_available_space
348+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
349+        self.failUnlessReallyEqual(bsc, {})
350+
351+    @mock.patch('time.time')
352+    def test_write_and_read_share(self, mocktime):
353+        """
354+        Write a new share, read it, and test the server's (and FS backend's)
355+        handling of simultaneous and successive attempts to write the same
356+        share.
357+        """
358+        mocktime.return_value = 0
359+        # Inspect incoming and fail unless it's empty.
360+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
361+       
362+        self.failUnlessReallyEqual(incomingset, frozenset())
363+       
364+        # Populate incoming with the sharenum: 0.
365+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
366+
367+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
368+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
369+
370+
371+
372+        # Attempt to create a second share writer with the same sharenum.
373+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
374+
375+        # Show that no sharewriter results from a remote_allocate_buckets
376+        # with the same si and sharenum, until BucketWriter.remote_close()
377+        # has been called.
378+        self.failIf(bsa)
379+
380+        # Test allocated size.
381+        spaceint = self.ss.allocated_size()
382+        self.failUnlessReallyEqual(spaceint, 1)
383+
384+        # Write 'a' to shnum 0. Only tested together with close and read.
385+        bs[0].remote_write(0, 'a')
386+       
387+        # Preclose: Inspect final, failUnless nothing there.
388+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
389+        bs[0].remote_close()
390+
391+        # Postclose: (Omnibus) failUnless written data is in final.
392+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
393+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
394+        contents = sharesinfinal[0].read_share_data(0, 73)
395+        self.failUnlessReallyEqual(contents, client_data)
396+
397+        # Exercise the case that the share we're asking to allocate is
398+        # already (completely) uploaded.
399+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
400+       
401+
402+    def test_read_old_share(self):
403+        """ This tests whether the code correctly finds and reads
404+        shares written out by old (Tahoe-LAFS <= v1.8.2)
405+        servers. There is a similar test in test_download, but that one
406+        is from the perspective of the client and exercises a deeper
407+        stack of code. This one is for exercising just the
408+        StorageServer object. """
409+        # Construct a file with the appropriate contents in the mockfilesystem.
410+        datalen = len(share_data)
411+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
412+        finalhome.setContent(share_data)
413+
414+        # Now begin the test.
415+        bs = self.ss.remote_get_buckets('teststorage_index')
416+
417+        self.failUnlessEqual(len(bs), 1)
418+        b = bs['0']
419+        # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
420+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
421+        # If you try to read past the end you get as much data as is there.
422+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
423+        # If you start reading past the end of the file you get the empty string.
424+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
425}
426[Added directories and new modules for the null backend
427wilcoxjg@gmail.com**20110809200929
428 Ignore-this: f5dfa418afced5141eb9247a9908109e
429] {
430hunk ./src/allmydata/interfaces.py 270
431         store that on disk.
432         """
433 
434+class IStorageBackend(Interface):
435+    """
436+    Objects of this kind live on the server side and are used by the
437+    storage server object.
438+    """
439+    def get_available_space(self, reserved_space):
440+        """ Returns available space for share storage in bytes, or
441+        None if this information is not available or if the available
442+        space is unlimited.
443+
444+        If the backend is configured for read-only mode then this will
445+        return 0.
446+
447+        reserved_space is how many bytes to subtract from the answer, so
448+        you can pass how many bytes you would like to leave unused on this
449+        filesystem as reserved_space. """
450+
451+    def get_bucket_shares(self):
452+        """XXX"""
453+
454+    def get_share(self):
455+        """XXX"""
456+
457+    def make_bucket_writer(self):
458+        """XXX"""
459+
460+class IStorageBackendShare(Interface):
461+    """
462+    This object contains up to all of the share data. It is intended
463+    for lazy evaluation such that in many use cases substantially less than
464+    all of the share data will be accessed.
465+    """
466+    def is_complete(self):
467+        """
468+        Returns the share state, or None if the share does not exist.
469+        """
470+
471 class IStorageBucketWriter(Interface):
472     """
473     Objects of this kind live on the client side.
474adddir ./src/allmydata/storage/backends
475addfile ./src/allmydata/storage/backends/base.py
476hunk ./src/allmydata/storage/backends/base.py 1
477+from twisted.application import service
478+
479+class Backend(service.MultiService):
480+    def __init__(self):
481+        service.MultiService.__init__(self)
482adddir ./src/allmydata/storage/backends/null
483addfile ./src/allmydata/storage/backends/null/core.py
484hunk ./src/allmydata/storage/backends/null/core.py 1
485+from allmydata.storage.backends.base import Backend
486+from allmydata.storage.immutable import BucketWriter, BucketReader
487+
488+class NullCore(Backend):
489+    def __init__(self):
490+        Backend.__init__(self)
491+
492+    def get_available_space(self):
493+        return None
494+
495+    def get_shares(self, storage_index):
496+        return set()
497+
498+    def get_share(self, storage_index, sharenum):
499+        return None
500+
501+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
502+        immutableshare = ImmutableShare()
503+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
504+
505+    def set_storage_server(self, ss):
506+        self.ss = ss
507+
508+    def get_incoming_shnums(self, storageindex):
509+        return frozenset()
510+
511+class ImmutableShare:
512+    sharetype = "immutable"
513+
514+    def __init__(self):
515+        """ If max_size is not None then I won't allow more than
516+        max_size to be written to me. If create=True then max_size
517+        must not be None. """
518+        pass
519+
520+    def get_shnum(self):
521+        return self.shnum
522+
523+    def unlink(self):
524+        os.unlink(self.fname)
525+
526+    def read_share_data(self, offset, length):
527+        precondition(offset >= 0)
528+        # Reads beyond the end of the data are truncated. Reads that start
529+        # beyond the end of the data return an empty string.
530+        seekpos = self._data_offset+offset
531+        fsize = os.path.getsize(self.fname)
532+        actuallength = max(0, min(length, fsize-seekpos))
533+        if actuallength == 0:
534+            return ""
535+        f = open(self.fname, 'rb')
536+        f.seek(seekpos)
537+        return f.read(actuallength)
538+
539+    def write_share_data(self, offset, data):
540+        pass
541+
542+    def _write_lease_record(self, f, lease_number, lease_info):
543+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
544+        f.seek(offset)
545+        assert f.tell() == offset
546+        f.write(lease_info.to_immutable_data())
547+
548+    def _read_num_leases(self, f):
549+        f.seek(0x08)
550+        (num_leases,) = struct.unpack(">L", f.read(4))
551+        return num_leases
552+
553+    def _write_num_leases(self, f, num_leases):
554+        f.seek(0x08)
555+        f.write(struct.pack(">L", num_leases))
556+
557+    def _truncate_leases(self, f, num_leases):
558+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
559+
560+    def get_leases(self):
561+        """Yields a LeaseInfo instance for all leases."""
562+        f = open(self.fname, 'rb')
563+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
564+        f.seek(self._lease_offset)
565+        for i in range(num_leases):
566+            data = f.read(self.LEASE_SIZE)
567+            if data:
568+                yield LeaseInfo().from_immutable_data(data)
569+
570+    def add_lease(self, lease):
571+        pass
572+
573+    def renew_lease(self, renew_secret, new_expire_time):
574+        for i,lease in enumerate(self.get_leases()):
575+            if constant_time_compare(lease.renew_secret, renew_secret):
576+                # yup. See if we need to update the owner time.
577+                if new_expire_time > lease.expiration_time:
578+                    # yes
579+                    lease.expiration_time = new_expire_time
580+                    f = open(self.fname, 'rb+')
581+                    self._write_lease_record(f, i, lease)
582+                    f.close()
583+                return
584+        raise IndexError("unable to renew non-existent lease")
585+
586+    def add_or_renew_lease(self, lease_info):
587+        try:
588+            self.renew_lease(lease_info.renew_secret,
589+                             lease_info.expiration_time)
590+        except IndexError:
591+            self.add_lease(lease_info)
592+
593+
594+    def cancel_lease(self, cancel_secret):
595+        """Remove a lease with the given cancel_secret. If the last lease is
596+        cancelled, the file will be removed. Return the number of bytes that
597+        were freed (by truncating the list of leases, and possibly by
598+        deleting the file. Raise IndexError if there was no lease with the
599+        given cancel_secret.
600+        """
601+
602+        leases = list(self.get_leases())
603+        num_leases_removed = 0
604+        for i,lease in enumerate(leases):
605+            if constant_time_compare(lease.cancel_secret, cancel_secret):
606+                leases[i] = None
607+                num_leases_removed += 1
608+        if not num_leases_removed:
609+            raise IndexError("unable to find matching lease to cancel")
610+        if num_leases_removed:
611+            # pack and write out the remaining leases. We write these out in
612+            # the same order as they were added, so that if we crash while
613+            # doing this, we won't lose any non-cancelled leases.
614+            leases = [l for l in leases if l] # remove the cancelled leases
615+            f = open(self.fname, 'rb+')
616+            for i,lease in enumerate(leases):
617+                self._write_lease_record(f, i, lease)
618+            self._write_num_leases(f, len(leases))
619+            self._truncate_leases(f, len(leases))
620+            f.close()
621+        space_freed = self.LEASE_SIZE * num_leases_removed
622+        if not len(leases):
623+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
624+            self.unlink()
625+        return space_freed
626}
627[changes to null/core.py and storage/common.py necessary for test with null backend to pass
628wilcoxjg@gmail.com**20110809201249
629 Ignore-this: 9ddcd79f9962550ed20518ae85b6b6b2
630] {
631hunk ./src/allmydata/storage/backends/null/core.py 3
632 from allmydata.storage.backends.base import Backend
633 from allmydata.storage.immutable import BucketWriter, BucketReader
634+from zope.interface import implements
635 
636 class NullCore(Backend):
637hunk ./src/allmydata/storage/backends/null/core.py 6
638+    implements(IStorageBackend)
639     def __init__(self):
640         Backend.__init__(self)
641 
642hunk ./src/allmydata/storage/backends/null/core.py 30
643         return frozenset()
644 
645 class ImmutableShare:
646+    implements(IStorageBackendShare)
647     sharetype = "immutable"
648 
649     def __init__(self):
650hunk ./src/allmydata/storage/common.py 19
651 def si_a2b(ascii_storageindex):
652     return base32.a2b(ascii_storageindex)
653 
654-def storage_index_to_dir(storageindex):
655+def si_si2dir(startfp, storageindex):
656     sia = si_b2a(storageindex)
657hunk ./src/allmydata/storage/common.py 21
658-    return os.path.join(sia[:2], sia)
659+    newfp = startfp.child(sia[:2])
660+    return newfp.child(sia)
661}
662[change storage/server.py to new "backend pluggable" version
663wilcoxjg@gmail.com**20110809201647
664 Ignore-this: 1b0c5f9e831641287992bf45af55246e
665] {
666hunk ./src/allmydata/storage/server.py 1
667-import os, re, weakref, struct, time
668+import os, weakref, struct, time
669 
670 from foolscap.api import Referenceable
671 from twisted.application import service
672hunk ./src/allmydata/storage/server.py 11
673 from allmydata.util import fileutil, idlib, log, time_format
674 import allmydata # for __full_version__
675 
676-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
677-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
678+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
679+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
680 from allmydata.storage.lease import LeaseInfo
681 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
682      create_mutable_sharefile
683hunk ./src/allmydata/storage/server.py 16
684-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
685-from allmydata.storage.crawler import BucketCountingCrawler
686-from allmydata.storage.expirer import LeaseCheckingCrawler
687-
688-# storage/
689-# storage/shares/incoming
690-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
691-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
692-# storage/shares/$START/$STORAGEINDEX
693-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
694-
695-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
696-# base-32 chars).
697-
698-# $SHARENUM matches this regex:
699-NUM_RE=re.compile("^[0-9]+$")
700-
701-
702 
703 class StorageServer(service.MultiService, Referenceable):
704     implements(RIStorageServer, IStatsProducer)
705hunk ./src/allmydata/storage/server.py 20
706     name = 'storage'
707-    LeaseCheckerClass = LeaseCheckingCrawler
708 
709hunk ./src/allmydata/storage/server.py 21
710-    def __init__(self, storedir, nodeid, reserved_space=0,
711-                 discard_storage=False, readonly_storage=False,
712-                 stats_provider=None,
713-                 expiration_enabled=False,
714-                 expiration_mode="age",
715-                 expiration_override_lease_duration=None,
716-                 expiration_cutoff_date=None,
717-                 expiration_sharetypes=("mutable", "immutable")):
718+    def __init__(self, nodeid, backend, reserved_space=0,
719+                 readonly_storage=False,
720+                 stats_provider=None ):
721         service.MultiService.__init__(self)
722         assert isinstance(nodeid, str)
723         assert len(nodeid) == 20
724hunk ./src/allmydata/storage/server.py 28
725         self.my_nodeid = nodeid
726-        self.storedir = storedir
727-        sharedir = os.path.join(storedir, "shares")
728-        fileutil.make_dirs(sharedir)
729-        self.sharedir = sharedir
730-        # we don't actually create the corruption-advisory dir until necessary
731-        self.corruption_advisory_dir = os.path.join(storedir,
732-                                                    "corruption-advisories")
733-        self.reserved_space = int(reserved_space)
734-        self.no_storage = discard_storage
735-        self.readonly_storage = readonly_storage
736         self.stats_provider = stats_provider
737         if self.stats_provider:
738             self.stats_provider.register_producer(self)
739hunk ./src/allmydata/storage/server.py 31
740-        self.incomingdir = os.path.join(sharedir, 'incoming')
741-        self._clean_incomplete()
742-        fileutil.make_dirs(self.incomingdir)
743         self._active_writers = weakref.WeakKeyDictionary()
744hunk ./src/allmydata/storage/server.py 32
745+        self.backend = backend
746+        self.backend.setServiceParent(self)
747+        self.backend.set_storage_server(self)
748         log.msg("StorageServer created", facility="tahoe.storage")
749 
750hunk ./src/allmydata/storage/server.py 37
751-        if reserved_space:
752-            if self.get_available_space() is None:
753-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
754-                        umin="0wZ27w", level=log.UNUSUAL)
755-
756         self.latencies = {"allocate": [], # immutable
757                           "write": [],
758                           "close": [],
759hunk ./src/allmydata/storage/server.py 48
760                           "renew": [],
761                           "cancel": [],
762                           }
763-        self.add_bucket_counter()
764-
765-        statefile = os.path.join(self.storedir, "lease_checker.state")
766-        historyfile = os.path.join(self.storedir, "lease_checker.history")
767-        klass = self.LeaseCheckerClass
768-        self.lease_checker = klass(self, statefile, historyfile,
769-                                   expiration_enabled, expiration_mode,
770-                                   expiration_override_lease_duration,
771-                                   expiration_cutoff_date,
772-                                   expiration_sharetypes)
773-        self.lease_checker.setServiceParent(self)
774 
775     def __repr__(self):
776         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
777hunk ./src/allmydata/storage/server.py 52
778 
779-    def add_bucket_counter(self):
780-        statefile = os.path.join(self.storedir, "bucket_counter.state")
781-        self.bucket_counter = BucketCountingCrawler(self, statefile)
782-        self.bucket_counter.setServiceParent(self)
783-
784     def count(self, name, delta=1):
785         if self.stats_provider:
786             self.stats_provider.count("storage_server." + name, delta)
787hunk ./src/allmydata/storage/server.py 66
788         """Return a dict, indexed by category, that contains a dict of
789         latency numbers for each category. If there are sufficient samples
790         for unambiguous interpretation, each dict will contain the
791-        following keys: mean, 01_0_percentile, 10_0_percentile,
792+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
793         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
794         99_0_percentile, 99_9_percentile.  If there are insufficient
795         samples for a given percentile to be interpreted unambiguously
796hunk ./src/allmydata/storage/server.py 88
797             else:
798                 stats["mean"] = None
799 
800-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
801-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
802-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
803+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
804+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
805+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
806                              (0.999, "99_9_percentile", 1000)]
807 
808             for percentile, percentilestring, minnumtoobserve in orderstatlist:
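Reviewer note: the reordered orderstatlist above pairs each percentile with a minimum sample count before that percentile is reported. A minimal sketch of how such an order-statistic table is applied (the helper name and the stand-alone function are hypothetical, not the actual `get_latencies` code, which works on a per-category dict):

```python
def percentile_stats(samples):
    # Sketch: report a percentile only when enough samples exist for it to
    # be unambiguous, mirroring the (fraction, key, minnumtoobserve) table.
    orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),
                     (0.5, "50_0_percentile", 10), (0.9, "90_0_percentile", 10),
                     (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),
                     (0.999, "99_9_percentile", 1000)]
    sorted_samples = sorted(samples)
    n = len(sorted_samples)
    stats = {}
    for percentile, key, minnumtoobserve in orderstatlist:
        if n >= minnumtoobserve:
            stats[key] = sorted_samples[int(percentile * n)]
        else:
            stats[key] = None  # insufficient samples for this percentile
    return stats
```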
809hunk ./src/allmydata/storage/server.py 107
810             kwargs["facility"] = "tahoe.storage"
811         return log.msg(*args, **kwargs)
812 
813-    def _clean_incomplete(self):
814-        fileutil.rm_dir(self.incomingdir)
815-
816     def get_stats(self):
817         # remember: RIStatsProvider requires that our return dict
818hunk ./src/allmydata/storage/server.py 109
819-        # contains numeric values.
820+        # contains numeric, or None values.
821         stats = { 'storage_server.allocated': self.allocated_size(), }
822         stats['storage_server.reserved_space'] = self.reserved_space
823         for category,ld in self.get_latencies().items():
824merger 0.0 (
825hunk ./src/allmydata/storage/server.py 149
826-        return fileutil.get_available_space(self.storedir, self.reserved_space)
827+        return fileutil.get_available_space(self.sharedir, self.reserved_space)
828hunk ./src/allmydata/storage/server.py 143
829-    def get_available_space(self):
830-        """Returns available space for share storage in bytes, or None if no
831-        API to get this information is available."""
832-
833-        if self.readonly_storage:
834-            return 0
835-        return fileutil.get_available_space(self.storedir, self.reserved_space)
836-
837)
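Reviewer note: the merger conflict above is about `get_available_space`. Its contract, as visible in the removed hunk, is: readonly storage reports 0, platforms with no disk-stats API report None, otherwise free bytes minus the configured reservation. A hedged POSIX-only sketch (the real `fileutil.get_available_space` also handles Windows via GetDiskFreeSpaceEx):

```python
import os

def get_available_space(sharedir, reserved_space, readonly_storage):
    # Sketch of the contract only; approximates the fileutil helper with
    # os.statvfs, so this version is POSIX-only.
    if readonly_storage:
        return 0
    try:
        s = os.statvfs(sharedir)
    except AttributeError:
        # platform without statvfs(2); caller treats None as "unknown"
        return None
    return max(0, s.f_frsize * s.f_bavail - reserved_space)
```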
838hunk ./src/allmydata/storage/server.py 158
839         return space
840 
841     def remote_get_version(self):
842-        remaining_space = self.get_available_space()
843+        remaining_space = self.backend.get_available_space()
844         if remaining_space is None:
845             # We're on a platform that has no API to get disk stats.
846             remaining_space = 2**64
847hunk ./src/allmydata/storage/server.py 172
848                     }
849         return version
850 
851-    def remote_allocate_buckets(self, storage_index,
852+    def remote_allocate_buckets(self, storageindex,
853                                 renew_secret, cancel_secret,
854                                 sharenums, allocated_size,
855                                 canary, owner_num=0):
856hunk ./src/allmydata/storage/server.py 181
857         # to a particular owner.
858         start = time.time()
859         self.count("allocate")
860-        alreadygot = set()
861+        incoming = set()
862         bucketwriters = {} # k: shnum, v: BucketWriter
863hunk ./src/allmydata/storage/server.py 183
864-        si_dir = storage_index_to_dir(storage_index)
865-        si_s = si_b2a(storage_index)
866 
867hunk ./src/allmydata/storage/server.py 184
868+        si_s = si_b2a(storageindex)
869         log.msg("storage: allocate_buckets %s" % si_s)
870 
871         # in this implementation, the lease information (including secrets)
872hunk ./src/allmydata/storage/server.py 198
873 
874         max_space_per_bucket = allocated_size
875 
876-        remaining_space = self.get_available_space()
877+        remaining_space = self.backend.get_available_space()
878         limited = remaining_space is not None
879         if limited:
880             # this is a bit conservative, since some of this allocated_size()
881hunk ./src/allmydata/storage/server.py 207
882             remaining_space -= self.allocated_size()
883         # self.readonly_storage causes remaining_space <= 0
884 
885-        # fill alreadygot with all shares that we have, not just the ones
886+        # Fill alreadygot with all shares that we have, not just the ones
887         # they asked about: this will save them a lot of work. Add or update
888         # leases for all of them: if they want us to hold shares for this
889hunk ./src/allmydata/storage/server.py 210
890-        # file, they'll want us to hold leases for this file.
891-        for (shnum, fn) in self._get_bucket_shares(storage_index):
892-            alreadygot.add(shnum)
893-            sf = ShareFile(fn)
894-            sf.add_or_renew_lease(lease_info)
895+        # file, they'll want us to hold leases for all the shares of it.
896+        alreadygot = set()
897+        for share in self.backend.get_shares(storageindex):
898+            share.add_or_renew_lease(lease_info)
899+            alreadygot.add(share.shnum)
900 
901hunk ./src/allmydata/storage/server.py 216
902-        for shnum in sharenums:
903-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
904-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
905-            if os.path.exists(finalhome):
906-                # great! we already have it. easy.
907-                pass
908-            elif os.path.exists(incominghome):
909-                # Note that we don't create BucketWriters for shnums that
910-                # have a partial share (in incoming/), so if a second upload
911-                # occurs while the first is still in progress, the second
912-                # uploader will use different storage servers.
913-                pass
914-            elif (not limited) or (remaining_space >= max_space_per_bucket):
915-                # ok! we need to create the new share file.
916-                bw = BucketWriter(self, incominghome, finalhome,
917-                                  max_space_per_bucket, lease_info, canary)
918-                if self.no_storage:
919-                    bw.throw_out_all_data = True
920+        # all share numbers that are incoming
921+        incoming = self.backend.get_incoming_shnums(storageindex)
922+
923+        for shnum in ((sharenums - alreadygot) - incoming):
924+            if (not limited) or (remaining_space >= max_space_per_bucket):
925+                bw = self.backend.make_bucket_writer(storageindex, shnum, max_space_per_bucket, lease_info, canary)
926                 bucketwriters[shnum] = bw
927                 self._active_writers[bw] = 1
928                 if limited:
929hunk ./src/allmydata/storage/server.py 227
930                     remaining_space -= max_space_per_bucket
931             else:
932-                # bummer! not enough space to accept this bucket
933+                # Bummer: not enough space to accept this share.
934                 pass
935 
936hunk ./src/allmydata/storage/server.py 230
937-        if bucketwriters:
938-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
939-
940         self.add_latency("allocate", time.time() - start)
941         return alreadygot, bucketwriters
942 
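Reviewer note: the refactored `remote_allocate_buckets` above now selects which share numbers need writers with plain set arithmetic: skip shares the backend already holds, skip shares currently incoming, and accept the rest only while reserved space lasts. A stand-alone sketch of that selection (names illustrative; the real method creates BucketWriters via the backend instead of returning a list):

```python
def select_shnums_to_write(sharenums, alreadygot, incoming,
                           remaining_space, max_space_per_bucket, limited=True):
    # Candidates are the requested share numbers minus those already stored
    # and those with a partial upload in progress.
    to_write = []
    for shnum in sorted((set(sharenums) - alreadygot) - incoming):
        if (not limited) or (remaining_space >= max_space_per_bucket):
            to_write.append(shnum)
            if limited:
                remaining_space -= max_space_per_bucket
        # else: not enough space to accept this share
    return to_write
```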
943hunk ./src/allmydata/storage/server.py 233
944-    def _iter_share_files(self, storage_index):
945-        for shnum, filename in self._get_bucket_shares(storage_index):
946+    def _iter_share_files(self, storageindex):
947+        for shnum, filename in self._get_shares(storageindex):
948             f = open(filename, 'rb')
949             header = f.read(32)
950             f.close()
951hunk ./src/allmydata/storage/server.py 239
952             if header[:32] == MutableShareFile.MAGIC:
953+                # XXX  Can I exploit this code?
954                 sf = MutableShareFile(filename, self)
955                 # note: if the share has been migrated, the renew_lease()
956                 # call will throw an exception, with information to help the
957hunk ./src/allmydata/storage/server.py 245
958                 # client update the lease.
959             elif header[:4] == struct.pack(">L", 1):
960+                # Check if version number is "1".
961+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
962                 sf = ShareFile(filename)
963             else:
964                 continue # non-sharefile
965hunk ./src/allmydata/storage/server.py 252
966             yield sf
967 
968-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
969+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
970                          owner_num=1):
971         start = time.time()
972         self.count("add-lease")
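Reviewer note: `_iter_share_files` above distinguishes mutable containers (by their 32-byte MAGIC) from version-1 immutable containers (by a big-endian 1 in the first four bytes), and silently skips everything else, which is what the "WHAT ABOUT OTHER VERSIONS" XXX is flagging. A sketch of that header classification; the MAGIC value shown is my recollection of `MutableShareFile.MAGIC` from storage/mutable.py and should be treated as an assumption:

```python
import struct

# Assumed value of MutableShareFile.MAGIC, shown for illustration only.
MUTABLE_MAGIC = b"Tahoe mutable container v1\n" + b"\x75\x09\x44\x03\x8e"

def classify_share_header(header):
    # Mirror the checks in _iter_share_files.
    if header[:32] == MUTABLE_MAGIC:
        return "mutable"
    if header[:4] == struct.pack(">L", 1):  # immutable container version 1
        return "immutable-v1"
    return "unknown"  # other versions / non-sharefiles are skipped
```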
973hunk ./src/allmydata/storage/server.py 260
974         lease_info = LeaseInfo(owner_num,
975                                renew_secret, cancel_secret,
976                                new_expire_time, self.my_nodeid)
977-        for sf in self._iter_share_files(storage_index):
978+        for sf in self._iter_share_files(storageindex):
979             sf.add_or_renew_lease(lease_info)
980         self.add_latency("add-lease", time.time() - start)
981         return None
982hunk ./src/allmydata/storage/server.py 265
983 
984-    def remote_renew_lease(self, storage_index, renew_secret):
985+    def remote_renew_lease(self, storageindex, renew_secret):
986         start = time.time()
987         self.count("renew")
988         new_expire_time = time.time() + 31*24*60*60
989hunk ./src/allmydata/storage/server.py 270
990         found_buckets = False
991-        for sf in self._iter_share_files(storage_index):
992+        for sf in self._iter_share_files(storageindex):
993             found_buckets = True
994             sf.renew_lease(renew_secret, new_expire_time)
995         self.add_latency("renew", time.time() - start)
996hunk ./src/allmydata/storage/server.py 277
997         if not found_buckets:
998             raise IndexError("no such lease to renew")
999 
1000-    def remote_cancel_lease(self, storage_index, cancel_secret):
1001+    def remote_cancel_lease(self, storageindex, cancel_secret):
1002         start = time.time()
1003         self.count("cancel")
1004 
1005hunk ./src/allmydata/storage/server.py 283
1006         total_space_freed = 0
1007         found_buckets = False
1008-        for sf in self._iter_share_files(storage_index):
1009+        for sf in self._iter_share_files(storageindex):
1010             # note: if we can't find a lease on one share, we won't bother
1011             # looking in the others. Unless something broke internally
1012             # (perhaps we ran out of disk space while adding a lease), the
1013hunk ./src/allmydata/storage/server.py 293
1014             total_space_freed += sf.cancel_lease(cancel_secret)
1015 
1016         if found_buckets:
1017-            storagedir = os.path.join(self.sharedir,
1018-                                      storage_index_to_dir(storage_index))
1019-            if not os.listdir(storagedir):
1020-                os.rmdir(storagedir)
1021+            # XXX  Yikes looks like code that shouldn't be in the server!
1022+            storagedir = si_si2dir(self.sharedir, storageindex)
1023+            fp_rmdir_if_empty(storagedir)
1024 
1025         if self.stats_provider:
1026             self.stats_provider.count('storage_server.bytes_freed',
1027hunk ./src/allmydata/storage/server.py 309
1028             self.stats_provider.count('storage_server.bytes_added', consumed_size)
1029         del self._active_writers[bw]
1030 
1031-    def _get_bucket_shares(self, storage_index):
1032-        """Return a list of (shnum, pathname) tuples for files that hold
1033-        shares for this storage_index. In each tuple, 'shnum' will always be
1034-        the integer form of the last component of 'pathname'."""
1035-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1036-        try:
1037-            for f in os.listdir(storagedir):
1038-                if NUM_RE.match(f):
1039-                    filename = os.path.join(storagedir, f)
1040-                    yield (int(f), filename)
1041-        except OSError:
1042-            # Commonly caused by there being no buckets at all.
1043-            pass
1044-
1045-    def remote_get_buckets(self, storage_index):
1046+    def remote_get_buckets(self, storageindex):
1047         start = time.time()
1048         self.count("get")
1049hunk ./src/allmydata/storage/server.py 312
1050-        si_s = si_b2a(storage_index)
1051+        si_s = si_b2a(storageindex)
1052         log.msg("storage: get_buckets %s" % si_s)
1053         bucketreaders = {} # k: sharenum, v: BucketReader
1054hunk ./src/allmydata/storage/server.py 315
1055-        for shnum, filename in self._get_bucket_shares(storage_index):
1056-            bucketreaders[shnum] = BucketReader(self, filename,
1057-                                                storage_index, shnum)
1058+        self.backend.set_storage_server(self)
1059+        for share in self.backend.get_shares(storageindex):
1060+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
1061         self.add_latency("get", time.time() - start)
1062         return bucketreaders
1063 
1064hunk ./src/allmydata/storage/server.py 321
1065-    def get_leases(self, storage_index):
1066+    def get_leases(self, storageindex):
1067         """Provide an iterator that yields all of the leases attached to this
1068         bucket. Each lease is returned as a LeaseInfo instance.
1069 
1070hunk ./src/allmydata/storage/server.py 331
1071         # since all shares get the same lease data, we just grab the leases
1072         # from the first share
1073         try:
1074-            shnum, filename = self._get_bucket_shares(storage_index).next()
1075+            shnum, filename = self._get_shares(storageindex).next()
1076             sf = ShareFile(filename)
1077             return sf.get_leases()
1078         except StopIteration:
1079hunk ./src/allmydata/storage/server.py 337
1080             return iter([])
1081 
1082-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
1083+    #  XXX  As far as Zancas' grockery has gotten.
1084+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
1085                                                secrets,
1086                                                test_and_write_vectors,
1087                                                read_vector):
1088hunk ./src/allmydata/storage/server.py 344
1089         start = time.time()
1090         self.count("writev")
1091-        si_s = si_b2a(storage_index)
1092+        si_s = si_b2a(storageindex)
1093         log.msg("storage: slot_writev %s" % si_s)
1094hunk ./src/allmydata/storage/server.py 346
1095-        si_dir = storage_index_to_dir(storage_index)
1096+
1097         (write_enabler, renew_secret, cancel_secret) = secrets
1098         # shares exist if there is a file for them
1099hunk ./src/allmydata/storage/server.py 349
1100-        bucketdir = os.path.join(self.sharedir, si_dir)
1101+        bucketdir = si_si2dir(self.sharedir, storageindex)
1102         shares = {}
1103         if os.path.isdir(bucketdir):
1104             for sharenum_s in os.listdir(bucketdir):
1105hunk ./src/allmydata/storage/server.py 432
1106                                          self)
1107         return share
1108 
1109-    def remote_slot_readv(self, storage_index, shares, readv):
1110+    def remote_slot_readv(self, storageindex, shares, readv):
1111         start = time.time()
1112         self.count("readv")
1113hunk ./src/allmydata/storage/server.py 435
1114-        si_s = si_b2a(storage_index)
1115+        si_s = si_b2a(storageindex)
1116         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
1117                      facility="tahoe.storage", level=log.OPERATIONAL)
1118hunk ./src/allmydata/storage/server.py 438
1119-        si_dir = storage_index_to_dir(storage_index)
1120         # shares exist if there is a file for them
1121hunk ./src/allmydata/storage/server.py 439
1122-        bucketdir = os.path.join(self.sharedir, si_dir)
1123+        bucketdir = si_si2dir(self.sharedir, storageindex)
1124         if not os.path.isdir(bucketdir):
1125             self.add_latency("readv", time.time() - start)
1126             return {}
1127hunk ./src/allmydata/storage/server.py 458
1128         self.add_latency("readv", time.time() - start)
1129         return datavs
1130 
1131-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
1132+    def remote_advise_corrupt_share(self, share_type, storageindex, shnum,
1133                                     reason):
1134         fileutil.make_dirs(self.corruption_advisory_dir)
1135         now = time_format.iso_utc(sep="T")
1136hunk ./src/allmydata/storage/server.py 462
1137-        si_s = si_b2a(storage_index)
1138+        si_s = si_b2a(storageindex)
1139         # windows can't handle colons in the filename
1140         fn = os.path.join(self.corruption_advisory_dir,
1141                           "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
1142hunk ./src/allmydata/storage/server.py 469
1143         f = open(fn, "w")
1144         f.write("report: Share Corruption\n")
1145         f.write("type: %s\n" % share_type)
1146-        f.write("storage_index: %s\n" % si_s)
1147+        f.write("storageindex: %s\n" % si_s)
1148         f.write("share_number: %d\n" % shnum)
1149         f.write("\n")
1150         f.write(reason)
1151}
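Reviewer note: taken together, the server.py hunks above narrow what StorageServer asks of its backend to a small surface: get_available_space, get_shares, get_incoming_shnums, make_bucket_writer, make_bucket_reader, and set_storage_server (plus setServiceParent from twisted). A minimal null-style sketch of that inferred surface, for orientation only; it is not the actual backends/null/core.py:

```python
class SketchNullBackend(object):
    # Hypothetical backend satisfying only the calls visible in this patch.
    def __init__(self):
        self.ss = None

    def set_storage_server(self, ss):
        self.ss = ss

    def get_available_space(self):
        # None means "no disk-stats API"; the server then advertises 2**64.
        return None

    def get_shares(self, storageindex):
        return iter([])  # a null backend stores nothing

    def get_incoming_shnums(self, storageindex):
        return frozenset()

    def make_bucket_writer(self, storageindex, shnum, max_space,
                           lease_info, canary):
        raise NotImplementedError("writer creation elided in this sketch")

    def make_bucket_reader(self, share):
        raise NotImplementedError("reader creation elided in this sketch")
```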
1152[modify null/core.py such that the correct interfaces are implemented
1153wilcoxjg@gmail.com**20110809201822
1154 Ignore-this: 3c64580592474f71633287d1b6beeb6b
1155] hunk ./src/allmydata/storage/backends/null/core.py 4
1156 from allmydata.storage.backends.base import Backend
1157 from allmydata.storage.immutable import BucketWriter, BucketReader
1158 from zope.interface import implements
1159+from allmydata.interfaces import IStorageBackend, IStorageBackendShare
1160 
1161 class NullCore(Backend):
1162     implements(IStorageBackend)
1163[make changes to storage/immutable.py most changes are part of movement to DAS specific backend.
1164wilcoxjg@gmail.com**20110809202232
1165 Ignore-this: 70c7c6ea6be2418d70556718a050714
1166] {
1167hunk ./src/allmydata/storage/immutable.py 1
1168-import os, stat, struct, time
1169+import os, time
1170 
1171 from foolscap.api import Referenceable
1172 
1173hunk ./src/allmydata/storage/immutable.py 7
1174 from zope.interface import implements
1175 from allmydata.interfaces import RIBucketWriter, RIBucketReader
1176-from allmydata.util import base32, fileutil, log
1177+from allmydata.util import base32, log
1178 from allmydata.util.assertutil import precondition
1179 from allmydata.util.hashutil import constant_time_compare
1180 from allmydata.storage.lease import LeaseInfo
1181hunk ./src/allmydata/storage/immutable.py 14
1182 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1183      DataTooLargeError
1184 
1185-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1186-# and share data. The share data is accessed by RIBucketWriter.write and
1187-# RIBucketReader.read . The lease information is not accessible through these
1188-# interfaces.
1189-
1190-# The share file has the following layout:
1191-#  0x00: share file version number, four bytes, current version is 1
1192-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1193-#  0x08: number of leases, four bytes big-endian
1194-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1195-#  A+0x0c = B: first lease. Lease format is:
1196-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1197-#   B+0x04: renew secret, 32 bytes (SHA256)
1198-#   B+0x24: cancel secret, 32 bytes (SHA256)
1199-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1200-#   B+0x48: next lease, or end of record
1201-
1202-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1203-# but it is still filled in by storage servers in case the storage server
1204-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1205-# share file is moved from one storage server to another. The value stored in
1206-# this field is truncated, so if the actual share data length is >= 2**32,
1207-# then the value stored in this field will be the actual share data length
1208-# modulo 2**32.
1209-
1210-class ShareFile:
1211-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1212-    sharetype = "immutable"
1213-
1214-    def __init__(self, filename, max_size=None, create=False):
1215-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
1216-        precondition((max_size is not None) or (not create), max_size, create)
1217-        self.home = filename
1218-        self._max_size = max_size
1219-        if create:
1220-            # touch the file, so later callers will see that we're working on
1221-            # it. Also construct the metadata.
1222-            assert not os.path.exists(self.home)
1223-            fileutil.make_dirs(os.path.dirname(self.home))
1224-            f = open(self.home, 'wb')
1225-            # The second field -- the four-byte share data length -- is no
1226-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1227-            # there in case someone downgrades a storage server from >=
1228-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1229-            # server to another, etc. We do saturation -- a share data length
1230-            # larger than 2**32-1 (what can fit into the field) is marked as
1231-            # the largest length that can fit into the field. That way, even
1232-            # if this does happen, the old < v1.3.0 server will still allow
1233-            # clients to read the first part of the share.
1234-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1235-            f.close()
1236-            self._lease_offset = max_size + 0x0c
1237-            self._num_leases = 0
1238-        else:
1239-            f = open(self.home, 'rb')
1240-            filesize = os.path.getsize(self.home)
1241-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1242-            f.close()
1243-            if version != 1:
1244-                msg = "sharefile %s had version %d but we wanted 1" % \
1245-                      (filename, version)
1246-                raise UnknownImmutableContainerVersionError(msg)
1247-            self._num_leases = num_leases
1248-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1249-        self._data_offset = 0xc
1250-
1251-    def unlink(self):
1252-        os.unlink(self.home)
1253-
1254-    def read_share_data(self, offset, length):
1255-        precondition(offset >= 0)
1256-        # reads beyond the end of the data are truncated. Reads that start
1257-        # beyond the end of the data return an empty string. I wonder why
1258-        # Python doesn't do the following computation for me?
1259-        seekpos = self._data_offset+offset
1260-        fsize = os.path.getsize(self.home)
1261-        actuallength = max(0, min(length, fsize-seekpos))
1262-        if actuallength == 0:
1263-            return ""
1264-        f = open(self.home, 'rb')
1265-        f.seek(seekpos)
1266-        return f.read(actuallength)
1267-
1268-    def write_share_data(self, offset, data):
1269-        length = len(data)
1270-        precondition(offset >= 0, offset)
1271-        if self._max_size is not None and offset+length > self._max_size:
1272-            raise DataTooLargeError(self._max_size, offset, length)
1273-        f = open(self.home, 'rb+')
1274-        real_offset = self._data_offset+offset
1275-        f.seek(real_offset)
1276-        assert f.tell() == real_offset
1277-        f.write(data)
1278-        f.close()
1279-
1280-    def _write_lease_record(self, f, lease_number, lease_info):
1281-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1282-        f.seek(offset)
1283-        assert f.tell() == offset
1284-        f.write(lease_info.to_immutable_data())
1285-
1286-    def _read_num_leases(self, f):
1287-        f.seek(0x08)
1288-        (num_leases,) = struct.unpack(">L", f.read(4))
1289-        return num_leases
1290-
1291-    def _write_num_leases(self, f, num_leases):
1292-        f.seek(0x08)
1293-        f.write(struct.pack(">L", num_leases))
1294-
1295-    def _truncate_leases(self, f, num_leases):
1296-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1297-
1298-    def get_leases(self):
1299-        """Yields a LeaseInfo instance for all leases."""
1300-        f = open(self.home, 'rb')
1301-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1302-        f.seek(self._lease_offset)
1303-        for i in range(num_leases):
1304-            data = f.read(self.LEASE_SIZE)
1305-            if data:
1306-                yield LeaseInfo().from_immutable_data(data)
1307-
1308-    def add_lease(self, lease_info):
1309-        f = open(self.home, 'rb+')
1310-        num_leases = self._read_num_leases(f)
1311-        self._write_lease_record(f, num_leases, lease_info)
1312-        self._write_num_leases(f, num_leases+1)
1313-        f.close()
1314-
1315-    def renew_lease(self, renew_secret, new_expire_time):
1316-        for i,lease in enumerate(self.get_leases()):
1317-            if constant_time_compare(lease.renew_secret, renew_secret):
1318-                # yup. See if we need to update the owner time.
1319-                if new_expire_time > lease.expiration_time:
1320-                    # yes
1321-                    lease.expiration_time = new_expire_time
1322-                    f = open(self.home, 'rb+')
1323-                    self._write_lease_record(f, i, lease)
1324-                    f.close()
1325-                return
1326-        raise IndexError("unable to renew non-existent lease")
1327-
1328-    def add_or_renew_lease(self, lease_info):
1329-        try:
1330-            self.renew_lease(lease_info.renew_secret,
1331-                             lease_info.expiration_time)
1332-        except IndexError:
1333-            self.add_lease(lease_info)
1334-
1335-
1336-    def cancel_lease(self, cancel_secret):
1337-        """Remove a lease with the given cancel_secret. If the last lease is
1338-        cancelled, the file will be removed. Return the number of bytes that
1339-        were freed (by truncating the list of leases, and possibly by
1340-        deleting the file. Raise IndexError if there was no lease with the
1341-        given cancel_secret.
1342-        """
1343-
1344-        leases = list(self.get_leases())
1345-        num_leases_removed = 0
1346-        for i,lease in enumerate(leases):
1347-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1348-                leases[i] = None
1349-                num_leases_removed += 1
1350-        if not num_leases_removed:
1351-            raise IndexError("unable to find matching lease to cancel")
1352-        if num_leases_removed:
1353-            # pack and write out the remaining leases. We write these out in
1354-            # the same order as they were added, so that if we crash while
1355-            # doing this, we won't lose any non-cancelled leases.
1356-            leases = [l for l in leases if l] # remove the cancelled leases
1357-            f = open(self.home, 'rb+')
1358-            for i,lease in enumerate(leases):
1359-                self._write_lease_record(f, i, lease)
1360-            self._write_num_leases(f, len(leases))
1361-            self._truncate_leases(f, len(leases))
1362-            f.close()
1363-        space_freed = self.LEASE_SIZE * num_leases_removed
1364-        if not len(leases):
1365-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1366-            self.unlink()
1367-        return space_freed
1368-
1369-
1370 class BucketWriter(Referenceable):
1371     implements(RIBucketWriter)
1372 
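Reviewer note: the hunk above deletes ShareFile (presumably moving it into the DAS backend). Its on-disk layout, per the removed comments, is a 12-byte version-1 header, then the share data, then fixed-size 72-byte lease records. A sketch of the offset arithmetic from the removed `__init__` and `_write_lease_record`, with the helper names invented here:

```python
import struct

LEASE_SIZE = struct.calcsize(">L32s32sL")  # owner, renew, cancel, expiry
DATA_OFFSET = 0x0c                         # header is three big-endian longs

def lease_record_offset(max_size, lease_number):
    # For a file created with the given max_size, leases begin right after
    # the data region, exactly as in the removed _write_lease_record.
    lease_offset = max_size + DATA_OFFSET
    return lease_offset + lease_number * LEASE_SIZE

def pack_header(max_size, num_leases):
    # The share-data-length field saturates at 2**32-1, as the removed
    # comment explains, for compatibility with pre-1.3.0 servers.
    return struct.pack(">LLL", 1, min(2 ** 32 - 1, max_size), num_leases)
```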
1373hunk ./src/allmydata/storage/immutable.py 17
1374-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1375+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1376         self.ss = ss
1377hunk ./src/allmydata/storage/immutable.py 19
1378-        self.incominghome = incominghome
1379-        self.finalhome = finalhome
1380-        self._max_size = max_size # don't allow the client to write more than this
1381+        self._max_size = max_size # don't allow the client to write more than this
1382         self._canary = canary
1383         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1384         self.closed = False
1385hunk ./src/allmydata/storage/immutable.py 24
1386         self.throw_out_all_data = False
1387-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1388+        self._sharefile = immutableshare
1389         # also, add our lease to the file now, so that other ones can be
1390         # added by simultaneous uploaders
1391         self._sharefile.add_lease(lease_info)
1392hunk ./src/allmydata/storage/immutable.py 45
1393         precondition(not self.closed)
1394         start = time.time()
1395 
1396-        fileutil.make_dirs(os.path.dirname(self.finalhome))
1397-        fileutil.rename(self.incominghome, self.finalhome)
1398-        try:
1399-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
1400-            # We try to delete the parent (.../ab/abcde) to avoid leaving
1401-            # these directories lying around forever, but the delete might
1402-            # fail if we're working on another share for the same storage
1403-            # index (like ab/abcde/5). The alternative approach would be to
1404-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
1405-            # ShareWriter), each of which is responsible for a single
1406-            # directory on disk, and have them use reference counting of
1407-            # their children to know when they should do the rmdir. This
1408-            # approach is simpler, but relies on os.rmdir refusing to delete
1409-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
1410-            os.rmdir(os.path.dirname(self.incominghome))
1411-            # we also delete the grandparent (prefix) directory, .../ab ,
1412-            # again to avoid leaving directories lying around. This might
1413-            # fail if there is another bucket open that shares a prefix (like
1414-            # ab/abfff).
1415-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
1416-            # we leave the great-grandparent (incoming/) directory in place.
1417-        except EnvironmentError:
1418-            # ignore the "can't rmdir because the directory is not empty"
1419-            # exceptions, those are normal consequences of the
1420-            # above-mentioned conditions.
1421-            pass
1422+        self._sharefile.close()
1423+        filelen = self._sharefile.stat()
1424         self._sharefile = None
1425hunk ./src/allmydata/storage/immutable.py 48
1426+
1427         self.closed = True
1428         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1429 
1430hunk ./src/allmydata/storage/immutable.py 52
1431-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
1432         self.ss.bucket_writer_closed(self, filelen)
1433         self.ss.add_latency("close", time.time() - start)
1434         self.ss.count("close")
1435hunk ./src/allmydata/storage/immutable.py 90
1436 class BucketReader(Referenceable):
1437     implements(RIBucketReader)
1438 
1439-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
1440+    def __init__(self, ss, share):
1441         self.ss = ss
1442hunk ./src/allmydata/storage/immutable.py 92
1443-        self._share_file = ShareFile(sharefname)
1444-        self.storage_index = storage_index
1445-        self.shnum = shnum
1446+        self._share_file = share
1447+        self.storageindex = share.storageindex
1448+        self.shnum = share.shnum
1449 
1450     def __repr__(self):
1451         return "<%s %s %s>" % (self.__class__.__name__,
1452hunk ./src/allmydata/storage/immutable.py 98
1453-                               base32.b2a_l(self.storage_index[:8], 60),
1454+                               base32.b2a_l(self.storageindex[:8], 60),
1455                                self.shnum)
1456 
1457     def remote_read(self, offset, length):
1458hunk ./src/allmydata/storage/immutable.py 110
1459 
1460     def remote_advise_corrupt_share(self, reason):
1461         return self.ss.remote_advise_corrupt_share("immutable",
1462-                                                   self.storage_index,
1463+                                                   self.storageindex,
1464                                                    self.shnum,
1465                                                    reason)
1466}
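The hunk above removes all filesystem knowledge from BucketWriter/BucketReader: they now only talk to a share object the backend hands them. A minimal sketch of that shape (class and method names here are hypothetical stand-ins, not the Tahoe classes):

```python
class FakeShare(object):
    """Stand-in for a backend-provided immutable share (hypothetical)."""
    def __init__(self):
        self.data = ""
        self.closed = False

    def write_share_data(self, offset, data):
        # overwrite in place, extending as needed
        self.data = self.data[:offset] + data

    def close(self):
        self.closed = True

    def stat(self):
        return len(self.data)


class Writer(object):
    """Mirrors the new BucketWriter: it only talks to the share object,
    never to paths or the filesystem directly."""
    def __init__(self, share, max_size):
        self._sharefile = share
        self._max_size = max_size

    def remote_write(self, offset, data):
        assert offset + len(data) <= self._max_size
        self._sharefile.write_share_data(offset, data)

    def remote_close(self):
        # the share object decides what "close" means for its backend
        self._sharefile.close()
        return self._sharefile.stat()
```

Swapping in a different share class swaps the storage backend without touching the writer, which is the point of the "backend pluggable" refactor.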
1467[creates backends/das/core.py
1468wilcoxjg@gmail.com**20110809202620
1469 Ignore-this: 2ea937f8d02aa85396135903be91ed67
1470] {
1471adddir ./src/allmydata/storage/backends/das
1472addfile ./src/allmydata/storage/backends/das/core.py
1473hunk ./src/allmydata/storage/backends/das/core.py 1
1474+import re, weakref, struct, time, stat
1475+from twisted.application import service
1476+from twisted.python.filepath import UnlistableError
1477+from twisted.python import filepath
1478+from twisted.python.filepath import FilePath
1479+from zope.interface import implements
1480+
1481+import allmydata # for __full_version__
1482+from allmydata.interfaces import IStorageBackend
1483+from allmydata.storage.backends.base import Backend
1484+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
1485+from allmydata.util.assertutil import precondition
1486+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
1487+from allmydata.util import fileutil, idlib, log, time_format
1488+from allmydata.util.fileutil import fp_make_dirs
1489+from allmydata.storage.lease import LeaseInfo
1490+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1491+     create_mutable_sharefile
1492+from allmydata.storage.immutable import BucketWriter, BucketReader
1493+from allmydata.storage.crawler import BucketCountingCrawler
1494+from allmydata.util.hashutil import constant_time_compare
1495+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
1496+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
1497+
1498+# storage/
1499+# storage/shares/incoming
1500+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1501+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
1502+# storage/shares/$START/$STORAGEINDEX
1503+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
1504+
1505+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1506+# base-32 chars).
1507+# $SHARENUM matches this regex:
1508+NUM_RE=re.compile("^[0-9]+$")
1509+
1510+class DASCore(Backend):
1511+    implements(IStorageBackend)
1512+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1513+        Backend.__init__(self)
1514+        self._setup_storage(storedir, readonly, reserved_space)
1515+        self._setup_corruption_advisory()
1516+        self._setup_bucket_counter()
1517+        self._setup_lease_checkerf(expiration_policy)
1518+
1519+    def _setup_storage(self, storedir, readonly, reserved_space):
1520+        precondition(isinstance(storedir, FilePath), storedir, FilePath) 
1521+        self.storedir = storedir
1522+        self.readonly = readonly
1523+        self.reserved_space = int(reserved_space)
1524+        self.sharedir = self.storedir.child("shares")
1525+        fileutil.fp_make_dirs(self.sharedir)
1526+        self.incomingdir = self.sharedir.child('incoming')
1527+        self._clean_incomplete()
1528+        if self.reserved_space and (self.get_available_space() is None):
1529+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1530+                    umid="0wZ27w", level=log.UNUSUAL)
1531+
1532+
1533+    def _clean_incomplete(self):
1534+        fileutil.fp_remove(self.incomingdir)
1535+        fileutil.fp_make_dirs(self.incomingdir)
1536+
1537+    def _setup_corruption_advisory(self):
1538+        # we don't actually create the corruption-advisory dir until necessary
1539+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
1540+
1541+    def _setup_bucket_counter(self):
1542+        statefname = self.storedir.child("bucket_counter.state")
1543+        self.bucket_counter = BucketCountingCrawler(statefname)
1544+        self.bucket_counter.setServiceParent(self)
1545+
1546+    def _setup_lease_checkerf(self, expiration_policy):
1547+        statefile = self.storedir.child("lease_checker.state")
1548+        historyfile = self.storedir.child("lease_checker.history")
1549+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1550+        self.lease_checker.setServiceParent(self)
1551+
1552+    def get_incoming_shnums(self, storageindex):
1553+        """ Return a frozenset of the shnums (as ints) of incoming shares. """
1554+        incomingthissi = si_si2dir(self.incomingdir, storageindex)
1555+        try:
1556+            childfps = [ fp for fp in incomingthissi.children() if NUM_RE.match(fp.basename()) ]
1557+            shnums = [ int(fp.basename()) for fp in childfps]
1558+            return frozenset(shnums)
1559+        except UnlistableError:
1560+            # There is no shares directory at all.
1561+            return frozenset()
1562+           
1563+    def get_shares(self, storageindex):
1564+        """ Generate ImmutableShare objects for shares we have for this
1565+        storageindex. ("Shares we have" means completed ones, excluding
1566+        incoming ones.)"""
1567+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
1568+        try:
1569+            for fp in finalstoragedir.children():
1570+                fpshnumstr = fp.basename()
1571+                if NUM_RE.match(fpshnumstr):
1572+                    finalhome = finalstoragedir.child(fpshnumstr)
1573+                    yield ImmutableShare(storageindex, fpshnumstr, finalhome)
1574+        except UnlistableError:
1575+            # There is no shares directory at all.
1576+            pass
1577+       
1578+    def get_available_space(self):
1579+        if self.readonly:
1580+            return 0
1581+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1582+
1583+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
1584+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
1585+        incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
1586+        immsh = ImmutableShare(storageindex, shnum, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
1587+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1588+        return bw
1589+
1590+    def make_bucket_reader(self, share):
1591+        return BucketReader(self.ss, share)
1592+
1593+    def set_storage_server(self, ss):
1594+        self.ss = ss
1595+       
1596+
1597+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1598+# and share data. The share data is accessed by RIBucketWriter.write and
1599+# RIBucketReader.read . The lease information is not accessible through these
1600+# interfaces.
1601+
1602+# The share file has the following layout:
1603+#  0x00: share file version number, four bytes, current version is 1
1604+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1605+#  0x08: number of leases, four bytes big-endian
1606+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1607+#  A+0x0c = B: first lease. Lease format is:
1608+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1609+#   B+0x04: renew secret, 32 bytes (SHA256)
1610+#   B+0x24: cancel secret, 32 bytes (SHA256)
1611+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1612+#   B+0x48: next lease, or end of record
1613+
1614+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1615+# but it is still filled in by storage servers in case the storage server
1616+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1617+# share file is moved from one storage server to another. The value stored in
1618+# this field is truncated, so if the actual share data length is >= 2**32,
1619+# then the value stored in this field will be the actual share data length
1620+# modulo 2**32.
1621+
1622+class ImmutableShare(object):
1623+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1624+    sharetype = "immutable"
1625+
1626+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
1627+        """ If max_size is not None then I won't allow more than
1628+        max_size to be written to me. If create=True then max_size
1629+        must not be None. """
1630+        precondition((max_size is not None) or (not create), max_size, create)
1631+        self.storageindex = storageindex
1632+        self._max_size = max_size
1633+        self.incominghome = incominghome
1634+        self.finalhome = finalhome
1635+        self.shnum = shnum
1636+        if create:
1637+            # touch the file, so later callers will see that we're working on
1638+            # it. Also construct the metadata.
1639+            assert not finalhome.exists()
1640+            fp_make_dirs(self.incominghome.parent())
1641+            # The second field -- the four-byte share data length -- is no
1642+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1643+            # there in case someone downgrades a storage server from >=
1644+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1645+            # server to another, etc. We do saturation -- a share data length
1646+            # larger than 2**32-1 (what can fit into the field) is marked as
1647+            # the largest length that can fit into the field. That way, even
1648+            # if this does happen, the old < v1.3.0 server will still allow
1649+            # clients to read the first part of the share.
1650+            self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
1651+            self._lease_offset = max_size + 0x0c
1652+            self._num_leases = 0
1653+        else:
1654+            fh = self.finalhome.open(mode='rb')
1655+            try:
1656+                (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
1657+            finally:
1658+                fh.close()
1659+            filesize = self.finalhome.getsize()
1660+            if version != 1:
1661+                msg = "sharefile %s had version %d but we wanted 1" % \
1662+                      (self.finalhome, version)
1663+                raise UnknownImmutableContainerVersionError(msg)
1664+            self._num_leases = num_leases
1665+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1666+        self._data_offset = 0xc
1667+
1668+    def close(self):
1669+        fileutil.fp_make_dirs(self.finalhome.parent())
1670+        self.incominghome.moveTo(self.finalhome)
1671+        try:
1672+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
1673+            # We try to delete the parent (.../ab/abcde) to avoid leaving
1674+            # these directories lying around forever, but the delete might
1675+            # fail if we're working on another share for the same storage
1676+            # index (like ab/abcde/5). The alternative approach would be to
1677+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
1678+            # ShareWriter), each of which is responsible for a single
1679+            # directory on disk, and have them use reference counting of
1680+            # their children to know when they should do the rmdir. This
1681+            # approach is simpler, but relies on os.rmdir refusing to delete
1682+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
1683+            fileutil.fp_rmdir_if_empty(self.incominghome.parent())
1684+            # we also delete the grandparent (prefix) directory, .../ab ,
1685+            # again to avoid leaving directories lying around. This might
1686+            # fail if there is another bucket open that shares a prefix (like
1687+            # ab/abfff).
1688+            fileutil.fp_rmdir_if_empty(self.incominghome.parent().parent())
1689+            # we leave the great-grandparent (incoming/) directory in place.
1690+        except EnvironmentError:
1691+            # ignore the "can't rmdir because the directory is not empty"
1692+            # exceptions, those are normal consequences of the
1693+            # above-mentioned conditions.
1694+            pass
1696+       
1697+    def stat(self):
1698+        return self.finalhome.getsize()
1699+
1700+    def get_shnum(self):
1701+        return self.shnum
1702+
1703+    def unlink(self):
1704+        self.finalhome.remove()
1705+
1706+    def read_share_data(self, offset, length):
1707+        precondition(offset >= 0)
1708+        # Reads beyond the end of the data are truncated. Reads that start
1709+        # beyond the end of the data return an empty string.
1710+        seekpos = self._data_offset+offset
1711+        fsize = self.finalhome.getsize()
1712+        actuallength = max(0, min(length, fsize-seekpos))
1713+        if actuallength == 0:
1714+            return ""
1715+        fh = self.finalhome.open(mode='rb')
1716+        try:
1717+            fh.seek(seekpos)
1718+            sharedata = fh.read(actuallength)
1719+        finally:
1720+            fh.close()
1721+        return sharedata
1722+
1723+    def write_share_data(self, offset, data):
1724+        length = len(data)
1725+        precondition(offset >= 0, offset)
1726+        if self._max_size is not None and offset+length > self._max_size:
1727+            raise DataTooLargeError(self._max_size, offset, length)
1728+        fh = self.incominghome.open(mode='rb+')
1729+        try:
1730+            real_offset = self._data_offset+offset
1731+            fh.seek(real_offset)
1732+            assert fh.tell() == real_offset
1733+            fh.write(data)
1734+        finally:
1735+            fh.close()
1736+
1737+    def _write_lease_record(self, f, lease_number, lease_info):
1738+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1739+        fh = f.open('r+') # FilePath.open appends 'b' to the mode itself
1740+        try:
1741+            fh.seek(offset)
1742+            assert fh.tell() == offset
1743+            fh.write(lease_info.to_immutable_data())
1744+        finally:
1745+            fh.close()
1746+
1747+    def _read_num_leases(self, f):
1748+        fh = f.open() #XXX  Should be mocking FilePath.open()
1749+        try:
1750+            fh.seek(0x08)
1751+            ro = fh.read(4)
1752+            (num_leases,) = struct.unpack(">L", ro)
1753+        finally:
1754+            fh.close()
1755+        return num_leases
1756+
1757+    def _write_num_leases(self, f, num_leases):
1758+        fh = f.open('r+')
1759+        try:
1760+            fh.seek(0x08)
1761+            fh.write(struct.pack(">L", num_leases))
1762+        finally:
1763+            fh.close()
1764+
1765+    def _truncate_leases(self, f, num_leases):
1766+        fh = f.open('r+')
1767+        fh.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1768+        fh.close()
1767+
1768+    def get_leases(self):
1769+        """Yield a LeaseInfo instance for each lease."""
1770+        fh = self.finalhome.open(mode='rb')
1771+        (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
1772+        fh.seek(self._lease_offset)
1773+        for i in range(num_leases):
1774+            data = fh.read(self.LEASE_SIZE)
1775+            if data:
1776+                yield LeaseInfo().from_immutable_data(data)
1777+
1778+    def add_lease(self, lease_info):
1779+        num_leases = self._read_num_leases(self.incominghome)
1780+        self._write_lease_record(self.incominghome, num_leases, lease_info)
1781+        self._write_num_leases(self.incominghome, num_leases+1)
1782+       
1783+    def renew_lease(self, renew_secret, new_expire_time):
1784+        for i,lease in enumerate(self.get_leases()):
1785+            if constant_time_compare(lease.renew_secret, renew_secret):
1786+                # yup. See if we need to update the owner time.
1787+                if new_expire_time > lease.expiration_time:
1788+                    # yes
1789+                    lease.expiration_time = new_expire_time
1790+                    # _write_lease_record expects a FilePath and opens
1791+                    # and closes the file itself.
1792+                    self._write_lease_record(self.finalhome, i, lease)
1793+                return
1794+        raise IndexError("unable to renew non-existent lease")
1795+
1796+    def add_or_renew_lease(self, lease_info):
1797+        try:
1798+            self.renew_lease(lease_info.renew_secret,
1799+                             lease_info.expiration_time)
1800+        except IndexError:
1801+            self.add_lease(lease_info)
1802+
1803+    def cancel_lease(self, cancel_secret):
1804+        """Remove a lease with the given cancel_secret. If the last lease is
1805+        cancelled, the file will be removed. Return the number of bytes that
1806+        were freed (by truncating the list of leases, and possibly by
1807+        deleting the file). Raise IndexError if there was no lease with the
1808+        given cancel_secret.
1809+        """
1810+
1811+        leases = list(self.get_leases())
1812+        num_leases_removed = 0
1813+        for i,lease in enumerate(leases):
1814+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1815+                leases[i] = None
1816+                num_leases_removed += 1
1817+        if not num_leases_removed:
1818+            raise IndexError("unable to find matching lease to cancel")
1819+        if num_leases_removed:
1820+            # pack and write out the remaining leases. We write these out in
1821+            # the same order as they were added, so that if we crash while
1822+            # doing this, we won't lose any non-cancelled leases.
1823+            leases = [l for l in leases if l] # remove the cancelled leases
1824+            # the lease helpers take a FilePath and open/close it themselves
1825+            f = self.finalhome
1826+            for i,lease in enumerate(leases):
1827+                self._write_lease_record(f, i, lease)
1828+            self._write_num_leases(f, len(leases))
1829+            self._truncate_leases(f, len(leases))
1830+        space_freed = self.LEASE_SIZE * num_leases_removed
1831+        if not len(leases):
1832+            space_freed += self.finalhome.getsize()
1833+            self.unlink()
1834+        return space_freed
1835}
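The on-disk layout documented in the comments above (four-byte version, saturated four-byte data length, lease count, then share data, then leases at the tail) comes down to a few `struct` calls. This is a sketch of the same arithmetic ImmutableShare performs, not the class itself; the helper names are illustrative:

```python
import struct

# lease record: owner number, renew secret, cancel secret, expiration time
LEASE_SIZE = struct.calcsize(">L32s32sL")
HEADER = ">LLL"  # version, share data length (saturated), number of leases

def pack_header(max_size, num_leases=0):
    # the data-length field saturates at 2**32-1, as Footnote 1 describes
    return struct.pack(HEADER, 1, min(2**32 - 1, max_size), num_leases)

def parse_header(header_bytes):
    # the header occupies the first 0xc bytes of the share file
    return struct.unpack(HEADER, header_bytes[:0xc])

def lease_offset(filesize, num_leases):
    # leases sit at the very end of the file, after header + share data
    return filesize - num_leases * LEASE_SIZE
```

This mirrors how `__init__` computes `self._lease_offset` for an existing share: total file size minus `num_leases * LEASE_SIZE`.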
1836[change backends/das/core.py to correct which interfaces are implemented
1837wilcoxjg@gmail.com**20110809203123
1838 Ignore-this: 7f9331a04b55f7feee4335abee011e14
1839] hunk ./src/allmydata/storage/backends/das/core.py 13
1840 from allmydata.storage.backends.base import Backend
1841 from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
1842 from allmydata.util.assertutil import precondition
1843-from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
1844+from allmydata.interfaces import IStorageBackend
1845 from allmydata.util import fileutil, idlib, log, time_format
1846 from allmydata.util.fileutil import fp_make_dirs
1847 from allmydata.storage.lease import LeaseInfo
1848[util/fileutil.py now expects and manipulates twisted.python.filepath.FilePath objects
1849wilcoxjg@gmail.com**20110809203321
1850 Ignore-this: 12c8aa13424ed51a5df09b92a454627
1851] {
1852hunk ./src/allmydata/util/fileutil.py 5
1853 Futz with files like a pro.
1854 """
1855 
1856-import sys, exceptions, os, stat, tempfile, time, binascii
1857+import errno, sys, exceptions, os, stat, tempfile, time, binascii
1858+
1859+from allmydata.util.assertutil import precondition
1860 
1861 from twisted.python import log
1862hunk ./src/allmydata/util/fileutil.py 10
1863+from twisted.python.filepath import FilePath, UnlistableError
1864 
1865 from pycryptopp.cipher.aes import AES
1866 
1867hunk ./src/allmydata/util/fileutil.py 189
1868             raise tx
1869         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
1870 
1871-def rm_dir(dirname):
1872+def fp_make_dirs(dirfp):
1873+    """
1874+    An idempotent version of FilePath.makedirs().  If the dir already
1875+    exists, do nothing and return without raising an exception.  If this
1876+    call creates the dir, return without raising an exception.  If there is
1877+    an error that prevents creation or if the directory gets deleted after
1878+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
1879+    exists, raise an exception.
1880+    """
1882+    tx = None
1883+    try:
1884+        dirfp.makedirs()
1885+    except OSError, x:
1886+        tx = x
1887+
1888+    if not dirfp.isdir():
1889+        if tx:
1890+            raise tx
1891+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
1892+
1893+def fp_rmdir_if_empty(dirfp):
1894+    """ Remove the directory if it is empty. """
1895+    try:
1896+        os.rmdir(dirfp.path)
1897+    except OSError, e:
1898+        if e.errno != errno.ENOTEMPTY:
1899+            raise
1900+    else:
1901+        dirfp.changed()
1902+
1903+def rmtree(dirname):
1904     """
1905     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
1906     already gone, do nothing and return without raising an exception.  If this
1907hunk ./src/allmydata/util/fileutil.py 239
1908             else:
1909                 remove(fullname)
1910         os.rmdir(dirname)
1911-    except Exception, le:
1912-        # Ignore "No such file or directory"
1913-        if (not isinstance(le, OSError)) or le.args[0] != 2:
1914+    except EnvironmentError, le:
1915+        # Ignore "No such file or directory", collect any other exception.
1916+        if le.args[0] != errno.ENOENT:
1917             excs.append(le)
1918hunk ./src/allmydata/util/fileutil.py 243
1919+    except Exception, le:
1920+        excs.append(le)
1921 
1922     # Okay, now we've recursively removed everything, ignoring any "No
1923     # such file or directory" errors, and collecting any other errors.
1924hunk ./src/allmydata/util/fileutil.py 256
1925             raise OSError, "Failed to remove dir for unknown reason."
1926         raise OSError, excs
1927 
1928+def fp_remove(dirfp):
1929+    """
1930+    An idempotent version of shutil.rmtree().  If the dir is already gone,
1931+    do nothing and return without raising an exception.  If this call
1932+    removes the dir, return without raising an exception.  If there is an
1933+    error that prevents removal or if the directory gets created again by
1934+    someone else after this deletes it and before this checks that it is
1935+    gone, raise an exception.
1936+    """
1937+    try:
1938+        dirfp.remove()
1939+    except UnlistableError, e:
1940+        if e.originalException.errno != errno.ENOENT:
1941+            raise
1942+    except OSError, e:
1943+        if e.errno != errno.ENOENT:
1944+            raise
1945+
1946+def rm_dir(dirname):
1947+    # Renamed to be like shutil.rmtree and unlike rmdir.
1948+    return rmtree(dirname)
1949 
1950 def remove_if_possible(f):
1951     try:
1952hunk ./src/allmydata/util/fileutil.py 387
1953         import traceback
1954         traceback.print_exc()
1955 
1956-def get_disk_stats(whichdir, reserved_space=0):
1957+def get_disk_stats(whichdirfp, reserved_space=0):
1958     """Return disk statistics for the storage disk, in the form of a dict
1959     with the following fields.
1960       total:            total bytes on disk
1961hunk ./src/allmydata/util/fileutil.py 408
1962     you can pass how many bytes you would like to leave unused on this
1963     filesystem as reserved_space.
1964     """
1965+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
1966 
1967     if have_GetDiskFreeSpaceExW:
1968         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
1969hunk ./src/allmydata/util/fileutil.py 419
1970         n_free_for_nonroot = c_ulonglong(0)
1971         n_total            = c_ulonglong(0)
1972         n_free_for_root    = c_ulonglong(0)
1973-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
1974+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
1975                                                byref(n_total),
1976                                                byref(n_free_for_root))
1977         if retval == 0:
1978hunk ./src/allmydata/util/fileutil.py 424
1979             raise OSError("Windows error %d attempting to get disk statistics for %r"
1980-                          % (GetLastError(), whichdir))
1981+                          % (GetLastError(), whichdirfp.path))
1982         free_for_nonroot = n_free_for_nonroot.value
1983         total            = n_total.value
1984         free_for_root    = n_free_for_root.value
1985hunk ./src/allmydata/util/fileutil.py 433
1986         # <http://docs.python.org/library/os.html#os.statvfs>
1987         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
1988         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
1989-        s = os.statvfs(whichdir)
1990+        s = os.statvfs(whichdirfp.path)
1991 
1992         # on my mac laptop:
1993         #  statvfs(2) is a wrapper around statfs(2).
1994hunk ./src/allmydata/util/fileutil.py 460
1995              'avail': avail,
1996            }
1997 
1998-def get_available_space(whichdir, reserved_space):
1999+def get_available_space(whichdirfp, reserved_space):
2000     """Returns available space for share storage in bytes, or None if no
2001     API to get this information is available.
2002 
2003hunk ./src/allmydata/util/fileutil.py 472
2004     you can pass how many bytes you would like to leave unused on this
2005     filesystem as reserved_space.
2006     """
2007+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
2008     try:
2009hunk ./src/allmydata/util/fileutil.py 474
2010-        return get_disk_stats(whichdir, reserved_space)['avail']
2011+        return get_disk_stats(whichdirfp, reserved_space)['avail']
2012     except AttributeError:
2013         return None
2014hunk ./src/allmydata/util/fileutil.py 477
2015-    except EnvironmentError:
2016-        log.msg("OS call to get disk statistics failed")
2017-        return 0
2018}
2019[add expirer.py
2020wilcoxjg@gmail.com**20110809203519
2021 Ignore-this: b09d460593f0e0aa065e867d5159455b
2022] {
2023addfile ./src/allmydata/storage/backends/das/expirer.py
2024hunk ./src/allmydata/storage/backends/das/expirer.py 1
2025+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
2026+from allmydata.storage.crawler import ShareCrawler
2027+from allmydata.storage.common import UnknownMutableContainerVersionError, \
2028+     UnknownImmutableContainerVersionError
2029+from twisted.python import log as twlog
2030+
2031+class LeaseCheckingCrawler(ShareCrawler):
2032+    """I examine the leases on all shares, determining which are still valid
2033+    and which have expired. I can remove the expired leases (if so
2034+    configured), and the share will be deleted when the last lease is
2035+    removed.
2036+
2037+    I collect statistics on the leases and make these available to a web
2038+    status page, including:
2039+
2040+    Space recovered during this cycle-so-far:
2041+     actual (only if expiration_enabled=True):
2042+      num-buckets, num-shares, sum of share sizes, real disk usage
2043+      ('real disk usage' means we use stat(fn).st_blocks*512 and include any
2044+       space used by the directory)
2045+     what it would have been with the original lease expiration time
2046+     what it would have been with our configured expiration time
2047+
2048+    Prediction of space that will be recovered during the rest of this cycle
2049+    Prediction of space that will be recovered by the entire current cycle.
2050+
2051+    Space recovered during the last 10 cycles  <-- saved in separate pickle
2052+
2053+    Shares/buckets examined:
2054+     this cycle-so-far
2055+     prediction of rest of cycle
2056+     during last 10 cycles <-- separate pickle
2057+    start/finish time of last 10 cycles  <-- separate pickle
2058+    expiration time used for last 10 cycles <-- separate pickle
2059+
2060+    Histogram of leases-per-share:
2061+     this-cycle-to-date
2062+     last 10 cycles <-- separate pickle
2063+    Histogram of lease ages, buckets = 1day
2064+     cycle-to-date
2065+     last 10 cycles <-- separate pickle
2066+
2067+    All cycle-to-date values remain valid until the start of the next cycle.
2068+
2069+    """
2070+
2071+    slow_start = 360 # wait 6 minutes after startup
2072+    minimum_cycle_time = 12*60*60 # not more than twice per day
2073+
2074+    def __init__(self, statefile, historyfp, expiration_policy):
2075+        self.historyfp = historyfp
2076+        self.expiration_enabled = expiration_policy['enabled']
2077+        self.mode = expiration_policy['mode']
2078+        self.override_lease_duration = None
2079+        self.cutoff_date = None
2080+        if self.mode == "age":
2081+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
2082+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
2083+        elif self.mode == "cutoff-date":
2084+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
2085+            assert expiration_policy['cutoff_date'] is not None
2086+            self.cutoff_date = expiration_policy['cutoff_date']
2087+        else:
2088+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
2089+        self.sharetypes_to_expire = expiration_policy['sharetypes']
2090+        ShareCrawler.__init__(self, statefile)
2091+
2092+    def add_initial_state(self):
2093+        # we fill ["cycle-to-date"] here (even though they will be reset in
2094+        # self.started_cycle) just in case someone grabs our state before we
2095+        # get started: unit tests do this
2096+        so_far = self.create_empty_cycle_dict()
2097+        self.state.setdefault("cycle-to-date", so_far)
2098+        # in case we upgrade the code while a cycle is in progress, update
2099+        # the keys individually
2100+        for k in so_far:
2101+            self.state["cycle-to-date"].setdefault(k, so_far[k])
2102+
2103+        # initialize history
2104+        if not self.historyfp.exists():
2105+            history = {} # cyclenum -> dict
2106+            self.historyfp.setContent(pickle.dumps(history))
2107+
2108+    def create_empty_cycle_dict(self):
2109+        recovered = self.create_empty_recovered_dict()
2110+        so_far = {"corrupt-shares": [],
2111+                  "space-recovered": recovered,
2112+                  "lease-age-histogram": {}, # (minage,maxage)->count
2113+                  "leases-per-share-histogram": {}, # leasecount->numshares
2114+                  }
2115+        return so_far
2116+
2117+    def create_empty_recovered_dict(self):
2118+        recovered = {}
2119+        for a in ("actual", "original", "configured", "examined"):
2120+            for b in ("buckets", "shares", "sharebytes", "diskbytes"):
2121+                recovered[a+"-"+b] = 0
2122+                recovered[a+"-"+b+"-mutable"] = 0
2123+                recovered[a+"-"+b+"-immutable"] = 0
2124+        return recovered
2125+
2126+    def started_cycle(self, cycle):
2127+        self.state["cycle-to-date"] = self.create_empty_cycle_dict()
2128+
2129+    def stat(self, fn):
2130+        return os.stat(fn)
2131+
2132+    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
2133+        bucketdir = os.path.join(prefixdir, storage_index_b32)
2134+        s = self.stat(bucketdir)
2135+        would_keep_shares = []
2136+        wks = None
2137+
2138+        for fn in os.listdir(bucketdir):
2139+            try:
2140+                shnum = int(fn)
2141+            except ValueError:
2142+                continue # non-numeric means not a sharefile
2143+            sharefile = os.path.join(bucketdir, fn)
2144+            try:
2145+                wks = self.process_share(sharefile)
2146+            except (UnknownMutableContainerVersionError,
2147+                    UnknownImmutableContainerVersionError,
2148+                    struct.error):
2149+                twlog.msg("lease-checker error processing %s" % sharefile)
2150+                twlog.err()
2151+                which = (storage_index_b32, shnum)
2152+                self.state["cycle-to-date"]["corrupt-shares"].append(which)
2153+                wks = (1, 1, 1, "unknown")
2154+            would_keep_shares.append(wks)
2155+
2156+        sharetype = None
2157+        if wks:
2158+            # use the last share's sharetype as the buckettype
2159+            sharetype = wks[3]
2160+        rec = self.state["cycle-to-date"]["space-recovered"]
2161+        self.increment(rec, "examined-buckets", 1)
2162+        if sharetype:
2163+            self.increment(rec, "examined-buckets-"+sharetype, 1)
2164+
2165+        try:
2166+            bucket_diskbytes = s.st_blocks * 512
2167+        except AttributeError:
2168+            bucket_diskbytes = 0 # no stat().st_blocks on windows
2169+        if sum([wks[0] for wks in would_keep_shares]) == 0:
2170+            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
2171+        if sum([wks[1] for wks in would_keep_shares]) == 0:
2172+            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
2173+        if sum([wks[2] for wks in would_keep_shares]) == 0:
2174+            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
2175+
2176+    def process_share(self, sharefilename):
2177+        # first, find out what kind of a share it is
2178+        f = open(sharefilename, "rb")
2179+        prefix = f.read(32)
2180+        f.close()
2181+        if prefix == MutableShareFile.MAGIC:
2182+            sf = MutableShareFile(sharefilename)
2183+        else:
2184+            # otherwise assume it's immutable
2185+            sf = FSBShare(sharefilename)
2186+        sharetype = sf.sharetype
2187+        now = time.time()
2188+        s = self.stat(sharefilename)
2189+
2190+        num_leases = 0
2191+        num_valid_leases_original = 0
2192+        num_valid_leases_configured = 0
2193+        expired_leases_configured = []
2194+
2195+        for li in sf.get_leases():
2196+            num_leases += 1
2197+            original_expiration_time = li.get_expiration_time()
2198+            grant_renew_time = li.get_grant_renew_time_time()
2199+            age = li.get_age()
2200+            self.add_lease_age_to_histogram(age)
2201+
2202+            #  expired-or-not according to original expiration time
2203+            if original_expiration_time > now:
2204+                num_valid_leases_original += 1
2205+
2206+            #  expired-or-not according to our configured age limit
2207+            expired = False
2208+            if self.mode == "age":
2209+                age_limit = original_expiration_time
2210+                if self.override_lease_duration is not None:
2211+                    age_limit = self.override_lease_duration
2212+                if age > age_limit:
2213+                    expired = True
2214+            else:
2215+                assert self.mode == "cutoff-date"
2216+                if grant_renew_time < self.cutoff_date:
2217+                    expired = True
2218+            if sharetype not in self.sharetypes_to_expire:
2219+                expired = False
2220+
2221+            if expired:
2222+                expired_leases_configured.append(li)
2223+            else:
2224+                num_valid_leases_configured += 1
2225+
2226+        so_far = self.state["cycle-to-date"]
2227+        self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
2228+        self.increment_space("examined", s, sharetype)
2229+
2230+        would_keep_share = [1, 1, 1, sharetype]
2231+
2232+        if self.expiration_enabled:
2233+            for li in expired_leases_configured:
2234+                sf.cancel_lease(li.cancel_secret)
2235+
2236+        if num_valid_leases_original == 0:
2237+            would_keep_share[0] = 0
2238+            self.increment_space("original", s, sharetype)
2239+
2240+        if num_valid_leases_configured == 0:
2241+            would_keep_share[1] = 0
2242+            self.increment_space("configured", s, sharetype)
2243+            if self.expiration_enabled:
2244+                would_keep_share[2] = 0
2245+                self.increment_space("actual", s, sharetype)
2246+
2247+        return would_keep_share
2248+
2249+    def increment_space(self, a, s, sharetype):
2250+        sharebytes = s.st_size
2251+        try:
2252+            # note that stat(2) says that st_blocks is 512 bytes, and that
2253+            # st_blksize is "optimal file sys I/O ops blocksize", which is
2254+            # independent of the block-size that st_blocks uses.
2255+            diskbytes = s.st_blocks * 512
2256+        except AttributeError:
2257+            # the docs say that st_blocks is only on linux. I also see it on
2258+            # MacOS. But it isn't available on windows.
2259+            diskbytes = sharebytes
2260+        so_far_sr = self.state["cycle-to-date"]["space-recovered"]
2261+        self.increment(so_far_sr, a+"-shares", 1)
2262+        self.increment(so_far_sr, a+"-sharebytes", sharebytes)
2263+        self.increment(so_far_sr, a+"-diskbytes", diskbytes)
2264+        if sharetype:
2265+            self.increment(so_far_sr, a+"-shares-"+sharetype, 1)
2266+            self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
2267+            self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
2268+
2269+    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
2270+        rec = self.state["cycle-to-date"]["space-recovered"]
2271+        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
2272+        self.increment(rec, a+"-buckets", 1)
2273+        if sharetype:
2274+            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
2275+            self.increment(rec, a+"-buckets-"+sharetype, 1)
2276+
2277+    def increment(self, d, k, delta=1):
2278+        if k not in d:
2279+            d[k] = 0
2280+        d[k] += delta
2281+
2282+    def add_lease_age_to_histogram(self, age):
2283+        bucket_interval = 24*60*60
2284+        bucket_number = int(age/bucket_interval)
2285+        bucket_start = bucket_number * bucket_interval
2286+        bucket_end = bucket_start + bucket_interval
2287+        k = (bucket_start, bucket_end)
2288+        self.increment(self.state["cycle-to-date"]["lease-age-histogram"], k, 1)
2289+
2290+    def convert_lease_age_histogram(self, lah):
2291+        # convert { (minage,maxage) : count } into [ (minage,maxage,count) ]
2292+        # since the former is not JSON-safe (JSON dictionaries must have
2293+        # string keys).
2294+        json_safe_lah = []
2295+        for k in sorted(lah):
2296+            (minage,maxage) = k
2297+            json_safe_lah.append( (minage, maxage, lah[k]) )
2298+        return json_safe_lah
2299+
2300+    def finished_cycle(self, cycle):
2301+        # add to our history state, prune old history
2302+        h = {}
2303+
2304+        start = self.state["current-cycle-start-time"]
2305+        now = time.time()
2306+        h["cycle-start-finish-times"] = (start, now)
2307+        h["expiration-enabled"] = self.expiration_enabled
2308+        h["configured-expiration-mode"] = (self.mode,
2309+                                           self.override_lease_duration,
2310+                                           self.cutoff_date,
2311+                                           self.sharetypes_to_expire)
2312+
2313+        s = self.state["cycle-to-date"]
2314+
2315+        # state["lease-age-histogram"] is a dictionary (mapping
2316+        # (minage,maxage) tuple to a sharecount), but we report
2317+        # self.get_state()["lease-age-histogram"] as a list of
2318+        # (min,max,sharecount) tuples, because JSON can handle that better.
2319+        # We record the list-of-tuples form into the history for the same
2320+        # reason.
2321+        lah = self.convert_lease_age_histogram(s["lease-age-histogram"])
2322+        h["lease-age-histogram"] = lah
2323+        h["leases-per-share-histogram"] = s["leases-per-share-histogram"].copy()
2324+        h["corrupt-shares"] = s["corrupt-shares"][:]
2325+        # note: if ["shares-recovered"] ever acquires an internal dict, this
2326+        # copy() needs to become a deepcopy
2327+        h["space-recovered"] = s["space-recovered"].copy()
2328+
2329+        history = pickle.loads(self.historyfp.getContent())
2330+        history[cycle] = h
2331+        while len(history) > 10:
2332+            oldcycles = sorted(history.keys())
2333+            del history[oldcycles[0]]
2334+        self.historyfp.setContent(pickle.dumps(history))
2335+
2336+    def get_state(self):
2337+        """In addition to the crawler state described in
2338+        ShareCrawler.get_state(), I return the following keys which are
2339+        specific to the lease-checker/expirer. Note that the non-history keys
2340+        (with 'cycle' in their names) are only present if a cycle is
2341+        currently running. If the crawler is between cycles, it is appropriate
2342+        to show the latest item in the 'history' key instead. Also note that
2343+        each history item has all the data in the 'cycle-to-date' value, plus
2344+        cycle-start-finish-times.
2345+
2346+         cycle-to-date:
2347+          expiration-enabled
2348+          configured-expiration-mode
2349+          lease-age-histogram (list of (minage,maxage,sharecount) tuples)
2350+          leases-per-share-histogram
2351+          corrupt-shares (list of (si_b32,shnum) tuples, minimal verification)
2352+          space-recovered
2353+
2354+         estimated-remaining-cycle:
2355+          # Values may be None if not enough data has been gathered to
2356+          # produce an estimate.
2357+          space-recovered
2358+
2359+         estimated-current-cycle:
2360+          # cycle-to-date plus estimated-remaining. Values may be None if
2361+          # not enough data has been gathered to produce an estimate.
2362+          space-recovered
2363+
2364+         history: maps cyclenum to a dict with the following keys:
2365+          cycle-start-finish-times
2366+          expiration-enabled
2367+          configured-expiration-mode
2368+          lease-age-histogram
2369+          leases-per-share-histogram
2370+          corrupt-shares
2371+          space-recovered
2372+
2373+         The 'space-recovered' structure is a dictionary with the following
2374+         keys:
2375+          # 'examined' is what was looked at
2376+          examined-buckets, examined-buckets-mutable, examined-buckets-immutable
2377+          examined-shares, -mutable, -immutable
2378+          examined-sharebytes, -mutable, -immutable
2379+          examined-diskbytes, -mutable, -immutable
2380+
2381+          # 'actual' is what was actually deleted
2382+          actual-buckets, -mutable, -immutable
2383+          actual-shares, -mutable, -immutable
2384+          actual-sharebytes, -mutable, -immutable
2385+          actual-diskbytes, -mutable, -immutable
2386+
2387+          # would have been deleted, if the original lease timer was used
2388+          original-buckets, -mutable, -immutable
2389+          original-shares, -mutable, -immutable
2390+          original-sharebytes, -mutable, -immutable
2391+          original-diskbytes, -mutable, -immutable
2392+
2393+          # would have been deleted, if our configured max_age was used
2394+          configured-buckets, -mutable, -immutable
2395+          configured-shares, -mutable, -immutable
2396+          configured-sharebytes, -mutable, -immutable
2397+          configured-diskbytes, -mutable, -immutable
2398+
2399+        """
2400+        progress = self.get_progress()
2401+
2402+        state = ShareCrawler.get_state(self) # does a shallow copy
2403+        history = pickle.loads(self.historyfp.getContent())
2404+        state["history"] = history
2405+
2406+        if not progress["cycle-in-progress"]:
2407+            del state["cycle-to-date"]
2408+            return state
2409+
2410+        so_far = state["cycle-to-date"].copy()
2411+        state["cycle-to-date"] = so_far
2412+
2413+        lah = so_far["lease-age-histogram"]
2414+        so_far["lease-age-histogram"] = self.convert_lease_age_histogram(lah)
2415+        so_far["expiration-enabled"] = self.expiration_enabled
2416+        so_far["configured-expiration-mode"] = (self.mode,
2417+                                                self.override_lease_duration,
2418+                                                self.cutoff_date,
2419+                                                self.sharetypes_to_expire)
2420+
2421+        so_far_sr = so_far["space-recovered"]
2422+        remaining_sr = {}
2423+        remaining = {"space-recovered": remaining_sr}
2424+        cycle_sr = {}
2425+        cycle = {"space-recovered": cycle_sr}
2426+
2427+        if progress["cycle-complete-percentage"] > 0.0:
2428+            pc = progress["cycle-complete-percentage"] / 100.0
2429+            m = (1-pc)/pc
2430+            for a in ("actual", "original", "configured", "examined"):
2431+                for b in ("buckets", "shares", "sharebytes", "diskbytes"):
2432+                    for c in ("", "-mutable", "-immutable"):
2433+                        k = a+"-"+b+c
2434+                        remaining_sr[k] = m * so_far_sr[k]
2435+                        cycle_sr[k] = so_far_sr[k] + remaining_sr[k]
2436+        else:
2437+            for a in ("actual", "original", "configured", "examined"):
2438+                for b in ("buckets", "shares", "sharebytes", "diskbytes"):
2439+                    for c in ("", "-mutable", "-immutable"):
2440+                        k = a+"-"+b+c
2441+                        remaining_sr[k] = None
2442+                        cycle_sr[k] = None
2443+
2444+        state["estimated-remaining-cycle"] = remaining
2445+        state["estimated-current-cycle"] = cycle
2446+        return state
2447}
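The expirer record above builds its lease-age histogram in one-day buckets keyed by `(minage, maxage)` tuples, then flattens them to `(minage, maxage, count)` triples for the web status page because JSON objects cannot have tuple keys. A self-contained sketch of that pair of helpers (mirroring `add_lease_age_to_histogram` and `convert_lease_age_histogram`, with the crawler state stripped away):

```python
DAY = 24*60*60  # one histogram bucket = one day, as in LeaseCheckingCrawler

def add_lease_age(histogram, age, bucket_interval=DAY):
    # map an age in seconds to its (bucket_start, bucket_end) key
    bucket_number = int(age / bucket_interval)
    bucket_start = bucket_number * bucket_interval
    k = (bucket_start, bucket_start + bucket_interval)
    histogram[k] = histogram.get(k, 0) + 1

def to_json_safe(histogram):
    # { (minage,maxage): count } -> [ (minage, maxage, count) ],
    # sorted by bucket, since JSON dictionaries need string keys
    return [(minage, maxage, histogram[(minage, maxage)])
            for (minage, maxage) in sorted(histogram)]
```

Two leases aged a few seconds land in the `(0, 86400)` bucket; one aged just over a day lands in `(86400, 172800)`, and `to_json_safe` reports both rows in order.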
2448[Changes I have made that aren't necessary for the test_backends.py suite to pass.
2449wilcoxjg@gmail.com**20110809203811
2450 Ignore-this: 117d49047456013f382ffc0559f00c40
2451] {
2452hunk ./src/allmydata/storage/crawler.py 1
2453-
2454 import os, time, struct
2455 import cPickle as pickle
2456 from twisted.internet import reactor
2457hunk ./src/allmydata/storage/crawler.py 6
2458 from twisted.application import service
2459 from allmydata.storage.common import si_b2a
2460-from allmydata.util import fileutil
2461 
2462 class TimeSliceExceeded(Exception):
2463     pass
2464hunk ./src/allmydata/storage/crawler.py 11
2465 
2466 class ShareCrawler(service.MultiService):
2467-    """A ShareCrawler subclass is attached to a StorageServer, and
2468+    """A subclass of ShareCrawler is attached to a StorageServer, and
2469     periodically walks all of its shares, processing each one in some
2470     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
2471     since large servers can easily have a terabyte of shares, in several
2472hunk ./src/allmydata/storage/crawler.py 29
2473     We assume that the normal upload/download/get_buckets traffic of a tahoe
2474     grid will cause the prefixdir contents to be mostly cached in the kernel,
2475     or that the number of buckets in each prefixdir will be small enough to
2476-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
2477+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
2478     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
2479     prefix. On this server, each prefixdir took 130ms-200ms to list the first
2480     time, and 17ms to list the second time.
2481hunk ./src/allmydata/storage/crawler.py 66
2482     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
2483     minimum_cycle_time = 300 # don't run a cycle faster than this
2484 
2485-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
2486+    def __init__(self, statefp, allowed_cpu_percentage=None):
2487         service.MultiService.__init__(self)
2488         if allowed_cpu_percentage is not None:
2489             self.allowed_cpu_percentage = allowed_cpu_percentage
2490hunk ./src/allmydata/storage/crawler.py 70
2491-        self.server = server
2492-        self.sharedir = server.sharedir
2493-        self.statefile = statefile
2494+        self.statefp = statefp
2495         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
2496                          for i in range(2**10)]
2497         self.prefixes.sort()
2498hunk ./src/allmydata/storage/crawler.py 190
2499         #                            of the last bucket to be processed, or
2500         #                            None if we are sleeping between cycles
2501         try:
2502-            f = open(self.statefile, "rb")
2503-            state = pickle.load(f)
2504-            f.close()
2505+            state = pickle.loads(self.statefp.getContent())
2506         except EnvironmentError:
2507             state = {"version": 1,
2508                      "last-cycle-finished": None,
2509hunk ./src/allmydata/storage/crawler.py 226
2510         else:
2511             last_complete_prefix = self.prefixes[lcpi]
2512         self.state["last-complete-prefix"] = last_complete_prefix
2513-        tmpfile = self.statefile + ".tmp"
2514-        f = open(tmpfile, "wb")
2515-        pickle.dump(self.state, f)
2516-        f.close()
2517-        fileutil.move_into_place(tmpfile, self.statefile)
2518+        self.statefp.setContent(pickle.dumps(self.state))
2519 
2520     def startService(self):
2521         # arrange things to look like we were just sleeping, so
2522hunk ./src/allmydata/storage/crawler.py 438
2523 
2524     minimum_cycle_time = 60*60 # we don't need this more than once an hour
2525 
2526-    def __init__(self, server, statefile, num_sample_prefixes=1):
2527-        ShareCrawler.__init__(self, server, statefile)
2528+    def __init__(self, statefp, num_sample_prefixes=1):
2529+        ShareCrawler.__init__(self, statefp)
2530         self.num_sample_prefixes = num_sample_prefixes
2531 
2532     def add_initial_state(self):
2533hunk ./src/allmydata/storage/crawler.py 478
2534             old_cycle,buckets = self.state["storage-index-samples"][prefix]
2535             if old_cycle != cycle:
2536                 del self.state["storage-index-samples"][prefix]
2537-
2538hunk ./src/allmydata/storage/lease.py 17
2539 
2540     def get_expiration_time(self):
2541         return self.expiration_time
2542+
2543     def get_grant_renew_time_time(self):
2544         # hack, based upon fixed 31day expiration period
2545         return self.expiration_time - 31*24*60*60
2546hunk ./src/allmydata/storage/lease.py 21
2547+
2548     def get_age(self):
2549         return time.time() - self.get_grant_renew_time_time()
2550 
2551hunk ./src/allmydata/storage/lease.py 32
2552          self.expiration_time) = struct.unpack(">L32s32sL", data)
2553         self.nodeid = None
2554         return self
2555+
2556     def to_immutable_data(self):
2557         return struct.pack(">L32s32sL",
2558                            self.owner_num,
2559hunk ./src/allmydata/storage/lease.py 45
2560                            int(self.expiration_time),
2561                            self.renew_secret, self.cancel_secret,
2562                            self.nodeid)
2563+
2564     def from_mutable_data(self, data):
2565         (self.owner_num,
2566          self.expiration_time,
2567}
2568[add __init__.py to backend and core and null
2569wilcoxjg@gmail.com**20110810033751
2570 Ignore-this: 1c72bc54951033ab433c38de58bdc39c
2571] {
2572addfile ./src/allmydata/storage/backends/__init__.py
2573addfile ./src/allmydata/storage/backends/null/__init__.py
2574}
2575[whitespace-cleanup
2576wilcoxjg@gmail.com**20110810170847
2577 Ignore-this: 7a278e7c87c6fcd2e5ed783667c8b746
2578] {
2579hunk ./src/allmydata/interfaces.py 1
2580-
2581 from zope.interface import Interface
2582 from foolscap.api import StringConstraint, ListOf, TupleOf, SetOf, DictOf, \
2583      ChoiceOf, IntegerConstraint, Any, RemoteInterface, Referenceable
2584hunk ./src/allmydata/storage/backends/das/core.py 47
2585         self._setup_lease_checkerf(expiration_policy)
2586 
2587     def _setup_storage(self, storedir, readonly, reserved_space):
2588-        precondition(isinstance(storedir, FilePath), storedir, FilePath) 
2589+        precondition(isinstance(storedir, FilePath), storedir, FilePath)
2590         self.storedir = storedir
2591         self.readonly = readonly
2592         self.reserved_space = int(reserved_space)
2593hunk ./src/allmydata/storage/backends/das/core.py 89
2594         except UnlistableError:
2595             # There is no shares directory at all.
2596             return frozenset()
2597-           
2598+
2599     def get_shares(self, storageindex):
2600         """ Generate ImmutableShare objects for shares we have for this
2601         storageindex. ("Shares we have" means completed ones, excluding
2602hunk ./src/allmydata/storage/backends/das/core.py 104
2603         except UnlistableError:
2604             # There is no shares directory at all.
2605             pass
2606-       
2607+
2608     def get_available_space(self):
2609         if self.readonly:
2610             return 0
2611hunk ./src/allmydata/storage/backends/das/core.py 122
2612 
2613     def set_storage_server(self, ss):
2614         self.ss = ss
2615-       
2616+
2617 
2618 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2619 # and share data. The share data is accessed by RIBucketWriter.write and
2620hunk ./src/allmydata/storage/backends/das/core.py 223
2621             # above-mentioned conditions.
2622             pass
2623         pass
2624-       
2625+
2626     def stat(self):
2627         return filepath.stat(self.finalhome.path)[stat.ST_SIZE]
2628 
2629hunk ./src/allmydata/storage/backends/das/core.py 309
2630         num_leases = self._read_num_leases(self.incominghome)
2631         self._write_lease_record(self.incominghome, num_leases, lease_info)
2632         self._write_num_leases(self.incominghome, num_leases+1)
2633-       
2634+
2635     def renew_lease(self, renew_secret, new_expire_time):
2636         for i,lease in enumerate(self.get_leases()):
2637             if constant_time_compare(lease.renew_secret, renew_secret):
2638hunk ./src/allmydata/storage/common.py 1
2639-
2640 import os.path
2641 from allmydata.util import base32
2642 
2643hunk ./src/allmydata/storage/server.py 149
2644 
2645         if self.readonly_storage:
2646             return 0
2647-        return fileutil.get_available_space(self.storedir, self.reserved_space)
2648+        return fileutil.get_available_space(self.sharedir, self.reserved_space)
2649 
2650     def allocated_size(self):
2651         space = 0
2652hunk ./src/allmydata/storage/server.py 346
2653         self.count("writev")
2654         si_s = si_b2a(storageindex)
2655         log.msg("storage: slot_writev %s" % si_s)
2656-       
2657+
2658         (write_enabler, renew_secret, cancel_secret) = secrets
2659         # shares exist if there is a file for them
2660         bucketdir = si_si2dir(self.sharedir, storageindex)
2661}
2662[das/__init__.py
2663wilcoxjg@gmail.com**20110810173849
2664 Ignore-this: bdb730cba1d53d8827ef5fef65958471
2665] addfile ./src/allmydata/storage/backends/das/__init__.py
2666
2667Context:
2668
2669[test_client.py: relax a check in test_create_drop_uploader so that it should pass on Python 2.4.x. refs #1429
2670david-sarah@jacaranda.org**20110810052504
2671 Ignore-this: 1380749ceaf33c30e26c50d57476616c
2672]
2673[test/common_util.py: correct fix to mkdir_nonascii. refs #1472
2674david-sarah@jacaranda.org**20110810051906
2675 Ignore-this: 93c0c33370bc47d95c26c4cce8e05290
2676]
2677[test/common_util.py: fix a typo. refs #1472
2678david-sarah@jacaranda.org**20110810044235
2679 Ignore-this: f88643d7c82cb3577686d77bbff9e2bc
2680]
2681[test_client.py, test_drop_upload.py: fix pyflakes warnings.
2682david-sarah@jacaranda.org**20110810034505
2683 Ignore-this: 1e2d71bf2f43d63cbb423d32a6f96793
2684]
2685[Factor out methods dealing with non-ASCII directories and filenames from test_drop_upload.py into common_util.py. refs #1429, #1472
2686david-sarah@jacaranda.org**20110810031558
2687 Ignore-this: 3de8f945fa7a58fc318a1184bad0fd1a
2688]
2689[test_client.py: add a test that the drop-uploader is initialized correctly by client.py. Also give the DropUploader service a name, which is necessary for the test. refs #1429
2690david-sarah@jacaranda.org**20110810030538
2691 Ignore-this: 13d511ea9bbe9da2dcffe4a91ce94eae
2692]
2693[drop-upload: rename 'start' method to 'startService', which is what you're supposed to use to start a Service. refs #1429
2694david-sarah@jacaranda.org**20110810030345
2695 Ignore-this: d1f5e5c63937ea37be37324e2f1ae99d
2696]
2697[test_drop_upload.py: add comment explaining why we don't use FilePath.setContent. refs #1429
2698david-sarah@jacaranda.org**20110810025942
2699 Ignore-this: b95358030b63cb467d1d7f1b9a9b6978
2700]
2701[test_drop_upload.py: fix some grammatical and spelling nits. refs #1429
2702david-sarah@jacaranda.org**20110809221231
2703 Ignore-this: fd331acddd9f754173f274a34fe62f03
2704]
2705[drop-upload: report the configured local directory being absent differently from it being a file
2706zooko@zooko.com**20110809220930
2707 Ignore-this: a08879100f5f20e609be3f0ffa3b25cc
2708 refs #1429
2709]
2710[drop-upload: rename the 'upload.uri' parameter to 'upload.dircap', and a couple of cleanups to error messages. refs #1429
2711zooko@zooko.com**20110809220508
2712 Ignore-this: 4846368cbe331e8653bdce1f314e276b
2713 I rerecorded this patch, originally by David-Sarah, to use "darcs replace" instead of editing to do the renames. This uncovered one missed rename in Client.init_drop_uploader. (Which also means that code isn't exercised by the current unit tests.)
2714 refs #1429
2715]
2716[drop-upload test for non-existent local dir separately from test for non-directory local dir
2717zooko@zooko.com**20110809220115
2718 Ignore-this: cd85f345c02f5cb71b1c1527bd4ebddc
2719 A candidate patch for #1429 has a bug when it is using FilePath.is_dir() to detect whether the configured local dir exists and is a directory. FilePath.is_dir() raises exception, instead of returning False, if the thing doesn't exist. This test is to make sure that DropUploader.__init__ raise different exceptions for those two cases.
 refs #1429
]
[drop-upload: unit tests for the configuration options being named "cap" instead of "uri"
zooko@zooko.com**20110809215913
 Ignore-this: 958c78fffb3d76b3e4817647f824e7f9
 This is a subset of a patch that David-Sarah attached to #1429. This is just the unit-tests part of that patch, and uses darcs record instead of hunks to change the names.
 refs #1429
]
[src/allmydata/storage/server.py: use the filesystem of storage/shares/, rather than storage/, to calculate remaining space. fixes #1384
david-sarah@jacaranda.org**20110719022752
 Ignore-this: a4781043cfd453dbb66ae4f108d80bea
]
[test_storage.py: test that we are using the filesystem of storage/shares/, rather than storage/, to calculate remaining space, and that the HTML status output reflects the values returned by fileutil.get_disk_stats. This version works with older versions of the mock library. refs #1384
david-sarah@jacaranda.org**20110809190722
 Ignore-this: db447caca37a459ca49563efa58db58c
]
[Work around ref #1472 by having test_drop_upload delete the non-ASCII directories it creates.
david-sarah@jacaranda.org**20110809012334
 Ignore-this: 5881fd5db419ba8ad12e0b2a82f6c4f0
]
[Remove all trailing whitespace from .py files.
david-sarah@jacaranda.org**20110809001117
 Ignore-this: d2658b5ce44af70cc606ae4d3085b7cc
]
[test_drop_upload.py: fix unused imports. refs #1429
david-sarah@jacaranda.org**20110808235422
 Ignore-this: 834f6b946bfea699d7d8c743edd66671
]
[Documentation for drop-upload frontend. refs #1429
david-sarah@jacaranda.org**20110808182146
 Ignore-this: b33110834e586c0b784d1736c2af5779
]
[Drop-upload frontend, rerecorded for 1.9 beta (and correcting a minor mistake). Includes some fixes for Windows but not the Windows inotify implementation. fixes #1429
david-sarah@jacaranda.org**20110808234049
 Ignore-this: 67f824c7f554e9a3a85f9fd2e1123d97
]
[node.py: ensure that client and introducer nodes record their port number and use that port on the next restart, fixing a regression caused by #1385. fixes #1469.
david-sarah@jacaranda.org**20110806221934
 Ignore-this: 1aa9d340b6570320ab2f9edc89c9e0a8
]
[test_runner.py: fix a race condition in the test when NODE_URL_FILE is written before PORTNUM_FILE. refs #1469
david-sarah@jacaranda.org**20110806231842
 Ignore-this: ab01ae7cec3a073e29eec473e64052a0
]
[test_runner.py: cleanups of HOTLINE_FILE writing and removal.
david-sarah@jacaranda.org**20110806231652
 Ignore-this: 25f5c5d6f5d8faebb26a4ce80110a335
]
[test_runner.py: remove an unused constant.
david-sarah@jacaranda.org**20110806221416
 Ignore-this: eade2695cbabbea9cafeaa8debe410bb
]
[node.py: fix the error path for a missing config option so that it works for a Unicode base directory.
david-sarah@jacaranda.org**20110806221007
 Ignore-this: 4eb9cc04b2ce05182a274a0d69dafaf3
]
[test_runner.py: test that client and introducer nodes record their port number and use that port on the next restart. This tests for a regression caused by ref #1385.
david-sarah@jacaranda.org**20110806220635
 Ignore-this: 40a0c040b142dbddd47e69b3c3712f5
]
[test_runner.py: fix a bug in CreateNode.do_create introduced in changeset [5114] when the tahoe.cfg file has been written with CRLF line endings. refs #1385
david-sarah@jacaranda.org**20110804003032
 Ignore-this: 7b7afdcf99da6671afac2d42828883eb
]
[test_client.py: repair Basic.test_error_on_old_config_files. refs #1385
david-sarah@jacaranda.org**20110803235036
 Ignore-this: 31e2a9c3febe55948de7e144353663e
]
[test_checker.py: increase timeout for TooParallel.test_immutable again. The ARM buildslave took 38 seconds, so 40 seconds is too close to the edge; make it 80.
david-sarah@jacaranda.org**20110803214042
 Ignore-this: 2d8026a6b25534e01738f78d6c7495cb
]
[test_runner.py: fix RunNode.test_introducer to not rely on the mtime of introducer.furl to detect when the node has restarted. Instead we detect when node.url has been written. refs #1385
david-sarah@jacaranda.org**20110803180917
 Ignore-this: 11ddc43b107beca42cb78af88c5c394c
]
[Further improve error message about old config files. refs #1385
david-sarah@jacaranda.org**20110803174546
 Ignore-this: 9d6cc3c288d9863dce58faafb3855917
]
[Slightly improve error message about old config files (avoid unnecessary Unicode escaping). refs #1385
david-sarah@jacaranda.org**20110803163848
 Ignore-this: a3e3930fba7ccf90b8db3d2ed5829df4
]
[test_checker.py: increase timeout for TooParallel.test_immutable (was consistently failing on ARM buildslave).
david-sarah@jacaranda.org**20110803163213
 Ignore-this: d0efceaf12628e8791862b80c85b5d56
]
[Fix the bug that prevents an introducer from starting when introducer.furl already exists. Also remove some dead code that used to read old config files, and rename 'warn_about_old_config_files' to reflect that it's not a warning. refs #1385
david-sarah@jacaranda.org**20110803013212
 Ignore-this: 2d6cd14bd06a7493b26f2027aff78f4d
]
[test_runner.py: modify RunNode.test_introducer to test that starting an introducer works when the introducer.furl file already exists. refs #1385
david-sarah@jacaranda.org**20110803012704
 Ignore-this: 8cf7f27ac4bfbb5ad8ca4a974106d437
]
[verifier: correct a bug introduced in changeset [5106] that caused us to only verify the first block of a file. refs #1395
david-sarah@jacaranda.org**20110802172437
 Ignore-this: 87fb77854a839ff217dce73544775b11
]
[test_repairer: add a deterministic test of share data corruption that always flips the bits of the last byte of the share data. refs #1395
david-sarah@jacaranda.org**20110802175841
 Ignore-this: 72f54603785007e88220c8d979e08be7
]
[verifier: serialize the fetching of blocks within a share so that we don't use too much RAM
zooko@zooko.com**20110802063703
 Ignore-this: debd9bac07dcbb6803f835a9e2eabaa1
 
 Shares are still verified in parallel, but within a share, don't request a
 block until the previous block has been verified and the memory we used to hold
 it has been freed up.
 
 Patch originally due to Brian. This version has a mockery-patchery-style test
 which is "low tech" (it implements the patching inline in the test code instead
 of using an extension of the mock.patch() function from the mock library) and
 which unpatches in case of exception.
 
 fixes #1395
]
[add docs about timing-channel attacks
Brian Warner <warner@lothar.com>**20110802044541
 Ignore-this: 73114d5f5ed9ce252597b707dba3a194
]
['test-coverage' now needs PYTHONPATH=. to find TOP/twisted/plugins/
Brian Warner <warner@lothar.com>**20110802041952
 Ignore-this: d40f1f4cb426ea1c362fc961baedde2
]
[remove nodeid from WriteBucketProxy classes and customers
warner@lothar.com**20110801224317
 Ignore-this: e55334bb0095de11711eeb3af827e8e8
 refs #1363
]
[remove get_serverid() from ReadBucketProxy and customers, including Checker
warner@lothar.com**20110801224307
 Ignore-this: 837aba457bc853e4fd413ab1a94519cb
 and debug.py dump-share commands
 refs #1363
]
[reject old-style (pre-Tahoe-LAFS-v1.3) configuration files
zooko@zooko.com**20110801232423
 Ignore-this: b58218fcc064cc75ad8f05ed0c38902b
 Check for the existence of any of them, and if any are found, raise an exception which will abort the startup of the node.
 This is a backwards-incompatible change for anyone who is still using old-style configuration files.
 fixes #1385
]
[whitespace-cleanup
zooko@zooko.com**20110725015546
 Ignore-this: 442970d0545183b97adc7bd66657876c
]
[tests: use fileutil.write() instead of open() to ensure timely close even without CPython-style reference counting
zooko@zooko.com**20110331145427
 Ignore-this: 75aae4ab8e5fa0ad698f998aaa1888ce
 Some of these already had an explicit close() but I went ahead and replaced them with fileutil.write() as well for the sake of uniformity.
]
[Address Kevan's comment in #776 about Options classes missed when adding 'self.command_name'. refs #776, #1359
david-sarah@jacaranda.org**20110801221317
 Ignore-this: 8881d42cf7e6a1d15468291b0cb8fab9
]
[docs/frontends/webapi.rst: change some more instances of 'delete' or 'remove' to 'unlink', change some section titles, and use two blank lines between all sections. refs #776, #1104
david-sarah@jacaranda.org**20110801220919
 Ignore-this: 572327591137bb05c24c44812d4b163f
]
[cleanup: implement rm as a synonym for unlink rather than vice-versa. refs #776
david-sarah@jacaranda.org**20110801220108
 Ignore-this: 598dcbed870f4f6bb9df62de9111b343
]
[docs/webapi.rst: address Kevan's comments about use of 'delete' on ref #1104
david-sarah@jacaranda.org**20110801205356
 Ignore-this: 4fbf03864934753c951ddeff64392491
]
[docs: some changes of 'delete' or 'rm' to 'unlink'. refs #1104
david-sarah@jacaranda.org**20110713002722
 Ignore-this: 304d2a330d5e6e77d5f1feed7814b21c
]
[WUI: change the label of the button to unlink a file from 'del' to 'unlink'. Also change some internal names to 'unlink', and allow 't=unlink' as a synonym for 't=delete' in the web-API interface. Incidentally, improve a test to check for the rename button as well as the unlink button. fixes #1104
david-sarah@jacaranda.org**20110713001218
 Ignore-this: 3eef6b3f81b94a9c0020a38eb20aa069
]
[src/allmydata/web/filenode.py: delete a stale comment that was made incorrect by changeset [3133].
david-sarah@jacaranda.org**20110801203009
 Ignore-this: b3912e95a874647027efdc97822dd10e
]
[fix typo introduced during rebasing of 'remove get_serverid from
Brian Warner <warner@lothar.com>**20110801200341
 Ignore-this: 4235b0f585c0533892193941dbbd89a8
 DownloadStatus.add_dyhb_request and customers' patch, to fix test failure.
]
[remove get_serverid from DownloadStatus.add_dyhb_request and customers
zooko@zooko.com**20110801185401
 Ignore-this: db188c18566d2d0ab39a80c9dc8f6be6
 This patch is a rebase of a patch originally written by Brian. I didn't change any of the intent of Brian's patch, just ported it to current trunk.
 refs #1363
]
[remove get_serverid from DownloadStatus.add_block_request and customers
zooko@zooko.com**20110801185344
 Ignore-this: 8bfa8201d6147f69b0fbe31beea9c1e
 This is a rebase of a patch Brian originally wrote. I haven't changed the intent of that patch, just ported it to trunk.
 refs #1363
]
[apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts
warner@lothar.com**20110801174452
 Ignore-this: 2aa13ea6cbed4e9084bd604bf8633692
 refs #1363
]
[test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s
warner@lothar.com**20110801174444
 Ignore-this: 54f30b5d7461d2b3514e2a0172f3a98c
 remove now-unused ShareManglingMixin
 refs #1363
]
[DownloadStatus.add_known_share wants to be used by Finder, web.status
warner@lothar.com**20110801174436
 Ignore-this: 1433bcd73099a579abe449f697f35f9
 refs #1363
]
[replace IServer.name() with get_name(), and get_longname()
warner@lothar.com**20110801174428
 Ignore-this: e5a6f7f6687fd7732ddf41cfdd7c491b
 
 This patch was originally written by Brian, but was re-recorded by Zooko to use
 darcs replace instead of hunks for any file in which it would result in fewer
 total hunks.
 refs #1363
]
[upload.py: apply David-Sarah's advice rename (un)contacted(2) trackers to first_pass/second_pass/next_pass
zooko@zooko.com**20110801174143
 Ignore-this: e36e1420bba0620a0107bd90032a5198
 This patch was written by Brian but was re-recorded by Zooko (with David-Sarah looking on) to use darcs replace instead of editing to rename the three variables to their new names.
 refs #1363
]
[Coalesce multiple Share.loop() calls, make downloads faster. Closes #1268.
Brian Warner <warner@lothar.com>**20110801151834
 Ignore-this: 48530fce36c01c0ff708f61c2de7e67a
]
[src/allmydata/_auto_deps.py: 'i686' is another way of spelling x86.
david-sarah@jacaranda.org**20110801034035
 Ignore-this: 6971e0621db2fba794d86395b4d51038
]
[tahoe_rm.py: better error message when there is no path. refs #1292
david-sarah@jacaranda.org**20110122064212
 Ignore-this: ff3bb2c9f376250e5fd77eb009e09018
]
[test_cli.py: Test for error message when 'tahoe rm' is invoked without a path. refs #1292
david-sarah@jacaranda.org**20110104105108
 Ignore-this: 29ec2f2e0251e446db96db002ad5dd7d
]
[src/allmydata/__init__.py: suppress a spurious warning from 'bin/tahoe --version[-and-path]' about twisted-web and twisted-core packages.
david-sarah@jacaranda.org**20110801005209
 Ignore-this: 50e7cd53cca57b1870d9df0361c7c709
]
[test_cli.py: use to_str on fields loaded using simplejson.loads in new tests. refs #1304
david-sarah@jacaranda.org**20110730032521
 Ignore-this: d1d6dfaefd1b4e733181bf127c79c00b
]
[cli: make 'tahoe cp' overwrite mutable files in-place
Kevan Carstensen <kevan@isnotajoke.com>**20110729202039
 Ignore-this: b2ad21a19439722f05c49bfd35b01855
]
[SFTP: write an error message to standard error for unrecognized shell commands. Change the existing message for shell sessions to be written to standard error, and refactor some duplicated code. Also change the lines of the error messages to end in CRLF, and take into account Kevan's review comments. fixes #1442, #1446
david-sarah@jacaranda.org**20110729233102
 Ignore-this: d2f2bb4664f25007d1602bf7333e2cdd
]
[src/allmydata/scripts/cli.py: fix pyflakes warning.
david-sarah@jacaranda.org**20110728021402
 Ignore-this: 94050140ddb99865295973f49927c509
]
[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
david-sarah@jacaranda.org**20110724225440
 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
]
[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
david-sarah@jacaranda.org**20110629185356
 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
]
[docs/man/tahoe.1: add man page. fixes #1420
david-sarah@jacaranda.org**20110724171728
 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
]
[Update the dependency on zope.interface to fix an incompatibility between Nevow and zope.interface 3.6.4. fixes #1435
david-sarah@jacaranda.org**20110721234941
 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
]
[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
david-sarah@jacaranda.org**20110722000320
 Ignore-this: 55cd558b791526113db3f83c00ec328a
]
[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
david-sarah@jacaranda.org**20110721233658
 Ignore-this: 81b41745477163c9b39c0b59db91cc62
]
[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
david-sarah@jacaranda.org**20110722035402
 Ignore-this: 5d03f544c4154f088e26c7107494bf39
]
[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
david-sarah@jacaranda.org**20110722024907
 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
]
[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
david-sarah@jacaranda.org**20110718005949
 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
]
[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
david-sarah@jacaranda.org**20110717194315
 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
]
[README.txt: say that quickstart.rst is in the docs directory.
david-sarah@jacaranda.org**20110717192400
 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
]
[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
zooko@zooko.com**20110717114226
 Ignore-this: df222120d41447ce4102616921626c82
 fixes #1383
]
[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
david-sarah@jacaranda.org**20110716181813
 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
]
[docs: add missing link in NEWS.rst
zooko@zooko.com**20110712153307
 Ignore-this: be7b7eb81c03700b739daa1027d72b35
]
[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
zooko@zooko.com**20110712153229
 Ignore-this: 723c4f9e2211027c79d711715d972c5
 Also remove a couple of vestigial references to figleaf, which is long gone.
 fixes #1409 (remove contrib/fuse)
]
[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
 
 provide status overlap info on the webapi t=json output, add decode/decrypt
 rate tooltips, add zoomin/zoomout buttons
]
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
]
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
 
 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
]
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
]
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
]
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
 Ignore-this: abb864427a1b91bd10d5132b4589fd90
]
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
 Ignore-this: c63e23146c39195de52fb17c7c49b2da
]
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
]
[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently neither of the two authors (stercor, terrell), none of the three reviewers (warner, davidsarah, terrell), nor the one committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
]
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
]
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
]
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
]
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
]
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
]
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
]
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
]
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
]
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
]
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
]
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
]
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
]
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
]
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
]
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
]
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
]
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
]
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
]
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
]
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
]
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
]
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
]
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
]
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
]
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
]
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
]
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might be spreading out the total number of shares requested at
 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
 segment, so the effect doesn't last very long).
]
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
]
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
]
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
]
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
]
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
]
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
]
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
]
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
]
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
]
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
]
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
]
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
]
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
]
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
]
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
]
[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
]
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
]
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
]
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
]
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
]
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
]
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
]
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101]
Patch bundle hash:
13177c92b5b69511d7129a1e7bed9e82b3ec4115