Wed Sep 23 21:19:32 PDT 2009 Kevan Carstensen <kevan@isnotajoke.com>
  * Alter CiphertextDownloader to work with servers_of_happiness

Tue Nov 3 19:32:41 PST 2009 Kevan Carstensen <kevan@isnotajoke.com>
  * Alter the signature of set_shareholders in IEncoder to add a 'servermap'
    parameter, which gives IEncoders enough information to perform a sane
    check for servers_of_happiness.

Wed Nov 4 03:12:22 PST 2009 Kevan Carstensen <kevan@isnotajoke.com>
  * Alter 'immutable/encode.py' and 'immutable/upload.py' to use
    servers_of_happiness instead of shares_of_happiness.

Mon Nov 16 11:28:05 PST 2009 Kevan Carstensen <kevan@isnotajoke.com>
  * Alter Tahoe2PeerSelector to make sure that it recognizes existing shares
    on readonly servers, fixing an issue in #778

Mon Nov 16 13:24:59 PST 2009 Kevan Carstensen <kevan@isnotajoke.com>
  * Change stray "shares_of_happiness" to "servers_of_happiness"

Tue Nov 17 17:45:42 PST 2009 Kevan Carstensen <kevan@isnotajoke.com>
  * Eliminate overcounting of servers_of_happiness in Tahoe2PeerSelector;
    also reorganize some things.

Sun Nov 22 16:24:05 PST 2009 Kevan Carstensen <kevan@isnotajoke.com>
  * Alter the error message returned when peer selection fails

  The Tahoe2PeerSelector returned either NoSharesError or NotEnoughSharesError
  for a variety of error conditions that those names did not describe
  informatively. This patch creates a new error, UploadHappinessError,
  replaces uses of NoSharesError and NotEnoughSharesError with it, and makes
  the message raised with the error consistent with the new
  servers_of_happiness behavior. See ticket #834 for more information.

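  Read together with the later rename, this entry settles the exception
  taxonomy: the two share-counting errors become download-only, and uploads
  get a single happiness error. A compact summary, using the final class
  names and the docstrings from the interfaces.py hunks later in this bundle
  (only the trailing comment is editorial):

      class NotEnoughSharesError(Exception):
          """Download was unable to get enough shares"""

      class NoSharesError(Exception):
          """Download was unable to get any shares at all."""

      class UploadUnhappinessError(Exception):
          """Upload was unable to satisfy 'servers_of_happiness'"""
          # Raised by Tahoe2PeerSelector and by the encoder wherever they
          # previously raised NotEnoughSharesError or NoSharesError.
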
Fri Dec 4 20:30:37 PST 2009 Kevan Carstensen <kevan@isnotajoke.com>
  * Change "UploadHappinessError" to "UploadUnhappinessError"

Wed Dec 30 13:03:44 PST 2009 Kevan Carstensen <kevan@isnotajoke.com>
  * Alter the error message when an upload fails, per some comments in #778.

  When I first implemented #778, I just altered the error messages to refer
  to servers where they referred to shares. The resulting error messages
  weren't very good. These are a bit better.

Mon Feb 15 12:22:14 PST 2010 Kevan Carstensen <kevan@isnotajoke.com>
  * Fix up the behavior of #778, per reviewers' comments

  - Make some important utility functions clearer and more thoroughly
    documented.
  - Assert in upload.servers_of_happiness that the buckets attributes
    of PeerTrackers passed to it are mutually disjoint.
  - Get rid of some silly non-Pythonisms that I didn't see when I first
    wrote these patches.
  - Make sure that should_add_server returns true when queried about a
    shnum that it doesn't know about yet.
  - Change Tahoe2PeerSelector.preexisting_shares to map a shareid to a set
    of peerids, and alter its dependencies to deal with that.
  - Remove upload.should_add_servers, because it is no longer necessary.
  - Move upload.servers_of_happiness and upload.shares_by_server to a
    utility file.
  - Change some points in Tahoe2PeerSelector.
  - Compute servers_of_happiness using a bipartite matching algorithm that
    we know is optimal instead of an ad-hoc greedy algorithm that isn't
    (a sketch of such a matching follows this list).
  - Change servers_of_happiness to just take a sharemap as an argument, and
    change its callers to merge existing_shares and used_peers before
    calling it.
  - Change an error message in the encoder to be more appropriate for
    servers of happiness.
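  Since the bipartite-matching bullet is the conceptual core of this final
  patch, here is an illustrative sketch of what "servers of happiness as a
  maximum matching" means. The function name and the sharemap shape (share
  number -> set of peerids) follow the happinessutil module referenced in the
  last patch of this bundle, but the augmenting-path implementation below is
  an editorial stand-in, not the code the patch adds:

      def servers_of_happiness(sharemap):
          """
          Return the size of a maximum matching between shares and servers,
          where sharemap maps a share number to the set of peerids holding
          (or about to hold) that share. Illustrative only.
          """
          share_to_server = {}   # shnum -> peerid it is matched to
          server_to_share = {}   # peerid -> shnum matched to it

          def try_assign(shnum, visited):
              # Kuhn's augmenting-path step: try to give shnum a server of
              # its own, re-homing an already-matched share if necessary.
              for server in sharemap.get(shnum, set()):
                  if server in visited:
                      continue
                  visited.add(server)
                  if (server not in server_to_share
                      or try_assign(server_to_share[server], visited)):
                      share_to_server[shnum] = server
                      server_to_share[server] = shnum
                      return True
              return False

          for shnum in sharemap:
              if shnum not in share_to_server:
                  try_assign(shnum, set())
          return len(server_to_share)

      # e.g. {0: set(['A']), 1: set(['A']), 2: set(['B'])} -> 2: only two
      # servers can be given distinct shares, however many shares they hold.

  A plain distinct-server count can overstate happiness when several servers
  hold copies of the same share; a maximum matching cannot.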

New patches:

---|
67 | [Alter CiphertextDownloader to work with servers_of_happiness |
---|
68 | Kevan Carstensen <kevan@isnotajoke.com>**20090924041932 |
---|
69 | Ignore-this: e81edccf0308c2d3bedbc4cf217da197 |
---|
70 | ] hunk ./src/allmydata/immutable/download.py 1039 |
---|
71 | # Repairer (uploader) needs the encodingparams. |
---|
72 | self._target.set_encodingparams(( |
---|
73 | self._verifycap.needed_shares, |
---|
74 | - self._verifycap.total_shares, # I don't think the target actually cares about "happy". |
---|
75 | + 0, # see ticket #778 for why this is |
---|
76 | self._verifycap.total_shares, |
---|
77 | self._vup.segment_size |
---|
78 | )) |
---|
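  For orientation, the tuple handed to set_encodingparams above is
  (k, happy, n, segment_size); the hunk puts 0 in the "happy" slot because a
  download/repair target has no use for an upload-happiness threshold.
  Restated with hypothetical local names (this is not code from the tree):

      k = self._verifycap.needed_shares    # shares required to reconstruct
      happy = 0                            # happiness is meaningless here; see #778
      n = self._verifycap.total_shares     # total shares the encoder produces
      segsize = self._vup.segment_size     # segment size used during encoding
      self._target.set_encodingparams((k, happy, n, segsize))
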
79 | [Alter the signature of set_shareholders in IEncoder to add a 'servermap' parameter, which gives IEncoders enough information to perform a sane check for servers_of_happiness. |
---|
80 | Kevan Carstensen <kevan@isnotajoke.com>**20091104033241 |
---|
81 | Ignore-this: b3a6649a8ac66431beca1026a31fed94 |
---|
82 | ] { |
---|
83 | hunk ./src/allmydata/interfaces.py 1341 |
---|
84 | Once this is called, set_size() and set_params() may not be called. |
---|
85 | """ |
---|
86 | |
---|
87 | - def set_shareholders(shareholders): |
---|
88 | + def set_shareholders(shareholders, servermap): |
---|
89 | """Tell the encoder where to put the encoded shares. 'shareholders' |
---|
90 | must be a dictionary that maps share number (an integer ranging from |
---|
91 | hunk ./src/allmydata/interfaces.py 1344 |
---|
92 | - 0 to n-1) to an instance that provides IStorageBucketWriter. This |
---|
93 | - must be performed before start() can be called.""" |
---|
94 | + 0 to n-1) to an instance that provides IStorageBucketWriter. |
---|
95 | + 'servermap' is a dictionary that maps share number (as defined above) |
---|
96 | + to a peerid. This must be performed before start() can be called.""" |
---|
97 | |
---|
98 | def start(): |
---|
99 | """Begin the encode/upload process. This involves reading encrypted |
---|
100 | } |
---|
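  To make the widened interface concrete, here is a hedged sketch of what a
  caller now passes; the writer and peer objects are invented placeholders,
  but the dictionary shapes follow the docstring above (at this point in the
  series the servermap values are single peerids; the final patch changes
  them to sets of peerids):

      # shareholders: share number -> an object providing IStorageBucketWriter
      shareholders = {0: bucket_writer_0, 1: bucket_writer_1}
      # servermap: share number -> peerid of the server holding that share
      servermap = {0: peer_a, 1: peer_b}
      encoder.set_shareholders(shareholders, servermap)
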
101 | [Alter 'immutable/encode.py' and 'immutable/upload.py' to use servers_of_happiness instead of shares_of_happiness. |
---|
102 | Kevan Carstensen <kevan@isnotajoke.com>**20091104111222 |
---|
103 | Ignore-this: abb3283314820a8bbf9b5d0cbfbb57c8 |
---|
104 | ] { |
---|
105 | hunk ./src/allmydata/immutable/encode.py 121 |
---|
106 | assert not self._codec |
---|
107 | k, happy, n, segsize = params |
---|
108 | self.required_shares = k |
---|
109 | - self.shares_of_happiness = happy |
---|
110 | + self.servers_of_happiness = happy |
---|
111 | self.num_shares = n |
---|
112 | self.segment_size = segsize |
---|
113 | self.log("got encoding parameters: %d/%d/%d %d" % (k,happy,n, segsize)) |
---|
114 | hunk ./src/allmydata/immutable/encode.py 179 |
---|
115 | if name == "storage_index": |
---|
116 | return self._storage_index |
---|
117 | elif name == "share_counts": |
---|
118 | - return (self.required_shares, self.shares_of_happiness, |
---|
119 | + return (self.required_shares, self.servers_of_happiness, |
---|
120 | self.num_shares) |
---|
121 | elif name == "num_segments": |
---|
122 | return self.num_segments |
---|
123 | hunk ./src/allmydata/immutable/encode.py 194 |
---|
124 | else: |
---|
125 | raise KeyError("unknown parameter name '%s'" % name) |
---|
126 | |
---|
127 | - def set_shareholders(self, landlords): |
---|
128 | + def set_shareholders(self, landlords, servermap): |
---|
129 | assert isinstance(landlords, dict) |
---|
130 | for k in landlords: |
---|
131 | assert IStorageBucketWriter.providedBy(landlords[k]) |
---|
132 | hunk ./src/allmydata/immutable/encode.py 199 |
---|
133 | self.landlords = landlords.copy() |
---|
134 | + assert isinstance(servermap, dict) |
---|
135 | + self.servermap = servermap.copy() |
---|
136 | |
---|
137 | def start(self): |
---|
138 | """ Returns a Deferred that will fire with the verify cap (an instance of |
---|
139 | hunk ./src/allmydata/immutable/encode.py 491 |
---|
140 | # even more UNUSUAL |
---|
141 | self.log("they weren't in our list of landlords", parent=ln, |
---|
142 | level=log.WEIRD, umid="TQGFRw") |
---|
143 | - if len(self.landlords) < self.shares_of_happiness: |
---|
144 | - msg = "lost too many shareholders during upload (still have %d, want %d): %s" % \ |
---|
145 | - (len(self.landlords), self.shares_of_happiness, why) |
---|
146 | - if self.landlords: |
---|
147 | + del(self.servermap[shareid]) |
---|
148 | + servers_left = list(set(self.servermap.values())) |
---|
149 | + if len(servers_left) < self.servers_of_happiness: |
---|
150 | + msg = "lost too many servers during upload (still have %d, want %d): %s" % \ |
---|
151 | + (len(servers_left), |
---|
152 | + self.servers_of_happiness, why) |
---|
153 | + if servers_left: |
---|
154 | raise NotEnoughSharesError(msg) |
---|
155 | else: |
---|
156 | raise NoSharesError(msg) |
---|
157 | hunk ./src/allmydata/immutable/encode.py 502 |
---|
158 | self.log("but we can still continue with %s shares, we'll be happy " |
---|
159 | - "with at least %s" % (len(self.landlords), |
---|
160 | - self.shares_of_happiness), |
---|
161 | + "with at least %s" % (len(servers_left), |
---|
162 | + self.servers_of_happiness), |
---|
163 | parent=ln) |
---|
164 | |
---|
165 | def _gather_responses(self, dl): |
---|
166 | hunk ./src/allmydata/immutable/upload.py 131 |
---|
167 | self.buckets.update(b) |
---|
168 | return (alreadygot, set(b.keys())) |
---|
169 | |
---|
170 | +def servers_with_shares(existing_shares, used_peers=None): |
---|
171 | + servers = [] |
---|
172 | + if used_peers: |
---|
173 | + peers = list(used_peers.copy()) |
---|
174 | + # We do this because the preexisting shares list goes by peerid. |
---|
175 | + peers = [x.peerid for x in peers] |
---|
176 | + servers.extend(peers) |
---|
177 | + servers.extend(existing_shares.values()) |
---|
178 | + return list(set(servers)) |
---|
179 | + |
---|
180 | +def shares_by_server(existing_shares): |
---|
181 | + servers = {} |
---|
182 | + for server in set(existing_shares.values()): |
---|
183 | + servers[server] = set([x for x in existing_shares.keys() |
---|
184 | + if existing_shares[x] == server]) |
---|
185 | + return servers |
---|
186 | + |
---|
187 | class Tahoe2PeerSelector: |
---|
188 | |
---|
189 | def __init__(self, upload_id, logparent=None, upload_status=None): |
---|
190 | hunk ./src/allmydata/immutable/upload.py 164 |
---|
191 | |
---|
192 | def get_shareholders(self, storage_broker, secret_holder, |
---|
193 | storage_index, share_size, block_size, |
---|
194 | - num_segments, total_shares, shares_of_happiness): |
---|
195 | + num_segments, total_shares, servers_of_happiness): |
---|
196 | """ |
---|
197 | @return: (used_peers, already_peers), where used_peers is a set of |
---|
198 | PeerTracker instances that have agreed to hold some shares |
---|
199 | hunk ./src/allmydata/immutable/upload.py 177 |
---|
200 | self._status.set_status("Contacting Peers..") |
---|
201 | |
---|
202 | self.total_shares = total_shares |
---|
203 | - self.shares_of_happiness = shares_of_happiness |
---|
204 | + self.servers_of_happiness = servers_of_happiness |
---|
205 | |
---|
206 | self.homeless_shares = range(total_shares) |
---|
207 | # self.uncontacted_peers = list() # peers we haven't asked yet |
---|
208 | hunk ./src/allmydata/immutable/upload.py 242 |
---|
209 | d = defer.maybeDeferred(self._loop) |
---|
210 | return d |
---|
211 | |
---|
212 | + |
---|
213 | def _loop(self): |
---|
214 | if not self.homeless_shares: |
---|
215 | hunk ./src/allmydata/immutable/upload.py 245 |
---|
216 | - # all done |
---|
217 | - msg = ("placed all %d shares, " |
---|
218 | - "sent %d queries to %d peers, " |
---|
219 | - "%d queries placed some shares, %d placed none, " |
---|
220 | - "got %d errors" % |
---|
221 | - (self.total_shares, |
---|
222 | - self.query_count, self.num_peers_contacted, |
---|
223 | - self.good_query_count, self.bad_query_count, |
---|
224 | - self.error_count)) |
---|
225 | - log.msg("peer selection successful for %s: %s" % (self, msg), |
---|
226 | + effective_happiness = servers_with_shares( |
---|
227 | + self.preexisting_shares, |
---|
228 | + self.use_peers) |
---|
229 | + if self.servers_of_happiness <= len(effective_happiness): |
---|
230 | + msg = ("placed all %d shares, " |
---|
231 | + "sent %d queries to %d peers, " |
---|
232 | + "%d queries placed some shares, %d placed none, " |
---|
233 | + "got %d errors" % |
---|
234 | + (self.total_shares, |
---|
235 | + self.query_count, self.num_peers_contacted, |
---|
236 | + self.good_query_count, self.bad_query_count, |
---|
237 | + self.error_count)) |
---|
238 | + log.msg("peer selection successful for %s: %s" % (self, msg), |
---|
239 | parent=self._log_parent) |
---|
240 | hunk ./src/allmydata/immutable/upload.py 259 |
---|
241 | - return (self.use_peers, self.preexisting_shares) |
---|
242 | + return (self.use_peers, self.preexisting_shares) |
---|
243 | + else: |
---|
244 | + delta = self.servers_of_happiness - len(effective_happiness) |
---|
245 | + shares = shares_by_server(self.preexisting_shares) |
---|
246 | + # Each server in shares maps to a set of shares stored on it. |
---|
247 | + # Since we want to keep at least one share on each server |
---|
248 | + # that has one (otherwise we'd only be making |
---|
249 | + # the situation worse by removing distinct servers), |
---|
250 | + # each server has len(its shares) - 1 to spread around. |
---|
251 | + shares_to_spread = sum([len(list(sharelist)) - 1 |
---|
252 | + for (server, sharelist) |
---|
253 | + in shares.items()]) |
---|
254 | + if delta <= len(self.uncontacted_peers) and \ |
---|
255 | + shares_to_spread >= delta: |
---|
256 | + # Loop through the allocated shares, removing |
---|
257 | + items = shares.items() |
---|
258 | + while len(self.homeless_shares) < delta: |
---|
259 | + servernum, sharelist = items.pop() |
---|
260 | + if len(sharelist) > 1: |
---|
261 | + share = sharelist.pop() |
---|
262 | + self.homeless_shares.append(share) |
---|
263 | + del(self.preexisting_shares[share]) |
---|
264 | + items.append((servernum, sharelist)) |
---|
265 | + return self._loop() |
---|
266 | + else: |
---|
267 | + raise NotEnoughSharesError("shares could only be placed on %d " |
---|
268 | + "servers (%d were requested)" % |
---|
269 | + (len(effective_happiness), |
---|
270 | + self.servers_of_happiness)) |
---|
271 | |
---|
272 | if self.uncontacted_peers: |
---|
273 | peer = self.uncontacted_peers.pop(0) |
---|
274 | hunk ./src/allmydata/immutable/upload.py 336 |
---|
275 | else: |
---|
276 | # no more peers. If we haven't placed enough shares, we fail. |
---|
277 | placed_shares = self.total_shares - len(self.homeless_shares) |
---|
278 | - if placed_shares < self.shares_of_happiness: |
---|
279 | + effective_happiness = servers_with_shares( |
---|
280 | + self.preexisting_shares, |
---|
281 | + self.use_peers) |
---|
282 | + if len(effective_happiness) < self.servers_of_happiness: |
---|
283 | msg = ("placed %d shares out of %d total (%d homeless), " |
---|
284 | hunk ./src/allmydata/immutable/upload.py 341 |
---|
285 | - "want to place %d, " |
---|
286 | + "want to place on %d servers, " |
---|
287 | "sent %d queries to %d peers, " |
---|
288 | "%d queries placed some shares, %d placed none, " |
---|
289 | "got %d errors" % |
---|
290 | hunk ./src/allmydata/immutable/upload.py 347 |
---|
291 | (self.total_shares - len(self.homeless_shares), |
---|
292 | self.total_shares, len(self.homeless_shares), |
---|
293 | - self.shares_of_happiness, |
---|
294 | + self.servers_of_happiness, |
---|
295 | self.query_count, self.num_peers_contacted, |
---|
296 | self.good_query_count, self.bad_query_count, |
---|
297 | self.error_count)) |
---|
298 | hunk ./src/allmydata/immutable/upload.py 394 |
---|
299 | level=log.NOISY, parent=self._log_parent) |
---|
300 | progress = False |
---|
301 | for s in alreadygot: |
---|
302 | + if self.preexisting_shares.has_key(s): |
---|
303 | + old_size = len(servers_with_shares(self.preexisting_shares)) |
---|
304 | + new_candidate = self.preexisting_shares.copy() |
---|
305 | + new_candidate[s] = peer.peerid |
---|
306 | + new_size = len(servers_with_shares(new_candidate)) |
---|
307 | + if old_size >= new_size: continue |
---|
308 | self.preexisting_shares[s] = peer.peerid |
---|
309 | if s in self.homeless_shares: |
---|
310 | self.homeless_shares.remove(s) |
---|
311 | hunk ./src/allmydata/immutable/upload.py 825 |
---|
312 | for peer in used_peers: |
---|
313 | assert isinstance(peer, PeerTracker) |
---|
314 | buckets = {} |
---|
315 | + servermap = already_peers.copy() |
---|
316 | for peer in used_peers: |
---|
317 | buckets.update(peer.buckets) |
---|
318 | for shnum in peer.buckets: |
---|
319 | hunk ./src/allmydata/immutable/upload.py 830 |
---|
320 | self._peer_trackers[shnum] = peer |
---|
321 | + servermap[shnum] = peer.peerid |
---|
322 | assert len(buckets) == sum([len(peer.buckets) for peer in used_peers]) |
---|
323 | hunk ./src/allmydata/immutable/upload.py 832 |
---|
324 | - encoder.set_shareholders(buckets) |
---|
325 | + encoder.set_shareholders(buckets, servermap) |
---|
326 | |
---|
327 | def _encrypted_done(self, verifycap): |
---|
328 | """ Returns a Deferred that will fire with the UploadResults instance. """ |
---|
329 | replace ./src/allmydata/immutable/upload.py [A-Za-z_0-9] _servers_with_shares _servers_with_unique_shares |
---|
330 | replace ./src/allmydata/immutable/upload.py [A-Za-z_0-9] servers_with_shares servers_with_unique_shares |
---|
331 | } |
---|
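  The two helpers this patch adds to upload.py carry most of the new
  accounting, so a small worked example may help; the peer ids are invented,
  and note that the replace directives at the end of the hunk rename
  servers_with_shares to servers_with_unique_shares:

      peer_a, peer_b = "peerid-a", "peerid-b"
      existing_shares = {0: peer_a, 1: peer_a, 2: peer_b}   # shnum -> peerid

      # shares_by_server inverts that map:
      #   {peer_a: set([0, 1]), peer_b: set([2])}
      by_server = shares_by_server(existing_shares)

      # servers_with_shares collapses preexisting shares (and, optionally,
      # the PeerTrackers in used_peers) into the distinct servers involved,
      # so the "effective happiness" of this layout is 2:
      assert len(servers_with_shares(existing_shares)) == 2
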
332 | [Alter Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers, fixing an issue in #778 |
---|
333 | Kevan Carstensen <kevan@isnotajoke.com>**20091116192805 |
---|
334 | Ignore-this: 15289f4d709e03851ed0587b286fd955 |
---|
335 | ] { |
---|
336 | hunk ./src/allmydata/immutable/upload.py 117 |
---|
337 | d.addCallback(self._got_reply) |
---|
338 | return d |
---|
339 | |
---|
340 | + def query_allocated(self): |
---|
341 | + d = self._storageserver.callRemote("get_buckets", |
---|
342 | + self.storage_index) |
---|
343 | + d.addCallback(self._got_allocate_reply) |
---|
344 | + return d |
---|
345 | + |
---|
346 | + def _got_allocate_reply(self, buckets): |
---|
347 | + return (self.peerid, buckets) |
---|
348 | + |
---|
349 | def _got_reply(self, (alreadygot, buckets)): |
---|
350 | #log.msg("%s._got_reply(%s)" % (self, (alreadygot, buckets))) |
---|
351 | b = {} |
---|
352 | hunk ./src/allmydata/immutable/upload.py 195 |
---|
353 | self._started_second_pass = False |
---|
354 | self.use_peers = set() # PeerTrackers that have shares assigned to them |
---|
355 | self.preexisting_shares = {} # sharenum -> peerid holding the share |
---|
356 | + # We don't try to allocate shares to these servers, since they've |
---|
357 | + # said that they're incapable of storing shares of the size that |
---|
358 | + # we'd want to store. We keep them around because they may have |
---|
359 | + # existing shares for this storage index, which we want to know |
---|
360 | + # about for accurate servers_of_happiness accounting |
---|
361 | + self.readonly_peers = [] |
---|
362 | |
---|
363 | peers = storage_broker.get_servers_for_index(storage_index) |
---|
364 | if not peers: |
---|
365 | hunk ./src/allmydata/immutable/upload.py 227 |
---|
366 | (peerid, conn) = peer |
---|
367 | v1 = conn.version["http://allmydata.org/tahoe/protocols/storage/v1"] |
---|
368 | return v1["maximum-immutable-share-size"] |
---|
369 | - peers = [peer for peer in peers |
---|
370 | - if _get_maxsize(peer) >= allocated_size] |
---|
371 | - if not peers: |
---|
372 | - raise NoServersError("no peers could accept an allocated_size of %d" % allocated_size) |
---|
373 | + new_peers = [peer for peer in peers |
---|
374 | + if _get_maxsize(peer) >= allocated_size] |
---|
375 | + old_peers = list(set(peers).difference(set(new_peers))) |
---|
376 | + peers = new_peers |
---|
377 | |
---|
378 | # decide upon the renewal/cancel secrets, to include them in the |
---|
379 | # allocate_buckets query. |
---|
380 | hunk ./src/allmydata/immutable/upload.py 241 |
---|
381 | storage_index) |
---|
382 | file_cancel_secret = file_cancel_secret_hash(client_cancel_secret, |
---|
383 | storage_index) |
---|
384 | - |
---|
385 | - trackers = [ PeerTracker(peerid, conn, |
---|
386 | - share_size, block_size, |
---|
387 | - num_segments, num_share_hashes, |
---|
388 | - storage_index, |
---|
389 | - bucket_renewal_secret_hash(file_renewal_secret, |
---|
390 | - peerid), |
---|
391 | - bucket_cancel_secret_hash(file_cancel_secret, |
---|
392 | + def _make_trackers(peers): |
---|
393 | + return [ PeerTracker(peerid, conn, |
---|
394 | + share_size, block_size, |
---|
395 | + num_segments, num_share_hashes, |
---|
396 | + storage_index, |
---|
397 | + bucket_renewal_secret_hash(file_renewal_secret, |
---|
398 | peerid), |
---|
399 | hunk ./src/allmydata/immutable/upload.py 248 |
---|
400 | - ) |
---|
401 | - for (peerid, conn) in peers ] |
---|
402 | - self.uncontacted_peers = trackers |
---|
403 | - |
---|
404 | - d = defer.maybeDeferred(self._loop) |
---|
405 | + bucket_cancel_secret_hash(file_cancel_secret, |
---|
406 | + peerid)) |
---|
407 | + for (peerid, conn) in peers] |
---|
408 | + self.uncontacted_peers = _make_trackers(peers) |
---|
409 | + self.readonly_peers = _make_trackers(old_peers) |
---|
410 | + # Talk to the readonly servers to get an idea of what servers |
---|
411 | + # have what shares (if any) for this storage index |
---|
412 | + d = defer.maybeDeferred(self._existing_shares) |
---|
413 | + d.addCallback(lambda ign: self._loop()) |
---|
414 | return d |
---|
415 | |
---|
416 | hunk ./src/allmydata/immutable/upload.py 259 |
---|
417 | + def _existing_shares(self): |
---|
418 | + if self.readonly_peers: |
---|
419 | + peer = self.readonly_peers.pop() |
---|
420 | + assert isinstance(peer, PeerTracker) |
---|
421 | + d = peer.query_allocated() |
---|
422 | + d.addCallback(self._handle_allocate_response) |
---|
423 | + return d |
---|
424 | + |
---|
425 | + def _handle_allocate_response(self, (peer, buckets)): |
---|
426 | + for bucket in buckets: |
---|
427 | + self.preexisting_shares[bucket] = peer |
---|
428 | + if self.homeless_shares: |
---|
429 | + self.homeless_shares.remove(bucket) |
---|
430 | + return self._existing_shares() |
---|
431 | |
---|
432 | def _loop(self): |
---|
433 | if not self.homeless_shares: |
---|
434 | } |
---|
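  The read-only-server handling added here boils down to a sequential poll
  chained in front of normal placement: get_shareholders() now splits off the
  peers that cannot accept our share size, then runs
  defer.maybeDeferred(self._existing_shares) before _loop(). A condensed
  restatement of the two new methods (Python 2 style to match the tree;
  logging and asserts omitted, and the membership check tightened the way a
  later patch in this bundle does):

      def _existing_shares(self):
          # Ask each read-only peer, one at a time, which shares it already
          # holds for this storage index.
          if self.readonly_peers:
              peer = self.readonly_peers.pop()
              d = peer.query_allocated()        # remote "get_buckets" call
              d.addCallback(self._handle_allocate_response)
              return d

      def _handle_allocate_response(self, (peer, buckets)):
          for shnum in buckets:
              self.preexisting_shares[shnum] = peer
              if shnum in self.homeless_shares:
                  self.homeless_shares.remove(shnum)
          return self._existing_shares()        # recurse until the list is empty
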
435 | [Change stray "shares_of_happiness" to "servers_of_happiness" |
---|
436 | Kevan Carstensen <kevan@isnotajoke.com>**20091116212459 |
---|
437 | Ignore-this: 1c971ba8c3c4d2e7ba9f020577b28b73 |
---|
438 | ] { |
---|
439 | hunk ./docs/architecture.txt 183 |
---|
440 | place a quantity known as "shares of happiness", we'll do the upload anyways. |
---|
441 | If we cannot place at least this many, the upload is declared a failure. |
---|
442 | |
---|
443 | -The current defaults use k=3, shares_of_happiness=7, and N=10, meaning that |
---|
444 | +The current defaults use k=3, servers_of_happiness=7, and N=10, meaning that |
---|
445 | we'll try to place 10 shares, we'll be happy if we can place 7, and we need |
---|
446 | to get back any 3 to recover the file. This results in a 3.3x expansion |
---|
447 | factor. In general, you should set N about equal to the number of nodes in |
---|
448 | hunk ./src/allmydata/immutable/upload.py 411 |
---|
449 | pass |
---|
450 | else: |
---|
451 | # No more peers, so this upload might fail (it depends upon |
---|
452 | - # whether we've hit shares_of_happiness or not). Log the last |
---|
453 | + # whether we've hit servers_of_happiness or not). Log the last |
---|
454 | # failure we got: if a coding error causes all peers to fail |
---|
455 | # in the same way, this allows the common failure to be seen |
---|
456 | # by the uploader and should help with debugging |
---|
457 | hunk ./src/allmydata/interfaces.py 809 |
---|
458 | |
---|
459 | class NotEnoughSharesError(Exception): |
---|
460 | """Download was unable to get enough shares, or upload was unable to |
---|
461 | - place 'shares_of_happiness' shares.""" |
---|
462 | + place 'servers_of_happiness' shares.""" |
---|
463 | |
---|
464 | class NoSharesError(Exception): |
---|
465 | """Upload or Download was unable to get any shares at all.""" |
---|
466 | hunk ./src/allmydata/interfaces.py 1308 |
---|
467 | pushed. |
---|
468 | |
---|
469 | 'share_counts': return a tuple describing how many shares are used: |
---|
470 | - (needed_shares, shares_of_happiness, total_shares) |
---|
471 | + (needed_shares, servers_of_happiness, total_shares) |
---|
472 | |
---|
473 | 'num_segments': return an int with the number of segments that |
---|
474 | will be encoded. |
---|
475 | hunk ./src/allmydata/test/test_encode.py 768 |
---|
476 | def test_lost_one_shareholder(self): |
---|
477 | # we have enough shareholders when we start, but one segment in we |
---|
478 | # lose one of them. The upload should still succeed, as long as we |
---|
479 | - # still have 'shares_of_happiness' peers left. |
---|
480 | + # still have 'servers_of_happiness' peers left. |
---|
481 | modemap = dict([(i, "good") for i in range(9)] + |
---|
482 | [(i, "lost") for i in range(9, 10)]) |
---|
483 | return self.send_and_recover((4,8,10), bucket_modes=modemap) |
---|
484 | hunk ./src/allmydata/test/test_encode.py 776 |
---|
485 | def test_lost_one_shareholder_early(self): |
---|
486 | # we have enough shareholders when we choose peers, but just before |
---|
487 | # we send the 'start' message, we lose one of them. The upload should |
---|
488 | - # still succeed, as long as we still have 'shares_of_happiness' peers |
---|
489 | + # still succeed, as long as we still have 'servers_of_happiness' peers |
---|
490 | # left. |
---|
491 | modemap = dict([(i, "good") for i in range(9)] + |
---|
492 | [(i, "lost-early") for i in range(9, 10)]) |
---|
493 | } |
---|
494 | [Eliminate overcounting of servers_of_happiness in Tahoe2PeerSelector; also reorganize some things. |
---|
495 | Kevan Carstensen <kevan@isnotajoke.com>**20091118014542 |
---|
496 | Ignore-this: a6cb032cbff74f4f9d4238faebd99868 |
---|
497 | ] { |
---|
498 | hunk ./src/allmydata/immutable/upload.py 141 |
---|
499 | return (alreadygot, set(b.keys())) |
---|
500 | |
---|
501 | def servers_with_unique_shares(existing_shares, used_peers=None): |
---|
502 | + """ |
---|
503 | + I accept a dict of shareid -> peerid mappings (and optionally a list |
---|
504 | + of PeerTracker instances) and return a list of servers that have shares. |
---|
505 | + """ |
---|
506 | servers = [] |
---|
507 | hunk ./src/allmydata/immutable/upload.py 146 |
---|
508 | + existing_shares = existing_shares.copy() |
---|
509 | if used_peers: |
---|
510 | hunk ./src/allmydata/immutable/upload.py 148 |
---|
511 | + peerdict = {} |
---|
512 | + for peer in used_peers: |
---|
513 | + peerdict.update(dict([(i, peer.peerid) for i in peer.buckets])) |
---|
514 | + for k in peerdict.keys(): |
---|
515 | + if existing_shares.has_key(k): |
---|
516 | + # Prevent overcounting; favor the bucket, and not the |
---|
517 | + # prexisting share. |
---|
518 | + del(existing_shares[k]) |
---|
519 | peers = list(used_peers.copy()) |
---|
520 | # We do this because the preexisting shares list goes by peerid. |
---|
521 | peers = [x.peerid for x in peers] |
---|
522 | hunk ./src/allmydata/immutable/upload.py 164 |
---|
523 | return list(set(servers)) |
---|
524 | |
---|
525 | def shares_by_server(existing_shares): |
---|
526 | + """ |
---|
527 | + I accept a dict of shareid -> peerid mappings, and return a dict |
---|
528 | + of peerid -> shareid mappings |
---|
529 | + """ |
---|
530 | servers = {} |
---|
531 | for server in set(existing_shares.values()): |
---|
532 | servers[server] = set([x for x in existing_shares.keys() |
---|
533 | hunk ./src/allmydata/immutable/upload.py 174 |
---|
534 | if existing_shares[x] == server]) |
---|
535 | return servers |
---|
536 | |
---|
537 | +def should_add_server(existing_shares, server, bucket): |
---|
538 | + """ |
---|
539 | + I tell my caller whether the servers_of_happiness number will be |
---|
540 | + increased or decreased if a particular server is added as the peer |
---|
541 | + already holding a particular share. I take a dictionary, a peerid, |
---|
542 | + and a bucket as arguments, and return a boolean. |
---|
543 | + """ |
---|
544 | + old_size = len(servers_with_unique_shares(existing_shares)) |
---|
545 | + new_candidate = existing_shares.copy() |
---|
546 | + new_candidate[bucket] = server |
---|
547 | + new_size = len(servers_with_unique_shares(new_candidate)) |
---|
548 | + return old_size < new_size |
---|
549 | + |
---|
550 | class Tahoe2PeerSelector: |
---|
551 | |
---|
552 | def __init__(self, upload_id, logparent=None, upload_status=None): |
---|
553 | hunk ./src/allmydata/immutable/upload.py 294 |
---|
554 | peer = self.readonly_peers.pop() |
---|
555 | assert isinstance(peer, PeerTracker) |
---|
556 | d = peer.query_allocated() |
---|
557 | - d.addCallback(self._handle_allocate_response) |
---|
558 | + d.addCallback(self._handle_existing_response) |
---|
559 | return d |
---|
560 | |
---|
561 | hunk ./src/allmydata/immutable/upload.py 297 |
---|
562 | - def _handle_allocate_response(self, (peer, buckets)): |
---|
563 | + def _handle_existing_response(self, (peer, buckets)): |
---|
564 | for bucket in buckets: |
---|
565 | hunk ./src/allmydata/immutable/upload.py 299 |
---|
566 | - self.preexisting_shares[bucket] = peer |
---|
567 | - if self.homeless_shares: |
---|
568 | - self.homeless_shares.remove(bucket) |
---|
569 | + if should_add_server(self.preexisting_shares, peer, bucket): |
---|
570 | + self.preexisting_shares[bucket] = peer |
---|
571 | + if self.homeless_shares and bucket in self.homeless_shares: |
---|
572 | + self.homeless_shares.remove(bucket) |
---|
573 | return self._existing_shares() |
---|
574 | |
---|
575 | def _loop(self): |
---|
576 | hunk ./src/allmydata/immutable/upload.py 346 |
---|
577 | items.append((servernum, sharelist)) |
---|
578 | return self._loop() |
---|
579 | else: |
---|
580 | - raise NotEnoughSharesError("shares could only be placed on %d " |
---|
581 | - "servers (%d were requested)" % |
---|
582 | - (len(effective_happiness), |
---|
583 | - self.servers_of_happiness)) |
---|
584 | + raise NotEnoughSharesError("shares could only be placed " |
---|
585 | + "on %d servers (%d were requested)" % |
---|
586 | + (len(effective_happiness), |
---|
587 | + self.servers_of_happiness)) |
---|
588 | |
---|
589 | if self.uncontacted_peers: |
---|
590 | peer = self.uncontacted_peers.pop(0) |
---|
591 | hunk ./src/allmydata/immutable/upload.py 425 |
---|
592 | # we placed enough to be happy, so we're done |
---|
593 | if self._status: |
---|
594 | self._status.set_status("Placed all shares") |
---|
595 | - return self.use_peers |
---|
596 | + return (self.use_peers, self.preexisting_shares) |
---|
597 | |
---|
598 | def _got_response(self, res, peer, shares_to_ask, put_peer_here): |
---|
599 | if isinstance(res, failure.Failure): |
---|
600 | hunk ./src/allmydata/immutable/upload.py 456 |
---|
601 | level=log.NOISY, parent=self._log_parent) |
---|
602 | progress = False |
---|
603 | for s in alreadygot: |
---|
604 | - if self.preexisting_shares.has_key(s): |
---|
605 | - old_size = len(servers_with_unique_shares(self.preexisting_shares)) |
---|
606 | - new_candidate = self.preexisting_shares.copy() |
---|
607 | - new_candidate[s] = peer.peerid |
---|
608 | - new_size = len(servers_with_unique_shares(new_candidate)) |
---|
609 | - if old_size >= new_size: continue |
---|
610 | - self.preexisting_shares[s] = peer.peerid |
---|
611 | - if s in self.homeless_shares: |
---|
612 | - self.homeless_shares.remove(s) |
---|
613 | - progress = True |
---|
614 | + if should_add_server(self.preexisting_shares, |
---|
615 | + peer.peerid, s): |
---|
616 | + self.preexisting_shares[s] = peer.peerid |
---|
617 | + if s in self.homeless_shares: |
---|
618 | + self.homeless_shares.remove(s) |
---|
619 | + progress = True |
---|
620 | |
---|
621 | # the PeerTracker will remember which shares were allocated on |
---|
622 | # that peer. We just have to remember to use them. |
---|
623 | } |
---|
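  should_add_server is easiest to read as "does recording this
  (share, server) pair raise the count of servers holding distinct shares?".
  A small worked example against the definition above, with invented peer
  ids:

      peer_a, peer_b, peer_c = "peerid-a", "peerid-b", "peerid-c"
      existing = {0: peer_a, 1: peer_b}     # shnum -> peerid

      # Another share on an already-counted server does not help...
      assert not should_add_server(existing, peer_a, 2)
      # ...but the same share on a previously unseen server does:
      assert should_add_server(existing, peer_c, 2)
      # (How unknown shnums are treated is one of the points the reviewers'
      # follow-up patch at the end of this bundle revisits.)
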
624 | [Alter the error message returned when peer selection fails |
---|
625 | Kevan Carstensen <kevan@isnotajoke.com>**20091123002405 |
---|
626 | Ignore-this: b2a7dc163edcab8d9613bfd6907e5166 |
---|
627 | |
---|
628 | The Tahoe2PeerSelector returned either NoSharesError or NotEnoughSharesError |
---|
629 | for a variety of error conditions that weren't informatively described by them. |
---|
630 | This patch creates a new error, UploadHappinessError, replaces uses of |
---|
631 | NoSharesError and NotEnoughSharesError with it, and alters the error message |
---|
632 | raised with the errors to be more in line with the new servers_of_happiness |
---|
633 | behavior. See ticket #834 for more information. |
---|
634 | ] { |
---|
635 | hunk ./src/allmydata/immutable/encode.py 14 |
---|
636 | from allmydata.util.assertutil import _assert, precondition |
---|
637 | from allmydata.codec import CRSEncoder |
---|
638 | from allmydata.interfaces import IEncoder, IStorageBucketWriter, \ |
---|
639 | - IEncryptedUploadable, IUploadStatus, NotEnoughSharesError, NoSharesError |
---|
640 | + IEncryptedUploadable, IUploadStatus, UploadHappinessError |
---|
641 | + |
---|
642 | |
---|
643 | """ |
---|
644 | The goal of the encoder is to turn the original file into a series of |
---|
645 | hunk ./src/allmydata/immutable/encode.py 498 |
---|
646 | msg = "lost too many servers during upload (still have %d, want %d): %s" % \ |
---|
647 | (len(servers_left), |
---|
648 | self.servers_of_happiness, why) |
---|
649 | - if servers_left: |
---|
650 | - raise NotEnoughSharesError(msg) |
---|
651 | - else: |
---|
652 | - raise NoSharesError(msg) |
---|
653 | + raise UploadHappinessError(msg) |
---|
654 | self.log("but we can still continue with %s shares, we'll be happy " |
---|
655 | "with at least %s" % (len(servers_left), |
---|
656 | self.servers_of_happiness), |
---|
657 | hunk ./src/allmydata/immutable/encode.py 508 |
---|
658 | d = defer.DeferredList(dl, fireOnOneErrback=True) |
---|
659 | def _eatNotEnoughSharesError(f): |
---|
660 | # all exceptions that occur while talking to a peer are handled |
---|
661 | - # in _remove_shareholder. That might raise NotEnoughSharesError, |
---|
662 | + # in _remove_shareholder. That might raise UploadHappinessError, |
---|
663 | # which will cause the DeferredList to errback but which should |
---|
664 | hunk ./src/allmydata/immutable/encode.py 510 |
---|
665 | - # otherwise be consumed. Allow non-NotEnoughSharesError exceptions |
---|
666 | + # otherwise be consumed. Allow non-UploadHappinessError exceptions |
---|
667 | # to pass through as an unhandled errback. We use this in lieu of |
---|
668 | # consumeErrors=True to allow coding errors to be logged. |
---|
669 | hunk ./src/allmydata/immutable/encode.py 513 |
---|
670 | - f.trap(NotEnoughSharesError, NoSharesError) |
---|
671 | + f.trap(UploadHappinessError) |
---|
672 | return None |
---|
673 | for d0 in dl: |
---|
674 | d0.addErrback(_eatNotEnoughSharesError) |
---|
675 | hunk ./src/allmydata/immutable/upload.py 20 |
---|
676 | from allmydata.util.rrefutil import add_version_to_remote_reference |
---|
677 | from allmydata.interfaces import IUploadable, IUploader, IUploadResults, \ |
---|
678 | IEncryptedUploadable, RIEncryptedUploadable, IUploadStatus, \ |
---|
679 | - NotEnoughSharesError, NoSharesError, NoServersError, \ |
---|
680 | - InsufficientVersionError |
---|
681 | + NoServersError, InsufficientVersionError, UploadHappinessError |
---|
682 | from allmydata.immutable import layout |
---|
683 | from pycryptopp.cipher.aes import AES |
---|
684 | |
---|
685 | hunk ./src/allmydata/immutable/upload.py 119 |
---|
686 | def query_allocated(self): |
---|
687 | d = self._storageserver.callRemote("get_buckets", |
---|
688 | self.storage_index) |
---|
689 | - d.addCallback(self._got_allocate_reply) |
---|
690 | return d |
---|
691 | |
---|
692 | hunk ./src/allmydata/immutable/upload.py 121 |
---|
693 | - def _got_allocate_reply(self, buckets): |
---|
694 | - return (self.peerid, buckets) |
---|
695 | - |
---|
696 | def _got_reply(self, (alreadygot, buckets)): |
---|
697 | #log.msg("%s._got_reply(%s)" % (self, (alreadygot, buckets))) |
---|
698 | b = {} |
---|
699 | hunk ./src/allmydata/immutable/upload.py 187 |
---|
700 | def __init__(self, upload_id, logparent=None, upload_status=None): |
---|
701 | self.upload_id = upload_id |
---|
702 | self.query_count, self.good_query_count, self.bad_query_count = 0,0,0 |
---|
703 | + # Peers that are working normally, but full. |
---|
704 | + self.full_count = 0 |
---|
705 | self.error_count = 0 |
---|
706 | self.num_peers_contacted = 0 |
---|
707 | self.last_failure_msg = None |
---|
708 | hunk ./src/allmydata/immutable/upload.py 291 |
---|
709 | peer = self.readonly_peers.pop() |
---|
710 | assert isinstance(peer, PeerTracker) |
---|
711 | d = peer.query_allocated() |
---|
712 | - d.addCallback(self._handle_existing_response) |
---|
713 | + d.addBoth(self._handle_existing_response, peer.peerid) |
---|
714 | + self.num_peers_contacted += 1 |
---|
715 | + self.query_count += 1 |
---|
716 | + log.msg("asking peer %s for any existing shares for upload id %s" |
---|
717 | + % (idlib.shortnodeid_b2a(peer.peerid), self.upload_id), |
---|
718 | + level=log.NOISY, parent=self._log_parent) |
---|
719 | + if self._status: |
---|
720 | + self._status.set_status("Contacting Peer %s to find " |
---|
721 | + "any existing shares" |
---|
722 | + % idlib.shortnodeid_b2a(peer.peerid)) |
---|
723 | return d |
---|
724 | |
---|
725 | hunk ./src/allmydata/immutable/upload.py 303 |
---|
726 | - def _handle_existing_response(self, (peer, buckets)): |
---|
727 | - for bucket in buckets: |
---|
728 | - if should_add_server(self.preexisting_shares, peer, bucket): |
---|
729 | - self.preexisting_shares[bucket] = peer |
---|
730 | - if self.homeless_shares and bucket in self.homeless_shares: |
---|
731 | - self.homeless_shares.remove(bucket) |
---|
732 | + def _handle_existing_response(self, res, peer): |
---|
733 | + if isinstance(res, failure.Failure): |
---|
734 | + log.msg("%s got error during existing shares check: %s" |
---|
735 | + % (idlib.shortnodeid_b2a(peer), res), |
---|
736 | + level=log.UNUSUAL, parent=self._log_parent) |
---|
737 | + self.error_count += 1 |
---|
738 | + self.bad_query_count += 1 |
---|
739 | + else: |
---|
740 | + buckets = res |
---|
741 | + log.msg("response from peer %s: alreadygot=%s" |
---|
742 | + % (idlib.shortnodeid_b2a(peer), tuple(sorted(buckets))), |
---|
743 | + level=log.NOISY, parent=self._log_parent) |
---|
744 | + for bucket in buckets: |
---|
745 | + if should_add_server(self.preexisting_shares, peer, bucket): |
---|
746 | + self.preexisting_shares[bucket] = peer |
---|
747 | + if self.homeless_shares and bucket in self.homeless_shares: |
---|
748 | + self.homeless_shares.remove(bucket) |
---|
749 | + self.full_count += 1 |
---|
750 | + self.bad_query_count += 1 |
---|
751 | return self._existing_shares() |
---|
752 | |
---|
753 | def _loop(self): |
---|
754 | hunk ./src/allmydata/immutable/upload.py 365 |
---|
755 | items.append((servernum, sharelist)) |
---|
756 | return self._loop() |
---|
757 | else: |
---|
758 | - raise NotEnoughSharesError("shares could only be placed " |
---|
759 | + raise UploadHappinessError("shares could only be placed " |
---|
760 | "on %d servers (%d were requested)" % |
---|
761 | (len(effective_happiness), |
---|
762 | self.servers_of_happiness)) |
---|
763 | hunk ./src/allmydata/immutable/upload.py 424 |
---|
764 | msg = ("placed %d shares out of %d total (%d homeless), " |
---|
765 | "want to place on %d servers, " |
---|
766 | "sent %d queries to %d peers, " |
---|
767 | - "%d queries placed some shares, %d placed none, " |
---|
768 | - "got %d errors" % |
---|
769 | + "%d queries placed some shares, %d placed none " |
---|
770 | + "(of which %d placed none due to the server being" |
---|
771 | + " full and %d placed none due to an error)" % |
---|
772 | (self.total_shares - len(self.homeless_shares), |
---|
773 | self.total_shares, len(self.homeless_shares), |
---|
774 | self.servers_of_happiness, |
---|
775 | hunk ./src/allmydata/immutable/upload.py 432 |
---|
776 | self.query_count, self.num_peers_contacted, |
---|
777 | self.good_query_count, self.bad_query_count, |
---|
778 | - self.error_count)) |
---|
779 | + self.full_count, self.error_count)) |
---|
780 | msg = "peer selection failed for %s: %s" % (self, msg) |
---|
781 | if self.last_failure_msg: |
---|
782 | msg += " (%s)" % (self.last_failure_msg,) |
---|
783 | hunk ./src/allmydata/immutable/upload.py 437 |
---|
784 | log.msg(msg, level=log.UNUSUAL, parent=self._log_parent) |
---|
785 | - if placed_shares: |
---|
786 | - raise NotEnoughSharesError(msg) |
---|
787 | - else: |
---|
788 | - raise NoSharesError(msg) |
---|
789 | + raise UploadHappinessError(msg) |
---|
790 | else: |
---|
791 | # we placed enough to be happy, so we're done |
---|
792 | if self._status: |
---|
793 | hunk ./src/allmydata/immutable/upload.py 451 |
---|
794 | log.msg("%s got error during peer selection: %s" % (peer, res), |
---|
795 | level=log.UNUSUAL, parent=self._log_parent) |
---|
796 | self.error_count += 1 |
---|
797 | + self.bad_query_count += 1 |
---|
798 | self.homeless_shares = list(shares_to_ask) + self.homeless_shares |
---|
799 | if (self.uncontacted_peers |
---|
800 | or self.contacted_peers |
---|
801 | hunk ./src/allmydata/immutable/upload.py 479 |
---|
802 | self.preexisting_shares[s] = peer.peerid |
---|
803 | if s in self.homeless_shares: |
---|
804 | self.homeless_shares.remove(s) |
---|
805 | - progress = True |
---|
806 | |
---|
807 | # the PeerTracker will remember which shares were allocated on |
---|
808 | # that peer. We just have to remember to use them. |
---|
809 | hunk ./src/allmydata/immutable/upload.py 495 |
---|
810 | self.good_query_count += 1 |
---|
811 | else: |
---|
812 | self.bad_query_count += 1 |
---|
813 | + self.full_count += 1 |
---|
814 | |
---|
815 | if still_homeless: |
---|
816 | # In networks with lots of space, this is very unusual and |
---|
817 | hunk ./src/allmydata/interfaces.py 808 |
---|
818 | """ |
---|
819 | |
---|
820 | class NotEnoughSharesError(Exception): |
---|
821 | - """Download was unable to get enough shares, or upload was unable to |
---|
822 | - place 'servers_of_happiness' shares.""" |
---|
823 | + """Download was unable to get enough shares""" |
---|
824 | |
---|
825 | class NoSharesError(Exception): |
---|
826 | hunk ./src/allmydata/interfaces.py 811 |
---|
827 | - """Upload or Download was unable to get any shares at all.""" |
---|
828 | + """Download was unable to get any shares at all.""" |
---|
829 | + |
---|
830 | +class UploadHappinessError(Exception): |
---|
831 | + """Upload was unable to satisfy 'servers_of_happiness'""" |
---|
832 | |
---|
833 | class UnableToFetchCriticalDownloadDataError(Exception): |
---|
834 | """I was unable to fetch some piece of critical data which is supposed to |
---|
835 | } |
---|
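  The new full_count counter separates "the server was simply full" from
  genuine failures. Roughly, the classification after this patch looks like
  this (a condensed, editorial summary of the bookkeeping in the hunks above,
  not a single function from the tree; placed_nothing is a hypothetical flag
  for "no shares were accepted"):

      # Outcome of one peer-selection query:
      if isinstance(res, failure.Failure):
          self.error_count += 1        # the query itself failed
          self.bad_query_count += 1
      elif placed_nothing:
          self.full_count += 1         # server is working, just full
          self.bad_query_count += 1
      else:
          self.good_query_count += 1   # at least one share was placed
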
836 | [Change "UploadHappinessError" to "UploadUnhappinessError" |
---|
837 | Kevan Carstensen <kevan@isnotajoke.com>**20091205043037 |
---|
838 | Ignore-this: 236b64ab19836854af4993bb5c1b221a |
---|
839 | ] { |
---|
840 | replace ./src/allmydata/immutable/encode.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError |
---|
841 | replace ./src/allmydata/immutable/upload.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError |
---|
842 | replace ./src/allmydata/interfaces.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError |
---|
843 | } |
---|
844 | [Alter the error message when an upload fails, per some comments in #778. |
---|
845 | Kevan Carstensen <kevan@isnotajoke.com>**20091230210344 |
---|
846 | Ignore-this: ba97422b2f9737c46abeb828727beb1 |
---|
847 | |
---|
848 | When I first implemented #778, I just altered the error messages to refer to |
---|
849 | servers where they referred to shares. The resulting error messages weren't |
---|
850 | very good. These are a bit better. |
---|
851 | ] { |
---|
852 | hunk ./src/allmydata/immutable/upload.py 200 |
---|
853 | |
---|
854 | def get_shareholders(self, storage_broker, secret_holder, |
---|
855 | storage_index, share_size, block_size, |
---|
856 | - num_segments, total_shares, servers_of_happiness): |
---|
857 | + num_segments, total_shares, needed_shares, |
---|
858 | + servers_of_happiness): |
---|
859 | """ |
---|
860 | @return: (used_peers, already_peers), where used_peers is a set of |
---|
861 | PeerTracker instances that have agreed to hold some shares |
---|
862 | hunk ./src/allmydata/immutable/upload.py 215 |
---|
863 | |
---|
864 | self.total_shares = total_shares |
---|
865 | self.servers_of_happiness = servers_of_happiness |
---|
866 | + self.needed_shares = needed_shares |
---|
867 | |
---|
868 | self.homeless_shares = range(total_shares) |
---|
869 | # self.uncontacted_peers = list() # peers we haven't asked yet |
---|
870 | hunk ./src/allmydata/immutable/upload.py 230 |
---|
871 | # existing shares for this storage index, which we want to know |
---|
872 | # about for accurate servers_of_happiness accounting |
---|
873 | self.readonly_peers = [] |
---|
874 | + # These peers have shares -- any shares -- for our SI. We keep track |
---|
875 | + # of these to write an error message with them later. |
---|
876 | + self.peers_with_shares = [] |
---|
877 | |
---|
878 | peers = storage_broker.get_servers_for_index(storage_index) |
---|
879 | if not peers: |
---|
880 | hunk ./src/allmydata/immutable/upload.py 317 |
---|
881 | self.bad_query_count += 1 |
---|
882 | else: |
---|
883 | buckets = res |
---|
884 | + if buckets: |
---|
885 | + self.peers_with_shares.append(peer) |
---|
886 | log.msg("response from peer %s: alreadygot=%s" |
---|
887 | % (idlib.shortnodeid_b2a(peer), tuple(sorted(buckets))), |
---|
888 | level=log.NOISY, parent=self._log_parent) |
---|
889 | hunk ./src/allmydata/immutable/upload.py 331 |
---|
890 | self.bad_query_count += 1 |
---|
891 | return self._existing_shares() |
---|
892 | |
---|
893 | + def _get_progress_message(self): |
---|
894 | + if not self.homeless_shares: |
---|
895 | + msg = "placed all %d shares, " % (self.total_shares) |
---|
896 | + else: |
---|
897 | + msg = ("placed %d shares out of %d total (%d homeless), " % |
---|
898 | + (self.total_shares - len(self.homeless_shares), |
---|
899 | + self.total_shares, |
---|
900 | + len(self.homeless_shares))) |
---|
901 | + return (msg + "want to place shares on at least %d servers such that " |
---|
902 | + "any %d of them have enough shares to recover the file, " |
---|
903 | + "sent %d queries to %d peers, " |
---|
904 | + "%d queries placed some shares, %d placed none " |
---|
905 | + "(of which %d placed none due to the server being" |
---|
906 | + " full and %d placed none due to an error)" % |
---|
907 | + (self.servers_of_happiness, self.needed_shares, |
---|
908 | + self.query_count, self.num_peers_contacted, |
---|
909 | + self.good_query_count, self.bad_query_count, |
---|
910 | + self.full_count, self.error_count)) |
---|
911 | + |
---|
912 | + |
---|
913 | def _loop(self): |
---|
914 | if not self.homeless_shares: |
---|
915 | effective_happiness = servers_with_unique_shares( |
---|
916 | hunk ./src/allmydata/immutable/upload.py 357 |
---|
917 | self.preexisting_shares, |
---|
918 | self.use_peers) |
---|
919 | if self.servers_of_happiness <= len(effective_happiness): |
---|
920 | - msg = ("placed all %d shares, " |
---|
921 | - "sent %d queries to %d peers, " |
---|
922 | - "%d queries placed some shares, %d placed none, " |
---|
923 | - "got %d errors" % |
---|
924 | - (self.total_shares, |
---|
925 | - self.query_count, self.num_peers_contacted, |
---|
926 | - self.good_query_count, self.bad_query_count, |
---|
927 | - self.error_count)) |
---|
928 | - log.msg("peer selection successful for %s: %s" % (self, msg), |
---|
929 | - parent=self._log_parent) |
---|
930 | + msg = ("peer selection successful for %s: %s" % (self, |
---|
931 | + self._get_progress_message())) |
---|
932 | + log.msg(msg, parent=self._log_parent) |
---|
933 | return (self.use_peers, self.preexisting_shares) |
---|
934 | else: |
---|
935 | delta = self.servers_of_happiness - len(effective_happiness) |
---|
936 | hunk ./src/allmydata/immutable/upload.py 375 |
---|
937 | if delta <= len(self.uncontacted_peers) and \ |
---|
938 | shares_to_spread >= delta: |
---|
939 | # Loop through the allocated shares, removing |
---|
940 | + # one from each server that has more than one and putting |
---|
941 | + # it back into self.homeless_shares until we've done |
---|
942 | + # this delta times. |
---|
943 | items = shares.items() |
---|
944 | while len(self.homeless_shares) < delta: |
---|
945 | servernum, sharelist = items.pop() |
---|
946 | hunk ./src/allmydata/immutable/upload.py 388 |
---|
947 | items.append((servernum, sharelist)) |
---|
948 | return self._loop() |
---|
949 | else: |
---|
950 | - raise UploadUnhappinessError("shares could only be placed " |
---|
951 | - "on %d servers (%d were requested)" % |
---|
952 | - (len(effective_happiness), |
---|
953 | - self.servers_of_happiness)) |
---|
954 | + peer_count = len(list(set(self.peers_with_shares))) |
---|
955 | + # If peer_count < needed_shares, then the second error |
---|
956 | + # message is nonsensical, so we use this one. |
---|
957 | + if peer_count < self.needed_shares: |
---|
958 | + msg = ("shares could only be placed or found on %d " |
---|
959 | + "server(s). " |
---|
960 | + "We were asked to place shares on at least %d " |
---|
961 | + "server(s) such that any %d of them have " |
---|
962 | + "enough shares to recover the file." % |
---|
963 | + (peer_count, |
---|
964 | + self.servers_of_happiness, |
---|
965 | + self.needed_shares)) |
---|
966 | + # Otherwise, if we've placed on at least needed_shares |
---|
967 | + # peers, but there isn't an x-happy subset of those peers |
---|
968 | + # for x < needed_shares, we use this error message. |
---|
969 | + elif len(effective_happiness) < self.needed_shares: |
---|
970 | + msg = ("shares could be placed or found on %d " |
---|
971 | + "server(s), but they are not spread out evenly " |
---|
972 | + "enough to ensure that any %d of these servers " |
---|
973 | + "would have enough shares to recover the file. " |
---|
974 | + "We were asked to place " |
---|
975 | + "shares on at least %d servers such that any " |
---|
976 | + "%d of them have enough shares to recover the " |
---|
977 | + "file." % |
---|
978 | + (peer_count, |
---|
979 | + self.needed_shares, |
---|
980 | + self.servers_of_happiness, |
---|
981 | + self.needed_shares)) |
---|
982 | + # Otherwise, if there is an x-happy subset of peers where |
---|
983 | + # x >= needed_shares, but x < shares_of_happiness, then |
---|
984 | + # we use this message. |
---|
985 | + else: |
---|
986 | + msg = ("shares could only be placed on %d server(s) " |
---|
987 | + "such that any %d of them have enough shares " |
---|
988 | + "to recover the file, but we were asked to use " |
---|
989 | + "at least %d such servers." % |
---|
990 | + (len(effective_happiness), |
---|
991 | + self.needed_shares, |
---|
992 | + self.servers_of_happiness)) |
---|
993 | + raise UploadUnhappinessError(msg) |
---|
994 | |
---|
995 | if self.uncontacted_peers: |
---|
996 | peer = self.uncontacted_peers.pop(0) |
---|
997 | hunk ./src/allmydata/immutable/upload.py 480 |
---|
998 | self.preexisting_shares, |
---|
999 | self.use_peers) |
---|
1000 | if len(effective_happiness) < self.servers_of_happiness: |
---|
1001 | - msg = ("placed %d shares out of %d total (%d homeless), " |
---|
1002 | - "want to place on %d servers, " |
---|
1003 | - "sent %d queries to %d peers, " |
---|
1004 | - "%d queries placed some shares, %d placed none " |
---|
1005 | - "(of which %d placed none due to the server being" |
---|
1006 | - " full and %d placed none due to an error)" % |
---|
1007 | - (self.total_shares - len(self.homeless_shares), |
---|
1008 | - self.total_shares, len(self.homeless_shares), |
---|
1009 | - self.servers_of_happiness, |
---|
1010 | - self.query_count, self.num_peers_contacted, |
---|
1011 | - self.good_query_count, self.bad_query_count, |
---|
1012 | - self.full_count, self.error_count)) |
---|
1013 | - msg = "peer selection failed for %s: %s" % (self, msg) |
---|
1014 | + msg = ("peer selection failed for %s: %s" % (self, |
---|
1015 | + self._get_progress_message())) |
---|
1016 | if self.last_failure_msg: |
---|
1017 | msg += " (%s)" % (self.last_failure_msg,) |
---|
1018 | log.msg(msg, level=log.UNUSUAL, parent=self._log_parent) |
---|
1019 | hunk ./src/allmydata/immutable/upload.py 534 |
---|
1020 | self.use_peers.add(peer) |
---|
1021 | progress = True |
---|
1022 | |
---|
1023 | + if allocated or alreadygot: |
---|
1024 | + self.peers_with_shares.append(peer.peerid) |
---|
1025 | + |
---|
1026 | not_yet_present = set(shares_to_ask) - set(alreadygot) |
---|
1027 | still_homeless = not_yet_present - set(allocated) |
---|
1028 | |
---|
1029 | hunk ./src/allmydata/immutable/upload.py 931 |
---|
1030 | d = peer_selector.get_shareholders(storage_broker, secret_holder, |
---|
1031 | storage_index, |
---|
1032 | share_size, block_size, |
---|
1033 | - num_segments, n, desired) |
---|
1034 | + num_segments, n, k, desired) |
---|
1035 | def _done(res): |
---|
1036 | self._peer_selection_elapsed = time.time() - peer_selection_started |
---|
1037 | return res |
---|
1038 | } |
---|
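  The three unhappiness messages added above are chosen by a small cascade;
  condensed here with the message text elided (the variable names follow the
  hunk):

      peer_count = len(set(self.peers_with_shares))
      if peer_count < self.needed_shares:
          # Not even k distinct servers hold or accept shares.
          msg = "shares could only be placed or found on %d server(s). ..." % peer_count
      elif len(effective_happiness) < self.needed_shares:
          # k servers exist, but the layout is too lopsided for any k of
          # them to reconstruct the file.
          msg = "shares could be placed or found on %d server(s), but ..." % peer_count
      else:
          # A k-happy layout exists; it just falls short of servers_of_happiness.
          msg = "shares could only be placed on %d server(s) such that ..." % len(effective_happiness)
      raise UploadUnhappinessError(msg)
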
1039 | [Fix up the behavior of #778, per reviewers' comments |
---|
1040 | Kevan Carstensen <kevan@isnotajoke.com>**20100215202214 |
---|
1041 | Ignore-this: 14bf3680b77fa1b2dafa85eb22c2ebf9 |
---|
1042 | |
---|
1043 | - Make some important utility functions clearer and more thoroughly |
---|
1044 | documented. |
---|
1045 | - Assert in upload.servers_of_happiness that the buckets attributes |
---|
1046 | of PeerTrackers passed to it are mutually disjoint. |
---|
1047 | - Get rid of some silly non-Pythonisms that I didn't see when I first |
---|
1048 | wrote these patches. |
---|
1049 | - Make sure that should_add_server returns true when queried about a |
---|
1050 | shnum that it doesn't know about yet. |
---|
1051 | - Change Tahoe2PeerSelector.preexisting_shares to map a shareid to a set |
---|
1052 | of peerids, alter dependencies to deal with that. |
---|
1053 | - Remove upload.should_add_servers, because it is no longer necessary |
---|
1054 | - Move upload.shares_of_happiness and upload.shares_by_server to a utility |
---|
1055 | file. |
---|
1056 | - Change some points in Tahoe2PeerSelector. |
---|
1057 | - Compute servers_of_happiness using a bipartite matching algorithm that |
---|
1058 | we know is optimal instead of an ad-hoc greedy algorithm that isn't. |
---|
1059 | - Change servers_of_happiness to just take a sharemap as an argument, |
---|
1060 | change its callers to merge existing_shares and used_peers before |
---|
1061 | calling it. |
---|
1062 | - Change an error message in the encoder to be more appropriate for |
---|
1063 | servers of happiness. |
---|
1064 | |
---|
1065 | ] { |
---|
1066 | hunk ./src/allmydata/immutable/encode.py 10 |
---|
1067 | from allmydata import uri |
---|
1068 | from allmydata.storage.server import si_b2a |
---|
1069 | from allmydata.hashtree import HashTree |
---|
1070 | -from allmydata.util import mathutil, hashutil, base32, log |
---|
1071 | +from allmydata.util import mathutil, hashutil, base32, log, happinessutil |
---|
1072 | from allmydata.util.assertutil import _assert, precondition |
---|
1073 | from allmydata.codec import CRSEncoder |
---|
1074 | from allmydata.interfaces import IEncoder, IStorageBucketWriter, \ |
---|
1075 | hunk ./src/allmydata/immutable/encode.py 201 |
---|
1076 | assert IStorageBucketWriter.providedBy(landlords[k]) |
---|
1077 | self.landlords = landlords.copy() |
---|
1078 | assert isinstance(servermap, dict) |
---|
1079 | + for k in servermap: |
---|
1080 | + assert isinstance(servermap[k], set) |
---|
1081 | self.servermap = servermap.copy() |
---|
1082 | |
---|
1083 | def start(self): |
---|
1084 | hunk ./src/allmydata/immutable/encode.py 489 |
---|
1085 | level=log.UNUSUAL, failure=why) |
---|
1086 | if shareid in self.landlords: |
---|
1087 | self.landlords[shareid].abort() |
---|
1088 | + peerid = self.landlords[shareid].get_peerid() |
---|
1089 | del self.landlords[shareid] |
---|
1090 | hunk ./src/allmydata/immutable/encode.py 491 |
---|
1091 | + if peerid: |
---|
1092 | + self.servermap[shareid].remove(peerid) |
---|
1093 | + if not self.servermap[shareid]: |
---|
1094 | + del self.servermap[shareid] |
---|
1095 | else: |
---|
1096 | # even more UNUSUAL |
---|
1097 | self.log("they weren't in our list of landlords", parent=ln, |
---|
1098 | hunk ./src/allmydata/immutable/encode.py 499 |
---|
1099 | level=log.WEIRD, umid="TQGFRw") |
---|
1100 | - del(self.servermap[shareid]) |
---|
1101 | - servers_left = list(set(self.servermap.values())) |
---|
1102 | - if len(servers_left) < self.servers_of_happiness: |
---|
1103 | - msg = "lost too many servers during upload (still have %d, want %d): %s" % \ |
---|
1104 | - (len(servers_left), |
---|
1105 | - self.servers_of_happiness, why) |
---|
1106 | + happiness = happinessutil.servers_of_happiness(self.servermap) |
---|
1107 | + if happiness < self.servers_of_happiness: |
---|
1108 | + msg = ("lost too many servers during upload " |
---|
1109 | + "(happiness is now %d, but we wanted %d): %s" % |
---|
1110 | + (happiness, |
---|
1111 | + self.servers_of_happiness, why)) |
---|
1112 | raise UploadUnhappinessError(msg) |
---|
1113 | self.log("but we can still continue with %s shares, we'll be happy " |
---|
1114 | hunk ./src/allmydata/immutable/encode.py 507 |
---|
1115 | - "with at least %s" % (len(servers_left), |
---|
1116 | + "with at least %s" % (happiness, |
---|
1117 | self.servers_of_happiness), |
---|
1118 | parent=ln) |
---|
1119 | |
---|
1120 | hunk ./src/allmydata/immutable/encode.py 513 |
---|
1121 | def _gather_responses(self, dl): |
---|
1122 | d = defer.DeferredList(dl, fireOnOneErrback=True) |
---|
1123 | - def _eatNotEnoughSharesError(f): |
---|
1124 | + def _eatUploadUnhappinessError(f): |
---|
1125 | # all exceptions that occur while talking to a peer are handled |
---|
1126 | # in _remove_shareholder. That might raise UploadUnhappinessError, |
---|
1127 | # which will cause the DeferredList to errback but which should |
---|
1128 | hunk ./src/allmydata/immutable/encode.py 523 |
---|
1129 | f.trap(UploadUnhappinessError) |
---|
1130 | return None |
---|
1131 | for d0 in dl: |
---|
1132 | - d0.addErrback(_eatNotEnoughSharesError) |
---|
1133 | + d0.addErrback(_eatUploadUnhappinessError) |
---|
1134 | return d |
---|
1135 | |
---|
1136 | def finish_hashing(self): |
---|
1137 | hunk ./src/allmydata/immutable/layout.py 245 |
---|
1138 | def abort(self): |
---|
1139 | return self._rref.callRemoteOnly("abort") |
---|
1140 | |
---|
1141 | + |
---|
1142 | + def get_peerid(self): |
---|
1143 | + if self._nodeid: |
---|
1144 | + return self._nodeid |
---|
1145 | + return None |
---|
1146 | + |
---|
1147 | class WriteBucketProxy_v2(WriteBucketProxy): |
---|
1148 | fieldsize = 8 |
---|
1149 | fieldstruct = ">Q" |
---|
1150 | hunk ./src/allmydata/immutable/upload.py 16 |
---|
1151 | from allmydata.storage.server import si_b2a |
---|
1152 | from allmydata.immutable import encode |
---|
1153 | from allmydata.util import base32, dictutil, idlib, log, mathutil |
---|
1154 | +from allmydata.util.happinessutil import servers_of_happiness, \ |
---|
1155 | + shares_by_server, merge_peers |
---|
1156 | from allmydata.util.assertutil import precondition |
---|
1157 | from allmydata.util.rrefutil import add_version_to_remote_reference |
---|
1158 | from allmydata.interfaces import IUploadable, IUploader, IUploadResults, \ |
---|
1159 | hunk ./src/allmydata/immutable/upload.py 119 |
---|
1160 | return d |
---|
1161 | |
---|
1162 | def query_allocated(self): |
---|
1163 | - d = self._storageserver.callRemote("get_buckets", |
---|
1164 | - self.storage_index) |
---|
1165 | - return d |
---|
1166 | + return self._storageserver.callRemote("get_buckets", |
---|
1167 | + self.storage_index) |
---|
1168 | |
---|
1169 | def _got_reply(self, (alreadygot, buckets)): |
---|
1170 | #log.msg("%s._got_reply(%s)" % (self, (alreadygot, buckets))) |
---|
1171 | hunk ./src/allmydata/immutable/upload.py 136 |
---|
1172 | self.buckets.update(b) |
---|
1173 | return (alreadygot, set(b.keys())) |
---|
1174 | |
---|
1175 | -def servers_with_unique_shares(existing_shares, used_peers=None): |
---|
1176 | - """ |
---|
1177 | - I accept a dict of shareid -> peerid mappings (and optionally a list |
---|
1178 | - of PeerTracker instances) and return a list of servers that have shares. |
---|
1179 | - """ |
---|
1180 | - servers = [] |
---|
1181 | - existing_shares = existing_shares.copy() |
---|
1182 | - if used_peers: |
---|
1183 | - peerdict = {} |
---|
1184 | - for peer in used_peers: |
---|
1185 | - peerdict.update(dict([(i, peer.peerid) for i in peer.buckets])) |
---|
1186 | - for k in peerdict.keys(): |
---|
1187 | - if existing_shares.has_key(k): |
---|
1188 | - # Prevent overcounting; favor the bucket, and not the |
---|
1189 | - # prexisting share. |
---|
1190 | - del(existing_shares[k]) |
---|
1191 | - peers = list(used_peers.copy()) |
---|
1192 | - # We do this because the preexisting shares list goes by peerid. |
---|
1193 | - peers = [x.peerid for x in peers] |
---|
1194 | - servers.extend(peers) |
---|
1195 | - servers.extend(existing_shares.values()) |
---|
1196 | - return list(set(servers)) |
---|
1197 | - |
---|
1198 | -def shares_by_server(existing_shares): |
---|
1199 | - """ |
---|
1200 | - I accept a dict of shareid -> peerid mappings, and return a dict |
---|
1201 | - of peerid -> shareid mappings |
---|
1202 | - """ |
---|
1203 | - servers = {} |
---|
1204 | - for server in set(existing_shares.values()): |
---|
1205 | - servers[server] = set([x for x in existing_shares.keys() |
---|
1206 | - if existing_shares[x] == server]) |
---|
1207 | - return servers |
---|
1208 | - |
---|
1209 | -def should_add_server(existing_shares, server, bucket): |
---|
1210 | - """ |
---|
1211 | - I tell my caller whether the servers_of_happiness number will be |
---|
1212 | - increased or decreased if a particular server is added as the peer |
---|
1213 | - already holding a particular share. I take a dictionary, a peerid, |
---|
1214 | - and a bucket as arguments, and return a boolean. |
---|
1215 | - """ |
---|
1216 | - old_size = len(servers_with_unique_shares(existing_shares)) |
---|
1217 | - new_candidate = existing_shares.copy() |
---|
1218 | - new_candidate[bucket] = server |
---|
1219 | - new_size = len(servers_with_unique_shares(new_candidate)) |
---|
1220 | - return old_size < new_size |
---|
1221 | |
---|
1222 | class Tahoe2PeerSelector: |
---|
1223 | |
---|
1224 | hunk ./src/allmydata/immutable/upload.py 161 |
---|
1225 | @return: (used_peers, already_peers), where used_peers is a set of |
---|
1226 | PeerTracker instances that have agreed to hold some shares |
---|
1227 | for us (the shnum is stashed inside the PeerTracker), |
---|
1228 | - and already_peers is a dict mapping shnum to a peer |
---|
1229 | - which claims to already have the share. |
---|
1230 | + and already_peers is a dict mapping shnum to a set of peers |
---|
1231 | + which claim to already have the share. |
---|
1232 | """ |
---|
1233 | |
---|
1234 | if self._status: |
---|
1235 | hunk ./src/allmydata/immutable/upload.py 173 |
---|
1236 | self.needed_shares = needed_shares |
---|
1237 | |
---|
1238 | self.homeless_shares = range(total_shares) |
---|
1239 | - # self.uncontacted_peers = list() # peers we haven't asked yet |
---|
1240 | self.contacted_peers = [] # peers worth asking again |
---|
1241 | self.contacted_peers2 = [] # peers that we have asked again |
---|
1242 | self._started_second_pass = False |
---|
1243 | hunk ./src/allmydata/immutable/upload.py 177 |
---|
1244 | self.use_peers = set() # PeerTrackers that have shares assigned to them |
---|
1245 | - self.preexisting_shares = {} # sharenum -> peerid holding the share |
---|
1246 | - # We don't try to allocate shares to these servers, since they've |
---|
1247 | - # said that they're incapable of storing shares of the size that |
---|
1248 | - # we'd want to store. We keep them around because they may have |
---|
1249 | - # existing shares for this storage index, which we want to know |
---|
1250 | - # about for accurate servers_of_happiness accounting |
---|
1251 | - self.readonly_peers = [] |
---|
1252 | - # These peers have shares -- any shares -- for our SI. We keep track |
---|
1253 | - # of these to write an error message with them later. |
---|
1254 | + self.preexisting_shares = {} # shareid => set(peerids) holding shareid |
---|
1255 | + # We don't try to allocate shares to these servers, since they've said |
---|
1256 | + # that they're incapable of storing shares of the size that we'd want |
---|
1257 | + # to store. We keep them around because they may have existing shares |
---|
1258 | + # for this storage index, which we want to know about for accurate |
---|
1259 | + # servers_of_happiness accounting |
---|
1260 | + # (this is eventually a list, but it is initialized later) |
---|
1261 | + self.readonly_peers = None |
---|
1262 | + # These peers have shares -- any shares -- for our SI. We keep |
---|
1263 | + # track of these to write an error message with them later. |
---|
1264 | self.peers_with_shares = [] |
---|
1265 | |
---|
1266 | hunk ./src/allmydata/immutable/upload.py 189 |
---|
1267 | - peers = storage_broker.get_servers_for_index(storage_index) |
---|
1268 | - if not peers: |
---|
1269 | - raise NoServersError("client gave us zero peers") |
---|
1270 | - |
---|
1271 | # this needed_hashes computation should mirror |
---|
1272 | # Encoder.send_all_share_hash_trees. We use an IncompleteHashTree |
---|
1273 | # (instead of a HashTree) because we don't require actual hashing |
---|
1274 | hunk ./src/allmydata/immutable/upload.py 201 |
---|
1275 | num_share_hashes, EXTENSION_SIZE, |
---|
1276 | None) |
---|
1277 | allocated_size = wbp.get_allocated_size() |
---|
1278 | + all_peers = storage_broker.get_servers_for_index(storage_index) |
---|
1279 | + if not all_peers: |
---|
1280 | + raise NoServersError("client gave us zero peers") |
---|
1281 | |
---|
1282 | # filter the list of peers according to which ones can accomodate |
---|
1283 | # this request. This excludes older peers (which used a 4-byte size |
---|
1284 | hunk ./src/allmydata/immutable/upload.py 213 |
---|
1285 | (peerid, conn) = peer |
---|
1286 | v1 = conn.version["http://allmydata.org/tahoe/protocols/storage/v1"] |
---|
1287 | return v1["maximum-immutable-share-size"] |
---|
1288 | - new_peers = [peer for peer in peers |
---|
1289 | - if _get_maxsize(peer) >= allocated_size] |
---|
1290 | - old_peers = list(set(peers).difference(set(new_peers))) |
---|
1291 | - peers = new_peers |
---|
1292 | + writable_peers = [peer for peer in all_peers |
---|
1293 | + if _get_maxsize(peer) >= allocated_size] |
---|
1294 | + readonly_peers = set(all_peers) - set(writable_peers) |
---|
1295 | |
---|
1296 | # decide upon the renewal/cancel secrets, to include them in the |
---|
1297 | # allocate_buckets query. |
---|
1298 | hunk ./src/allmydata/immutable/upload.py 227 |
---|
1299 | file_cancel_secret = file_cancel_secret_hash(client_cancel_secret, |
---|
1300 | storage_index) |
---|
1301 | def _make_trackers(peers): |
---|
1302 | - return [ PeerTracker(peerid, conn, |
---|
1303 | - share_size, block_size, |
---|
1304 | - num_segments, num_share_hashes, |
---|
1305 | - storage_index, |
---|
1306 | - bucket_renewal_secret_hash(file_renewal_secret, |
---|
1307 | - peerid), |
---|
1308 | - bucket_cancel_secret_hash(file_cancel_secret, |
---|
1309 | - peerid)) |
---|
1310 | + return [PeerTracker(peerid, conn, |
---|
1311 | + share_size, block_size, |
---|
1312 | + num_segments, num_share_hashes, |
---|
1313 | + storage_index, |
---|
1314 | + bucket_renewal_secret_hash(file_renewal_secret, |
---|
1315 | + peerid), |
---|
1316 | + bucket_cancel_secret_hash(file_cancel_secret, |
---|
1317 | + peerid)) |
---|
1318 | for (peerid, conn) in peers] |
---|
1319 | hunk ./src/allmydata/immutable/upload.py 236 |
---|
1320 | - self.uncontacted_peers = _make_trackers(peers) |
---|
1321 | - self.readonly_peers = _make_trackers(old_peers) |
---|
1322 | - # Talk to the readonly servers to get an idea of what servers |
---|
1323 | - # have what shares (if any) for this storage index |
---|
1324 | + self.uncontacted_peers = _make_trackers(writable_peers) |
---|
1325 | + self.readonly_peers = _make_trackers(readonly_peers) |
---|
1326 | + # We now ask peers that can't hold any new shares about existing |
---|
1327 | + # shares that they might have for our SI. Once this is done, we |
---|
1328 | + # start placing the shares that we haven't already accounted |
---|
1329 | + # for. |
---|
1330 | d = defer.maybeDeferred(self._existing_shares) |
---|
1331 | d.addCallback(lambda ign: self._loop()) |
---|
1332 | return d |
---|
1333 | hunk ./src/allmydata/immutable/upload.py 247 |
---|
1334 | |
---|
1335 | def _existing_shares(self): |
---|
1336 | + """ |
---|
1337 | + I loop through the list of peers that aren't accepting any new |
---|
1338 | + shares for this upload, asking each of them to tell me about the |
---|
1339 | + shares they already have for this upload's SI. |
---|
1340 | + """ |
---|
1341 | if self.readonly_peers: |
---|
1342 | peer = self.readonly_peers.pop() |
---|
1343 | assert isinstance(peer, PeerTracker) |
---|
1344 | hunk ./src/allmydata/immutable/upload.py 269 |
---|
1345 | return d |
---|
1346 | |
---|
1347 | def _handle_existing_response(self, res, peer): |
---|
1348 | + """ |
---|
1349 | + I handle responses to the queries sent by |
---|
1350 | + Tahoe2PeerSelector._existing_shares. |
---|
1351 | + """ |
---|
1352 | if isinstance(res, failure.Failure): |
---|
1353 | log.msg("%s got error during existing shares check: %s" |
---|
1354 | % (idlib.shortnodeid_b2a(peer), res), |
---|
1355 | hunk ./src/allmydata/immutable/upload.py 287 |
---|
1356 | % (idlib.shortnodeid_b2a(peer), tuple(sorted(buckets))), |
---|
1357 | level=log.NOISY, parent=self._log_parent) |
---|
1358 | for bucket in buckets: |
---|
1359 | - if should_add_server(self.preexisting_shares, peer, bucket): |
---|
1360 | - self.preexisting_shares[bucket] = peer |
---|
1361 | - if self.homeless_shares and bucket in self.homeless_shares: |
---|
1362 | - self.homeless_shares.remove(bucket) |
---|
1363 | + self.preexisting_shares.setdefault(bucket, set()).add(peer) |
---|
1364 | + if self.homeless_shares and bucket in self.homeless_shares: |
---|
1365 | + self.homeless_shares.remove(bucket) |
---|
1366 | self.full_count += 1 |
---|
1367 | self.bad_query_count += 1 |
---|
1368 | return self._existing_shares() |
---|
1369 | hunk ./src/allmydata/immutable/upload.py 316 |
---|
1370 | |
---|
1371 | def _loop(self): |
---|
1372 | if not self.homeless_shares: |
---|
1373 | - effective_happiness = servers_with_unique_shares( |
---|
1374 | - self.preexisting_shares, |
---|
1375 | - self.use_peers) |
---|
1376 | - if self.servers_of_happiness <= len(effective_happiness): |
---|
1377 | + merged = merge_peers(self.preexisting_shares, self.use_peers) |
---|
1378 | + effective_happiness = servers_of_happiness(merged) |
---|
1379 | + if self.servers_of_happiness <= effective_happiness: |
---|
1380 | msg = ("peer selection successful for %s: %s" % (self, |
---|
1381 | self._get_progress_message())) |
---|
1382 | log.msg(msg, parent=self._log_parent) |
---|
1383 | hunk ./src/allmydata/immutable/upload.py 324 |
---|
1384 | return (self.use_peers, self.preexisting_shares) |
---|
1385 | else: |
---|
1386 | - delta = self.servers_of_happiness - len(effective_happiness) |
---|
1387 | + # We're not okay right now, but maybe we can fix it by
---|
1388 | + # redistributing some shares. In cases where one or two
---|
1389 | + # servers have, before the upload, all or most of the
---|
1390 | + # shares for a given SI, this can work by allowing _loop
---|
1391 | + # a chance to spread those out over the other peers.
---|
1392 | + delta = self.servers_of_happiness - effective_happiness |
---|
1393 | shares = shares_by_server(self.preexisting_shares) |
---|
1394 | # Each server in shares maps to a set of shares stored on it. |
---|
1395 | # Since we want to keep at least one share on each server |
---|
1396 | hunk ./src/allmydata/immutable/upload.py 341 |
---|
1397 | in shares.items()]) |
---|
1398 | if delta <= len(self.uncontacted_peers) and \ |
---|
1399 | shares_to_spread >= delta: |
---|
1400 | - # Loop through the allocated shares, removing |
---|
1401 | - # one from each server that has more than one and putting |
---|
1402 | - # it back into self.homeless_shares until we've done |
---|
1403 | - # this delta times. |
---|
1404 | items = shares.items() |
---|
1405 | while len(self.homeless_shares) < delta: |
---|
1406 | hunk ./src/allmydata/immutable/upload.py 343 |
---|
1407 | - servernum, sharelist = items.pop() |
---|
1408 | + # Loop through the allocated shares, removing |
---|
1409 | + # one from each server that has more than one |
---|
1410 | + # and putting it back into self.homeless_shares |
---|
1411 | + # until we've done this delta times. |
---|
1412 | + server, sharelist = items.pop() |
---|
1413 | if len(sharelist) > 1: |
---|
1414 | share = sharelist.pop() |
---|
1415 | self.homeless_shares.append(share) |
---|
1416 | hunk ./src/allmydata/immutable/upload.py 351 |
---|
1417 | - del(self.preexisting_shares[share]) |
---|
1418 | - items.append((servernum, sharelist)) |
---|
1419 | + self.preexisting_shares[share].remove(server) |
---|
1420 | + if not self.preexisting_shares[share]: |
---|
1421 | + del self.preexisting_shares[share] |
---|
1422 | + items.append((server, sharelist)) |
---|
1423 | return self._loop() |
---|
1424 | else: |
---|
1425 | hunk ./src/allmydata/immutable/upload.py 357 |
---|
1426 | + # Redistribution won't help us; fail. |
---|
1427 | peer_count = len(list(set(self.peers_with_shares))) |
---|
1428 | # If peer_count < needed_shares, then the second error |
---|
1429 | # message is nonsensical, so we use this one. |
---|
1430 | hunk ./src/allmydata/immutable/upload.py 373 |
---|
1431 | # Otherwise, if we've placed on at least needed_shares |
---|
1432 | # peers, but there isn't an x-happy subset of those peers |
---|
1433 | # for x < needed_shares, we use this error message. |
---|
1434 | - elif len(effective_happiness) < self.needed_shares: |
---|
1435 | + elif effective_happiness < self.needed_shares: |
---|
1436 | msg = ("shares could be placed or found on %d " |
---|
1437 | "server(s), but they are not spread out evenly " |
---|
1438 | "enough to ensure that any %d of these servers " |
---|
1439 | hunk ./src/allmydata/immutable/upload.py 387 |
---|
1440 | self.servers_of_happiness, |
---|
1441 | self.needed_shares)) |
---|
1442 | # Otherwise, if there is an x-happy subset of peers where |
---|
1443 | - # x >= needed_shares, but x < shares_of_happiness, then |
---|
1444 | + # x >= needed_shares, but x < servers_of_happiness, then |
---|
1445 | # we use this message. |
---|
1446 | else: |
---|
1447 | msg = ("shares could only be placed on %d server(s) " |
---|
1448 | hunk ./src/allmydata/immutable/upload.py 394 |
---|
1449 | "such that any %d of them have enough shares " |
---|
1450 | "to recover the file, but we were asked to use " |
---|
1451 | "at least %d such servers." % |
---|
1452 | - (len(effective_happiness), |
---|
1453 | + (effective_happiness, |
---|
1454 | self.needed_shares, |
---|
1455 | self.servers_of_happiness)) |
---|
1456 | raise UploadUnhappinessError(msg) |
---|
1457 | hunk ./src/allmydata/immutable/upload.py 446 |
---|
1458 | else: |
---|
1459 | # no more peers. If we haven't placed enough shares, we fail. |
---|
1460 | placed_shares = self.total_shares - len(self.homeless_shares) |
---|
1461 | - effective_happiness = servers_with_unique_shares( |
---|
1462 | - self.preexisting_shares, |
---|
1463 | - self.use_peers) |
---|
1464 | - if len(effective_happiness) < self.servers_of_happiness: |
---|
1465 | + merged = merge_peers(self.preexisting_shares, self.use_peers) |
---|
1466 | + effective_happiness = servers_of_happiness(merged) |
---|
1467 | + if effective_happiness < self.servers_of_happiness: |
---|
1468 | msg = ("peer selection failed for %s: %s" % (self, |
---|
1469 | self._get_progress_message())) |
---|
1470 | if self.last_failure_msg: |
---|
1471 | hunk ./src/allmydata/immutable/upload.py 491 |
---|
1472 | level=log.NOISY, parent=self._log_parent) |
---|
1473 | progress = False |
---|
1474 | for s in alreadygot: |
---|
1475 | - if should_add_server(self.preexisting_shares, |
---|
1476 | - peer.peerid, s): |
---|
1477 | - self.preexisting_shares[s] = peer.peerid |
---|
1478 | - if s in self.homeless_shares: |
---|
1479 | - self.homeless_shares.remove(s) |
---|
1480 | + self.preexisting_shares.setdefault(s, set()).add(peer.peerid) |
---|
1481 | + if s in self.homeless_shares: |
---|
1482 | + self.homeless_shares.remove(s) |
---|
1483 | |
---|
1484 | # the PeerTracker will remember which shares were allocated on |
---|
1485 | # that peer. We just have to remember to use them. |
---|
1486 | hunk ./src/allmydata/immutable/upload.py 908 |
---|
1487 | def set_shareholders(self, (used_peers, already_peers), encoder): |
---|
1488 | """ |
---|
1489 | @param used_peers: a sequence of PeerTracker objects |
---|
1490 | - @paran already_peers: a dict mapping sharenum to a peerid that |
---|
1491 | - claims to already have this share |
---|
1492 | + @param already_peers: a dict mapping sharenum to a set of peerids
---|
1493 | + that claim to already have this share |
---|
1494 | """ |
---|
1495 | self.log("_send_shares, used_peers is %s" % (used_peers,)) |
---|
1496 | # record already-present shares in self._results |
---|
1497 | hunk ./src/allmydata/immutable/upload.py 924 |
---|
1498 | buckets.update(peer.buckets) |
---|
1499 | for shnum in peer.buckets: |
---|
1500 | self._peer_trackers[shnum] = peer |
---|
1501 | - servermap[shnum] = peer.peerid |
---|
1502 | + servermap.setdefault(shnum, set()).add(peer.peerid) |
---|
1503 | assert len(buckets) == sum([len(peer.buckets) for peer in used_peers]) |
---|
1504 | encoder.set_shareholders(buckets, servermap) |
---|
1505 | |
---|
1506 | hunk ./src/allmydata/interfaces.py 1348 |
---|
1507 | must be a dictionary that maps share number (an integer ranging from |
---|
1508 | 0 to n-1) to an instance that provides IStorageBucketWriter. |
---|
1509 | 'servermap' is a dictionary that maps share number (as defined above) |
---|
1510 | - to a peerid. This must be performed before start() can be called.""" |
---|
1511 | + to a set of peerids. This must be performed before start() can be |
---|
1512 | + called.""" |
---|
1513 | |
---|
1514 | def start(): |
---|
1515 | """Begin the encode/upload process. This involves reading encrypted |
---|
1516 | addfile ./src/allmydata/util/happinessutil.py |
---|
1517 | hunk ./src/allmydata/util/happinessutil.py 1 |
---|
1518 | +""" |
---|
1519 | +I contain utilities useful for calculating servers_of_happiness |
---|
1520 | +""" |
---|
1521 | + |
---|
1522 | +def shares_by_server(servermap): |
---|
1523 | + """ |
---|
1524 | + I accept a dict of shareid -> set(peerid) mappings, and return a |
---|
1525 | + dict of peerid -> set(shareid) mappings. My argument is a dictionary |
---|
1526 | + with sets of peers, indexed by shares, and I transform that into a |
---|
1527 | + dictionary of sets of shares, indexed by peerids. |
---|
1528 | + """ |
---|
1529 | + ret = {} |
---|
1530 | + for shareid, peers in servermap.iteritems(): |
---|
1531 | + assert isinstance(peers, set) |
---|
1532 | + for peerid in peers: |
---|
1533 | + ret.setdefault(peerid, set()).add(shareid) |
---|
1534 | + return ret |
---|
1535 | + |
---|
1536 | +def merge_peers(servermap, used_peers=None): |
---|
1537 | + """ |
---|
1538 | + I accept a dict of shareid -> set(peerid) mappings, and optionally a |
---|
1539 | + set of PeerTrackers. If no set of PeerTrackers is provided, I return |
---|
1540 | + my first argument unmodified. Otherwise, I update a copy of my first |
---|
1541 | + argument to include the shareid -> peerid mappings implied in the |
---|
1542 | + set of PeerTrackers, returning the resulting dict. |
---|
1543 | + """ |
---|
1544 | + if not used_peers: |
---|
1545 | + return servermap |
---|
1546 | + |
---|
1547 | + assert(isinstance(servermap, dict)) |
---|
1548 | + assert(isinstance(used_peers, set)) |
---|
1549 | + |
---|
1550 | + # Since we mutate servermap, and are called outside of a |
---|
1551 | + # context where it is okay to do that, make a copy of servermap and |
---|
1552 | + # work with it. |
---|
1553 | + servermap = servermap.copy() |
---|
1554 | + for peer in used_peers: |
---|
1555 | + for shnum in peer.buckets: |
---|
1556 | + servermap.setdefault(shnum, set()).add(peer.peerid) |
---|
1557 | + return servermap |
---|
1558 | + |
---|
1559 | +def servers_of_happiness(sharemap): |
---|
1560 | + """ |
---|
1561 | + I accept 'sharemap', a dict of shareid -> set(peerid) mappings. I |
---|
1562 | + return the 'servers_of_happiness' number that sharemap results in. |
---|
1563 | + |
---|
1564 | + To calculate the 'servers_of_happiness' number for the sharemap, I |
---|
1565 | + construct a bipartite graph with servers in one partition of vertices |
---|
1566 | + and shares in the other, and with an edge between a server s and a share t |
---|
1567 | + if s is to store t. I then compute the size of a maximum matching in |
---|
1568 | + the resulting graph; this is then returned as the 'servers_of_happiness' |
---|
1569 | + for my arguments. |
---|
1570 | + |
---|
1571 | + For example, consider the following layout: |
---|
1572 | + |
---|
1573 | + server 1: shares 1, 2, 3, 4 |
---|
1574 | + server 2: share 6 |
---|
1575 | + server 3: share 3 |
---|
1576 | + server 4: share 4 |
---|
1577 | + server 5: share 2 |
---|
1578 | + |
---|
1579 | + From this, we can construct the following graph: |
---|
1580 | + |
---|
1581 | + L = {server 1, server 2, server 3, server 4, server 5} |
---|
1582 | + R = {share 1, share 2, share 3, share 4, share 6} |
---|
1583 | + V = L U R |
---|
1584 | + E = {(server 1, share 1), (server 1, share 2), (server 1, share 3), |
---|
1585 | + (server 1, share 4), (server 2, share 6), (server 3, share 3), |
---|
1586 | + (server 4, share 4), (server 5, share 2)} |
---|
1587 | + G = (V, E) |
---|
1588 | + |
---|
1589 | + Note that G is bipartite since every edge in E has one endpoint in L
---|
1590 | + and one endpoint in R. |
---|
1591 | + |
---|
1592 | + A matching in a graph G is a subset M of E such that, for any vertex |
---|
1593 | + v in V, v is incident to at most one edge of M. A maximum matching |
---|
1594 | + in G is a matching that is no smaller than any other matching. For |
---|
1595 | + this graph, a matching of cardinality 5 is: |
---|
1596 | + |
---|
1597 | + M = {(server 1, share 1), (server 2, share 6), |
---|
1598 | + (server 3, share 3), (server 4, share 4), |
---|
1599 | + (server 5, share 2)} |
---|
1600 | + |
---|
1601 | + Since G is bipartite, and since |L| = 5, we cannot have an M' such |
---|
1602 | + that |M'| > |M|. Then M is a maximum matching in G. Intuitively, and |
---|
1603 | + as long as k <= 5, we can see that the layout above has |
---|
1604 | + servers_of_happiness = 5, which matches the results here. |
---|
1605 | + """ |
---|
1606 | + if sharemap == {}: |
---|
1607 | + return 0 |
---|
1608 | + sharemap = shares_by_server(sharemap) |
---|
1609 | + graph = flow_network_for(sharemap) |
---|
1610 | + # This is an implementation of the Ford-Fulkerson method for finding |
---|
1611 | + # a maximum flow in a flow network applied to a bipartite graph. |
---|
1612 | + # Specifically, it is the Edmonds-Karp algorithm, since it uses a |
---|
1613 | + # BFS to find the shortest augmenting path at each iteration, if one |
---|
1614 | + # exists. |
---|
1615 | + # |
---|
1616 | + # The implementation here is an adaptation of an algorithm described in
---|
1617 | + # "Introduction to Algorithms", Cormen et al, 2nd ed., pp 658-662. |
---|
1618 | + dim = len(graph) |
---|
1619 | + flow_function = [[0 for sh in xrange(dim)] for s in xrange(dim)] |
---|
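A standalone sketch of the servermap bookkeeping performed by the encode.py
hunks above when a shareholder is lost. The shareids and peerids here are made
up; the real code works on self.servermap and self.landlords and then asks
happinessutil.servers_of_happiness whether the upload can still succeed.

    servermap = {0: set(["peer A"]), 1: set(["peer A", "peer B"])}

    def remove_shareholder(servermap, shareid, peerid):
        # Drop this peer from the share's entry; once no peer holds the
        # share, forget the share entirely so that happiness is recomputed
        # over the servers that actually remain.
        servermap[shareid].remove(peerid)
        if not servermap[shareid]:
            del servermap[shareid]

    remove_shareholder(servermap, 0, "peer A")
    # servermap == {1: set(["peer A", "peer B"])}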
1620 | + residual_graph, residual_function = residual_network(graph, flow_function) |
---|
1621 | + while augmenting_path_for(residual_graph): |
---|
1622 | + path = augmenting_path_for(residual_graph) |
---|
1623 | + # Delta is the largest amount that we can increase flow across |
---|
1624 | + # all of the edges in path. Because of the way that the residual |
---|
1625 | + # function is constructed, f[u][v] for a particular edge (u, v) |
---|
1626 | + # is the amount of unused capacity on that edge. Taking the |
---|
1627 | + # minimum of a list of those values for each edge in the |
---|
1628 | + # augmenting path gives us our delta. |
---|
1629 | + delta = min(map(lambda (u, v): residual_function[u][v], path)) |
---|
1630 | + for (u, v) in path: |
---|
1631 | + flow_function[u][v] += delta |
---|
1632 | + flow_function[v][u] -= delta |
---|
1633 | + residual_graph, residual_function = residual_network(graph, |
---|
1634 | + flow_function) |
---|
1635 | + num_servers = len(sharemap) |
---|
1636 | + # The value of a flow is the total flow out of the source vertex |
---|
1637 | + # (vertex 0, in our graph). We could just as well sum across all of |
---|
1638 | + # f[0], but we know that vertex 0 only has edges to the servers in |
---|
1639 | + # our graph, so we can stop after summing flow across those. The |
---|
1640 | + # value of a flow computed in this way is the size of a maximum |
---|
1641 | + # matching on the bipartite graph described above. |
---|
1642 | + return sum([flow_function[0][v] for v in xrange(1, num_servers+1)]) |
---|
1643 | + |
---|
1644 | +def flow_network_for(sharemap): |
---|
1645 | + """ |
---|
1646 | + I take my argument, a dict of peerid -> set(shareid) mappings, and |
---|
1647 | + turn it into a flow network suitable for use with Edmonds-Karp. I |
---|
1648 | + then return the adjacency list representation of that network. |
---|
1649 | + |
---|
1650 | + Specifically, I build G = (V, E), where: |
---|
1651 | + V = { peerid in sharemap } U { shareid in sharemap } U {s, t} |
---|
1652 | + E = {(s, peerid) for each peerid} |
---|
1653 | + U {(peerid, shareid) if peerid is to store shareid } |
---|
1654 | + U {(shareid, t) for each shareid} |
---|
1655 | + |
---|
1656 | + s and t will be source and sink nodes when my caller starts treating |
---|
1657 | + the graph I return like a flow network. Without s and t, the |
---|
1658 | + returned graph is bipartite. |
---|
1659 | + """ |
---|
1660 | + # Servers don't have integral identifiers, and we can't make any |
---|
1661 | + # assumptions about the way shares are indexed -- it's possible that |
---|
1662 | + # there are missing shares, for example. So before making a graph, |
---|
1663 | + # we re-index so that all of our vertices have integral indices, and |
---|
1664 | + # that there aren't any holes. We start indexing at 1, so that we |
---|
1665 | + # can add a source node at index 0. |
---|
1666 | + sharemap, num_shares = reindex(sharemap, base_index=1) |
---|
1667 | + num_servers = len(sharemap) |
---|
1668 | + graph = [] # index -> [index], an adjacency list |
---|
1669 | + # Add an entry at the top (index 0) that has an edge to every server |
---|
1670 | + # in sharemap |
---|
1671 | + graph.append(sharemap.keys()) |
---|
1672 | + # For each server, add an entry that has an edge to every share that it |
---|
1673 | + # contains (or will contain). |
---|
1674 | + for k in sharemap: |
---|
1675 | + graph.append(sharemap[k]) |
---|
1676 | + # For each share, add an entry that has an edge to the sink. |
---|
1677 | + sink_num = num_servers + num_shares + 1 |
---|
1678 | + for i in xrange(num_shares): |
---|
1679 | + graph.append([sink_num]) |
---|
1680 | + # Add an empty entry for the sink, which has no outbound edges. |
---|
1681 | + graph.append([]) |
---|
1682 | + return graph |
---|
1683 | + |
---|
1684 | +def reindex(sharemap, base_index): |
---|
1685 | + """ |
---|
1686 | + Given sharemap, I map peerids and shareids to integers that don't |
---|
1687 | + conflict with each other, so they're useful as indices in a graph. I |
---|
1688 | + return a sharemap that is reindexed appropriately, and also the |
---|
1689 | + number of distinct shares in the resulting sharemap as a convenience |
---|
1690 | + for my caller. base_index tells me where to start indexing. |
---|
1691 | + """ |
---|
1692 | + shares = {} # shareid -> vertex index |
---|
1693 | + num = base_index |
---|
1694 | + ret = {} # peerid -> [shareid], a reindexed sharemap. |
---|
1695 | + # Number the servers first |
---|
1696 | + for k in sharemap: |
---|
1697 | + ret[num] = sharemap[k] |
---|
1698 | + num += 1 |
---|
1699 | + # Number the shares |
---|
1700 | + for k in ret: |
---|
1701 | + for shnum in ret[k]: |
---|
1702 | + if not shares.has_key(shnum): |
---|
1703 | + shares[shnum] = num |
---|
1704 | + num += 1 |
---|
1705 | + ret[k] = map(lambda x: shares[x], ret[k]) |
---|
1706 | + return (ret, len(shares)) |
---|
1707 | + |
---|
1708 | +def residual_network(graph, f): |
---|
1709 | + """ |
---|
1710 | + I return the residual network and residual capacity function of the |
---|
1711 | + flow network represented by my graph and f arguments. graph is a |
---|
1712 | + flow network in adjacency-list form, and f is a flow in graph. |
---|
1713 | + """ |
---|
1714 | + new_graph = [[] for i in xrange(len(graph))] |
---|
1715 | + cf = [[0 for s in xrange(len(graph))] for sh in xrange(len(graph))] |
---|
1716 | + for i in xrange(len(graph)): |
---|
1717 | + for v in graph[i]: |
---|
1718 | + if f[i][v] == 1: |
---|
1719 | + # We add an edge (v, i) with cf[v,i] = 1. This means |
---|
1720 | + # that we can remove 1 unit of flow from the edge (i, v) |
---|
1721 | + new_graph[v].append(i) |
---|
1722 | + cf[v][i] = 1 |
---|
1723 | + cf[i][v] = -1 |
---|
1724 | + else: |
---|
1725 | + # We add the edge (i, v), since we're not using it right |
---|
1726 | + # now. |
---|
1727 | + new_graph[i].append(v) |
---|
1728 | + cf[i][v] = 1 |
---|
1729 | + cf[v][i] = -1 |
---|
1730 | + return (new_graph, cf) |
---|
1731 | + |
---|
1732 | +def augmenting_path_for(graph): |
---|
1733 | + """ |
---|
1734 | + I return an augmenting path, if there is one, from the source node |
---|
1735 | + to the sink node in the flow network represented by my graph argument. |
---|
1736 | + If there is no augmenting path, I return False. I assume that the |
---|
1737 | + source node is at index 0 of graph, and the sink node is at the last |
---|
1738 | + index. I also assume that graph is a flow network in adjacency list |
---|
1739 | + form. |
---|
1740 | + """ |
---|
1741 | + bfs_tree = bfs(graph, 0) |
---|
1742 | + if bfs_tree[len(graph) - 1]: |
---|
1743 | + n = len(graph) - 1 |
---|
1744 | + path = [] # [(u, v)], where u and v are vertices in the graph |
---|
1745 | + while n != 0: |
---|
1746 | + path.insert(0, (bfs_tree[n], n)) |
---|
1747 | + n = bfs_tree[n] |
---|
1748 | + return path |
---|
1749 | + return False |
---|
1750 | + |
---|
1751 | +def bfs(graph, s): |
---|
1752 | + """ |
---|
1753 | + Perform a BFS on graph starting at s, where graph is a graph in |
---|
1754 | + adjacency list form, and s is a node in graph. I return the |
---|
1755 | + predecessor table that the BFS generates. |
---|
1756 | + """ |
---|
1757 | + # This is an adaptation of the BFS described in "Introduction to |
---|
1758 | + # Algorithms", Cormen et al, 2nd ed., p. 532. |
---|
1759 | + # WHITE vertices are those that we haven't seen or explored yet. |
---|
1760 | + WHITE = 0 |
---|
1761 | + # GRAY vertices are those we have seen, but haven't explored yet |
---|
1762 | + GRAY = 1 |
---|
1763 | + # BLACK vertices are those we have seen and explored |
---|
1764 | + BLACK = 2 |
---|
1765 | + color = [WHITE for i in xrange(len(graph))] |
---|
1766 | + predecessor = [None for i in xrange(len(graph))] |
---|
1767 | + distance = [-1 for i in xrange(len(graph))] |
---|
1768 | + queue = [s] # vertices that we haven't explored yet. |
---|
1769 | + color[s] = GRAY |
---|
1770 | + distance[s] = 0 |
---|
1771 | + while queue: |
---|
1772 | + n = queue.pop(0) |
---|
1773 | + for v in graph[n]: |
---|
1774 | + if color[v] == WHITE: |
---|
1775 | + color[v] = GRAY |
---|
1776 | + distance[v] = distance[n] + 1 |
---|
1777 | + predecessor[v] = n |
---|
1778 | + queue.append(v) |
---|
1779 | + color[n] = BLACK |
---|
1780 | + return predecessor |
---|
1781 | } |
---|
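A usage sketch of the new matching-based calculation, reproducing the worked
example from the servers_of_happiness docstring above. It assumes a tree with
this patch applied, so that allmydata.util.happinessutil is importable; the
"server 1" .. "server 5" strings are placeholder peerids.

    from allmydata.util.happinessutil import servers_of_happiness

    # shareid -> set(peerid), matching the layout in the docstring:
    # server 1 holds shares 1-4, servers 2-5 hold one share each.
    sharemap = {
        1: set(["server 1"]),
        2: set(["server 1", "server 5"]),
        3: set(["server 1", "server 3"]),
        4: set(["server 1", "server 4"]),
        6: set(["server 2"]),
    }
    assert servers_of_happiness(sharemap) == 5  # size of a maximum matching
    assert servers_of_happiness({}) == 0        # empty sharemap is defined as 0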
1782 | |
---|
1783 | Context: |
---|
1784 | |
---|
1785 | [web/storage.py: display total-seen on the last-complete-cycle line. For #940. |
---|
1786 | Brian Warner <warner@lothar.com>**20100208002010 |
---|
1787 | Ignore-this: c0ed860f3e9628d3171d2b055d96c5aa |
---|
1788 | ] |
---|
1789 | [adding pycrypto to the auto dependencies |
---|
1790 | secorp@allmydata.com**20100206054314 |
---|
1791 | Ignore-this: b873fc00a6a5b001d30d479e6053cf2f |
---|
1792 | ] |
---|
1793 | [docs running.html - "tahoe run ." does not work with the current installation, replaced with "tahoe start ." |
---|
1794 | secorp@allmydata.com**20100206165320 |
---|
1795 | Ignore-this: fdb2dcb0e417d303cd43b1951a4f8c03 |
---|
1796 | ] |
---|
1797 | [code coverage: replace figleaf with coverage.py, should work on py2.6 now. |
---|
1798 | Brian Warner <warner@lothar.com>**20100203165421 |
---|
1799 | Ignore-this: 46ab590360be6a385cb4fc4e68b6b42c |
---|
1800 | |
---|
1801 | It still lacks the right HTML report (the builtin report is very pretty, but |
---|
1802 | lacks the "lines uncovered" numbers that I want), and the half-finished |
---|
1803 | delta-from-last-run measurements. |
---|
1804 | ] |
---|
1805 | [More comprehensive changes and ticket references for NEWS |
---|
1806 | david-sarah@jacaranda.org**20100202061256 |
---|
1807 | Ignore-this: 696cf0106e8a7fd388afc5b55fba8a1b |
---|
1808 | ] |
---|
1809 | [docs: install.html: link into Python 2.5.5 download page |
---|
1810 | zooko@zooko.com**20100202065852 |
---|
1811 | Ignore-this: 1a9471b8175b7de5741d8445a7ede29d |
---|
1812 | ] |
---|
1813 | [TAG allmydata-tahoe-1.6.0 |
---|
1814 | zooko@zooko.com**20100202061125 |
---|
1815 | Ignore-this: dee6ade7ac1452cf5d1d9c69a8146d84 |
---|
1816 | ] |
---|
1817 | [docs: install.html: recommend Python 2.5 (because I can build extension modules for it with mingw), architecture.txt: point out that our Proof of Retrievability feature is client-side-only |
---|
1818 | zooko@zooko.com**20100202053842 |
---|
1819 | Ignore-this: e33fd413a91771c77b17d7de0f215bea |
---|
1820 | ] |
---|
1821 | [architecture.txt: remove trailing whitespace, wrap lines: no content changes |
---|
1822 | Brian Warner <warner@lothar.com>**20100202055304 |
---|
1823 | Ignore-this: 1662f37d1162858ac2619db27bcc411f |
---|
1824 | ] |
---|
1825 | [docs: a couple of small edits to release notes (thanks Peter) |
---|
1826 | zooko@zooko.com**20100202054832 |
---|
1827 | Ignore-this: 1d0963c43ff19c92775b124c49c8a88a |
---|
1828 | ] |
---|
1829 | [docs: CREDITS: where due |
---|
1830 | zooko@zooko.com**20100202053831 |
---|
1831 | Ignore-this: 11646dd603ac715ae8277a4bb9562215 |
---|
1832 | ] |
---|
1833 | [docs: a few small edits to performance.txt and README |
---|
1834 | zooko@zooko.com**20100202052750 |
---|
1835 | Ignore-this: bf8b1b7438e8fb6da09eec9713c78533 |
---|
1836 | ] |
---|
1837 | [docs: a few edits to architecture.txt, most significantly highlighting "future work" to avoid confusing it with the current version, and adding a "future work" about a random-sampling Proof of Retrievability verifier |
---|
1838 | zooko@zooko.com**20100202045117 |
---|
1839 | Ignore-this: 81122b3042ea9ee6bc12e795c2386d59 |
---|
1840 | ] |
---|
1841 | [docs: a few edits and updates to relnotes.txt, relnotes-short.txt, and NEWS in preparation for v1.6.0 |
---|
1842 | zooko@zooko.com**20100202043222 |
---|
1843 | Ignore-this: d90c644fa61d78e33cbdf0be428bb07a |
---|
1844 | ] |
---|
1845 | [Document leakage of cap URLs via phishing filters in known_issues.txt |
---|
1846 | david-sarah@jacaranda.org**20100202015238 |
---|
1847 | Ignore-this: 78e668dbca77c0e3a73e10c0b74cf024 |
---|
1848 | ] |
---|
1849 | [docs: updates to relnotes.txt, NEWS, architecture, historical_known_issues, install.html, etc. |
---|
1850 | zooko@zooko.com**20100201181809 |
---|
1851 | Ignore-this: f4fc924652af746862c8ee4d9ba97bf6 |
---|
1852 | ] |
---|
1853 | [immutable: downloader accepts notifications of buckets even if those notifications arrive after he has begun downloading shares. |
---|
1854 | zooko@zooko.com**20100201061610 |
---|
1855 | Ignore-this: 5b09709f27603a3157eba7ba70028955 |
---|
1856 | This can be useful if one of the ones that he has already begun downloading fails. See #287 for discussion. This fixes part of #287 which part was a regression caused by #928, namely this fixes fail-over in case a share is corrupted (or the server returns an error or disconnects). This does not fix the related issue mentioned in #287 if a server hangs and doesn't reply to requests for blocks. |
---|
1857 | |
---|
1858 | ] |
---|
1859 | [tests: don't require tahoe to run with no noise if we are using an old twisted that emits DeprecationWarnings |
---|
1860 | zooko@zooko.com**20100201052323 |
---|
1861 | Ignore-this: 69668c772cce612a0c6936a2195ebd2a |
---|
1862 | ] |
---|
1863 | [Use if instead of assert to check for twisted ftp patch |
---|
1864 | david-sarah@jacaranda.org**20100127015529 |
---|
1865 | Ignore-this: 66959d946bd1a835ece6f074e75086b2 |
---|
1866 | ] |
---|
1867 | [tests: stop being surprised that Nevow no longer prints out warnings when it tries to find its static files |
---|
1868 | zooko@zooko.com**20100201041144 |
---|
1869 | Ignore-this: 77b4ac383165d98dfe2a9008ce794742 |
---|
1870 | Unless we are using a sufficiently new version of Nevow, in which case if it prints out warnings then this is a hard test failure. :-) |
---|
1871 | ] |
---|
1872 | [cli: suppress DeprecationWarnings emitted from importing nevow and twisted. Fixes #859 |
---|
1873 | david-sarah@jacaranda.org**20100201004429 |
---|
1874 | Ignore-this: 22d7216921cd5f04381c0194ed501bbe |
---|
1875 | ] |
---|
1876 | [Fill in 'docs/performance.txt' with some performance information |
---|
1877 | Kevan Carstensen <kevan@isnotajoke.com>**20100202005914 |
---|
1878 | Ignore-this: c66b255b2bd2e7e11f5707b25e7b38be |
---|
1879 | ] |
---|
1880 | [Improvements to test_unknownnode to cover invalid cap URIs with known prefixes |
---|
1881 | david-sarah@jacaranda.org**20100130063908 |
---|
1882 | Ignore-this: e1a298942c21207473e418ea5efd6276 |
---|
1883 | ] |
---|
1884 | [Fix invalid trailing commas in JSON example |
---|
1885 | david-sarah@jacaranda.org**20100129201742 |
---|
1886 | Ignore-this: d99e0a8ead4fafabf39a1daf11ec450b |
---|
1887 | ] |
---|
1888 | [Improvements to test_hung_server, and fix for status updates in download.py |
---|
1889 | david-sarah@jacaranda.org**20100130064303 |
---|
1890 | Ignore-this: dd889c643afdcf0f86d55855aafda6ad |
---|
1891 | ] |
---|
1892 | [immutable: fix bug in tests, change line-endings to unix style, add comment |
---|
1893 | zooko@zooko.com**20100129184237 |
---|
1894 | Ignore-this: f6bd875fe974c55c881e05eddf8d3436 |
---|
1895 | ] |
---|
1896 | [New tests for #928 |
---|
1897 | david-sarah@jacaranda.org**20100129123845 |
---|
1898 | Ignore-this: 5c520f40141f0d9c000ffb05a4698995 |
---|
1899 | ] |
---|
1900 | [immutable: download from the first servers which provide at least K buckets instead of waiting for all servers to reply |
---|
1901 | zooko@zooko.com**20100127233417 |
---|
1902 | Ignore-this: c855355a40d96827e1d0c469a8d8ab3f |
---|
1903 | This should put an end to the phenomenon I've been seeing that a single hung server can cause all downloads on a grid to hang. Also it should speed up all downloads by (a) not-waiting for responses to queries that it doesn't need, and (b) downloading shares from the servers which answered the initial query the fastest. |
---|
1904 | Also, do not count how many buckets you've gotten when deciding whether the download has enough shares or not -- instead count how many buckets to *unique* shares that you've gotten. This appears to improve a slightly weird behavior in the current download code in which receiving >= K different buckets all to the same sharenumber would make it think it had enough to download the file when in fact it hadn't. |
---|
1905 | This patch needs tests before it is actually ready for trunk. |
---|
1906 | ] |
---|
1907 | [Eliminate 'foo if test else bar' syntax that isn't supported by Python 2.4 |
---|
1908 | david-sarah@jacaranda.org**20100129035210 |
---|
1909 | Ignore-this: 70eafd487b4b6299beedd63b4a54a0c |
---|
1910 | ] |
---|
1911 | [Fix example JSON in webapi.txt that cannot occur in practice |
---|
1912 | david-sarah@jacaranda.org**20100129032742 |
---|
1913 | Ignore-this: 361a1ba663d77169aeef93caef870097 |
---|
1914 | ] |
---|
1915 | [Add mutable field to t=json output for unknown nodes, when mutability is known |
---|
1916 | david-sarah@jacaranda.org**20100129031424 |
---|
1917 | Ignore-this: 1516d63559bdfeb6355485dff0f5c04e |
---|
1918 | ] |
---|
1919 | [Show -IMM and -RO suffixes for types of immutable and read-only unknown nodes in directory listings |
---|
1920 | david-sarah@jacaranda.org**20100128220800 |
---|
1921 | Ignore-this: dc5c17c0a566398f88e4303c41321e66 |
---|
1922 | ] |
---|
1923 | [Fix inaccurate comment in test_mutant_dirnodes_are_omitted |
---|
1924 | david-sarah@jacaranda.org**20100128202456 |
---|
1925 | Ignore-this: 9fa17ed7feac9e4d084f1b2338c76fca |
---|
1926 | ] |
---|
1927 | [docs: update relnotes.txt for Tahoe-LAFS v1.6 |
---|
1928 | zooko@zooko.com**20100128171257 |
---|
1929 | Ignore-this: 920df92152aead69ef861b9b2e8ff218 |
---|
1930 | ] |
---|
1931 | [Address comments by Kevan on 833 and add test for stripping spaces |
---|
1932 | david-sarah@jacaranda.org**20100127230642 |
---|
1933 | Ignore-this: de36aeaf4afb3ba05dbeb49a5e9a6b26 |
---|
1934 | ] |
---|
1935 | [Miscellaneous documentation, test, and code formatting tweaks. |
---|
1936 | david-sarah@jacaranda.org**20100127070309 |
---|
1937 | Ignore-this: 84ca7e4bb7c64221ae2c61144ef5edef |
---|
1938 | ] |
---|
1939 | [Prevent mutable objects from being retrieved from an immutable directory, and associated forward-compatibility improvements. |
---|
1940 | david-sarah@jacaranda.org**20100127064430 |
---|
1941 | Ignore-this: 5ef6a3554cf6bef0bf0712cc7d6c0252 |
---|
1942 | ] |
---|
1943 | [test_runner: cleanup, refactor common code into a non-executable method |
---|
1944 | Brian Warner <warner@lothar.com>**20100127224040 |
---|
1945 | Ignore-this: 4cb4aada87777771f688edfd8129ffca |
---|
1946 | |
---|
1947 | Having both test_node() and test_client() (one of which calls the other) felt |
---|
1948 | confusing to me, so I changed it to have test_node(), test_client(), and a |
---|
1949 | common do_create() helper method. |
---|
1950 | ] |
---|
1951 | [scripts/runner.py: simplify David-Sarah's clever grouped-commands usage trick |
---|
1952 | Brian Warner <warner@lothar.com>**20100127223758 |
---|
1953 | Ignore-this: 70877ebf06ae59f32960b0aa4ce1d1ae |
---|
1954 | ] |
---|
1955 | [tahoe backup: skip all symlinks, with warning. Fixes #850, addresses #641. |
---|
1956 | Brian Warner <warner@lothar.com>**20100127223517 |
---|
1957 | Ignore-this: ab5cf05158d32a575ca8efc0f650033f |
---|
1958 | ] |
---|
1959 | [NEWS: update with all recent user-visible changes |
---|
1960 | Brian Warner <warner@lothar.com>**20100127222209 |
---|
1961 | Ignore-this: 277d24568018bf4f3fb7736fda64eceb |
---|
1962 | ] |
---|
1963 | ["tahoe backup": fix --exclude-vcs docs to include Git |
---|
1964 | Brian Warner <warner@lothar.com>**20100127201044 |
---|
1965 | Ignore-this: 756a58dde21bdc65aa62b81803605b5 |
---|
1966 | ] |
---|
1967 | [docs: fix references to --no-storage, explanation of [storage] section |
---|
1968 | Brian Warner <warner@lothar.com>**20100127200956 |
---|
1969 | Ignore-this: f4be1763a585e1ac6299a4f1b94a59e0 |
---|
1970 | ] |
---|
1971 | [docs: further CREDITS level-ups for Nils, Kevan, David-Sarah |
---|
1972 | zooko@zooko.com**20100126170021 |
---|
1973 | Ignore-this: 1e513e85cf7b7abf57f056e6d7544b38 |
---|
1974 | ] |
---|
1975 | [Patch to accept t=set-children as well as t=set_children |
---|
1976 | david-sarah@jacaranda.org**20100124030020 |
---|
1977 | Ignore-this: 2c061f12af817cdf77feeeb64098ec3a |
---|
1978 | ] |
---|
1979 | [Fix boodlegrid use of set_children |
---|
1980 | david-sarah@jacaranda.org**20100126063414 |
---|
1981 | Ignore-this: 3aa2d4836f76303b2bacecd09611f999 |
---|
1982 | ] |
---|
1983 | [ftpd: clearer error message if Twisted needs a patch (by Nils Durner) |
---|
1984 | zooko@zooko.com**20100126143411 |
---|
1985 | Ignore-this: 440e6831ae6da5135c1edd081c93871f |
---|
1986 | ] |
---|
1987 | [Add 'docs/performance.txt', which (for the moment) describes mutable file performance issues |
---|
1988 | Kevan Carstensen <kevan@isnotajoke.com>**20100115204500 |
---|
1989 | Ignore-this: ade4e500217db2509aee35aacc8c5dbf |
---|
1990 | ] |
---|
1991 | [docs: more CREDITS for François, Kevan, and David-Sarah |
---|
1992 | zooko@zooko.com**20100126132133 |
---|
1993 | Ignore-this: f37d4977c13066fcac088ba98a31b02e |
---|
1994 | ] |
---|
1995 | [tahoe_backup.py: display warnings on errors instead of stopping the whole backup. Fix #729. |
---|
1996 | francois@ctrlaltdel.ch**20100120094249 |
---|
1997 | Ignore-this: 7006ea4b0910b6d29af6ab4a3997a8f9 |
---|
1998 | |
---|
1999 | This patch displays a warning to the user in two cases: |
---|
2000 | |
---|
2001 | 1. When special files like symlinks, fifos, devices, etc. are found in the |
---|
2002 | local source. |
---|
2003 | |
---|
2004 | 2. If files or directories are not readables by the user running the 'tahoe |
---|
2005 | backup' command. |
---|
2006 | |
---|
2007 | In verbose mode, the number of skipped files and directories is printed at the |
---|
2008 | end of the backup. |
---|
2009 | |
---|
2010 | Exit status returned by 'tahoe backup': |
---|
2011 | |
---|
2012 | - 0 everything went fine |
---|
2013 | - 1 the backup failed |
---|
2014 | - 2 files were skipped during the backup |
---|
2015 | |
---|
2016 | ] |
---|
2017 | [Warn about test failures due to setting FLOG* env vars |
---|
2018 | david-sarah@jacaranda.org**20100124220629 |
---|
2019 | Ignore-this: 1c25247ca0f0840390a1b7259a9f4a3c |
---|
2020 | ] |
---|
2021 | [Message saying that we couldn't find bin/tahoe should say where we looked |
---|
2022 | david-sarah@jacaranda.org**20100116204556 |
---|
2023 | Ignore-this: 1068576fd59ea470f1e19196315d1bb |
---|
2024 | ] |
---|
2025 | [Change running.html to describe 'tahoe run' |
---|
2026 | david-sarah@jacaranda.org**20100112044409 |
---|
2027 | Ignore-this: 23ad0114643ce31b56e19bb14e011e4f |
---|
2028 | ] |
---|
2029 | [cli: merge the better version of David-Sarah's split-usage-and-help patch with the earlier version that I mistakenly committed |
---|
2030 | zooko@zooko.com**20100126044559 |
---|
2031 | Ignore-this: 284d188e13b7901013cbb650168e6447 |
---|
2032 | ] |
---|
2033 | [Split tahoe --help options into groups. |
---|
2034 | david-sarah@jacaranda.org**20100112043935 |
---|
2035 | Ignore-this: 610f9c41b00e6863e3cd047379733e3a |
---|
2036 | ] |
---|
2037 | [cli: split usage strings into groups (patch by David-Sarah Hopwood) |
---|
2038 | zooko@zooko.com**20100126043921 |
---|
2039 | Ignore-this: 51928d266a7292b873f87f7d53c9a01e |
---|
2040 | ] |
---|
2041 | [Add create-node CLI command, and make create-client equivalent to create-node --no-storage (fixes #760) |
---|
2042 | david-sarah@jacaranda.org**20100116052055 |
---|
2043 | Ignore-this: 47d08b18c69738685e13ff365738d5a |
---|
2044 | ] |
---|
2045 | [Remove replace= parameter to mkdir-immutable and mkdir-with-children |
---|
2046 | david-sarah@jacaranda.org**20100124224325 |
---|
2047 | Ignore-this: 25207bcc946c0c43d9528718e76ba7b |
---|
2048 | ] |
---|
2049 | [contrib/fuse/runtests.py: Fix #888, configure settings in tahoe.cfg and don't treat warnings as failure |
---|
2050 | francois@ctrlaltdel.ch**20100109123010 |
---|
2051 | Ignore-this: 2590d44044acd7dfa3690c416cae945c |
---|
2052 | |
---|
2053 | Fix a few bitrotten pieces in the FUSE test script. It now configures tahoe |
---|
2054 | node settings by editing tahoe.cfg which is the new supported method. |
---|
2055 | |
---|
2056 | It also tolerates warnings issued by the mount command; the cause of these
---|
2057 | warnings is the same as in #876 (contrib/fuse/runtests.py doesn't tolerate
---|
2058 | deprecation warnings).
---|
2059 | |
---|
2060 | ] |
---|
2061 | [Fix webapi t=mkdir with multpart/form-data, as on the Welcome page. Closes #919. |
---|
2062 | Brian Warner <warner@lothar.com>**20100121065052 |
---|
2063 | Ignore-this: 1f20ea0a0f1f6d6c1e8e14f193a92c87 |
---|
2064 | ] |
---|
2065 | [tahoe_add_alias.py: minor refactoring |
---|
2066 | Brian Warner <warner@lothar.com>**20100115064220 |
---|
2067 | Ignore-this: 29910e81ad11209c9e493d65fd2dab9b |
---|
2068 | ] |
---|
2069 | [test_dirnode.py: reduce scope of a Client instance, suggested by Kevan. |
---|
2070 | Brian Warner <warner@lothar.com>**20100115062713 |
---|
2071 | Ignore-this: b35efd9e6027e43de6c6f509bfb4ccaa |
---|
2072 | ] |
---|
2073 | [test_provisioning: STAN is not always a list. Fix by David-Sarah Hopwood. |
---|
2074 | Brian Warner <warner@lothar.com>**20100115014632 |
---|
2075 | Ignore-this: 9989de7f1e00907706d2b63153138219 |
---|
2076 | ] |
---|
2077 | [web/directory.py mkdir-immutable: hush pyflakes, add TODO for #903 behavior |
---|
2078 | Brian Warner <warner@lothar.com>**20100114222804 |
---|
2079 | Ignore-this: 717cd3b9a1c8aeee76938c9641db7356 |
---|
2080 | ] |
---|
2081 | [hush pyflakes-0.4.0 warnings: slightly less-trivial fixes. Closes #900. |
---|
2082 | Brian Warner <warner@lothar.com>**20100114221719 |
---|
2083 | Ignore-this: f774f4637e256ad55502659413a811a8 |
---|
2084 | |
---|
2085 | This includes one fix (in test_web) which was testing the wrong thing. |
---|
2086 | ] |
---|
2087 | [hush pyflakes-0.4.0 warnings: remove trivial unused variables. For #900. |
---|
2088 | Brian Warner <warner@lothar.com>**20100114221529 |
---|
2089 | Ignore-this: e96106c8f1a99fbf93306fbfe9a294cf |
---|
2090 | ] |
---|
2091 | [tahoe add-alias/create-alias: don't corrupt non-newline-terminated alias |
---|
2092 | Brian Warner <warner@lothar.com>**20100114210246 |
---|
2093 | Ignore-this: 9c994792e53a85159d708760a9b1b000 |
---|
2094 | file. Closes #741. |
---|
2095 | ] |
---|
2096 | [change docs and --help to use "grid" instead of "virtual drive": closes #892. |
---|
2097 | Brian Warner <warner@lothar.com>**20100114201119 |
---|
2098 | Ignore-this: a20d4a4dcc4de4e3b404ff72d40fc29b |
---|
2099 | |
---|
2100 | Thanks to David-Sarah Hopwood for the patch. |
---|
2101 | ] |
---|
2102 | [backupdb.txt: fix ST_CTIME reference |
---|
2103 | Brian Warner <warner@lothar.com>**20100114194052 |
---|
2104 | Ignore-this: 5a189c7a1181b07dd87f0a08ea31b6d3 |
---|
2105 | ] |
---|
2106 | [client.py: fix/update comments on KeyGenerator |
---|
2107 | Brian Warner <warner@lothar.com>**20100113004226 |
---|
2108 | Ignore-this: 2208adbb3fd6a911c9f44e814583cabd |
---|
2109 | ] |
---|
2110 | [Clean up log.err calls, for one of the issues in #889. |
---|
2111 | Brian Warner <warner@lothar.com>**20100112013343 |
---|
2112 | Ignore-this: f58455ce15f1fda647c5fb25d234d2db |
---|
2113 | |
---|
2114 | allmydata.util.log.err() either takes a Failure as the first positional |
---|
2115 | argument, or takes no positional arguments and must be invoked in an |
---|
2116 | exception handler. Fixed its signature to match both foolscap.logging.log.err |
---|
2117 | and twisted.python.log.err . Included a brief unit test. |
---|
2118 | ] |
---|
2119 | [tidy up DeadReferenceError handling, ignore them in add_lease calls |
---|
2120 | Brian Warner <warner@lothar.com>**20100112000723 |
---|
2121 | Ignore-this: 72f1444e826fd0b9db6d318f89603c38 |
---|
2122 | |
---|
2123 | Stop checking separately for ConnectionDone/ConnectionLost, since those have |
---|
2124 | been folded into DeadReferenceError since foolscap-0.3.1 . Write |
---|
2125 | rrefutil.trap_deadref() in terms of rrefutil.trap_and_discard() to improve |
---|
2126 | code coverage. |
---|
2127 | ] |
---|
2128 | [NEWS: improve "tahoe backup" notes, mention first-backup-after-upgrade duration |
---|
2129 | Brian Warner <warner@lothar.com>**20100111190132 |
---|
2130 | Ignore-this: 10347c590b3375964579ba6c2b0edb4f |
---|
2131 | |
---|
2132 | Thanks to Francois Deppierraz for the suggestion. |
---|
2133 | ] |
---|
2134 | [test_repairer: add (commented-out) test_each_byte, to see exactly what the |
---|
2135 | Brian Warner <warner@lothar.com>**20100110203552 |
---|
2136 | Ignore-this: 8e84277d5304752edeff052b97821815 |
---|
2137 | Verifier misses |
---|
2138 | |
---|
2139 | The results (described in #819) match our expectations: it misses corruption |
---|
2140 | in unused share fields and in most container fields (which are only visible |
---|
2141 | to the storage server, not the client). 1265 bytes of a 2753 byte |
---|
2142 | share (hosting a 56-byte file with an artificially small segment size) are
---|
2143 | unused, mostly in the unused tail of the overallocated UEB space (765 bytes), |
---|
2144 | and the allocated-but-unwritten plaintext_hash_tree (480 bytes). |
---|
2145 | ] |
---|
2146 | [repairer: fix some wrong offsets in the randomized verifier tests, debugged by Brian |
---|
2147 | zooko@zooko.com**20100110203721 |
---|
2148 | Ignore-this: 20604a609db8706555578612c1c12feb |
---|
2149 | fixes #819 |
---|
2150 | ] |
---|
2151 | [test_repairer: fix colliding basedir names, which caused test inconsistencies |
---|
2152 | Brian Warner <warner@lothar.com>**20100110084619 |
---|
2153 | Ignore-this: b1d56dd27e6ab99a7730f74ba10abd23 |
---|
2154 | ] |
---|
2155 | [repairer: add deterministic test for #819, mark as TODO |
---|
2156 | zooko@zooko.com**20100110013619 |
---|
2157 | Ignore-this: 4cb8bb30b25246de58ed2b96fa447d68 |
---|
2158 | ] |
---|
2159 | [contrib/fuse/runtests.py: Tolerate the tahoe CLI returning deprecation warnings |
---|
2160 | francois@ctrlaltdel.ch**20100109175946 |
---|
2161 | Ignore-this: 419c354d9f2f6eaec03deb9b83752aee |
---|
2162 | |
---|
2163 | Depending on the versions of external libraries such as Twisted or Foolscap,
---|
2164 | the tahoe CLI can display deprecation warnings on stdout. The tests should |
---|
2165 | not interpret those warnings as a failure if the node is in fact correctly |
---|
2166 | started. |
---|
2167 | |
---|
2168 | See http://allmydata.org/trac/tahoe/ticket/859 for an example of deprecation |
---|
2169 | warnings. |
---|
2170 | |
---|
2171 | fixes #876 |
---|
2172 | ] |
---|
2173 | [contrib: fix fuse_impl_c to use new Python API |
---|
2174 | zooko@zooko.com**20100109174956 |
---|
2175 | Ignore-this: 51ca1ec7c2a92a0862e9b99e52542179 |
---|
2176 | original patch by Thomas Delaet, fixed by François, reviewed by Brian, committed by me |
---|
2177 | ] |
---|
2178 | [docs: CREDITS: add David-Sarah to the CREDITS file |
---|
2179 | zooko@zooko.com**20100109060435 |
---|
2180 | Ignore-this: 896062396ad85f9d2d4806762632f25a |
---|
2181 | ] |
---|
2182 | [mutable/publish: don't loop() right away upon DeadReferenceError. Closes #877 |
---|
2183 | Brian Warner <warner@lothar.com>**20100102220841 |
---|
2184 | Ignore-this: b200e707b3f13aa8251981362b8a3e61 |
---|
2185 | |
---|
2186 | The bug was that a disconnected server could cause us to re-enter the initial |
---|
2187 | loop() call, sending multiple queries to a single server, provoking an |
---|
2188 | incorrect UCWE. To fix it, stall the loop() with an eventual.fireEventually() |
---|
2189 | ] |
---|
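A rough sketch of the stalling pattern described in the entry above, assuming foolscap.eventual.fireEventually (a Deferred that fires on a later reactor turn); the publisher object and its loop() method are hypothetical stand-ins, not the actual Publish code.

    from foolscap.eventual import fireEventually

    def _server_failed(failure, publisher):
        # Instead of re-entering loop() synchronously from the errback (which
        # could send duplicate queries to one server and provoke a spurious
        # UCWE), stall until a later reactor turn and then resume.
        d = fireEventually()
        d.addCallback(lambda ign: publisher.loop())
        return d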
2190 | [immutable/checker.py: oops, forgot some imports. Also hush pyflakes. |
---|
2191 | Brian Warner <warner@lothar.com>**20091229233909 |
---|
2192 | Ignore-this: 4d61bd3f8113015a4773fd4768176e51 |
---|
2193 | ] |
---|
2194 | [mutable repair: return successful=False when numshares<k (thus repair fails), |
---|
2195 | Brian Warner <warner@lothar.com>**20091229233746 |
---|
2196 | Ignore-this: d881c3275ff8c8bee42f6a80ca48441e |
---|
2197 | instead of weird errors. Closes #874 and #786. |
---|
2198 | |
---|
2199 | Previously, if the file had 0 shares, this would raise TypeError as it tried |
---|
2200 | to call download_version(None). If the file had some shares but fewer than |
---|
2201 | 'k', it would incorrectly raise MustForceRepairError. |
---|
2202 | |
---|
2203 | Added get_successful() to the IRepairResults API, to give repair() a place to |
---|
2204 | report non-code-bug problems like this. |
---|
2205 | ] |
---|
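A hypothetical caller of the new IRepairResults.get_successful() accessor mentioned above; the repair() invocation and the names here are placeholders for illustration, not the actual repairer code.

    d = node.repair(check_results)   # hypothetical entry point; fires with IRepairResults
    def _repair_done(repair_results):
        if repair_results.get_successful():
            print("repair succeeded")
        else:
            # e.g. numshares < k: reported as a plain failure now, instead of
            # a TypeError or MustForceRepairError
            print("repair was not successful")
    d.addCallback(_repair_done)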
2206 | [node.py/interfaces.py: minor docs fixes |
---|
2207 | Brian Warner <warner@lothar.com>**20091229230409 |
---|
2208 | Ignore-this: c86ad6342ef0f95d50639b4f99cd4ddf |
---|
2209 | ] |
---|
2210 | [NEWS: fix 1.4.1 announcement w.r.t. add-lease behavior in older releases |
---|
2211 | Brian Warner <warner@lothar.com>**20091229230310 |
---|
2212 | Ignore-this: bbbbb9c961f3bbcc6e5dbe0b1594822 |
---|
2213 | ] |
---|
2214 | [checker: don't let failures in add-lease affect checker results. Closes #875. |
---|
2215 | Brian Warner <warner@lothar.com>**20091229230108 |
---|
2216 | Ignore-this: ef1a367b93e4d01298c2b1e6ca59c492 |
---|
2217 | |
---|
2218 | Mutable servermap updates and the immutable checker, when run with |
---|
2219 | add_lease=True, send both the do-you-have-block and add-lease commands in |
---|
2220 | parallel, to avoid an extra round trip time. Many older servers have problems |
---|
2221 | with add-lease and raise various exceptions, which don't generally matter. |
---|
2222 | The client-side code was catching+ignoring some of them, but unrecognized |
---|
2223 | exceptions were passed through to the DYHB code, concealing the DYHB results |
---|
2224 | from the checker, making it think the server had no shares. |
---|
2225 | |
---|
2226 | The fix is to separate the code paths. Both commands are sent at the same |
---|
2227 | time, but the errback path from add-lease is handled separately. Known |
---|
2228 | exceptions are ignored, the others (both unknown-remote and all-local) are |
---|
2229 | logged (log.WEIRD, which will trigger an Incident), but neither will affect |
---|
2230 | the DYHB results. |
---|
2231 | |
---|
2232 | The add-lease message is sent first, and we know that the server handles them |
---|
2233 | synchronously. So when the checker is done, we can be sure that all the |
---|
2234 | add-lease messages have been retired. This makes life easier for unit tests. |
---|
2235 | ] |
---|
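A sketch of the "separate errback path" pattern described in the entry above, in plain Twisted style. The remote method names and the tuple of harmless exception types are stand-ins for the real storage-server interface, not Tahoe's actual checker code.

    from twisted.python import log

    KNOWN_HARMLESS = (IndexError,)   # assumed placeholder for old-server add-lease errors

    def query_one_server(rref, storage_index, renew_secret, cancel_secret):
        # DYHB and add-lease go out in parallel, on the same connection.
        d_dyhb = rref.callRemote("get_buckets", storage_index)
        d_lease = rref.callRemote("add_lease", storage_index,
                                  renew_secret, cancel_secret)

        def _lease_failed(f):
            if f.check(*KNOWN_HARMLESS):
                return None          # ignore known add-lease complaints
            log.err(f)               # unknown: log it loudly, but still
            return None              # never contaminate the DYHB path
        d_lease.addErrback(_lease_failed)

        return d_dyhb    # checker results depend only on the DYHB answer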
2236 | [test_cli: verify fix for "tahoe get" not creating empty file on error (#121) |
---|
2237 | Brian Warner <warner@lothar.com>**20091227235444 |
---|
2238 | Ignore-this: 6444d52413b68eb7c11bc3dfdc69c55f |
---|
2239 | ] |
---|
2240 | [addendum to "Fix 'tahoe ls' on files (#771)" |
---|
2241 | Brian Warner <warner@lothar.com>**20091227232149 |
---|
2242 | Ignore-this: 6dd5e25f8072a3153ba200b7fdd49491 |
---|
2243 | |
---|
2244 | tahoe_ls.py: tolerate missing metadata |
---|
2245 | web/filenode.py: minor cleanups |
---|
2246 | test_cli.py: test 'tahoe ls FILECAP' |
---|
2247 | ] |
---|
2248 | [Fix 'tahoe ls' on files (#771). Patch adapted from Kevan Carstensen. |
---|
2249 | Brian Warner <warner@lothar.com>**20091227225443 |
---|
2250 | Ignore-this: 8bf8c7b1cd14ea4b0ebd453434f4fe07 |
---|
2251 | |
---|
2252 | web/filenode.py: also serve edge metadata when using t=json on a |
---|
2253 | DIRCAP/childname object. |
---|
2254 | tahoe_ls.py: list file objects as if we were listing one-entry directories. |
---|
2255 | Show edge metadata if we have it, which will be true when doing |
---|
2256 | 'tahoe ls DIRCAP/filename' and false when doing 'tahoe ls |
---|
2257 | FILECAP' |
---|
2258 | ] |
---|
2259 | [tahoe_get: don't create the output file on error. Closes #121. |
---|
2260 | Brian Warner <warner@lothar.com>**20091227220404 |
---|
2261 | Ignore-this: 58d5e793a77ec6e87d9394ade074b926 |
---|
2262 | ] |
---|
2263 | [webapi: don't accept zero-length childnames during traversal. Closes #358, #676. |
---|
2264 | Brian Warner <warner@lothar.com>**20091227201043 |
---|
2265 | Ignore-this: a9119dec89e1c7741f2289b0cad6497b |
---|
2266 | |
---|
2267 | This forbids operations that would implicitly create a directory with a |
---|
2268 | zero-length (empty string) name, like what you'd get if you did "tahoe put |
---|
2269 | local /oops/blah" (#358) or "POST /uri/CAP//?t=mkdir" (#676). The error |
---|
2270 | message is fairly friendly too. |
---|
2271 | |
---|
2272 | Also added code to "tahoe put" to catch this error beforehand and suggest the |
---|
2273 | correct syntax (i.e. without the leading slash). |
---|
2274 | ] |
---|
2275 | [CLI: send 'Accept:' header to ask for text/plain tracebacks. Closes #646. |
---|
2276 | Brian Warner <warner@lothar.com>**20091227195828 |
---|
2277 | Ignore-this: 44c258d4d4c7dac0ed58adb22f73331 |
---|
2278 | |
---|
2279 | The webapi has been looking for an Accept header since 1.4.0, but it treats a |
---|
2280 | missing header as equal to */* (to honor RFC2616). This change finally |
---|
2281 | modifies our CLI tools to ask for "text/plain, application/octet-stream", |
---|
2282 | which seems roughly correct (we either want a plain-text traceback or error |
---|
2283 | message, or an uninterpreted chunk of binary data to save to disk). Some day |
---|
2284 | we'll figure out how JSON fits into this scheme. |
---|
2285 | ] |
---|
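An illustrative sketch of the Accept header described above, using urllib2 as a stand-in for the CLI's own HTTP client; the node URL is a placeholder for a real webapi endpoint.

    import urllib2

    req = urllib2.Request("http://127.0.0.1:3456/uri/PLACEHOLDER-CAP?t=json")
    req.add_header("Accept", "text/plain, application/octet-stream")
    body = urllib2.urlopen(req).read()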
2286 | [Makefile: upload-tarballs: switch from xfer-client to flappclient, closes #350 |
---|
2287 | Brian Warner <warner@lothar.com>**20091227163703 |
---|
2288 | Ignore-this: 3beeecdf2ad9c2438ab57f0e33dcb357 |
---|
2289 | |
---|
2290 | I've also set up a new flappserver on source@allmydata.org to receive the |
---|
2291 | tarballs. We still need to replace the gutsy buildslave (which is where the |
---|
2292 | tarballs used to be generated+uploaded) and give it the new FURL. |
---|
2293 | ] |
---|
2294 | [misc/ringsim.py: make it deterministic, more detail about grid-is-full behavior |
---|
2295 | Brian Warner <warner@lothar.com>**20091227024832 |
---|
2296 | Ignore-this: a691cc763fb2e98a4ce1767c36e8e73f |
---|
2297 | ] |
---|
2298 | [misc/ringsim.py: tool to discuss #302 |
---|
2299 | Brian Warner <warner@lothar.com>**20091226060339 |
---|
2300 | Ignore-this: fc171369b8f0d97afeeb8213e29d10ed |
---|
2301 | ] |
---|
2302 | [docs: fix helper.txt to describe new config style |
---|
2303 | zooko@zooko.com**20091224223522 |
---|
2304 | Ignore-this: 102e7692dc414a4b466307f7d78601fe |
---|
2305 | ] |
---|
2306 | [docs/stats.txt: add TOC, notes about controlling gatherer's listening port |
---|
2307 | Brian Warner <warner@lothar.com>**20091224202133 |
---|
2308 | Ignore-this: 8eef63b0e18db5aa8249c2eafde02c05 |
---|
2309 | |
---|
2310 | Thanks to Jody Harris for the suggestions. |
---|
2311 | ] |
---|
2312 | [Add docs/stats.txt, explaining Tahoe stats, the gatherer, and the munin plugins.
---|
2313 | Brian Warner <warner@lothar.com>**20091223052400 |
---|
2314 | Ignore-this: 7c9eeb6e5644eceda98b59a67730ccd5 |
---|
2315 | ] |
---|
2316 | [more #859: avoid deprecation warning for unit tests too, hush pyflakes |
---|
2317 | Brian Warner <warner@lothar.com>**20091215000147 |
---|
2318 | Ignore-this: 193622e24d31077da825a11ed2325fd3 |
---|
2319 | |
---|
2320 | * factor maybe-import-sha logic into util.hashutil |
---|
2321 | ] |
---|
2322 | [use hashlib module if available, thus avoiding a DeprecationWarning for importing the old sha module; fixes #859 |
---|
2323 | zooko@zooko.com**20091214212703 |
---|
2324 | Ignore-this: 8d0f230a4bf8581dbc1b07389d76029c |
---|
2325 | ] |
---|
2326 | [docs: reflow architecture.txt to 78-char lines |
---|
2327 | zooko@zooko.com**20091208232943 |
---|
2328 | Ignore-this: 88f55166415f15192e39407815141f77 |
---|
2329 | ] |
---|
2330 | [docs: update the about.html a little |
---|
2331 | zooko@zooko.com**20091208212737 |
---|
2332 | Ignore-this: 3fe2d9653c6de0727d3e82bd70f2a8ed |
---|
2333 | ] |
---|
2334 | [docs: remove obsolete doc file "codemap.txt" |
---|
2335 | zooko@zooko.com**20091113163033 |
---|
2336 | Ignore-this: 16bc21a1835546e71d1b344c06c61ebb |
---|
2337 | I started to update this to reflect the current codebase, but then I thought (a) nobody seemed to notice that it hasn't been updated since December 2007, and (b) it will just bit-rot again, so I'm removing it. |
---|
2338 | ] |
---|
2339 | [mutable/retrieve.py: stop reaching into private MutableFileNode attributes |
---|
2340 | Brian Warner <warner@lothar.com>**20091208172921 |
---|
2341 | Ignore-this: 61e548798c1105aed66a792bf26ceef7 |
---|
2342 | ] |
---|
2343 | [mutable/servermap.py: stop reaching into private MutableFileNode attributes |
---|
2344 | Brian Warner <warner@lothar.com>**20091208172608 |
---|
2345 | Ignore-this: b40a6b62f623f9285ad96fda139c2ef2 |
---|
2346 | ] |
---|
2347 | [mutable/servermap.py: oops, query N+e servers in MODE_WRITE, not k+e |
---|
2348 | Brian Warner <warner@lothar.com>**20091208171156 |
---|
2349 | Ignore-this: 3497f4ab70dae906759007c3cfa43bc |
---|
2350 | |
---|
2351 | under normal conditions, this wouldn't cause any problems, but if the shares |
---|
2352 | are really sparse (perhaps because new servers were added), then |
---|
2353 | file-modifying operations might stop looking too early and leave old shares in place
---|
2354 | ] |
---|
2355 | [control.py: fix speedtest: use download_best_version (not read) on mutable nodes |
---|
2356 | Brian Warner <warner@lothar.com>**20091207060512 |
---|
2357 | Ignore-this: 7125eabfe74837e05f9291dd6414f917 |
---|
2358 | ] |
---|
2359 | [FTP-and-SFTP.txt: fix ssh-keygen pointer |
---|
2360 | Brian Warner <warner@lothar.com>**20091207052803 |
---|
2361 | Ignore-this: bc2a70ee8c58ec314e79c1262ccb22f7 |
---|
2362 | ] |
---|
2363 | [setup: ignore _darcs in the "test-clean" test and make the "clean" step remove all .egg's in the root dir |
---|
2364 | zooko@zooko.com**20091206184835 |
---|
2365 | Ignore-this: 6066bd160f0db36d7bf60aba405558d2 |
---|
2366 | ] |
---|
2367 | [remove MutableFileNode.download(), prefer download_best_version() instead |
---|
2368 | Brian Warner <warner@lothar.com>**20091201225438 |
---|
2369 | Ignore-this: 5733eb373a902063e09fd52cc858dec0 |
---|
2370 | ] |
---|
2371 | [Simplify immutable download API: use just filenode.read(consumer, offset, size) |
---|
2372 | Brian Warner <warner@lothar.com>**20091201225330 |
---|
2373 | Ignore-this: bdedfb488ac23738bf52ae6d4ab3a3fb |
---|
2374 | |
---|
2375 | * remove Downloader.download_to_data/download_to_filename/download_to_filehandle |
---|
2376 | * remove download.Data/FileName/FileHandle targets |
---|
2377 | * remove filenode.download/download_to_data/download_to_filename methods |
---|
2378 | * leave Downloader.download (the whole Downloader will go away eventually) |
---|
2379 | * add util.consumer.MemoryConsumer/download_to_data, for convenience |
---|
2380 | (this is mostly used by unit tests, but it gets used by enough non-test |
---|
2381 | code to warrant putting it in allmydata.util) |
---|
2382 | * update tests |
---|
2383 | * removes about 180 lines of code. Yay negative code days! |
---|
2384 | |
---|
2385 | Overall plan is to rewrite immutable/download.py and leave filenode.read() as |
---|
2386 | the sole read-side API. |
---|
2387 | ] |
---|
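A small sketch of the simplified read-side API described in the entry above; 'filenode' is a placeholder immutable filenode, and the .chunks attribute on MemoryConsumer is an assumption about the new convenience helper.

    from allmydata.util.consumer import MemoryConsumer, download_to_data

    # Whole-file convenience helper: the Deferred fires with the file's bytes.
    d = download_to_data(filenode)
    d.addCallback(lambda data: len(data))

    # Streaming a byte range into an IConsumer of your own choosing.
    consumer = MemoryConsumer()
    d2 = filenode.read(consumer, offset=0, size=100)
    d2.addCallback(lambda ign: "".join(consumer.chunks))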
2388 | [server.py: undo my bogus 'correction' of David-Sarah's comment fix |
---|
2389 | Brian Warner <warner@lothar.com>**20091201024607 |
---|
2390 | Ignore-this: ff4bb58f6a9e045b900ac3a89d6f506a |
---|
2391 | |
---|
2392 | and move it to a better line |
---|
2393 | ] |
---|
2394 | [Implement more coherent behavior when copying with dircaps/filecaps (closes #761). Patch by Kevan Carstensen. |
---|
2395 | "Brian Warner <warner@lothar.com>"**20091130211009] |
---|
2396 | [storage.py: update comment |
---|
2397 | "Brian Warner <warner@lothar.com>"**20091130195913] |
---|
2398 | [storage server: detect disk space usage on Windows too (fixes #637) |
---|
2399 | david-sarah@jacaranda.org**20091121055644 |
---|
2400 | Ignore-this: 20fb30498174ce997befac7701fab056 |
---|
2401 | ] |
---|
2402 | [make status of finished operations consistently "Finished" |
---|
2403 | david-sarah@jacaranda.org**20091121061543 |
---|
2404 | Ignore-this: 97d483e8536ccfc2934549ceff7055a3 |
---|
2405 | ] |
---|
2406 | [NEWS: update with all user-visible changes since the last release |
---|
2407 | Brian Warner <warner@lothar.com>**20091127224217 |
---|
2408 | Ignore-this: 741da6cd928e939fb6d21a61ea3daf0b |
---|
2409 | ] |
---|
2410 | [update "tahoe backup" docs, and webapi.txt's mkdir-with-children |
---|
2411 | Brian Warner <warner@lothar.com>**20091127055900 |
---|
2412 | Ignore-this: defac1fb9a2335b0af3ef9dbbcc67b7e |
---|
2413 | ] |
---|
2414 | [Add dirnodes to backupdb and "tahoe backup", closes #606. |
---|
2415 | Brian Warner <warner@lothar.com>**20091126234257 |
---|
2416 | Ignore-this: fa88796fcad1763c6a2bf81f56103223 |
---|
2417 | |
---|
2418 | * backups now share dirnodes with any previous backup, in any location, |
---|
2419 | so renames and moves are handled very efficiently |
---|
2420 | * "tahoe backup" no longer bothers reading the previous snapshot |
---|
2421 | * if you switch grids, you should delete ~/.tahoe/private/backupdb.sqlite, |
---|
2422 | to force new uploads of all files and directories |
---|
2423 | ] |
---|
2424 | [webapi: fix t=check for DIR2-LIT (i.e. empty immutable directories) |
---|
2425 | Brian Warner <warner@lothar.com>**20091126232731 |
---|
2426 | Ignore-this: 8513c890525c69c1eca0e80d53a231f8 |
---|
2427 | ] |
---|
2428 | [PipelineError: fix str() on python2.4 . Closes #842. |
---|
2429 | Brian Warner <warner@lothar.com>**20091124212512 |
---|
2430 | Ignore-this: e62c92ea9ede2ab7d11fe63f43b9c942 |
---|
2431 | ] |
---|
2432 | [test_uri.py: s/NewDirnode/Dirnode/ , now that they aren't "new" anymore |
---|
2433 | Brian Warner <warner@lothar.com>**20091120075553 |
---|
2434 | Ignore-this: 61c8ef5e45a9d966873a610d8349b830 |
---|
2435 | ] |
---|
2436 | [interface name cleanups: IFileNode, IImmutableFileNode, IMutableFileNode |
---|
2437 | Brian Warner <warner@lothar.com>**20091120075255 |
---|
2438 | Ignore-this: e3d193c229e2463e1d0b0c92306de27f |
---|
2439 | |
---|
2440 | The proper hierarchy is: |
---|
2441 | IFilesystemNode |
---|
2442 | +IFileNode |
---|
2443 | ++IMutableFileNode |
---|
2444 | ++IImmutableFileNode |
---|
2445 | +IDirectoryNode |
---|
2446 | |
---|
2447 | Also expand test_client.py (NodeMaker) to hit all IFilesystemNode types. |
---|
2448 | ] |
---|
2449 | [class name cleanups: s/FileNode/ImmutableFileNode/ |
---|
2450 | Brian Warner <warner@lothar.com>**20091120072239 |
---|
2451 | Ignore-this: 4b3218f2d0e585c62827e14ad8ed8ac1 |
---|
2452 | |
---|
2453 | also fix test/bench_dirnode.py for recent dirnode changes |
---|
2454 | ] |
---|
2455 | [Use DIR-IMM and t=mkdir-immutable for "tahoe backup", for #828 |
---|
2456 | Brian Warner <warner@lothar.com>**20091118192813 |
---|
2457 | Ignore-this: a4720529c9bc6bc8b22a3d3265925491 |
---|
2458 | ] |
---|
2459 | [web/directory.py: use "DIR-IMM" to describe immutable directories, not DIR-RO |
---|
2460 | Brian Warner <warner@lothar.com>**20091118191832 |
---|
2461 | Ignore-this: aceafd6ab4bf1cc0c2a719ef7319ac03 |
---|
2462 | ] |
---|
2463 | [web/info.py: hush pyflakes |
---|
2464 | Brian Warner <warner@lothar.com>**20091118191736 |
---|
2465 | Ignore-this: edc5f128a2b8095fb20686a75747c8 |
---|
2466 | ] |
---|
2467 | [make get_size/get_current_size consistent for all IFilesystemNode classes |
---|
2468 | Brian Warner <warner@lothar.com>**20091118191624 |
---|
2469 | Ignore-this: bd3449cf96e4827abaaf962672c1665a |
---|
2470 | |
---|
2471 | * stop caching most_recent_size in dirnode, rely upon backing filenode for it |
---|
2472 | * start caching most_recent_size in MutableFileNode |
---|
2473 | * return None when you don't know, not "?" |
---|
2474 | * only render None as "?" in the web "more info" page |
---|
2475 | * add get_size/get_current_size to UnknownNode |
---|
2476 | ] |
---|
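A tiny illustration of the convention described above: nodes return None when the size is unknown, and only the display layer turns that into "?"; 'node' is a placeholder IFilesystemNode.

    size = node.get_size()
    display_size = "?" if size is None else str(size)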
2477 | [ImmutableDirectoryURIVerifier: fix verifycap handling |
---|
2478 | Brian Warner <warner@lothar.com>**20091118164238 |
---|
2479 | Ignore-this: 6bba5c717b54352262eabca6e805d590 |
---|
2480 | ] |
---|
2481 | [Add t=mkdir-immutable to the webapi. Closes #607. |
---|
2482 | Brian Warner <warner@lothar.com>**20091118070900 |
---|
2483 | Ignore-this: 311e5fab9a5f28b9e8a28d3d08f3c0d |
---|
2484 | |
---|
2485 | * change t=mkdir-with-children to not use multipart/form encoding. Instead, |
---|
2486 | the request body is all JSON. t=mkdir-immutable uses this format too. |
---|
2487 | * make nodemaker.create_immutable_dirnode() get convergence from SecretHolder, |
---|
2488 | but let callers override it |
---|
2489 | * raise NotDeepImmutableError instead of using assert() |
---|
2490 | * add mutable= argument to DirectoryNode.create_subdirectory(), default True |
---|
2491 | ] |
---|
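A hedged sketch of driving the new t=mkdir-immutable endpoint described above: the request body is a JSON dictionary of initial children in the same shape that t=json emits. The node URL and the child cap are placeholders.

    import json
    import urllib2

    children = {
        u"readme.txt": ["filenode", {"ro_uri": "URI:CHK:placeholder",
                                     "metadata": {}}],
    }
    url = "http://127.0.0.1:3456/uri?t=mkdir-immutable"
    request = urllib2.Request(url, json.dumps(children))
    dircap = urllib2.urlopen(request).read()   # responds with the new immutable dircap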
2492 | [move convergence secret into SecretHolder, next to lease secret |
---|
2493 | Brian Warner <warner@lothar.com>**20091118015444 |
---|
2494 | Ignore-this: 312f85978a339f2d04deb5bcb8f511bc |
---|
2495 | ] |
---|
2496 | [nodemaker: implement immutable directories (internal interface), for #607 |
---|
2497 | Brian Warner <warner@lothar.com>**20091112002233 |
---|
2498 | Ignore-this: d09fccf41813fdf7e0db177ed9e5e130 |
---|
2499 | |
---|
2500 | * nodemaker.create_from_cap() now handles DIR2-CHK and DIR2-LIT |
---|
2501 | * client.create_immutable_dirnode() is used to create them |
---|
2502 | * no webapi yet |
---|
2503 | ] |
---|
2504 | [stop using IURI()/etc as an adapter |
---|
2505 | Brian Warner <warner@lothar.com>**20091111224542 |
---|
2506 | Ignore-this: 9611da7ea6a4696de2a3b8c08776e6e0 |
---|
2507 | ] |
---|
2508 | [clean up uri-vs-cap terminology, emphasize cap instances instead of URI strings |
---|
2509 | Brian Warner <warner@lothar.com>**20091111222619 |
---|
2510 | Ignore-this: 93626385f6e7f039ada71f54feefe267 |
---|
2511 | |
---|
2512 | * "cap" means a python instance which encapsulates a filecap/dircap (uri.py) |
---|
2513 | * "uri" means a string with a "URI:" prefix |
---|
2514 | * FileNode instances are created with (and retain) a cap instance, and |
---|
2515 | generate uri strings on demand |
---|
2516 | * .get_cap/get_readcap/get_verifycap/get_repaircap return cap instances |
---|
2517 | * .get_uri/get_readonly_uri return uri strings |
---|
2518 | |
---|
2519 | * add filenode.download_to_filename() for control.py, should find a better way |
---|
2520 | * use MutableFileNode.init_from_cap, not .init_from_uri |
---|
2521 | * directory URI instances: use get_filenode_cap, not get_filenode_uri |
---|
2522 | * update/cleanup bench_dirnode.py to match, add Makefile target to run it |
---|
2523 | ] |
---|
2524 | [add parser for immutable directory caps: DIR2-CHK, DIR2-LIT, DIR2-CHK-Verifier |
---|
2525 | Brian Warner <warner@lothar.com>**20091104181351 |
---|
2526 | Ignore-this: 854398cc7a75bada57fa97c367b67518 |
---|
2527 | ] |
---|
2528 | [wui: s/TahoeLAFS/Tahoe-LAFS/ |
---|
2529 | zooko@zooko.com**20091029035050 |
---|
2530 | Ignore-this: 901e64cd862e492ed3132bd298583c26 |
---|
2531 | ] |
---|
2532 | [tests: bump up the timeout on test_repairer to see if 120 seconds was too short for François's ARM box to do the test even when it was doing it right. |
---|
2533 | zooko@zooko.com**20091027224800 |
---|
2534 | Ignore-this: 95e93dc2e018b9948253c2045d506f56 |
---|
2535 | ] |
---|
2536 | [dirnode.pack_children(): add deep_immutable= argument |
---|
2537 | Brian Warner <warner@lothar.com>**20091026162809 |
---|
2538 | Ignore-this: d5a2371e47662c4bc6eff273e8181b00 |
---|
2539 | |
---|
2540 | This will be used by DIR2:CHK to enforce the deep-immutability requirement. |
---|
2541 | ] |
---|
2542 | [webapi: use t=mkdir-with-children instead of a children= arg to t=mkdir . |
---|
2543 | Brian Warner <warner@lothar.com>**20091026011321 |
---|
2544 | Ignore-this: 769cab30b6ab50db95000b6c5a524916 |
---|
2545 | |
---|
2546 | This is safer: in the earlier API, an old webapi server would silently ignore |
---|
2547 | the initial children, and clients trying to set them would have to fetch the |
---|
2548 | newly-created directory to discover the incompatibility. In the new API, |
---|
2549 | clients using t=mkdir-with-children against an old webapi server will get a |
---|
2550 | clear error. |
---|
2551 | ] |
---|
2552 | [nodemaker.create_new_mutable_directory: pack_children() in initial_contents= |
---|
2553 | Brian Warner <warner@lothar.com>**20091020005118 |
---|
2554 | Ignore-this: bd43c4eefe06fd32b7492bcb0a55d07e |
---|
2555 | instead of creating an empty file and then adding the children later. |
---|
2556 | |
---|
2557 | This should speed up mkdir(initial_children) considerably, removing two |
---|
2558 | roundtrips and an entire read-modify-write cycle, probably bringing it down |
---|
2559 | to a single roundtrip. A quick test (against the volunteergrid) suggests a |
---|
2560 | 30% speedup. |
---|
2561 | |
---|
2562 | test_dirnode: add new tests to enforce the restrictions that interfaces.py |
---|
2563 | claims for create_new_mutable_directory(): no UnknownNodes, metadata dicts |
---|
2564 | ] |
---|
2565 | [test_dirnode.py: add tests of initial_children= args to client.create_dirnode |
---|
2566 | Brian Warner <warner@lothar.com>**20091017194159 |
---|
2567 | Ignore-this: 2e2da28323a4d5d815466387914abc1b |
---|
2568 | and nodemaker.create_new_mutable_directory |
---|
2569 | ] |
---|
2570 | [update many dirnode interfaces to accept dict-of-nodes instead of dict-of-caps |
---|
2571 | Brian Warner <warner@lothar.com>**20091017192829 |
---|
2572 | Ignore-this: b35472285143862a856bf4b361d692f0 |
---|
2573 | |
---|
2574 | interfaces.py: define INodeMaker, document argument values, change |
---|
2575 | create_new_mutable_directory() to take dict-of-nodes. Change |
---|
2576 | dirnode.set_nodes() and dirnode.create_subdirectory() too. |
---|
2577 | nodemaker.py: use INodeMaker, update create_new_mutable_directory() |
---|
2578 | client.py: have create_dirnode() delegate initial_children= to nodemaker |
---|
2579 | dirnode.py (Adder): take dict-of-nodes instead of list-of-nodes, which |
---|
2580 | updates set_nodes() and create_subdirectory() |
---|
2581 | web/common.py (convert_initial_children_json): create dict-of-nodes |
---|
2582 | web/directory.py: same |
---|
2583 | web/unlinked.py: same |
---|
2584 | test_dirnode.py: update tests to match |
---|
2585 | ] |
---|
2586 | [dirnode.py: move pack_children() out to a function, for eventual use by others |
---|
2587 | Brian Warner <warner@lothar.com>**20091017180707 |
---|
2588 | Ignore-this: 6a823fb61f2c180fd38d6742d3196a7a |
---|
2589 | ] |
---|
2590 | [move dirnode.CachingDict to dictutil.AuxValueDict, generalize method names, |
---|
2591 | Brian Warner <warner@lothar.com>**20091017180005 |
---|
2592 | Ignore-this: b086933cf429df0fcea16a308d2640dd |
---|
2593 | improve tests. Let dirnode _pack_children accept either dict or AuxValueDict. |
---|
2594 | ] |
---|
2595 | [test/common.py: update FakeMutableFileNode to new contents= callable scheme |
---|
2596 | Brian Warner <warner@lothar.com>**20091013052154 |
---|
2597 | Ignore-this: 62f00a76454a2190d1c8641c5993632f |
---|
2598 | ] |
---|
2599 | [The initial_children= argument to nodemaker.create_new_mutable_directory is |
---|
2600 | Brian Warner <warner@lothar.com>**20091013031922 |
---|
2601 | Ignore-this: 72e45317c21f9eb9ec3bd79bd4311f48 |
---|
2602 | now enabled. |
---|
2603 | ] |
---|
2604 | [client.create_mutable_file(contents=) now accepts a callable, which is |
---|
2605 | Brian Warner <warner@lothar.com>**20091013031232 |
---|
2606 | Ignore-this: 3c89d2f50c1e652b83f20bd3f4f27c4b |
---|
2607 | invoked with the new MutableFileNode and is supposed to return the initial |
---|
2608 | contents. This can be used by e.g. a new dirnode which needs the filenode's |
---|
2609 | writekey to encrypt its initial children. |
---|
2610 | |
---|
2611 | create_mutable_file() still accepts a bytestring too, or None for an empty |
---|
2612 | file. |
---|
2613 | ] |
---|
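A sketch of the callable-contents form described in the entry above; the get_writekey() accessor and the 'client' reference are assumptions used only for illustration.

    def _initial_contents(node):
        # invoked with the freshly created MutableFileNode; whatever it
        # returns becomes the file's initial contents
        return "seeded using writekey %r" % (node.get_writekey(),)

    d = client.create_mutable_file(_initial_contents)
    d.addCallback(lambda node: node.get_uri())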
2614 | [webapi: t=mkdir now accepts initial children, using the same JSON that t=json |
---|
2615 | Brian Warner <warner@lothar.com>**20091013023444 |
---|
2616 | Ignore-this: 574a46ed46af4251abf8c9580fd31ef7 |
---|
2617 | emits. |
---|
2618 | |
---|
2619 | client.create_dirnode(initial_children=) now works. |
---|
2620 | ] |
---|
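A hedged sketch of the now-working initial_children= argument noted above, assuming the children dict maps unicode names to (node, metadata) pairs per the dict-of-nodes convention adopted elsewhere in this bundle; 'client' and 'filenode' are placeholders.

    children = {u"readme": (filenode, {})}   # name -> (child node, metadata)
    d = client.create_dirnode(initial_children=children)
    d.addCallback(lambda dirnode: dirnode.get_uri())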
2621 | [replace dirnode.create_empty_directory() with create_subdirectory(), which |
---|
2622 | Brian Warner <warner@lothar.com>**20091013021520 |
---|
2623 | Ignore-this: 6b57cb51bcfcc6058d0df569fdc8a9cf |
---|
2624 | takes an initial_children= argument |
---|
2625 | ] |
---|
2626 | [dirnode.set_children: change return value: fire with self instead of None |
---|
2627 | Brian Warner <warner@lothar.com>**20091013015026 |
---|
2628 | Ignore-this: f1d14e67e084e4b2a4e25fa849b0e753 |
---|
2629 | ] |
---|
2630 | [dirnode.set_nodes: change return value: fire with self instead of None |
---|
2631 | Brian Warner <warner@lothar.com>**20091013014546 |
---|
2632 | Ignore-this: b75b3829fb53f7399693f1c1a39aacae |
---|
2633 | ] |
---|
2634 | [dirnode.set_children: take a dict, not a list |
---|
2635 | Brian Warner <warner@lothar.com>**20091013002440 |
---|
2636 | Ignore-this: 540ce72ce2727ee053afaae1ff124e21 |
---|
2637 | ] |
---|
2638 | [dirnode.set_uri/set_children: change signature to take writecap+readcap |
---|
2639 | Brian Warner <warner@lothar.com>**20091012235126 |
---|
2640 | Ignore-this: 5df617b2d379a51c79148a857e6026b1 |
---|
2641 | instead of a single cap. The webapi t=set_children call benefits too. |
---|
2642 | ] |
---|
2643 | [replace Client.create_empty_dirnode() with create_dirnode(), in anticipation |
---|
2644 | Brian Warner <warner@lothar.com>**20091012224506 |
---|
2645 | Ignore-this: cbdaa4266ecb3c6496ffceab4f95709d |
---|
2646 | of adding initial_children= argument. |
---|
2647 | |
---|
2648 | Includes stubbed-out initial_children= support. |
---|
2649 | ] |
---|
2650 | [test_web.py: use a less-fake client, making test harness smaller |
---|
2651 | Brian Warner <warner@lothar.com>**20091012222808 |
---|
2652 | Ignore-this: 29e95147f8c94282885c65b411d100bb |
---|
2653 | ] |
---|
2654 | [webapi.txt: document t=set_children, other small edits |
---|
2655 | Brian Warner <warner@lothar.com>**20091009200446 |
---|
2656 | Ignore-this: 4d7e76b04a7b8eaa0a981879f778ea5d |
---|
2657 | ] |
---|
2658 | [Verifier: check the full crypttext-hash tree on each share. Removed .todos
---|
2659 | Brian Warner <warner@lothar.com>**20091005221849 |
---|
2660 | Ignore-this: 6fb039c5584812017d91725e687323a5 |
---|
2661 | from the last few test_repairer tests that were waiting on this. |
---|
2662 | ] |
---|
2663 | [Verifier: check the full block-hash-tree on each share |
---|
2664 | Brian Warner <warner@lothar.com>**20091005214844 |
---|
2665 | Ignore-this: 3f7ccf6d253f32340f1bf1da27803eee |
---|
2666 | |
---|
2667 | Removed the .todo from two test_repairer tests that check this. The only |
---|
2668 | remaining .todos are on the three crypttext-hash-tree tests. |
---|
2669 | ] |
---|
2670 | [Verifier: check the full share-hash chain on each share |
---|
2671 | Brian Warner <warner@lothar.com>**20091005213443 |
---|
2672 | Ignore-this: 3d30111904158bec06a4eac22fd39d17 |
---|
2673 | |
---|
2674 | Removed the .todo from two test_repairer tests that check this. |
---|
2675 | ] |
---|
2676 | [test_repairer: rename Verifier test cases to be more precise and less verbose |
---|
2677 | Brian Warner <warner@lothar.com>**20091005201115 |
---|
2678 | Ignore-this: 64be7094e33338c7c2aea9387e138771 |
---|
2679 | ] |
---|
2680 | [immutable/checker.py: rearrange code a little bit, make it easier to follow |
---|
2681 | Brian Warner <warner@lothar.com>**20091005200252 |
---|
2682 | Ignore-this: 91cc303fab66faf717433a709f785fb5 |
---|
2683 | ] |
---|
2684 | [test/common.py: wrap docstrings to 80cols so I can read them more easily |
---|
2685 | Brian Warner <warner@lothar.com>**20091005200143 |
---|
2686 | Ignore-this: b180a3a0235cbe309c87bd5e873cbbb3 |
---|
2687 | ] |
---|
2688 | [immutable/download.py: wrap to 80cols, no functional changes |
---|
2689 | Brian Warner <warner@lothar.com>**20091005192542 |
---|
2690 | Ignore-this: 6b05fe3dc6d78832323e708b9e6a1fe |
---|
2691 | ] |
---|
2692 | [CHK-hashes.svg: cross out plaintext hashes, since we don't include |
---|
2693 | Brian Warner <warner@lothar.com>**20091005010803 |
---|
2694 | Ignore-this: bea2e953b65ec7359363aa20de8cb603 |
---|
2695 | them (until we finish #453) |
---|
2696 | ] |
---|
2697 | [docs: a few licensing clarifications requested by Ubuntu |
---|
2698 | zooko@zooko.com**20090927033226 |
---|
2699 | Ignore-this: 749fc8c9aeb6dc643669854a3e81baa7 |
---|
2700 | ] |
---|
2701 | [setup: remove binary WinFUSE modules |
---|
2702 | zooko@zooko.com**20090924211436 |
---|
2703 | Ignore-this: 8aefc571d2ae22b9405fc650f2c2062 |
---|
2704 | I would prefer to have just source code, or indications of what 3rd-party packages are required, under revision control, and have the build process generate
---|
2705 | or acquire the binaries as needed. Also, having these in our release tarballs is interfering with getting Tahoe-LAFS uploaded into Ubuntu Karmic. (Technically,
---|
2706 | they would accept binary modules as long as they came with the accompanying source so that they could satisfy their obligations under GPL2+ and TGPPL1+, but
---|
2707 | it is easier for now to remove the binaries from the source tree.)
---|
2708 | In this case, the binaries are from the tahoe-w32-client project: http://allmydata.org/trac/tahoe-w32-client , from which you can also get the source. |
---|
2709 | ] |
---|
2710 | [setup: remove binary _fusemodule.so 's |
---|
2711 | zooko@zooko.com**20090924211130 |
---|
2712 | Ignore-this: 74487bbe27d280762ac5dd5f51e24186 |
---|
2713 | I would prefer to have just source code, or indications of what 3rd-party packages are required, under revision control, and have the build process generate or acquire the binaries as needed. Also, having these in our release tarballs is interfering with getting Tahoe-LAFS uploaded into Ubuntu Karmic. (Technically, they would accept binary modules as long as they came with the accompanying source so that they could satisfy their obligations under GPL2+ and TGPPL1+, but it is easier for now to remove the binaries from the source tree.) |
---|
2714 | In this case, these modules come from the MacFUSE project: http://code.google.com/p/macfuse/ |
---|
2715 | ] |
---|
2716 | [doc: add a copy of LGPL2 for documentation purposes for ubuntu |
---|
2717 | zooko@zooko.com**20090924054218 |
---|
2718 | Ignore-this: 6a073b48678a7c84dc4fbcef9292ab5b |
---|
2719 | ] |
---|
2720 | [setup: remove a convenience copy of figleaf, to ease inclusion into Ubuntu Karmic Koala |
---|
2721 | zooko@zooko.com**20090924053215 |
---|
2722 | Ignore-this: a0b0c990d6e2ee65c53a24391365ac8d |
---|
2723 | We need to carefully document the licence of figleaf in order to get Tahoe-LAFS into Ubuntu Karmic Koala. However, figleaf isn't really a part of Tahoe-LAFS per se -- this is just a "convenience copy" of a development tool. The quickest way to make Tahoe-LAFS acceptable for Karmic then, is to remove figleaf from the Tahoe-LAFS tarball itself. People who want to run figleaf on Tahoe-LAFS (as everyone should want) can install figleaf themselves. I haven't tested this -- there may be incompatibilities between upstream figleaf and the copy that we had here... |
---|
2724 | ] |
---|
2725 | [setup: shebang for misc/build-deb.py to fail quickly |
---|
2726 | zooko@zooko.com**20090819135626 |
---|
2727 | Ignore-this: 5a1b893234d2d0bb7b7346e84b0a6b4d |
---|
2728 | Without this patch, when I ran "chmod +x ./misc/build-deb.py && ./misc/build-deb.py" then it hung indefinitely. (I wonder what it was doing.) |
---|
2729 | ] |
---|
2730 | [docs: Shawn Willden grants permission for his contributions under GPL2+|TGPPL1+ |
---|
2731 | zooko@zooko.com**20090921164651 |
---|
2732 | Ignore-this: ef1912010d07ff2ffd9678e7abfd0d57 |
---|
2733 | ] |
---|
2734 | [docs: Csaba Henk granted permission to license fuse.py under the same terms as Tahoe-LAFS itself |
---|
2735 | zooko@zooko.com**20090921154659 |
---|
2736 | Ignore-this: c61ba48dcb7206a89a57ca18a0450c53 |
---|
2737 | ] |
---|
2738 | [setup: mark setup.py as having utf-8 encoding in it |
---|
2739 | zooko@zooko.com**20090920180343 |
---|
2740 | Ignore-this: 9d3850733700a44ba7291e9c5e36bb91 |
---|
2741 | ] |
---|
2742 | [doc: licensing cleanups |
---|
2743 | zooko@zooko.com**20090920171631 |
---|
2744 | Ignore-this: 7654f2854bf3c13e6f4d4597633a6630 |
---|
2745 | Use nice utf-8 © instead of "(c)". Remove licensing statements on utility modules that have been assigned to allmydata.com by their original authors. (Nattraverso was not assigned to allmydata.com -- it was LGPL'ed -- but I checked and src/allmydata/util/iputil.py was completely rewritten and doesn't contain any line of code from nattraverso.) Add notes to misc/debian/copyright about licensing on files that aren't just allmydata.com-licensed. |
---|
2746 | ] |
---|
2747 | [build-deb.py: run darcsver early, otherwise we get the wrong version later on |
---|
2748 | Brian Warner <warner@lothar.com>**20090918033620 |
---|
2749 | Ignore-this: 6635c5b85e84f8aed0d8390490c5392a |
---|
2750 | ] |
---|
2751 | [new approach for debian packaging, sharing pieces across distributions. Still experimental, still only works for sid. |
---|
2752 | warner@lothar.com**20090818190527 |
---|
2753 | Ignore-this: a75eb63db9106b3269badbfcdd7f5ce1 |
---|
2754 | ] |
---|
2755 | [new experimental deb-packaging rules. Only works for sid so far. |
---|
2756 | Brian Warner <warner@lothar.com>**20090818014052 |
---|
2757 | Ignore-this: 3a26ad188668098f8f3cc10a7c0c2f27 |
---|
2758 | ] |
---|
2759 | [setup.py: read _version.py and pass to setup(version=), so more commands work |
---|
2760 | Brian Warner <warner@lothar.com>**20090818010057 |
---|
2761 | Ignore-this: b290eb50216938e19f72db211f82147e |
---|
2762 | like "setup.py --version" and "setup.py --fullname" |
---|
2763 | ] |
---|
2764 | [test/check_speed.py: fix shebang line
---|
2765 | Brian Warner <warner@lothar.com>**20090818005948 |
---|
2766 | Ignore-this: 7f3a37caf349c4c4de704d0feb561f8d |
---|
2767 | ] |
---|
2768 | [setup: remove bundled version of darcsver-1.2.1 |
---|
2769 | zooko@zooko.com**20090816233432 |
---|
2770 | Ignore-this: 5357f26d2803db2d39159125dddb963a |
---|
2771 | That version of darcsver emits a scary error message when the darcs executable or the _darcs subdirectory is not found. |
---|
2772 | This error is hidden (unless the --loud option is passed) in darcsver >= 1.3.1. |
---|
2773 | Fixes #788. |
---|
2774 | ] |
---|
2775 | [de-Service-ify Helper, pass in storage_broker and secret_holder directly. |
---|
2776 | Brian Warner <warner@lothar.com>**20090815201737 |
---|
2777 | Ignore-this: 86b8ac0f90f77a1036cd604dd1304d8b |
---|
2778 | This makes it more obvious that the Helper currently generates leases with |
---|
2779 | the Helper's own secrets, rather than getting values from the client, which |
---|
2780 | is arguably a bug that will likely be resolved with the Accounting project. |
---|
2781 | ] |
---|
2782 | [immutable.Downloader: pass StorageBroker to constructor, stop being a Service |
---|
2783 | Brian Warner <warner@lothar.com>**20090815192543 |
---|
2784 | Ignore-this: af5ab12dbf75377640a670c689838479 |
---|
2785 | child of the client, access with client.downloader instead of |
---|
2786 | client.getServiceNamed("downloader"). The single "Downloader" instance is |
---|
2787 | scheduled for demolition anyways, to be replaced by individual |
---|
2788 | filenode.download calls. |
---|
2789 | ] |
---|
2790 | [tests: double the timeout on test_runner.RunNode.test_introducer since feisty hit a timeout |
---|
2791 | zooko@zooko.com**20090815160512 |
---|
2792 | Ignore-this: ca7358bce4bdabe8eea75dedc39c0e67 |
---|
2793 | I'm not sure if this is an actual timing issue (feisty is running on an overloaded VM if I recall correctly), or if there is a deeper bug.
---|
2794 | ] |
---|
2795 | [stop making History be a Service, it wasn't necessary |
---|
2796 | Brian Warner <warner@lothar.com>**20090815114415 |
---|
2797 | Ignore-this: b60449231557f1934a751c7effa93cfe |
---|
2798 | ] |
---|
2799 | [Overhaul IFilesystemNode handling, to simplify tests and use POLA internally. |
---|
2800 | Brian Warner <warner@lothar.com>**20090815112846 |
---|
2801 | Ignore-this: 1db1b9c149a60a310228aba04c5c8e5f |
---|
2802 | |
---|
2803 | * stop using IURI as an adapter |
---|
2804 | * pass cap strings around instead of URI instances |
---|
2805 | * move filenode/dirnode creation duties from Client to new NodeMaker class |
---|
2806 | * move other Client duties to KeyGenerator, SecretHolder, History classes |
---|
2807 | * stop passing Client reference to dirnode/filenode constructors |
---|
2808 | - pass less-powerful references instead, like StorageBroker or Uploader |
---|
2809 | * always create DirectoryNodes by wrapping a filenode (mutable for now) |
---|
2810 | * remove some specialized mock classes from unit tests |
---|
2811 | |
---|
2812 | Detailed list of changes (done one at a time, then merged together) |
---|
2813 | |
---|
2814 | always pass a string to create_node_from_uri(), not an IURI instance |
---|
2815 | always pass a string to IFilesystemNode constructors, not an IURI instance |
---|
2816 | stop using IURI() as an adapter, switch on cap prefix in create_node_from_uri() |
---|
2817 | client.py: move SecretHolder code out to a separate class |
---|
2818 | test_web.py: hush pyflakes |
---|
2819 | client.py: move NodeMaker functionality out into a separate object |
---|
2820 | LiteralFileNode: stop storing a Client reference |
---|
2821 | immutable Checker: remove Client reference, it only needs a SecretHolder |
---|
2822 | immutable Upload: remove Client reference, leave SecretHolder and StorageBroker |
---|
2823 | immutable Repairer: replace Client reference with StorageBroker and SecretHolder |
---|
2824 | immutable FileNode: remove Client reference |
---|
2825 | mutable.Publish: stop passing Client |
---|
2826 | mutable.ServermapUpdater: get StorageBroker in constructor, not by peeking into Client reference |
---|
2827 | MutableChecker: reference StorageBroker and History directly, not through Client |
---|
2828 | mutable.FileNode: removed unused indirection to checker classes |
---|
2829 | mutable.FileNode: remove Client reference |
---|
2830 | client.py: move RSA key generation into a separate class, so it can be passed to the nodemaker |
---|
2831 | move create_mutable_file() into NodeMaker |
---|
2832 | test_dirnode.py: stop using FakeClient mockups, use NoNetworkGrid instead. This simplifies the code, but takes longer to run (17s instead of 6s). This should come down later when other cleanups make it possible to use simpler (non-RSA) fake mutable files for dirnode tests. |
---|
2833 | test_mutable.py: clean up basedir names |
---|
2834 | client.py: move create_empty_dirnode() into NodeMaker |
---|
2835 | dirnode.py: get rid of DirectoryNode.create |
---|
2836 | remove DirectoryNode.init_from_uri, refactor NodeMaker for customization, simplify test_web's mock Client to match |
---|
2837 | stop passing Client to DirectoryNode, make DirectoryNode.create_with_mutablefile the normal DirectoryNode constructor, start removing client from NodeMaker |
---|
2838 | remove Client from NodeMaker |
---|
2839 | move helper status into History, pass History to web.Status instead of Client |
---|
2840 | test_mutable.py: fix minor typo |
---|
2841 | ] |
---|
2842 | [docs: edits for docs/running.html from Sam Mason |
---|
2843 | zooko@zooko.com**20090809201416 |
---|
2844 | Ignore-this: 2207e80449943ebd4ed50cea57c43143 |
---|
2845 | ] |
---|
2846 | [docs: install.html: instruct Debian users to use this document and not to go find the DownloadDebianPackages page, ignore the warning at the top of it, and try it |
---|
2847 | zooko@zooko.com**20090804123840 |
---|
2848 | Ignore-this: 49da654f19d377ffc5a1eff0c820e026 |
---|
2849 | http://allmydata.org/pipermail/tahoe-dev/2009-August/002507.html |
---|
2850 | ] |
---|
2851 | [docs: relnotes.txt: reflow to 63 chars wide because google groups and some web forms seem to wrap to that |
---|
2852 | zooko@zooko.com**20090802135016 |
---|
2853 | Ignore-this: 53b1493a0491bc30fb2935fad283caeb |
---|
2854 | ] |
---|
2855 | [docs: about.html: fix English usage noticed by Amber |
---|
2856 | zooko@zooko.com**20090802050533 |
---|
2857 | Ignore-this: 89965c4650f9bd100a615c401181a956 |
---|
2858 | ] |
---|
2859 | [docs: fix mis-spelled word in about.html |
---|
2860 | zooko@zooko.com**20090802050320 |
---|
2861 | Ignore-this: fdfd0397bc7cef9edfde425dddeb67e5 |
---|
2862 | ] |
---|
2863 | [TAG allmydata-tahoe-1.5.0 |
---|
2864 | zooko@zooko.com**20090802031303 |
---|
2865 | Ignore-this: 94e5558e7225c39a86aae666ea00f166 |
---|
2866 | ] |
---|
2867 | Patch bundle hash: |
---|
2868 | b8174b5e869654c7a2692f660b0b14fb22102888 |
---|