[tahoe-lafs-trac-stream] [tahoe-lafs] #1653: mutable-retrieve should accept good shares from servers with bad shares
tahoe-lafs
trac at tahoe-lafs.org
Mon Mar 12 19:12:36 UTC 2012
#1653: mutable-retrieve should accept good shares from servers with bad shares
------------------------------+-------------------------------------------
Reporter: warner | Owner:
Type: defect | Status: new
Priority: major | Milestone: eventually
Component: code-mutable | Version: 1.9.0
Resolution: | Keywords: preservation mutable retrieve
Launchpad Bug: |
------------------------------+-------------------------------------------
Changes (by davidsarah):
* keywords: => preservation mutable retrieve
Old description:
> retrieve.py is currently rather punitive when it detects corrupt shares:
> it stops using *any* shares from the same server. This doesn't have much
> effect when there are plenty of servers and the shares are spread thinly,
> but in small personal grids, a single corrupt share could disqualify
> enough shares that the retrieve fails.
>
> I think this behavior was copied from the old immutable downloader. The
> new immutable downloader doesn't do this: it treats each share
> independently.
>
> To fix this, {{{_mark_bad_share}}} and {{{_remove_reader}}} should remove
> just the one share from {{{self.remaining_sharemap}}}, instead of all
> shares with the same server. We also need to look for other calls to
> {{{_remove_reader()}}}, since I think there may be some which *are*
> supposed to stop all shares from the same server.
New description:
retrieve.py is currently rather punitive when it detects corrupt shares:
it stops using *any* shares from the same server. This doesn't have much
effect when there are plenty of servers and the shares are spread thinly,
but in small personal grids, a single corrupt share could disqualify
enough shares that the retrieve fails.

I think this behavior was copied from the old immutable downloader. The
new immutable downloader doesn't do this: it treats each share
independently.

To fix this, {{{_mark_bad_share}}} and {{{_remove_reader}}} should remove
just the one share from {{{self.remaining_sharemap}}}, instead of all
shares with the same server. We also need to look for other calls to
{{{_remove_reader()}}}, since I think there may be some which *are*
supposed to stop all shares from the same server.
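As an illustration of the proposed change, here is a minimal sketch of the
per-server versus per-share removal. The dict-of-sets shape used here
(server id mapped to a set of share numbers) and the method names are
assumptions made for this example, not the actual retrieve.py structures:

{{{
#!python
# Illustrative sketch only: remaining_sharemap here maps server_id -> set of
# share numbers still considered usable (an assumed shape, not retrieve.py's).

class SharemapSketch:
    def __init__(self):
        self.remaining_sharemap = {}

    def add_share(self, server_id, shnum):
        self.remaining_sharemap.setdefault(server_id, set()).add(shnum)

    def mark_bad_share_punitive(self, server_id, shnum):
        # Current (punitive) behaviour: one corrupt share disqualifies
        # every share held by the same server.
        self.remaining_sharemap.pop(server_id, None)

    def mark_bad_share_per_share(self, server_id, shnum):
        # Proposed behaviour: drop only the single bad share; the server's
        # other shares stay available to the retrieve.
        shares = self.remaining_sharemap.get(server_id)
        if shares is None:
            return
        shares.discard(shnum)
        if not shares:
            del self.remaining_sharemap[server_id]

# Demo: with per-share removal, a corrupt sh0 does not disqualify sh3
# held by the same server.
m = SharemapSketch()
m.add_share("serverA", 0)
m.add_share("serverA", 3)
m.mark_bad_share_per_share("serverA", 0)
assert m.remaining_sharemap == {"serverA": {3}}
}}}

Note that the per-share variant only drops the server's entry once it has no
usable shares left, which is what lets a small grid survive a single corrupt
share on an otherwise healthy server.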
--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1653#comment:1>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage