[tahoe-lafs-trac-stream] [Tahoe-LAFS] #3540: allmydata.mutable.publish.Publish.publish has unreliably covered bad shares handling code
Tahoe-LAFS
trac at tahoe-lafs.org
Mon Nov 30 21:05:16 UTC 2020
#3540: allmydata.mutable.publish.Publish.publish has unreliably covered bad shares
handling code
-------------------------+-----------------------
Reporter: exarkun | Owner:
Type: defect | Status: new
Priority: normal | Milestone: undecided
Component: unknown | Version: n/a
Resolution: | Keywords:
Launchpad Bug: |
-------------------------+-----------------------
Comment (by exarkun):
It's hard to tell what the point of this loop is. Nothing in the test
suite fails if I just delete it.
The `self.update_goal()` call that follows immediately afterwards
discovers that the bad shares are homeless and adds them to `self.goal`
itself, so this loop does not seem to be needed to get the bad shares
re-uploaded before the `publish` operation is considered successful.
The `bad_share_checkstrings` bookkeeping might be the real purpose. If
values are found there later then the writer is told about the
checkstring. Perhaps this avoids uncoordinated repairs?
So ...
1. node0 decides that share0 is bad and has sequence number seq0.
2. node0 records a checkstring including seq0 and gets ready to repair
   it.
3. node1 decides that share0 is bad and has sequence number seq0.
4. node1 records a checkstring including seq0 and gets ready to repair
   it.
5. node1 uploads a new share0 with sequence number seq1 against
   checkstring seq0.
6. the share is now repaired and contains a new content version.
7. node0 tries to upload a new share0 with sequence number seq1 against
   checkstring seq0.
8. the upload fails because the checkstring doesn't match.
maybe?
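A tiny self-contained simulation of that race (purely illustrative; the
server and checkstring handling below are stand-ins, not Tahoe-LAFS
classes), showing how the checkstring turns the second repair into a
failed test-and-set instead of a silent overwrite:
{{{
class FakeServer(object):
    """Holds one share and only accepts test-and-set writes."""
    def __init__(self, seqnum, data):
        self.seqnum = seqnum
        self.data = data

    def conditional_write(self, expected_seqnum, new_seqnum, new_data):
        # Only succeed if the share still has the version the writer
        # observed when it planned the repair.
        if self.seqnum != expected_seqnum:
            return False
        self.seqnum, self.data = new_seqnum, new_data
        return True

server = FakeServer(seqnum=0, data=b"bad share0")

# Both nodes observe seq0 and plan a repair against that checkstring.
node0_sees = server.seqnum   # seq0
node1_sees = server.seqnum   # seq0

# node1 repairs first: seq0 -> seq1 succeeds.
assert server.conditional_write(node1_sees, 1, b"repaired by node1")

# node0's repair now fails instead of clobbering node1's new version.
assert not server.conditional_write(node0_sees, 1, b"repaired by node0")
}}}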
--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3540#comment:1>
Tahoe-LAFS <https://Tahoe-LAFS.org>
secure decentralized storage