Opened at 2012-02-10T07:27:40Z
Last modified at 2021-03-30T18:40:19Z
#1670 assigned defect
KeyError in mutable read-modify-write
| Reported by: | vikarti | Owned by: | zooko |
|---|---|---|---|
| Priority: | critical | Milestone: | soon |
| Component: | code-mutable | Version: | 1.9.1 |
| Keywords: | mutable retrieve error tahoe-backup regression test-needed preservation blocks-release | Cc: | vikarti@…, zooko |
| Launchpad Bug: | | | |
Description (last modified by daira)
Mac OS X 10.7 gateway, with mutable.format set to mdmf.
Sometimes tahoe backup processes several files fine and then stops. When it stops, it is always in the same place, like this:
skipping "/Users/vikarti/Calibre Library/Zooko Wilcox-O'Hearn_ Brian Warner/Tahoe - The Least-Authority Filesystem (1347)/Tahoe - The Least-Authority Filesystem - Zooko Wilcox-O'Hearn_ Brian Warner.pdf".. re-using old directory for "/Users/vikarti/Calibre Library/Zooko Wilcox-O'Hearn_ Brian Warner/Tahoe - The Least-Authority Filesystem (1347)" re-using old directory for "/Users/vikarti/Calibre Library/Zooko Wilcox-O'Hearn_ Brian Warner" re-using old directory for '/Users/vikarti/Calibre Library' Traceback (most recent call last): File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/support/bin/tahoe", line 9, in <module> load_entry_point('allmydata-tahoe==1.9.1', 'console_scripts', 'tahoe')() File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/runner.py", line 113, in run rc = runner(sys.argv[1:], install_node_control=install_node_control) File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/runner.py", line 99, in runner rc = cli.dispatch[command](so) File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/cli.py", line 569, in backup rc = tahoe_backup.backup(options) File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/tahoe_backup.py", line 325, in backup return bu.run() File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/tahoe_backup.py", line 123, in run put_child(archives_url, now, new_backup_dircap) File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/tahoe_backup.py", line 57, in put_child raise HTTPError("Error during put_child", resp) allmydata.scripts.common_http.HTTPError: Error during put_child: 500 Internal Server Error "Traceback (most recent call last): File \"/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/base.py\", line 793, in runUntilCurrent call.func(*call.args, **call.kw) File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/support/lib/python2.7/site-packages/foolscap-0.6.3-py2.7.egg/foolscap/eventual.py\", line 26, in _turn cb(*args, **kwargs)\x0a File \"/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py\", line 361, in callback self._startRunCallbacks(result)\x0a File \"/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py\", line 455, in _startRunCallbacks self._runCallbacks() --- <exception caught here> --- File \"/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py\", line 542, in _runCallbacks current.result = callback(current.result, *args, **kw) File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/filenode.py\", line 855, in <lambda>\x0a self._modify_once(modifier, first_time))\x0a File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/filenode.py\", line 881, in _modify_once d = self._try_to_download_data() File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/filenode.py\", line 959, in _try_to_download_data d = self._read(c, fetch_privkey=True) File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/filenode.py\", line 980, in _read d = r.download(consumer, offset, size) File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/retrieve.py\", line 237, in download self._setup_download() File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/retrieve.py\", line 277, in _setup_download shares = 
versionmap[self.verinfo]\x0aexceptions.KeyError: (5, '\\x11c\\x1c\\xf7\\xc9\\xae1e\\xa2\\xdaF\\xaf\\xb6S\\x03IH\\x84\\x11#\\xb5\\xf2TS\\xfb\\xde\\x08\\x81\\x01\\xf0\\xd3\\xec', None, 131073, 1080, 3, 10, '\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x05\\x11c\\x1c\\xf7\\xc9\\xae1e\\xa2\\xdaF\\xaf\\xb6S\\x03IH\\x84\\x11#\\xb5\\xf2TS\\xfb\\xde\\x08\\x81\\x01\\xf0\\xd3\\xec\\x03\\n\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x048', (('EOF', 2575), ('verification_key', 1729), ('enc_privkey', 123), ('share_data', 2167), ('signature', 1473), ('block_hash_tree', 2543), ('share_hash_chain', 1337), ('verification_key_end', 2021))) "
Attachments (7)
Change History (59)
Changed at 2012-02-10T07:40:50Z by vikarti
comment:1 Changed at 2012-02-11T00:36:58Z by davidsarah
- Description modified (diff)
reformat description
comment:2 Changed at 2012-02-17T18:43:24Z by davidsarah
- Component changed from unknown to code-mutable
- Description modified (diff)
- Keywords mutable retrieve error tahoe-backup added
- Summary changed from tahoe backup sometimes fails to KeyError in mutable MDMF retrieve
comment:3 Changed at 2012-02-17T18:43:39Z by davidsarah
- Keywords mdmf added
comment:4 Changed at 2012-02-19T00:59:47Z by davidsarah
- Keywords mdmf removed
- Milestone changed from undecided to soon
- Owner changed from nobody to vikarti
- Summary changed from KeyError in mutable MDMF retrieve to KeyError in mutable retrieve
#1656 is a probable duplicate with a KeyError on the same line. Based on that, the bug doesn't seem to be specific to MDMF; vikarti, can you confirm that by trying to reproduce it with SDMF?
comment:5 Changed at 2012-02-19T01:52:39Z by zooko
- Priority changed from major to critical
Changed at 2012-02-19T02:28:29Z by davidsarah
Incident file for 'tahoe deep-check --repair' from #1656
comment:6 Changed at 2012-02-19T04:43:17Z by vikarti
- Cc vikarti@… added
- Owner changed from vikarti to davidsarah
davidsarah, I haven't been able to reproduce this one on SDMF; it just either happens or it doesn't. If it does happen with SDMF, I will report it here.
comment:7 Changed at 2012-02-19T06:32:13Z by vikarti
davidsarah, possibly related to this: an SDMF directory, with 2 tahoe backup sessions running from different gateways; the 2nd one got:
Unable to create target directory: 500 Internal Server Error
"Traceback (most recent call last):
  File \"C:\\Tahoe\\allmydata-tahoe-1.9.1\\support\\Lib\\site-packages\\twisted-10.1.0-py2.6-win-amd64.egg\\twisted\\internet\\base.py\", line 796, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File \"C:\\Tahoe\\allmydata-tahoe-1.9.1\\support\\Lib\\site-packages\\foolscap-0.6.3-py2.6.egg\\foolscap\\eventual.py\", line 26, in _turn
    cb(*args, **kwargs)
  File \"C:\\Tahoe\\allmydata-tahoe-1.9.1\\support\\Lib\\site-packages\\twisted-10.1.0-py2.6-win-amd64.egg\\twisted\\internet\\defer.py\", line 318, in callback
    self._startRunCallbacks(result)
  File \"C:\\Tahoe\\allmydata-tahoe-1.9.1\\support\\Lib\\site-packages\\twisted-10.1.0-py2.6-win-amd64.egg\\twisted\\internet\\defer.py\", line 424, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File \"C:\\Tahoe\\allmydata-tahoe-1.9.1\\support\\Lib\\site-packages\\twisted-10.1.0-py2.6-win-amd64.egg\\twisted\\internet\\defer.py\", line 441, in _runCallbacks
    self.result = callback(self.result, *args, **kw)
  File \"c:\\tahoe\\allmydata-tahoe-1.9.1\\src\\allmydata\\mutable\\filenode.py\", line 855, in <lambda>
    self._modify_once(modifier, first_time))
  File \"c:\\tahoe\\allmydata-tahoe-1.9.1\\src\\allmydata\\mutable\\filenode.py\", line 881, in _modify_once
    d = self._try_to_download_data()
  File \"c:\\tahoe\\allmydata-tahoe-1.9.1\\src\\allmydata\\mutable\\filenode.py\", line 959, in _try_to_download_data
    d = self._read(c, fetch_privkey=True)
  File \"c:\\tahoe\\allmydata-tahoe-1.9.1\\src\\allmydata\\mutable\\filenode.py\", line 980, in _read
    d = r.download(consumer, offset, size)
  File \"c:\\tahoe\\allmydata-tahoe-1.9.1\\src\\allmydata\\mutable\\retrieve.py\", line 237, in download
    self._setup_download()
  File \"c:\\tahoe\\allmydata-tahoe-1.9.1\\src\\allmydata\\mutable\\retrieve.py\", line 277, in _setup_download
    shares = versionmap[self.verinfo]
exceptions.KeyError: (1, '@E\\xe0H\\x8a\\xbc\\xa6rQA\\xfa\\xab\\xfd&B\\xf6td\"jv\\xae\\x96\\xccel\\xcf\\xc2\\x9f\\x8aG)', '\\xde\\xdeyKb\\xcc\\x8f-\\xed\\xea\\xa7\\xcc\\xe1a\\xfe\\x10', 0, 0, 4, 13, '\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01@E\\xe0H\\x8a\\xbc\\xa6rQA\\xfa\\xab\\xfd&B\\xf6td\"jv\\xae\\x96\\xccel\\xcf\\xc2\\x9f\\x8aG)\\xde\\xdeyKb\\xcc\\x8f-\\xed\\xea\\xa7\\xcc\\xe1a\\xfe\\x10\\x04\\r\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', (('enc_privkey', 823), ('EOF', 2038), ('share_data', 823), ('signature', 399), ('block_hash_tree', 791), ('share_hash_chain', 655)))"
I think this is possibly related to this ticket, so I'm reporting it here. Also, there is an incident file for this one (attaching it).
comment:8 Changed at 2012-02-19T15:10:05Z by davidsarah
vikarti: thanks, that confirms this problem is not specific to MDMF.
comment:9 follow-ups: ↓ 15 ↓ 41 Changed at 2012-02-20T00:55:11Z by kevan
It looks like the mutable filenode modification code in MutableFileVersion reacts to an UncoordinatedWriteError by performing a map update, then trying the operation again. That map update will update the MutableFileVersion's internal servermap, but not its internal verinfo tuple. If the UncoordinatedWriteError is due to a concurrent update operation that updated all or most (enough for the version to be unrecoverable, anyway) of the shares associated with the MutableFileVersion's version of the mutable file, then we would see the KeyError reported by vikarti and others. Someone (likely me, if no one gets around to it before next weekend) should look over the incident logs to see if they support or refute this theory.
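To make the suspected mechanism concrete, here is a minimal, self-contained Python sketch of the failure mode described above. The class and variable names are illustrative stand-ins, not the real MutableFileVersion/ServerMap/Retrieve API:

```python
# Minimal sketch (not Tahoe-LAFS code) of the suspected failure mode:
# the retry path refreshes the map but keeps using a stale verinfo key.

class FakeFileVersion(object):
    def __init__(self, versionmap, verinfo):
        self.versionmap = versionmap  # verinfo tuple -> set of shares
        self.verinfo = verinfo        # cached when the version was first read

    def mapupdate(self, new_versionmap):
        # What the UncoordinatedWriteError handler effectively does:
        self.versionmap = new_versionmap
        # ...note that self.verinfo is NOT refreshed here.

    def setup_download(self):
        # Equivalent of retrieve.py's "shares = versionmap[self.verinfo]".
        return self.versionmap[self.verinfo]

old = ("seq42", "roothash-A")   # illustrative verinfo for the old version
new = ("seq44", "roothash-B")   # version written by a concurrent, uncoordinated writer

node = FakeFileVersion({old: set(["sh0", "sh1"])}, old)
node.mapupdate({new: set(["sh0", "sh1"])})  # retry after UCWE: map now only knows seq44

try:
    node.setup_download()
except KeyError:
    print("KeyError on the stale verinfo key, as reported in this ticket")
```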
comment:10 Changed at 2012-02-20T01:52:33Z by zooko
- Cc zooko added
comment:11 Changed at 2012-03-05T22:45:58Z by zooko
- Keywords regression added
- Milestone changed from soon to 1.9.2
This is a regression in 1.8→1.9, isn't it? I propose we put it into the "1.9.2" Milestone.
comment:12 Changed at 2012-04-12T01:18:11Z by zooko
huang jun provided some detailed debugging:
https://tahoe-lafs.org/pipermail/tahoe-dev/2012-April/007285.html
You have to read carefully to find where huang jun inserted comments and evidence from other log files. Look for the lines beginning with * or >.
comment:13 Changed at 2012-06-04T17:14:39Z by davidsarah
Kyle Markley reported a bug with a similar stacktrace:
As I went about gathering information to submit a ticket, I discovered something interesting. This probably has something to do with the health of the target directories. I discovered one 2/4 directory that works cleanly, and another 2/4 directory that gives an error (after *successfully* creating the link). I haven't created a ticket yet because I'm not sure what's supposed to happen here. :)
Working directory:
$ tahoe check --raw kyle: { "results": { "needs-rebalancing": true, "count-shares-expected": 4, "healthy": false, "count-unrecoverable-versions": 0, "count-shares-needed": 2, "sharemap": { "seq52-2x5m-sh3": [ "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx" ], "seq52-2x5m-sh2": [ "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx" ], "seq52-2x5m-sh1": [ "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx" ], "seq52-2x5m-sh0": [ "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx" ], "seq49-cqp3-sh2": [ "juwmgssmwnhrhfdcpxxmrz3bghh37esx" ], "seq49-cqp3-sh3": [ "vjqcroalrgmft66mgiwfjug667fl6qjd" ] }, "count-recoverable-versions": 2, "servers-responding": [ "vjqcroalrgmft66mgiwfjug667fl6qjd", "juwmgssmwnhrhfdcpxxmrz3bghh37esx", "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx", "47cslusczp3uu2kygodi3nlalcruscif", "eqtprtidz5emkvzlqt27dylgocdf3f77" ], "count-good-share-hosts": 1, "count-wrong-shares": 2, "count-shares-good": 4, "count-corrupt-shares": 0, "list-corrupt-shares": [], "recoverable": true }, "storage-index": "rsi6ge4hmbzhxplyqjzkmd254e", "summary": "Unhealthy: multiple versions are recoverable" }
Gives an error:
$ tahoe check --raw share: { "results": { "needs-rebalancing": true, "count-shares-expected": 4, "healthy": false, "count-unrecoverable-versions": 1, "count-shares-needed": 2, "sharemap": { "seq45-7bxx-sh3": [ "juwmgssmwnhrhfdcpxxmrz3bghh37esx" ], "seq46-jiym-sh1": [ "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx" ], "seq46-jiym-sh0": [ "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx" ], "seq56-aiot-sh0": [ "vjqcroalrgmft66mgiwfjug667fl6qjd" ], "seq56-aiot-sh1": [ "vjqcroalrgmft66mgiwfjug667fl6qjd" ], "seq56-aiot-sh2": [ "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx" ], "seq56-aiot-sh3": [ "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx" ] }, "count-recoverable-versions": 2, "servers-responding": [ "vjqcroalrgmft66mgiwfjug667fl6qjd", "juwmgssmwnhrhfdcpxxmrz3bghh37esx", "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx", "47cslusczp3uu2kygodi3nlalcruscif", "eqtprtidz5emkvzlqt27dylgocdf3f77" ], "count-good-share-hosts": 2, "count-wrong-shares": 3, "count-shares-good": 4, "count-corrupt-shares": 0, "list-corrupt-shares": [], "recoverable": true }, "storage-index": "lb2mpyg4fnznnfebfayevcdpki", "summary": "Unhealthy: some versions are unrecoverable multiple versions are recoverable" }
My error looks something like this. The exact message has changed; it used to say there was an UncoordinatedWriteError, but my experimentation seems to have changed things a bit, and right now I only see that error mentioned in the incident report.
$ tahoe ln foo: share:foo Error: 500 Internal Server Error "Traceback (most recent call last):\x0a File \"/usr/local/lib/python2.7/site-packages/twisted/internet/base.py\", line 800, in runUntilCurrent\x0a call.func(*call.args, **call.kw)\x0a File \"/usr/local/lib/python2.7/site-packages/foolscap-0.6.3-py2.7.egg/foolscap/eventual.py\", line 26, in _turn\x0a cb(*args, **kwargs)\x0a File \"/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py\", line 368, in callback\x0a self._startRunCallbacks(result)\x0a File \"/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py\", line 464, in _startRunCallbacks\x0a self._runCallbacks()\x0a--- <exception caught here> ---\x0a File \"/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py\", line 551, in _runCallbacks\x0a current.result = callback(current.result, *args, **kw)\x0a File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/filenode.py\", line 855, in <lambda>\x0a self._modify_once(modifier, first_time))\x0a File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/filenode.py\", line 881, in _modify_once\x0a d = self._try_to_download_data()\x0a File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/filenode.py\", line 959, in _try_to_download_data\x0a d = self._read(c, fetch_privkey=True)\x0a File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/filenode.py\", line 980, in _read\x0a d = r.download(consumer, offset, size)\x0a File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/retrieve.py\", line 237, in download\x0a self._setup_download()\x0a File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/retrieve.py\", line 277, in _setup_download\x0a shares = versionmap[self.verinfo]\x0aexceptions.KeyError: (57, '\\x16i\\xdb\\xa8\\xbc\\xd7\\xabrY\\xcdpv\\xa4I\\x82\\xfe\\xa5i\\xed\\x82;\\xca\\xe8\\xcaL\\xf7\\xdav\\xa9\\xf2O\\t', '\\x19f_S!&\\xb0\\xa1\\xeb\\x94\\x81F)\\xbb\\x89q', 336, 335, 2, 4, '\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x009\\x16i\\xdb\\xa8\\xbc\\xd7\\xabrY\\xcdpv\\xa4I\\x82\\xfe\\xa5i\\xed\\x82;\\xca\\xe8\\xcaL\\xf7\\xdav\\xa9\\xf2O\\t\\x19f_S!&\\xb0\\xa1\\xeb\\x94\\x81F)\\xbb\\x89q\\x02\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x01P\\x00\\x00\\x00\\x00\\x00\\x00\\x01O', (('enc_privkey', 923), ('EOF', 2138), ('share_data', 755), ('signature', 399), ('block_hash_tree', 723), ('share_hash_chain', 655)))\x0a"
From the incident report:
local#11372110 21:30:59.251: current goal: before update: , sh0 to [vjqcroal], sh0 to [xxaj2tgm], sh1 to [vjqcroal], sh1 to [xxaj2tgm], sh2 to [xxaj2tgm], sh3 to [juwmgssm], sh3 to [xxaj2tgm]
local#11372111 21:30:59.251: we are planning to push new seqnum=#58
local#11372112 21:30:59.252: Starting push
local#11372113 21:30:59.252: Pushing segment 1 of 1
local#11372114 21:30:59.275: storage: slot_writev lb2mpyg4fnznnfebfayevcdpki
local#11372115 21:30:59.277: storage: slot_writev lb2mpyg4fnznnfebfayevcdpki
local#11372116 21:30:59.281: _got_write_answer from xxaj2tgm, share 2
local#11372117 21:30:59.281: found the following surprise shares: set([0, 1])
local#11372118 21:30:59.281: they had shares [0, 1] that we didn't know about [INCIDENT-TRIGGER]
local#11372119 21:31:00.433: wrote successfully: adding new share to servermap
local#11372120 21:31:00.435: _got_write_answer from xxaj2tgm, share 3
local#11372121 21:31:00.435: found the following surprise shares: set([0, 1])
local#11372122 21:31:00.435: they had shares [0, 1] that we didn't know about
local#11372123 21:31:00.435: wrote successfully: adding new share to servermap
local#11372124 21:31:00.436: _got_write_answer from vjqcroal, share 0
local#11372125 21:31:00.436: found the following surprise shares: set([])
local#11372126 21:31:00.436: wrote successfully: adding new share to servermap
local#11372127 21:31:00.437: _got_write_answer from vjqcroal, share 1
local#11372128 21:31:00.437: found the following surprise shares: set([])
local#11372129 21:31:00.437: wrote successfully: adding new share to servermap
local#11372130 21:31:00.437: Publish failed with UncoordinatedWriteError
What's up with these surprise shares?
comment:14 Changed at 2012-06-04T17:25:12Z by davidsarah
- Owner changed from davidsarah to kevan
Kevan, is Kyle's report consistent with comment:9?
comment:15 in reply to: ↑ 9 Changed at 2012-06-14T15:32:24Z by zooko
Replying to kevan:
It looks like the mutable filenode modification code in MutableFileVersion reacts to an UncoordinatedWriteError by performing a map update, then trying the operation again.
Hm, that's interesting. Is this... intentional? Is this intended by someone? :-) I don't know right now whether I would intend that behavior.
A simpler behavior in response to UncoordinatedWriteError would be to abort the current operation and inform the user. That simpler behavior would presumably not have this bug in it, and maybe it would also avoid other potential problems caused by re-attempting an operation when there is evidence that someone else is uncoordinatedly changing the same file.
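For comparison, here is a rough sketch of that simpler behavior, using hypothetical helper names rather than the actual filenode.py code:

```python
# Rough sketch of the simpler behaviour proposed above (hypothetical names,
# not the actual filenode.py code): one read-modify-write attempt, with an
# uncoordinated write surfaced to the caller instead of triggering a retry.

class UncoordinatedWriteError(Exception):
    pass

def modify(download_current, modifier, publish):
    """One read-modify-write pass; an UncoordinatedWriteError from publish()
    propagates to the caller rather than causing a map update and retry."""
    new_contents = modifier(download_current())  # read, then modify
    publish(new_contents)                        # write; may raise UncoordinatedWriteError
```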
comment:16 follow-up: ↓ 18 Changed at 2012-06-18T15:37:06Z by zooko
- Owner changed from kevan to zooko
- Status changed from new to assigned
This looks like a similar error to #1772, so I'm investigating them both right now. Kevan, Brian, David-Sarah: if you are interested, please consider whether we should simplify the handling of UncoordinatedWriteError as suggested in comment:15. It seems to me that we currently handle UCWE in a complicated and buggy way, and that we should perhaps make it simpler and less buggy before attempting to make it more complicated and featureful without being buggy. Just a thought. Anyway, I'll report what I learn about this...
comment:17 Changed at 2012-06-18T15:37:49Z by zooko
Sorry, I didn't mean #1772 (in comment:16), I meant #1669.
comment:18 in reply to: ↑ 16 Changed at 2012-06-18T18:21:42Z by davidsarah
Replying to zooko:
please consider the question of whether we should simplify handling of UncoordinatedWriteError as suggested in comment:15.
Well, it looks like there is quite a lot of code to support the retrying of mutable operations, so I assumed that was certainly intentional (it was there before I joined the project, and I was mildly surprised since it seemed not entirely consistent with the "prime coordination directive"). Let's redesign it as part of designing two-phase commit, but not before 1.11. I really want to get 1.10 out with all the changes already on trunk, without making any more significant design changes.
comment:19 Changed at 2012-06-21T19:30:41Z by zooko
- Status changed from assigned to new
Fixed #1669 and it is different from this -- this isn't a duplicate.
By inspecting the bug reports and incident files I've confirmed what Kevan said, that the error happens during retry after a UCWE. One of the incident files shows the following:
02:26:28.641 [1920941]: current goal: before update: , sh0 to [fbil5app], sh1 to [d4f7ssed], sh2 to [lxur2oq3], sh3 to [rczho2ut], sh4 to [xi6zcwi2], sh5 to [zpm5ziqh], sh6 to [e5l3augd], sh7 to [yf6ahya4], sh8 to [7xdwosoe], sh9 to [t3zmoeba], sh10 to [lfc4wmmk], sh11 to [wasuxoux], sh12 to [iz3pbtdv] 02:26:28.641 [1920942]: we are planning to push new seqnum=#2 02:26:28.642 [1920943]: Starting push 02:26:28.642 [1920944]: Pushing segment 1 of 1 02:26:28.950 [1920945]: _got_write_answer from d4f7ssed, share 1 02:26:28.950 [1920947]: WEIRD our testv failed, so the write did not happen [INCIDENT-TRIGGER] 02:26:33.803 [1920948]: somebody modified the share on us: shnum=1: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:28.950 [1920946]: found the following surprise shares: set([]) 02:26:33.805 [1920949]: _got_write_answer from rczho2ut, share 3 02:26:33.805 [1920951]: WEIRD our testv failed, so the write did not happen 02:26:33.805 [1920952]: somebody modified the share on us: shnum=3: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.805 [1920950]: found the following surprise shares: set([]) 02:26:33.806 [1920953]: _got_write_answer from t3zmoeba, share 9 02:26:33.806 [1920955]: WEIRD our testv failed, so the write did not happen 02:26:33.806 [1920956]: somebody modified the share on us: shnum=9: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.806 [1920954]: found the following surprise shares: set([]) 02:26:33.812 [1920957]: _got_write_answer from fbil5app, share 0 02:26:33.812 [1920959]: WEIRD our testv failed, so the write did not happen 02:26:33.812 [1920960]: somebody modified the share on us: shnum=0: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.812 [1920958]: found the following surprise shares: set([]) 02:26:33.813 [1920961]: _got_write_answer from lfc4wmmk, share 10 02:26:33.813 [1920963]: WEIRD our testv failed, so the write did not happen 02:26:33.813 [1920964]: somebody modified the share on us: shnum=10: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.813 [1920962]: found the following surprise shares: set([]) 02:26:33.815 [1920965]: _got_write_answer from wasuxoux, share 11 02:26:33.815 [1920967]: WEIRD our testv failed, so the write did not happen 02:26:33.815 [1920968]: somebody modified the share on us: shnum=11: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.815 [1920966]: found the following surprise shares: set([]) 02:26:33.816 [1920969]: _got_write_answer from lxur2oq3, share 2 02:26:33.816 [1920971]: WEIRD our testv failed, so the write did not happen 02:26:33.818 [1920972]: somebody modified the share on us: shnum=2: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.816 [1920970]: found the following surprise shares: set([]) 02:26:33.819 [1920973]: _got_write_answer from 7xdwosoe, share 8 02:26:33.819 [1920975]: WEIRD our testv failed, so the write did not happen 02:26:33.819 [1920976]: somebody modified the share on us: shnum=8: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.819 [1920974]: found the following surprise shares: set([]) 02:26:33.819 [1920977]: _got_write_answer from iz3pbtdv, share 12 02:26:33.819 [1920979]: WEIRD our testv failed, so the write did not happen 02:26:33.821 [1920980]: somebody modified the share on us: 
shnum=12: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.819 [1920978]: found the following surprise shares: set([]) 02:26:33.822 [1920981]: _got_write_answer from e5l3augd, share 6 02:26:33.822 [1920983]: WEIRD our testv failed, so the write did not happen 02:26:33.822 [1920984]: somebody modified the share on us: shnum=6: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.822 [1920982]: found the following surprise shares: set([]) 02:26:33.882 [1920985]: _got_write_answer from zpm5ziqh, share 5 02:26:33.882 [1920987]: WEIRD our testv failed, so the write did not happen 02:26:33.882 [1920988]: somebody modified the share on us: shnum=5: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.882 [1920986]: found the following surprise shares: set([]) 02:26:33.884 [1920989]: _got_write_answer from xi6zcwi2, share 4 02:26:33.884 [1920991]: WEIRD our testv failed, so the write did not happen 02:26:33.884 [1920992]: somebody modified the share on us: shnum=4: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.884 [1920990]: found the following surprise shares: set([]) 02:26:33.885 [1920993]: _got_write_answer from yf6ahya4, share 7 02:26:33.885 [1920995]: WEIRD our testv failed, so the write did not happen 02:26:33.885 [1920996]: somebody modified the share on us: shnum=7: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t���� ܘvҐ:%��l�i�tA2 02:26:33.885 [1920994]: found the following surprise shares: set([]) 02:26:33.885 [1920997]: Publish failed with UncoordinatedWriteError 02:26:37.075 [1920998]: doing modify 02:26:37.075 [1920999]: SharemapUpdater(wmnt2): starting (MODE_CHECK) 02:26:37.076 [1921000]: sending query to [iz3pbtdv], readsize=1000 02:26:37.078 [1921001]: sending query to [rczho2ut], readsize=1000 02:26:37.079 [1921002]: sending query to [lfc4wmmk], readsize=1000 02:26:37.081 [1921003]: sending query to [yf6ahya4], readsize=1000 02:26:37.082 [1921004]: sending query to [d4f7ssed], readsize=1000 02:26:37.084 [1921005]: sending query to [t3zmoeba], readsize=1000 02:26:37.085 [1921006]: sending query to [fbil5app], readsize=1000 02:26:37.085 [1921007]: sending query to [xi6zcwi2], readsize=1000 02:26:37.086 [1921008]: sending query to [wasuxoux], readsize=1000 02:26:37.088 [1921009]: sending query to [7xdwosoe], readsize=1000 02:26:37.088 [1921010]: sending query to [zpm5ziqh], readsize=1000 02:26:37.089 [1921011]: sending query to [lxur2oq3], readsize=1000 02:26:37.098 [1921012]: sending query to [e5l3augd], readsize=1000 02:26:37.131 [1921013]: got result from [d4f7ssed], 1 shares 02:26:37.131 [1921014]: _got_results done 02:26:37.134 [1921019]: _got_results: got shnum #1 from peerid d4f7ssed 02:26:37.134 [1921020]: found valid version 2-tmzg from d4f7ssed-sh1: 4-13/352/349 02:26:37.131 [1921015]: got result from [rczho2ut], 1 shares 02:26:37.132 [1921016]: _got_results done 02:26:37.137 [1921025]: _got_results: got shnum #3 from peerid rczho2ut 02:26:37.137 [1921026]: found valid version 2-tmzg from rczho2ut-sh3: 4-13/352/349 02:26:37.134 [1921017]: got result from [t3zmoeba], 1 shares 02:26:37.134 [1921018]: _got_results done 02:26:37.138 [1921031]: _got_results: got shnum #9 from peerid t3zmoeba 02:26:37.138 [1921032]: found valid version 2-tmzg from t3zmoeba-sh9: 4-13/352/349 02:26:37.135 [1921021]: _check_for_done, mode is 'MODE_CHECK', 12 queries outstanding, 0 extra peers available, 12 'must query' peers 
left, need_privkey=False 02:26:37.135 [1921022]: 12 'must query' peers left 02:26:37.135 [1921023]: _check_for_done, mode is 'MODE_CHECK', 12 queries outstanding, 0 extra peers available, 12 'must query' peers left, need_privkey=False 02:26:37.135 [1921024]: 12 'must query' peers left 02:26:37.137 [1921027]: _check_for_done, mode is 'MODE_CHECK', 11 queries outstanding, 0 extra peers available, 11 'must query' peers left, need_privkey=False 02:26:37.137 [1921028]: 11 'must query' peers left 02:26:37.137 [1921029]: _check_for_done, mode is 'MODE_CHECK', 11 queries outstanding, 0 extra peers available, 11 'must query' peers left, need_privkey=False 02:26:37.137 [1921030]: 11 'must query' peers left 02:26:37.138 [1921033]: _check_for_done, mode is 'MODE_CHECK', 10 queries outstanding, 0 extra peers available, 10 'must query' peers left, need_privkey=False 02:26:37.138 [1921034]: 10 'must query' peers left 02:26:37.138 [1921035]: _check_for_done, mode is 'MODE_CHECK', 10 queries outstanding, 0 extra peers available, 10 'must query' peers left, need_privkey=False 02:26:37.138 [1921036]: 10 'must query' peers left 02:26:37.269 [1921037]: got result from [7xdwosoe], 1 shares 02:26:37.270 [1921038]: _got_results done 02:26:37.272 [1921039]: _got_results: got shnum #8 from peerid 7xdwosoe 02:26:37.273 [1921040]: found valid version 2-tmzg from 7xdwosoe-sh8: 4-13/352/349 02:26:37.273 [1921041]: _check_for_done, mode is 'MODE_CHECK', 9 queries outstanding, 0 extra peers available, 9 'must query' peers left, need_privkey=False 02:26:37.273 [1921042]: 9 'must query' peers left 02:26:37.273 [1921043]: _check_for_done, mode is 'MODE_CHECK', 9 queries outstanding, 0 extra peers available, 9 'must query' peers left, need_privkey=False 02:26:37.273 [1921044]: 9 'must query' peers left 02:26:37.319 [1921045]: got result from [lfc4wmmk], 1 shares 02:26:37.319 [1921046]: _got_results done 02:26:37.321 [1921047]: _got_results: got shnum #10 from peerid lfc4wmmk
And the user then reported the following:
File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\filenode.py", line 855, in <lambda> self._modify_once(modifier, first_time)) File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\filenode.py", line 881, in _modify_once d = self._try_to_download_data() File "c:\tahoe\allmydata-tahoe-1.9.1\srcllmydata\mutable\filenode.py", line 959, in _try_to_download_data d = self._read(c, fetch_privkey=True) File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\filenode.py", line 980, in _read d = r.download(consumer, offset, size) File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\retrieve.py", line 237, in download self._setup_download() File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\retrieve.py", line 277, in _setup_download shares = versionmap[self.verinfo] exceptions.KeyError: (1, '@E\xe0H\x8a\xbc\xa6rQA\xfa\xab\xfd&B\xf6td"jv\xae\x96\xccel\xcf\xc2\x9f\x8aG)', '\xde\xdeyKb\xcc\x8f-\xed\xea\xa7\xcc\xe1a\xfe\x10', 0, 0, 4, 13, '\x00\x00\x00\x00\x00\x00\x00\x01@E\xe0H\x8a\xbc\xa6rQA\xfa\xab\xfd&B\xf6td"jv\xae\x96\xccel\xcf\xc2\x9f\x8aG)\xde\xdeyKb\xcc\x8f-\xed\xea\xa7\xcc\xe1a\xfe\x10\x04\r\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', (('enc_privkey', 823), ('EOF', 2038), ('share_data', 823), ('signature', 399), ('block_hash_tree', 791), ('share_hash_chain', 655)))
comment:20 Changed at 2012-06-24T03:25:41Z by zooko
- Version changed from 1.9.1 to 1.9.0-s3branch
Hm, actually this may be the same underlying problem as in #1669. In #1669, we found that:
- SDMFSlotWriteProxy.get_verinfo and MDMFSlotReadProxy.get_verinfo return the same shape of tuple, but MDMFSlotWriteProxy.get_verinfo returns a different shape.
So, could this ticket (#1670) be caused by MDMFSlotWriteProxy.get_verinfo returning a differently-shaped verinfo, which then gets used as a key in the Retrieve? Later, when MDMFSlotReadProxy.get_verinfo returns what ought to be the same thing (but isn't, due to the different shape) and the Retrieve looks it up in the versionmap, it would get a KeyError.
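A plain-Python illustration (not Tahoe code) of why a shape mismatch would produce exactly this symptom: two tuples that describe the same version but are shaped differently are unequal, so the dict lookup in _setup_download fails. The field values below are made up for the example:

```python
# Illustration only: differently-shaped tuples for the "same" version are
# different dict keys, so versionmap[verinfo] raises KeyError.

# Hypothetical verinfo as a write proxy might shape it (offsets in one order):
verinfo_from_write_proxy = (5, "roothash", None, 131073, 1080, 3, 10, "prefix",
                            (("share_data", 2167), ("EOF", 2575)))

# Hypothetical verinfo for the same version as a read proxy might shape it:
verinfo_from_read_proxy = (5, "roothash", None, 131073, 1080, 3, 10, "prefix",
                           (("EOF", 2575), ("share_data", 2167)))

versionmap = {verinfo_from_read_proxy: set(["sh0", "sh1", "sh2"])}

try:
    shares = versionmap[verinfo_from_write_proxy]  # same version, different key
except KeyError:
    print("KeyError: the differently-shaped verinfo does not match the map's key")
```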
comment:21 Changed at 2012-06-24T03:39:09Z by zooko
Yes, I've looked at how the auto-retry functionality shown in the stack trace (comment:19) works, and it looks like the earlier attempt to publish would indeed update the ServerMap object's self.servermap dict to have a verinfo returned by the MDMFSlotWriteProxy.get_verinfo. So I'm increasingly confident that the fix to #1669 also fixed this bug. The next step is to write a unit test that exercises the auto-retry functionality with MDMF, which should show the bug present in 1.9.1 and absent in 1.9.2a1.
(Also, by the way, I remain pretty interested in the idea of removing the auto-retry functionality completely.)
comment:22 follow-up: ↓ 29 Changed at 2012-06-24T21:47:34Z by davidsarah
- Version changed from 1.9.0-s3branch to 1.9.1
Did we fix the binary SIs (if that's what they are) in the log, BTW?
comment:23 follow-up: ↓ 24 Changed at 2012-06-25T00:57:02Z by zooko
I haven't committed that patch yet. I wasn't planning to do so for 1.9.2. Shall I?
comment:24 in reply to: ↑ 23 Changed at 2012-06-25T15:07:43Z by davidsarah
Replying to zooko:
I haven't committed that patch yet. I wasn't planning to do so for 1.9.2. Shall I?
Please attach the patch here so I can decide whether to commit it for 1.9.2. It should be low-risk I think.
comment:25 Changed at 2012-07-02T16:24:28Z by davidsarah
- Summary changed from KeyError in mutable retrieve to KeyError in mutable read-modify-write
I think this bug can only happen for a read-modify-write.
comment:26 Changed at 2012-07-03T20:28:43Z by davidsarah
- Keywords test-needed added
- Milestone changed from 1.9.2 to 1.10.0
comment:27 Changed at 2012-09-04T16:36:32Z by zooko
- Summary changed from KeyError in mutable read-modify-write to add tests for KeyError in mutable read-modify-write
comment:28 Changed at 2012-09-04T16:39:03Z by zooko
- Status changed from new to assigned
Will look at adding a test.
comment:29 in reply to: ↑ 22 Changed at 2012-09-04T23:33:18Z by davidsarah
Replying to davidsarah:
Did we fix the binary SIs (if that's what they are) in the log, BTW?
Split to #1800.
comment:30 Changed at 2012-11-13T23:28:41Z by zooko
- Priority changed from critical to normal
comment:31 Changed at 2012-12-20T16:33:25Z by davidsarah
- Milestone changed from 1.10.0 to 1.11.0
comment:32 follow-up: ↓ 37 Changed at 2013-01-14T08:35:30Z by zooko
News flash! joepie91 from IRC reported a bug which looks exactly like this one. However, he is using Tahoe-LAFS v1.9.2, with the patch from #1669 (5524/1.9.2) in place! This means that, contrary to what I thought, that patch did not fix this issue — #1670.
comment:33 Changed at 2013-01-14T09:39:50Z by zooko
Here is a stack trace from joepie91:
ata/mutable/filenode.py", line 885, in _modify_once
    d = self._try_to_download_data()
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 963, in _try_to_download_data
    d = self._read(c, fetch_privkey=True)
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 985, in _read
    d = r.download(consumer, offset, size)
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/retrieve.py", line 244, in download
    self._setup_download()
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/retrieve.py", line 284, in _setup_download
    shares = versionmap[self.verinfo]
exceptions.KeyError: (42, "@'<F\x98\xbdY\xdf\x98\xc5\xa6\x885\xd7U\xe2\xb3U\x83\xdcn\xfa\x0fW2\xe7\xbc\xc1\\%r\xbe", '4\xeb\xc5- \x19\x8b)\xe1.@-z|\x1c\xb2', 668, 666, 4, 8, "\x00\x00\x00\x00\x00\x00\x00\x00*@'<F\x98\xbdY\xdf\x98\xc5\xa6\x885\xd7U\xe2\xb3U\x83\xdcn\xfa\x0fW2\xe7\xbc\xc1\\%r\xbe4\xeb\xc5- \x19\x8b)\xe1.@-z|\x1c\xb2\x04\x08\x00\x00\x00\x00\x00\x00\x02\x9c\x00\x00\x00\x00\x00\x00\x02\x9a", (('enc_privkey', 956), ('EOF', 2172), ('share_data', 789), ('signature', 399), ('block_hash_tree', 757), ('share_hash_chain', 655)))
comment:34 Changed at 2013-01-14T09:41:31Z by zooko
Attaching an incident report file from joepie91 that I manually filtered to remove potentially sensitive information.
Changed at 2013-01-14T09:42:33Z by zooko
comment:35 Changed at 2013-01-14T09:54:12Z by zooko
In attachment:incident-2013-01-14--07-54-09Z-2ymyxfi.flog.bz2.dump.txt.manually-filtered.txt, you can see that the KeyError was preceded by a mysteriously changed version number, probably due to a different gateway modifying the directory at the same time as this gateway was doing so:
local#197564 2013-01-14_05:30:14.753657Z: found valid version 42-iatt from qenkbcot-sh3: 4-8/668/666
…
local#197629 2013-01-14_05:30:18.187664Z: we are planning to push new seqnum=#43
…
local#197655 2013-01-14_05:31:15.323720Z: our testv failed, so the write did not happen
local#197656 2013-01-14_05:31:17.737595Z: somebody modified the share on us: shnum=3: I thought they had #42:R=iatt, but testv reported #44:R=ÙUQ$ÌJè)CâvDí[·²±ºNE#ÖG\Ö
local#197657 2013-01-14_05:31:17.737826Z: Publish failed with UncoordinatedWriteError
So then it does the automatic-merge-and-retry thing (which I would still like to remove, for simplicity):
local#197658 2013-01-14_05:31:21.260199Z: doing modify
local#197659 2013-01-14_05:31:21.260870Z: SharemapUpdater(i2sen): starting (MODE_CHECK)
…
local#197745 2013-01-14_05:31:22.504839Z: got result from [qenkbcot], 1 shares
local#197748 2013-01-14_05:31:22.509699Z: found valid version 44-a3mv from qenkbcot-sh3: 4-8/668/665
But then it somehow uses a cached verinfo which has the old "42" in it, and gets the KeyError:
local#197778 2013-01-14_05:31:27.311038Z: all queries are retired, no extra servers: done
local#197779 2013-01-14_05:31:27.311780Z: servermap: 8*seq44-a3mv
local#197782 2013-01-14_05:31:27.312745Z: Retrieve(i2sen): starting
local#197783 2013-01-14_05:31:27.312889Z: got seqnum 42
local#197784 2013-01-14_05:31:27.585402Z: Unhandled Error
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/twisted/internet/base.py", line 800, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/home/occupy/allmydata-tahoe-1.9.2/support/lib/python2.7/site-packages/foolscap-0.6.4-py2.7.egg/foolscap/eventual.py", line 26, in _turn
    cb(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 368, in callback
    self._startRunCallbacks(result)
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 464, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 551, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 859, in <lambda>
    self._modify_once(modifier, first_time))
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 885, in _modify_once
    d = self._try_to_download_data()
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 963, in _try_to_download_data
    d = self._read(c, fetch_privkey=True)
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 985, in _read
    d = r.download(consumer, offset, size)
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/retrieve.py", line 244, in download
    self._setup_download()
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/retrieve.py", line 284, in _setup_download
    shares = versionmap[self.verinfo]
exceptions.KeyError: (42, "@'<F\x98\xbdY\xdf\x98\xc5\xa6\x885\xd7U\xe2\xb3U\x83\xdcn\xfa\x0fW2\xe7\xbc\xc1\\%r\xbe", '4\xeb\xc5- \x19\x8b)\xe1.@-z|\x1c\xb2', 668, 666, 4, 8, "\x00\x00\x00\x00\x00\x00\x00\x00*@'<F\x98\xbdY\xdf\x98\xc5\xa6\x885\xd7U\xe2\xb3U\x83\xdcn\xfa\x0fW2\xe7\xbc\xc1\\%r\xbe4\xeb\xc5- \x19\x8b)\xe1.@-z|\x1c\xb2\x04\x08\x00\x00\x00\x00\x00\x00\x02\x9c\x00\x00\x00\x00\x00\x00\x02\x9a", (('enc_privkey', 956), ('EOF', 2172), ('share_data', 789), ('signature', 399), ('block_hash_tree', 757), ('share_hash_chain', 655)))
comment:36 Changed at 2013-01-14T09:54:38Z by zooko
- Summary changed from add tests for KeyError in mutable read-modify-write to KeyError in mutable read-modify-write
comment:37 in reply to: ↑ 32 Changed at 2013-01-15T03:26:06Z by davidsarah
Replying to zooko:
News flash! joepie91 from IRC reported a bug which looks exactly like this one. However, he is using Tahoe-LAFS v1.9.2, with the patch from #1669 (5524/1.9.2) in place! This means that, contrary to what I thought, that patch did not fix this issue — #1670.
This is why tests are important. I'm always rather skeptical that something has been fixed if there is no test.
comment:38 Changed at 2013-06-17T23:22:56Z by daira
- Description modified (diff)
AF saw this bug -- it happened nondeterministically:
time tahoe cp /tmp/zeros-100m tahoe:zeros3 Error during PUT: 500 Internal Server Error "Traceback (most recent call last): File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/internet/base.py\", line 824, in runUntilCurrent call.func(*call.args, **call.kw) File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/foolscap-0.6.4-py2.7.egg/foolscap/eventual.py\", line 26, in _turn cb(*args, **kwargs) File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py\", line 380, in callback self._startRunCallbacks(result) File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py\", line 488, in _startRunCallbacks self._runCallbacks() --- <exception caught here> --- File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py\", line 575, in _runCallbacks current.result = callback(current.result, *args, **kw) File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/filenode.py\", line 854, in <lambda> self._modify_once(modifier, first_time)) File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/filenode.py\", line 880, in _modify_once d = self._try_to_download_data() File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/filenode.py\", line 958, in _try_to_download_data d = self._read(c, fetch_privkey=True) File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/filenode.py\", line 980, in _read d = r.download(consumer, offset, size) File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/retrieve.py\", line 244, in download self._setup_download() File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/retrieve.py\", line 284, in _setup_download shares = versionmap[self.verinfo] exceptions.KeyError: (9, '\\x9b\\xb9\\xaa\\xa5\\xfa\\x88\\x9bX\\x03\\x99HK\\x06\\xed\\xa5V\\x12\\xd6f\\x97l^\\x9f\\x06\\x98\\xac\\xe7\\xb4\\xcdn\\x82\\xa8', 'V\\x97\\x15\\\\\\xf8[7\\rZ\\x9bZ\\xf1\\x93\\xe2\\xdb\\xc2', 2145, 2144, 3, 10, '\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\t\\x9b\\xb9\\xaa\\xa5\\xfa\\x88\\x9bX\\x03\\x99HK\\x06\\xed\\xa5V\\x12\\xd6f\\x97l^\\x9f\\x06\\x98\\xac\\xe7\\xb4\\xcdn\\x82\\xa8V\\x97\\x15\\\\\\xf8[7\\rZ\\x9bZ\\xf1\\x93\\xe2\\xdb\\xc2\\x03\\n\\x00\\x00\\x00\\x00\\x00\\x00\\x08a\\x00\\x00\\x00\\x00\\x00\\x00\\x08\`', (('enc_privkey', 1538), ('EOF', 2753), ('share_data', 823), ('signature', 399), ('block_hash_tree', 791), ('share_hash_chain', 655))) "
(Presumably it is happening on the update of the tahoe:zeros3 directory.)
Notice that this is using Tahoe-LAFS v1.10.
comment:39 Changed at 2013-07-05T16:58:22Z by zooko
Ed Kapitein posted to the tahoe-dev list a bug report that matches this stack trace.
comment:40 Changed at 2013-07-05T17:00:58Z by zooko
- Keywords preservation added
- Priority changed from normal to major
Upgrading "Priority" from "normal" to "major", because this seems like a bad bug. It also apparently led to data loss in Ed Kapitein's case, so I'm adding the preservation keyword.
comment:41 in reply to: ↑ 9 Changed at 2013-07-08T00:36:59Z by daira
Replying to kevan:
It looks like the mutable filenode modification code in MutableFileVersion reacts to an UncoordinatedWriteError by performing a map update, then trying the operation again. That map update will update the MutableFileVersion's internal servermap, but not its internal verinfo tuple. If the UncoordinatedWriteError is due to a concurrent update operation that updated all or most (enough for the version to be unrecoverable, anyway) of the shares associated with the MutableFileVersion's version of the mutable file, then we would see the KeyError reported by vikarti and others.
So, isn't the correct fix for this, without making any design changes, just for MutableFileVersion to make sure that its verinfo tuple is also updated when it does a mapupdate in this case?
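A sketch of that fix, with the same caveat as the earlier sketches (hypothetical helper names, not the actual MutableFileVersion API): after the mapupdate triggered by the UncoordinatedWriteError, re-derive the cached verinfo from the fresh servermap before retrying.

```python
# Sketch of the suggested fix (hypothetical names, not the actual
# MutableFileVersion API): keep the cached verinfo in sync with the
# servermap that the UCWE handler just refreshed.

def handle_ucwe(node, do_mapupdate, best_recoverable_verinfo):
    new_servermap = do_mapupdate()                      # what the code already does
    node.servermap = new_servermap
    # The step the analysis in comment:9 says is currently missing:
    node.verinfo = best_recoverable_verinfo(new_servermap)
    # The retry's "versionmap[node.verinfo]" lookup now uses a current key.
    return node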
comment:42 Changed at 2013-08-28T15:36:41Z by zooko
- Milestone changed from soon to 1.11.0
comment:43 Changed at 2013-10-31T15:50:46Z by zooko
- Priority changed from major to critical
comment:44 Changed at 2013-10-31T15:53:01Z by daira
- Keywords blocks-release added
comment:45 Changed at 2013-11-14T23:08:14Z by zooko
We investigated this during the Tahoe-LAFS Summit. It seems likely that directory-modification operations have been accidentally using the original version of the directory, even when there was a write-collision and a newer version of the directory was discovered. I didn't 100% confirm this, but I suspect that means that in those (rare‽) cases where there are write-collisions on a directory, the retrying code would blow away the other person's write, by re-applying the earlier version (plus the current modification). This would be a data loss bug and very much not the kind of thing we tolerate. ☹
Now, there are no bug reports that I am aware of that could point to this having caused a directory modification to get thrown out in practice. There are several reports of this bug leading to an internal KeyError, but as far as I recall, nobody reported that a change they had made to their directory was subsequently discovered to have been lost.
Nonetheless, the possibility of this happening seems to be present, from code inspection.
comment:46 Changed at 2013-11-14T23:11:12Z by zooko
We agreed at the Tahoe-LAFS Summit to remove the "retry" feature instead of fixing it, for v1.11.0. I have a branch which does this: https://github.com/zooko/tahoe-lafs/commits/1670-KeyError-in-mutable-read-modify-write
This branch is not yet ready to merge into trunk because:
- Although it has unit tests, I'm not yet sure they are correct tests.
- The history of patches needs to be rewritten into a nice readable story.
comment:47 Changed at 2014-09-02T16:20:17Z by daira
- Milestone changed from 1.11.0 to 1.12.0
comment:48 Changed at 2014-09-02T16:20:59Z by daira
We have no fix for this, nor do we understand it well.
comment:49 Changed at 2016-03-22T05:02:25Z by warner
- Milestone changed from 1.12.0 to 1.13.0
Milestone renamed
comment:50 Changed at 2016-06-28T18:17:14Z by warner
- Milestone changed from 1.13.0 to 1.14.0
renaming milestone
comment:51 Changed at 2020-06-30T14:45:13Z by exarkun
- Milestone changed from 1.14.0 to 1.15.0
Moving open issues out of closed milestones.
comment:52 Changed at 2021-03-30T18:40:19Z by meejah
- Milestone changed from 1.15.0 to soon
Ticket retargeted after milestone closed
possibly related incident file 1