#2034 closed defect (fixed)

Update for mutable files is sensitive to the number of servers

Reported by: markberger
Owned by: warner
Priority: normal
Milestone: 1.10.1
Component: code-mutable
Version: 1.10.0
Keywords: mutable publish review-needed
Cc:
Launchpad Bug:

Description (last modified by zooko)

When the number of servers in a grid is greater than or equal to N + k, there is a race condition when mutable files are updated. While waiting for responses, ServermapUpdater hits a threshold once EPSILON servers have reported that they have no shares (currently EPSILON = k). This causes the process to finish and reject any remaining responses.

However, this threshold can be reached before ServermapUpdater has added a server that does hold shares to the servermap, causing the client to reject that server's response. Since the server is not added to the servermap, no writer is created for it. This causes a KeyError when the updater attempts to retrieve the corresponding writer for each share.

This race condition can be replicated by setting the number of servers for allmydata.test.test_mutable.Update to 13.
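
As a purely illustrative sketch of where that KeyError surfaces (the class and names below are made up, not the real Publish code): the publisher builds one writer per share recorded in the servermap and later looks writers up by share number, so a share whose server was dropped from the servermap has no writer.

    class SketchPublish(object):
        def __init__(self, servermap):
            # servermap: dict mapping share number -> server that holds it
            self.writers = dict((shnum, "writer for %s" % server)
                                for shnum, server in servermap.items())

        def push_blockhashes(self, shnum):
            # fails if the share's server never made it into the servermap
            return self.writers[shnum]

    p = SketchPublish({1: "server A"})   # share 0's holder was dropped
    try:
        p.push_blockhashes(0)
    except KeyError as e:
        print("KeyError: %s" % e)        # the failure shown in the tracebacks below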

===============================================================================
[FAIL]
Traceback (most recent call last):
  File "/Users/markberger/Code/tahoe-lafs/src/allmydata/test/test_mutable.py", line 3663, in <lambda>
    self.failUnlessEqual(results, new_data))
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/trial/_synctest.py", line 356, in assertEqual
    % (msg, pformat(first), pformat(second)))
twisted.trial.unittest.FailTest: not equal:
a = 'test datatest datatest datatest datatest datatest datatest datatest datatest datatest data'
b = 'test datatest datatest datatest datatest datatest datatest datatest datatest datatest dataappended'


allmydata.test.test_mutable.Update.test_update_sdmf
===============================================================================
[ERROR]
Traceback (most recent call last):
Failure: allmydata.mutable.common.NotEnoughServersError: ("Publish ran out of good servers, last failure was: [Failure instance: Traceback: <type 'exceptions.KeyError'>: 0\n/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py:642:_push\n/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py:662:push_segment\n/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py:304:addCallback\n/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py:293:addCallbacks\n--- <exception caught here> ---\n/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py:575:_runCallbacks\n/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py:767:_push_segment\n]", None)

allmydata.test.test_mutable.Update.test_append
allmydata.test.test_mutable.Update.test_append_power_of_two
allmydata.test.test_mutable.Update.test_multiple_segment_replace
allmydata.test.test_mutable.Update.test_replace_beginning
allmydata.test.test_mutable.Update.test_replace_segstart1
===============================================================================
[ERROR]
Traceback (most recent call last):
Failure: allmydata.mutable.common.NotEnoughServersError: ("Publish ran out of good servers, last failure was: [Failure instance: Traceback: <type 'exceptions.KeyError'>: 1\n/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py:642:_push\n/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py:662:push_segment\n/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py:304:addCallback\n/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py:293:addCallbacks\n--- <exception caught here> ---\n/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py:575:_runCallbacks\n/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py:767:_push_segment\n]", None)

allmydata.test.test_mutable.Update.test_replace_and_extend
allmydata.test.test_mutable.Update.test_replace_in_last_segment
allmydata.test.test_mutable.Update.test_replace_locations
allmydata.test.test_mutable.Update.test_replace_middle
allmydata.test.test_mutable.Update.test_replace_zero_length_middle
===============================================================================
[ERROR]
Traceback (most recent call last):
  File "/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/filenode.py", line 1177, in _build_uploadable_and_finish
    return p.update(u, offset, segments_and_bht[2], self._version)
  File "/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py", line 341, in update
    self._push()
  File "/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py", line 642, in _push
    return self.push_segment(self._current_segment)
  File "/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py", line 659, in push_segment
    return self._push()
  File "/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py", line 645, in _push
    return self.push_everything_else()
  File "/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py", line 778, in push_everything_else
    self.push_blockhashes()
  File "/Users/markberger/Code/tahoe-lafs/src/allmydata/mutable/publish.py", line 806, in push_blockhashes
    writers = self.writers[shnum]
exceptions.KeyError: 0

allmydata.test.test_mutable.Update.test_replace_zero_length_beginning
allmydata.test.test_mutable.Update.test_replace_zero_length_segstart1
-------------------------------------------------------------------------------
Ran 14 tests in 58.570s

Change History (24)

comment:1 Changed at 2013-07-23T13:22:21Z by markberger

  • Description modified (diff)

comment:2 Changed at 2013-07-23T16:20:37Z by markberger

  • Description modified (diff)
  • Summary changed from Mutable update tests are sensitive to the number of servers to Update for mutable files is sensitive to the number of servers

comment:3 Changed at 2013-07-23T16:26:49Z by markberger

Possible related tickets: #549, #1042

Last edited at 2013-07-23T16:27:04Z by markberger (previous) (diff)

comment:4 Changed at 2013-07-23T17:14:25Z by markberger

  • Keywords review-needed added
  • Milestone undecided deleted
  • Owner set to markberger

comment:5 Changed at 2013-07-23T22:55:32Z by markberger

This patch is necessary in order to have tests for #1057 with a reasonably large number of servers.

Last edited at 2013-07-24T15:06:39Z by markberger (previous) (diff)

comment:6 Changed at 2013-08-05T23:13:46Z by markberger

  • Description modified (diff)

comment:7 Changed at 2013-08-28T16:00:01Z by zooko

  • Milestone set to 1.11.0

comment:8 Changed at 2013-08-28T16:02:57Z by daira

  • Keywords blocks-release added

comment:9 Changed at 2013-09-20T19:35:20Z by zooko

I just had a look at https://github.com/markberger/tahoe-lafs/commit/08f70b511345c9ae267ae90fe31adf5cee1fd723, but I wasn't sure I understood the issue. I thought maybe looking at a unit test would help me understand what this patch is fixing, but there is no unit test included in the patch. Then I came back and read this ticket's original message and started to understand it better.

However, I still don't really understand it.

However, this threshold can be reached before ServermapUpdater has added a server that does hold shares to the servermap, causing the client to reject that server's response. Since the server is not added to the servermap, no writer is created for it.

When can this happen? Is this if there are > K servers in existence, and K servers have responded to queries, saying that they have no shares (or, perhaps some servers fail instead of responding "I have no shares"), and then… what happens? Is this server that "is not added to the servermap", mentioned above, one of the servers that has not yet been queried? Or it has been queried but it has not yet responded?

Ideas:

  • Should we patch the unit tests, as described in the original message ("This race condition can be replicated by setting the number of servers for allmydata.test.test_mutable.Update to 13")?

  • Should the resulting unit test have a comment giving this ticket # and URL to this ticket?

comment:10 Changed at 2013-09-20T19:36:33Z by zooko

  • Description modified (diff)

comment:11 follow-up: Changed at 2013-09-20T19:57:00Z by zooko

Wild stab in the dark, but this couldn't have anything to do with #1670 could it?

comment:12 Changed at 2013-09-20T20:15:44Z by markberger

If I am remembering this correctly, ServermapUpdater starts querying servers and it will stop after epsilon (k) servers have responded with no shares. However, the querying process can end while we know a server has shares, but we haven't finished communicating with it yet.

So let's say our epsilon is 2, and we have three servers: A, B, and C. ServermapUpdater will send queries to servers A, B, and C. Server A responds that it has shares, so ServermapUpdater starts making more requests to get info about each share on server A. But before we can finish our transactions with server A, servers B and C respond that they have no shares. After servers B and C respond, ServermapUpdater checks to see if it hit the epsilon limit. Since our limit is 2, the limit has been hit, and we start rejecting all responses from server A. Since we never finished our transaction with server A, a writer for server A was not created.

This is an issue when the number of servers is greater than or equal to N + k because the querying process will not end until it receives feedback from N + k servers during a share write.
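
The interleaving above can be sketched in a few lines (epsilon = 2 and servers A, B, C as in the example; the functions are illustrative, not the real ServermapUpdater API):

    EPSILON = 2
    empty_responses = 0
    finished = False
    servermap = {}   # server -> share numbers we finished learning about

    def got_no_shares(server):
        global empty_responses, finished
        empty_responses += 1
        if empty_responses >= EPSILON:
            finished = True              # the "are we done yet" check fires

    def got_share_details(server, shnums):
        # the second, slower round-trip to a server that claimed to have shares
        if finished:
            return                       # response rejected
        servermap[server] = shnums

    got_no_shares("B")
    got_no_shares("C")                   # epsilon reached before A finishes
    got_share_details("A", [0, 1])
    assert "A" not in servermap          # so no writers are created for A's shares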

comment:13 in reply to: ↑ 11 Changed at 2013-09-20T20:21:43Z by markberger

Replying to zooko:

Wild stab in the dark, but this couldn't have anything to do with #1670 could it?

Because it is a KeyError, this ticket could be causing #1670. If I had to guess though, I don't think this is causing it, because one of the error reports has k = 2, N = 4, with 5 servers.

comment:14 follow-up: Changed at 2013-11-13T23:00:52Z by daira

  • Keywords mutable publish added; blocks-release removed
  • Milestone changed from 1.11.0 to 1.12.0

I'm bumping this out of the 1.11.0 release because it depends on #1057 which has also been bumped out.

Last edited at 2013-11-13T23:01:32Z by daira (previous) (diff)

comment:15 in reply to: ↑ 14 Changed at 2014-10-02T14:56:03Z by daira

  • Owner changed from markberger to daira
  • Status changed from new to assigned

Replying to daira:

I'm bumping this out of the 1.11.0 release because it depends on #1057 which has also been bumped out.

Actually I think I was mistaken, this *doesn't* depend on #1057. We should re-review it to see whether it can go into 1.11.

comment:16 Changed at 2014-10-02T14:56:12Z by daira

  • Milestone changed from 1.12.0 to 1.11.0

comment:17 Changed at 2015-01-20T17:40:28Z by warner

  • Owner changed from daira to warner
  • Status changed from assigned to new

I'll review markberger's patch.

comment:18 Changed at 2015-01-27T09:00:20Z by warner

Ok, I think the patch is a step forward. It basically changes the point at which we say "yes, this server gave us a good share". It used to set this mark as soon as the first response came back (saying "I *think* I have a share, fetch it if you want"). With the patch, it sets the mark later, after enough of the share has been fetched to verify its signature. In between those two points, the client is grabbing more data (the pubkey, maybe more), so there's a chance for other code to run, including the "are we done yet" checker. The problem here occurred when that other code ran too early.
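
Roughly, the shape of that change (function names here are hypothetical, not the actual patch; the authoritative change is db12f1c) is moving a single call later in the pipeline:

    def record_good_share(servermap, server, shnums):
        servermap.setdefault(server, set()).update(shnums)

    # Before: mark the server as soon as it claims "I *think* I have a share",
    # then fetch the pubkey and the rest; other code, including the
    # "are we done yet" checker, can run during that second round-trip.
    def handle_initial_claim_old(servermap, server, claimed_shnums):
        record_good_share(servermap, server, claimed_shnums)
        # ... then fetch pubkey, verify signature ...

    # After: only mark the server once enough of the share has been fetched
    # to verify its signature.
    def handle_verified_share_new(servermap, server, shnum):
        record_good_share(servermap, server, [shnum])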

I have a feeling that the entire mutable tree is due for an overhaul. Although I can't currently think of any better way to organize it. Mostly I'm bothered by the large Deferred chains in there, and maybe a state machine (like the "new" immutable downloader) would provide fewer hiding places for concurrency/interleaving hazards to lurk.

Landed in db12f1c.

comment:19 Changed at 2015-01-27T09:00:40Z by warner

  • Keywords review-needed removed

comment:20 Changed at 2015-01-27T18:42:35Z by daira

Do we need a regression test for this bug, or can this ticket be closed?

comment:21 Changed at 2015-01-29T12:09:04Z by zooko

I would feel better if we had an accompanying regression test…

comment:22 Changed at 2015-01-29T16:02:55Z by daira

  • Owner changed from warner to markberger

markberger, can you write a test?

comment:23 Changed at 2015-03-26T01:54:10Z by warner

  • Keywords review-needed added
  • Owner changed from markberger to warner
  • Status changed from new to assigned

I tried Mark's original advice (increasing the number of servers in allmydata.test.test_mutable.Update from the default of 10 to 13). Both 10 and 13 pass with current trunk (including the db12f1c fix). If I back out db12f1c, then num_servers=13 causes the original failure listed above, and num_servers=10 does not.

So I think that simply changing num_servers to 13 is enough to test this. https://github.com/tahoe-lafs/tahoe-lafs/pull/149 does that.
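
For reference, the shape of that test change (the authoritative diff is in the pull request above; this sketch assumes the Update case builds its grid via GridTestMixin.set_up_grid() and that set_up_grid() accepts a num_servers argument):

    from twisted.trial import unittest
    from allmydata.test.no_network import GridTestMixin

    class Update(GridTestMixin, unittest.TestCase):
        def setUp(self):
            GridTestMixin.setUp(self)
            # 13 >= N + k for the default 3-of-10 encoding, which is what
            # exposed the race fixed in db12f1c; the old default of 10 did not.
            self.set_up_grid(num_servers=13)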

comment:24 Changed at 2015-03-31T16:48:00Z by warner

  • Resolution set to fixed
  • Status changed from assigned to closed

Landed in 2d3c805.
