Opened at 2011-06-29T08:14:08Z
Last modified at 2014-07-30T16:42:12Z
#1426 new enhancement
re-key (write-enabler) protocol
Reported by: | warner | Owned by: | |
---|---|---|---|
Priority: | major | Milestone: | eventually |
Component: | code-mutable | Version: | 1.8.2 |
Keywords: | preservation anti-censorship rekey write-enabler mutable | Cc: | |
Launchpad Bug: |
Description (last modified by daira)
Capturing some discussion from the 2011 Tahoe Summit:
Share migration (moving shares from one server to another by copying the backing store from one drive to another) is currently limited by the embedded "write-enablers": secret tokens, shared between writecap holders and each server, which clients must present to authorize changes to shares for mutable files.
The write-enabler is basically HASH(writecap + serverid): each server gets a different value, which means that server 1 cannot use its copy to cause damage to a share (for the same file) on server 2. As a result, when a share is moved from server 1 to server 2, the embedded write-enabler will be wrong, and writecap holders will no longer be able to modify the share. This is a drag.
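The per-server derivation can be sketched as follows; this is a minimal illustration of the HASH(writecap + serverid) idea, not Tahoe's actual code, and the tag string is hypothetical (Tahoe uses its own tagged-hash conventions):

```python
import hashlib

def write_enabler(writecap_secret: bytes, serverid: bytes) -> bytes:
    # Tagged hash, so the value cannot collide with other secrets
    # derived from the same writecap. The tag is a made-up example.
    tag = b"example-write-enabler-v1:"
    return hashlib.sha256(tag + writecap_secret + serverid).digest()

# Each server gets a distinct value from the same writecap:
we1 = write_enabler(b"writecap-key", b"server-1")
we2 = write_enabler(b"writecap-key", b"server-2")
assert we1 != we2
```

Because server 1's value is useless on server 2, a compromised server learns nothing it can use against other servers holding shares of the same file.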
So we want a "re-key" protocol, to update the write-enabler after the share has been migrated. The writecap signing key is normally used to validate a signature on the share's 'roothash', telling readcap holders that the share was created by an actual writecap-holder. The basic idea is that the re-keying client uses the writecap key to sign a "please change the write-enabler to XYZ" message, and delivers this over a confidential channel to the share's new home. The server extracts the public verifying key from the share to verify this signature. This tells the server that the re-key request was approved by someone who knows the writecap.
The actual message needs to be:
writecap.key.sign([tag, new-write-enabler, storage-index, serverid])
and servers must only accept requests that have a matching serverid (otherwise once server 1 receives a client's re-key message, it could echo it to server 2 and gain control over the share). The "tag" value prevents this signature from being confused with a normal share signature.
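The message construction and the server-side serverid check can be sketched like this. An HMAC stands in for the real public-key signature (the actual protocol signs with the writecap key and the server verifies with the public key embedded in the share); all names here are illustrative:

```python
import hashlib
import hmac

TAG = b"example-rekey-v1"  # hypothetical tag, distinct from share-signature tags

def make_rekey_msg(writecap_key, new_we, storage_index, serverid):
    # Sign [tag, new-write-enabler, storage-index, serverid].
    body = b"|".join([TAG, new_we, storage_index, serverid])
    sig = hmac.new(writecap_key, body, hashlib.sha256).digest()
    return body, sig

def server_accept(writecap_key, body, sig, my_serverid):
    expected = hmac.new(writecap_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return False
    # Reject messages echoed from another server: the embedded serverid
    # must be *this* server's id.
    return body.rsplit(b"|", 1)[1] == my_serverid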
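A message signed for server 1 must be rejected if server 1 echoes it to server 2; a minimal sketch of that check, with an HMAC standing in for the real public-key signature and all names hypothetical:

```python
import hashlib
import hmac

TAG = b"example-rekey-v1"  # hypothetical tag, distinct from share-signature tags

def make_rekey_msg(writecap_key, new_we, storage_index, serverid):
    # Sign [tag, new-write-enabler, storage-index, serverid].
    body = b"|".join([TAG, new_we, storage_index, serverid])
    sig = hmac.new(writecap_key, body, hashlib.sha256).digest()
    return body, sig

def server_accept(writecap_key, body, sig, my_serverid):
    expected = hmac.new(writecap_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return False
    # Reject messages echoed from another server: the embedded serverid
    # must be *this* server's id.
    return body.rsplit(b"|", 1)[1] == my_serverid

body, sig = make_rekey_msg(b"writecap-key", b"new-WE", b"si", b"server-1")
assert server_accept(b"writecap-key", body, sig, b"server-1")
assert not server_accept(b"writecap-key", body, sig, b"server-2")
```

Binding the serverid into the signed body is what prevents the echo attack; the tag prevents the signature from being mistaken for a normal share signature.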
Mutation requests must be accompanied by the correct write-enabler. If the WE is wrong, the server should return a distinctive error message, and the client should perform the re-key protocol, then try the mutation again. This incurs a CPU cost of N pubkey signatures for the client (one per server) and one pubkey verification on each server. But it only needs to be done once per file per migration, and can be done lazily as the file is being modified anyway, so the delay is not likely to be significant.
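The lazy re-key-on-error flow might look like this; the class and error names are hypothetical, and a stub server stands in for the real storage protocol:

```python
class BadWriteEnabler(Exception):
    pass

class FakeServer:
    """Stand-in for a storage server holding one mutable share."""
    def __init__(self, we):
        self.we = we
        self.data = None
    def rekey(self, new_we):
        self.we = new_we  # a real server verifies a signed request first
    def write(self, data, we):
        if we != self.we:
            raise BadWriteEnabler()
        self.data = data

def mutate(server, we, data):
    try:
        server.write(data, we)
    except BadWriteEnabler:
        server.rekey(we)   # one signature per server, paid only on mismatch
        server.write(data, we)

srv = FakeServer(we=b"stale")  # e.g. the share was migrated from elsewhere
mutate(srv, b"current-we", b"new contents")
assert srv.data == b"new contents"
```

The re-key cost is only paid when a mismatch is actually detected, which is why migration degrades to a one-time performance hit rather than a write failure.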
The original mutable-share creation message can either provide the initial write-enabler, or it can leave it blank (meaning all writes should be denied until the share is re-keyed). When a rebalancer runs without a writecap, or when a readcap-only repairer creates a new share, they can use the blank write-enabler, and clients will re-key as soon as they try to modify the un-keyed share. (readcap-only repairers who want to replace an existing share are out of luck: only a validate-the-whole-share form of mutation-authorization can correctly allow mutations without proof of writecap-ownership).
Allowing write-enablers to be updated also reduces our dependence upon long-term stable serverids. If we switched from Foolscap tubid-based serverids to e.g. ECDSA pubkey-based serverids, all the servers' write-enablers would be invalidated after upgrading to the new code. But if all clients can re-key the WEs on demand, this is merely a performance hit, not a correctness failure.
Change History (7)
comment:1 in reply to: ↑ description Changed at 2011-06-29T20:03:58Z by davidsarah
comment:2 follow-up: ↓ 3 Changed at 2011-07-10T14:23:38Z by zooko
Hm, is it worth adding protection against a replay attack? This attack would be a denial of service in which the attacker stores an old writecap.key.sign([tag, new-write-enabler, storage-index, serverid]) message, and every time you try to set a new write-enabler, the attacker replays the old message to reset it.
One good defense would be to include a one-way hash of the previous write-enabler in the message. As davidsarah mentioned in comment:1, it might be convenient anyway for the server to send this one-way hash of the current write-enabler to the client, in order to inform the client about whether they need to rekey.
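The proposed defense can be sketched as follows: binding the hash of the current write-enabler into the signed body makes any captured re-key message go stale as soon as the write-enabler changes again. This is an illustrative sketch, not the actual protocol; all names are hypothetical:

```python
import hashlib

def rekey_body(new_we, current_we, storage_index, serverid):
    # Embed a one-way hash of the write-enabler being replaced.
    we_hash = hashlib.sha256(current_we).hexdigest().encode()
    return b"|".join([b"example-rekey-v2", new_we, we_hash,
                      storage_index, serverid])

def server_accepts(body, stored_we):
    # Reject any message not bound to the server's *current* write-enabler.
    claimed = body.split(b"|")[2]
    return claimed == hashlib.sha256(stored_we).hexdigest().encode()

old_msg = rekey_body(b"WE-1", b"WE-0", b"si", b"srv")  # sets WE-0 -> WE-1
assert server_accepts(old_msg, b"WE-0")
# After the honest client re-keys to WE-2, replaying the old message fails:
assert not server_accepts(old_msg, b"WE-2")
```

A replayed message can thus only "win" once per write-enabler value, rather than indefinitely resetting it.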
comment:3 in reply to: ↑ 2 ; follow-up: ↓ 4 Changed at 2011-07-12T14:48:29Z by davidsarah
Replying to zooko:
Hm, is it worth adding protection against a replay attack? This attack would be a denial of service in which the attacker stores an old writecap.key.sign([tag, new-write-enabler, storage-index, serverid]) message, and every time you try to set a new write-enabler, the attacker replays the old message to reset it.
The impact of that is an extra signature and round-trip (for each server whose write-enabler has been attacked, in parallel) when an honest client later tries to modify the file. Is it worth the attacker's cost (a round-trip to set the enabler) to perform that attack?
One good defense would be to include the one-way hash of the previous write-enabler in the message.
Oh, good idea.
As davidsarah mentioned in comment:1, it might be convenient anyway for the server to send this one-way hash of the current write-enabler to the client, in order to inform the client about whether they need to rekey.
Yes, I now think we should do this.
comment:4 in reply to: ↑ 3 Changed at 2011-07-12T15:06:11Z by zooko
Replying to davidsarah:
The impact of that is an extra signature and round-trip (for each server whose write-enabler has been attacked, in parallel) when an honest client later tries to modify the file. Is it worth the attacker's cost (a round-trip to set the enabler) to perform that attack?
I find it difficult to evaluate cost/benefit for denial-of-service attacks and defenses. I guess the question is how much is it worth to the attacker to deny the user the ability to update this file. Obviously we have no idea--that depends on the attacker, the user, and what's in that file!
A better question--a tighter bound--is whether this defense would increase the cost of this attack so that it is not the cheapest way to deny service. For example, the attacker could accomplish the same goal by preventing all of the user's packets from reaching the servers, or by compromising the servers. Would this attack be cheaper than that?
In at least some cases, injecting false packets is a lot cheaper (more doable) than censoring out all real packets or compromising all servers. In that case, this attack would be the cheapest way to accomplish the goal and for that case this defense would help.
Yes, I now think we should do this.
Cool. :-)
comment:5 Changed at 2011-07-12T15:38:25Z by davidsarah
Replying to zooko:
I guess the question is how much is it worth to the attacker to deny the user the ability to update this file.
But it isn't doing that, it's only forcing the user to perform another signature and round-trip.
... the attacker could accomplish the same goal by preventing all of the user's packets from reaching the servers, or by compromising the servers.
That would achieve a stronger denial of service.
Replying to davidsarah:
Replying to zooko:
One good defense would be to include the one-way hash of the previous write-enabler in the message.
Oh, good idea.
As davidsarah mentioned in comment:1, it might be convenient anyway for the server to send this one-way hash of the current write-enabler to the client, in order to inform the client about whether they need to rekey.
Yes, I now think we should do this.
Wait; for the above defence, the client would need to know the hash of the previous write-enabler in order to calculate the current one. So the server would have to store the previous hash and send both hashes. This is getting a bit complicated -- is it really important to prevent this attack?
comment:6 Changed at 2011-07-12T22:36:39Z by davidsarah
- Keywords preservation anti-censorship added
- Milestone changed from undecided to eventually
comment:7 Changed at 2014-07-30T16:42:12Z by daira
- Description modified (diff)
- Keywords rekey write-enabler mutable added
Replying to warner:
As a small optimization, retrieving a share could also return a bit saying whether the share has a non-blank write-enabler. Since most updates of mutable objects are read-then-write, this means that the client will usually know whether it needs to re-key without incurring an extra round-trip.
(The server could alternatively return a one-way hash of the write-enabler, to allow the client to determine whether it is correct, but that probably isn't a significant win. It's sufficient to optimize out the round-trip in the common case where the share has only been written by honest clients.)
If new mutable objects always start with a blank write-enabler, then the extra cost of the rekeying protocol (N signatures by the client and a verification by each server) will only be paid for objects that are actually modified, i.e. when the second version is written.