[tahoe-lafs-trac-stream] [Tahoe-LAFS] #1426: re-key (write-enabler) protocol
Tahoe-LAFS
trac at tahoe-lafs.org
Wed Jul 30 16:42:12 UTC 2014
#1426: re-key (write-enabler) protocol
-------------------------+-------------------------------------------------
    Reporter:  warner        |      Owner:
        Type:  enhancement   |     Status:  new
    Priority:  major         |  Milestone:  eventually
   Component:  code-mutable  |    Version:  1.8.2
    Keywords:  preservation anti-censorship rekey write-enabler mutable
  Resolution:                |  Launchpad Bug:
-------------------------+-------------------------------------------------
Changes (by daira):

 * keywords:  preservation anti-censorship =>
     preservation anti-censorship rekey write-enabler mutable

New description:

Capturing some discussion from the 2011 Tahoe Summit:

Share migration (moving shares from one server to another by copying the
backing store from one drive to another) is currently limited by the
embedded "write-enablers": secret tokens, shared between writecap
holders and each server, which clients must present to authorize changes
to shares for mutable files.

The write-enabler is basically HASH(writecap + serverid): each server
gets a different value, which means that server 1 cannot use its copy
to damage a share (for the same file) on server 2. As a result, when a
share is moved from server 1 to server 2, the embedded write-enabler
will be wrong, and writecap holders will no longer be able to modify
the share. This is a drag.
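
For concreteness, a minimal sketch of that derivation in Python,
assuming a plain SHA-256 construction (the tag and helper names are
illustrative; Tahoe's real code uses its own tagged-hash utilities):

{{{
# Minimal sketch of per-server write-enabler derivation; the tag and
# names are illustrative assumptions, not Tahoe's actual helpers.
import hashlib

TAG = b"tahoe-write-enabler-v1:"   # hypothetical domain-separation tag

def make_write_enabler(writecap_secret, serverid):
    return hashlib.sha256(TAG + writecap_secret + serverid).digest()

writecap_secret = b"\x01" * 32
we1 = make_write_enabler(writecap_secret, b"server-1-id")
we2 = make_write_enabler(writecap_secret, b"server-2-id")
assert we1 != we2   # server 1's copy is useless against server 2's share
}}}
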
So we want a "re-key" protocol to update the write-enabler after the
share has been migrated. The writecap signing key is normally used to
validate a signature on the share's 'roothash', telling readcap holders
that the share was created by an actual writecap-holder. The basic idea
is that the re-keying client uses the writecap key to sign a "please
change the write-enabler to XYZ" message, and delivers this over a
confidential channel to the share's new home. The server extracts the
public verifying key from the share to verify this signature. This tells
the server that the re-key request was approved by someone who knows the
writecap.

The actual message needs to be:

{{{
writecap.key.sign([tag, new-write-enabler, storage-index, serverid])
}}}

and servers must only accept requests that have a matching serverid
(otherwise once server 1 receives a client's re-key message, it could
echo it to server 2 and gain control over the share). The "tag" value
prevents this signature from being confused with a normal share
signature.
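
A sketch of both halves of that exchange, using Ed25519 from the Python
'cryptography' package for concreteness (Tahoe's mutable-file signing
keys are actually RSA; the tag, field widths, and function names here
are assumptions):

{{{
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

REKEY_TAG = b"tahoe-rekey-v1:"  # must differ from the share-signature tag

def encode_rekey_message(new_we, storage_index, serverid):
    # Fixed-width fields keep the signed byte string unambiguous
    # (the widths here are illustrative).
    assert len(new_we) == 32 and len(storage_index) == 16
    return REKEY_TAG + new_we + storage_index + serverid

# Client side: sign the request with the writecap signing key.
def make_rekey_request(signing_key, new_we, storage_index, serverid):
    return signing_key.sign(
        encode_rekey_message(new_we, storage_index, serverid))

# Server side: refuse requests naming another server, then verify with
# the public key extracted from the share; verify() raises
# InvalidSignature on a bad signature.
def handle_rekey(share, my_serverid, new_we, storage_index, serverid, sig):
    if serverid != my_serverid:
        raise ValueError("re-key request addressed to a different server")
    msg = encode_rekey_message(new_we, storage_index, serverid)
    share.verifying_key.verify(sig, msg)
    share.write_enabler = new_we
}}}
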
Mutation requests must be accompanied by the correct write-enabler. If
the WE is wrong, the server should return a distinctive error message,
and the client should perform the re-key protocol, then try the mutation
again. This incurs a CPU cost of N pubkey signatures for the client (one
per server) and one pubkey verification on each server. But it only
needs to be done once per file per migration, and can be done lazily as
the file is being modified anyway, so the delay is not likely to be
significant.
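
A client-side sketch of that lazy flow, reusing the helpers from the
sketches above; the error class and the server's writev()/rekey()
methods are assumed names, not Tahoe's actual remote interface:

{{{
class BadWriteEnablerError(Exception):
    """Assumed distinctive error for a write-enabler mismatch."""

def mutate_with_rekey(server, signing_key, writecap_secret,
                      storage_index, writev_args):
    # The correct WE for this server; a migrated share holds a stale one.
    we = make_write_enabler(writecap_secret, server.serverid)
    try:
        server.writev(storage_index, we, writev_args)
    except BadWriteEnablerError:
        # One signature here, one verification server-side; this happens
        # at most once per file per migration.
        sig = make_rekey_request(signing_key, we, storage_index,
                                 server.serverid)
        server.rekey(storage_index, we, server.serverid, sig)
        server.writev(storage_index, we, writev_args)
}}}
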
The original mutable-share creation message can either provide the
initial write-enabler, or leave it blank (meaning all writes should be
denied until the share is re-keyed). When a rebalancer runs without a
writecap, or when a readcap-only repairer creates a new share, they can
use the blank write-enabler, and clients will re-key as soon as they
try to modify the un-keyed share. (Readcap-only repairers who want to
replace an existing share are out of luck: only a
validate-the-whole-share form of mutation-authorization can correctly
allow mutations without proof of writecap-ownership.)
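
On the server, the blank-write-enabler rule could look like the
following sketch (the sentinel encoding and names are assumptions):

{{{
# A share created without a write-enabler denies all writes until it is
# re-keyed; the all-zeros sentinel is an assumed encoding, and
# BadWriteEnablerError is the assumed error from the previous sketch.
BLANK_WE = b"\x00" * 32

def check_write_enabler(stored_we, presented_we):
    if stored_we == BLANK_WE:
        raise BadWriteEnablerError("share not yet keyed; re-key it first")
    if stored_we != presented_we:
        raise BadWriteEnablerError("wrong write-enabler; re-key and retry")
}}}
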
Allowing write-enablers to be updated also reduces our dependence upon
long-term stable serverids. If we switched from Foolscap tubid-based
serverids to e.g. ECDSA pubkey-based serverids, all the servers'
write-enablers would be invalidated after upgrading to the new code. But
if all clients can re-key the WEs on demand, this is merely a
performance hit, not a correctness failure.
--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1426#comment:7>
Tahoe-LAFS <https://Tahoe-LAFS.org>
secure decentralized storage