[tahoe-dev] two reasons not to use semi-private keys in our new cap design

Brian Warner warner at lothar.com
Sun Jul 19 21:31:51 PDT 2009


Zooko O'Whielacronx wrote:

> the use of semi-private keys (or actually of semi-semi-private keys)
> would greatly simplify the implementation of hierarchies of diminished
> caps with more levels

For reference, the semi-private ECDSA-based mutable file scheme we've
been considering (hereafter abbreviated SP-DSA) would have a DSA
hierarchy with two intermediate keys, and symmetric-encryption secrets
(used for data confidentiality) assigned as follows:

 1 = DSA signing key
 2 = first intermediate key (derived with math from 1)
 3 = second intermediate key (derived with math from 2)
 4 = verifying key (derived with math from 3)

 A = writekey (hash of 1), encrypts child writecaps in a dirnode
 B = readkey (hash of A), encrypts filenames and child readcaps in
     dirnode
 C = traversalkey/deep-verify-key (hash of B), encrypts child
     traversalcaps

Then the actual filecaps would be:

 writecap = 1  (allows full access)
 readcap = 2 (no access to child writecaps or forming new signatures)
 traversalcap = 3 (verify and access to traversalcaps only, i.e.
                   deep-verify)
 verifycap = 4 or hash of 4
 storage-index = verifycap = 4 or hash of 4
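The two chains above can be sketched in a few lines of Python. This is a minimal stdlib-only sketch: the "derived with math" steps are elliptic-curve operations with no stdlib equivalent, so derive_semiprivate() below is a hypothetical stand-in (just a tagged hash), and the tag strings and 128-bit truncation are illustrative, not any actual Tahoe format.

```python
# Sketch of the SP-DSA hierarchy described above: a 1->2->3->4 key chain
# (stubbed) plus an A->B->C symmetric-secret chain, exactly as listed.
import hashlib

def H(data: bytes, tag: bytes) -> bytes:
    """Tagged SHA-256, truncated to 128 bits for illustration."""
    return hashlib.sha256(tag + data).digest()[:16]

def derive_semiprivate(key: bytes, level: int) -> bytes:
    # Hypothetical stand-in for the semi-private EC "math" step; a real
    # implementation does scalar/point operations, not hashing.
    return H(key, b"semiprivate-level-%d" % level)

signing_key = b"\x01" * 16                            # 1 = writecap
intermediate1 = derive_semiprivate(signing_key, 2)    # 2 = readcap
intermediate2 = derive_semiprivate(intermediate1, 3)  # 3 = traversalcap
verifying_key = derive_semiprivate(intermediate2, 4)  # 4 = verifycap

writekey = H(signing_key, b"write")     # A: encrypts child writecaps
readkey = H(writekey, b"read")          # B: encrypts names + child readcaps
traversalkey = H(readkey, b"traverse")  # C: encrypts child traversalcaps

storage_index = H(verifying_key, b"SI") # verifycap = 4 or hash of 4
```

Each symmetric secret is a hash of the one above it, and each key is derived from the one above it, so every attenuation step is a pure local computation.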

The server-side shares would hold a copy of "4" (the verifying key).
This scheme is great: it gives us all the properties I've ever wanted
for mutable files, including:

 * one crypto value for all caps (which I think is K bits for the
   writekey, and 2K bits for the rest)

 * servers can verify every bit of the share, all the way up to the
   storage-index (i.e. servers can detect valid shares stored under the
   wrong storage-index)

 * full offline attenuation (aka "diminish computation"): given any cap,
   you can compute any weaker cap without talking to a server

 * related to that, given a writecap you can create new shares without
   talking to a server, which would let you replace a file that was
   completely wiped out (our current RSA-based scheme doesn't allow
   this: if you lose all shares, you'll never get the signing key back,
   so you'll never be able to create new shares, and basically you have
   to give up on that slot)

> However, I really don't think we should rely on semi-private keys in
> our next cap design. There is no formal proof of their security,

I fully agree with this argument. (Although I'm reluctant to let it go:
I think it's a neat idea, and I really like the simple design described
above, and all the properties it has. If I weren't so paranoid about
crypto tools, I'd say go for it. Curses! If only this trick had been
invented twenty years ago and thoroughly reviewed by now. Next time I
get access to a time machine, I'll go back and explain it to Zooko's
teenage self so he can invent it earlier :-).)

> There is another, more technical, reason not to use semi-private keys,

Ok, here's where I get confused.

> In contrast, if we could use our current approach which doesn't rely
> on semi-private keys and somehow compress together the "read key" part
> of the cap and the "verify hash" part of the cap, then we could have
> 96-bit security with just one 96-bit value. However, as we recently
> discussed [2], short bitstrings such as hashes can be targeted by an
> attacker "at once". In contrast, this paper [3] seems to say that
> elliptic curve public keys can't be targeted en masse like that.

There are too many negatives in that paragraph for me to follow. (I must
admit that my time is very fragmented right now.. I'm only getting to
read the mailing list in 10-minute bursts, and I don't have a nice quiet
place to think about things, and won't until I get back from this trip
in a few weeks. It's entirely possible that I would understand what
you're saying if I were at home instead of travelling).

I think you are comparing SP-DSA (above) against an as-yet-undescribed
scheme which depends upon some not-yet-invented compression trick. I
don't think I can respond completely meaningfully until you invent
and/or describe this other scheme :-).

I think much of your text is stating a lower bound (of maybe 128 or 160
bits) on the crypto strings to avoid birthday attacks, which I
completely agree with.

You've mentioned your compression trick to me once or twice before, and
although I've forgotten the details, I think I remember concluding that
it didn't provide all the properties we needed. Let me list those
properties here:

* offline derivation of storage-index from any cap: necessary to get any
  share data, including full pubkeys or other values which were too
  large to store in the cap

* full server-side verification of shares, up to the storage-index: the
  server should be able to detect any bitflip in the share, and should
  also be able to detect a valid share for SI=1 being mistakenly placed
  in a file meant to hold SI=2. (this basically requires the
  storage-index to equal the verifycap or H(verifycap))

* I think it's important, but not absolutely critical, to be able to
  perform offline attenuation from writecap to readcap, without first
  having to fetch e.g. the full pubkey from at least one server. While
  we store both child writecap+readcap in our dirnodes (so many
  use-cases will simply be able to snarf the pre-attenuated readcap from
  the dirnode), we must also populate both writecap and readcap in those
  dirnodes when we fill them. dirnode.set_children() (which is given a
  bunch of child writecaps) will be very expensive if we must first
  fetch each child's pubkey. For the same reason, I think we need
  offline attenuation from writecap/readcap to traversalcap.

* I also think it's important (but less so) to have offline attenuation
  from any cap to a verifycap. This would let us build a repair manifest
  by reading all the dirnodes (without this property, we would also have
  to touch all the filenodes, which would be considerably more work).
  Fortunately, any scheme that uses verifycap=SI will automatically
  provide this property too.

* it would be nice to be able to generate the strongest secret (probably
  the signing key) from the writecap without talking to a server. (note
  that our current RSA scheme doesn't provide this property). This would
  let you fix a lost-all-shares situation, as described above. Again, I
  could live without this.

So, excluding your as-yet-undescribed compression trick, the best
non-semiprivate DSA mutable file scheme that I've been able to come up
with would look like:

 1 = signing key
 2 = verifying key (derived with math from 1)
 3 = storage-index (either 2 or hash of 2, depending upon length)

 A = writekey (hash of 1), encrypts child writecaps in a dirnode
 B = readkey (hash of A), encrypts filenames and child readcaps in
     dirnode
 C = traversalkey/deep-verify-key (hash of B), encrypts child
     traversalcaps

 writecap = 1
 readcap = B+3
 traversalcap = C+3
 verifycap = 3
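For concreteness, here is a stdlib-only sketch of this layout with X = Y = 128. The verifying-key derivation is a hypothetical stand-in (a tagged hash in place of the EC math), and the tag strings are made up; only the structure matters, e.g. that readcap is the concatenation B+3.

```python
import hashlib

def H(data: bytes, tag: bytes) -> bytes:
    return hashlib.sha256(tag + data).digest()[:16]  # Y = 128 bits

def verifying_key_from(signing_key: bytes) -> bytes:
    # hypothetical stand-in for deriving the EC verifying key ("2")
    return H(signing_key, b"pubkey")

signing_key = b"\x02" * 16                       # 1: X = 128-bit signing key
verifying_key = verifying_key_from(signing_key)  # 2
storage_index = H(verifying_key, b"SI")          # 3: hash of 2

writekey = H(signing_key, b"write")      # A
readkey = H(writekey, b"read")           # B
traversalkey = H(readkey, b"traverse")   # C

writecap = signing_key                       # X bits   (128)
readcap = readkey + storage_index            # 2*Y bits (256)
traversalcap = traversalkey + storage_index  # 2*Y bits (256)
verifycap = storage_index                    # Y bits   (128)
```

Every cap either contains the storage-index directly or (for the writecap) everything needed to compute it, so share lookup and attenuation both stay offline.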

The confidentiality comes from the A/B/C chain, and the integrity comes
from the 1/2/3 chain. Excepting the master secret (1), there's no way to
get a single bit to perform both jobs (and still be able to derive a
storage-index from any cap). I care about short writecaps and (to a
lesser extent) short readcaps.. traversalcaps and verifycaps can be
long. With X-bit EC-DSA signing keys and Y-bit hashes (perhaps X=128 and
Y=128), this would give us the following cap lengths:

 writecap: X (128)
 readcap: 2*Y (256)
 traversalcap: 2*Y (256)
 verifycap: Y (128)

I think I'm ok with those lengths, although if we raise X or Y to get a
more comfortable margin then they might get a bit unwieldy.

(Note that the length of the verifying key is irrelevant here, if we
store the full verifying key in the share and merely put its hash in the
readcap/traversalcap/verifycap. This is actually very close to what we
do with RSA keys, except that the RSA signing key is also too long to
put in the cap, so we use writecap=A instead of writecap=1 and have a
scheme to encrypt the signing key and store it in the share.)
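That RSA workaround can be sketched as follows. This is an illustrative stdlib-only toy: the hash-counter keystream stands in for the real symmetric cipher, and the key material is fake; it shows only the structure, i.e. writecap = A, with the signing key stored in the share encrypted under A.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # toy hash-counter keystream; a real implementation would use AES-CTR
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# The RSA signing key is far too long to embed in a cap:
signing_key = b"(imagine 2048 bits of RSA private key material here)"

writekey = hashlib.sha256(b"write" + signing_key).digest()[:16]  # A
writecap = writekey  # writecap = A, not 1

# The signing key goes into the share, encrypted under the writekey,
# so only writecap holders can recover it and form new signatures:
encrypted_signing_key = xor_bytes(
    signing_key, keystream(writekey, len(signing_key)))

recovered = xor_bytes(
    encrypted_signing_key, keystream(writekey, len(encrypted_signing_key)))
```

Note the consequence mentioned above: if all shares are lost, the encrypted signing key is lost with them, and the writecap alone cannot recreate it.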

Since ECDSA signing keys are short enough to use directly as writecaps,
this scheme allows regeneration of slots in which all shares were lost.
Likewise, since writecaps hold the full signing key (from which the
verifying key can be computed), we get full offline attenuation.

I'll wait to say more until Zooko describes his compression trick. In
the brief picture he showed me earlier tonight (which was actually for
immutable files), it looked like the readcap was derived partially from
the hash of the verifycap, and if SI=verifycap or SI=H(verifycap), that
would mean it's impossible to derive the SI from the readcap, which
would make it very difficult to actually locate the right shares. On the
other hand, if the storage-index is something else, then the server
can't completely verify the share.

Hm, you know, what would the cap lengths be for the SP-DSA scheme
anyway? If the intermediate keys need to be 2K bits long, and if a
K-bit hash gives us K bits of integrity (this may be false), then the
SP-DSA scheme and the non-SP DSA scheme I described may give us similar
cap lengths, and we can achieve our goals without relying upon SP-DSA.
Or would Y need to equal 2K (i.e. 256 bits), making the non-SP-DSA
scheme have 384-bit readcaps versus SP-DSA's 256-bit readcaps?

cheers,
 -Brian
