[tahoe-lafs-trac-stream] [tahoe-lafs] #1924: NetBSD < 6.0 /dev/random appears to break RSA keygen in test suites
tahoe-lafs
trac at tahoe-lafs.org
Thu Apr 11 06:39:53 UTC 2013
#1924: NetBSD < 6.0 /dev/random appears to break RSA keygen in test suites
-------------------------------+------------------------------------
Reporter: midnightmagic | Owner:
Type: defect | Status: new
Priority: major | Milestone: undecided
Component: code | Version: 1.9.2
Resolution: | Keywords: netbsd random cryptopp
Launchpad Bug: |
-------------------------------+------------------------------------
Comment (by zooko):
Replying to [comment:13 zooko]:
>
> Is that the bug? Should we add some conditions to config.h so that it
> won't define {{{NONBLOCKING_RNG_AVAILABLE}}} on NetBSD?
Well, no, the only effect of defining {{{NONBLOCKING_RNG_AVAILABLE}}} (on
unix) is to define a class that reads from {{{/dev/urandom}}}:
[//trac/pycryptopp/browser/git/src-cryptopp/osrng.cpp?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L44 osrng.cpp]
So, hold on, what's the behavior on NetBSD again? Reconsider all of the
above in light of the fact that pycryptopp has been reading exclusively
from {{{/dev/urandom}}} on NetBSD and never from {{{/dev/random}}} all
this time.
So in that case, midnightmagic's observations imply that when the entropy
pool has been sucked dry, something about reading from {{{/dev/urandom}}}
causes Crypto++ to generate inconsistent internal values. This is doubly
weird, because:
(a) reading from {{{/dev/urandom}}} should not be detectably different (to
the Crypto++ code) whether the entropy pool is brimming or dry, right? Or
does NetBSD 5.x {{{/dev/urandom}}} really behave differently from Linux
{{{/dev/urandom}}} here?,
and
(b) no matter what the result of reading from {{{/dev/urandom}}} (in
[//trac/pycryptopp/browser/git/src-cryptopp/osrng.cpp?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L44 osrng.cpp]),
this shouldn't cause Crypto++ to generate internally inconsistent values
for its RSA digital signatures.
Note that
[https://tahoe-lafs.org/trac/pycryptopp/browser/git/src-cryptopp/osrng.cpp?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L77 the reads from /dev/urandom]
check whether the OS returned the expected number of bytes as the return
value from {{{read()}}}.
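For illustration, the kind of check being described -- verifying that the
kernel actually delivered the requested byte count -- looks roughly like
this (a hedged sketch, not the actual Crypto++ code; the function name
{{{fill_from_urandom}}} is made up here):

```cpp
// Sketch: read `size` bytes from /dev/urandom, looping on short reads,
// and report failure unless exactly `size` bytes were delivered.
#include <cstddef>
#include <fcntl.h>
#include <unistd.h>

bool fill_from_urandom(unsigned char *output, size_t size) {
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return false;
    size_t total = 0;
    while (total < size) {
        ssize_t n = read(fd, output + total, size - total);
        if (n <= 0) {            // error, EOF, or interrupted read
            close(fd);
            return false;
        }
        total += static_cast<size_t>(n);
    }
    close(fd);
    return true;
}
```

If this check passes, the only failure modes left are the ones discussed
below: the kernel handing back the right count but the wrong (or later
clobbered) bytes.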
I don't see how any possible behavior of the OS's {{{read()}}} call could
cause the observed failure in Crypto++. The only thing I can imagine
causing this result would be if {{{read()}}} returned successfully
(reporting the expected "number of bytes read" -- {{{size}}}) and then the
output buffer {{{output}}} later got **overwritten** by the kernel in the
middle of Crypto++'s computations using that buffer.
Samuel Neves suggested something on IRC to the effect that stack
corruption could also explain the observed fault. Oh, I wonder if the
kernel could sometimes be buffer-overrunning {{{output}}}? Copying more
than {{{size}}} bytes into it?
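One way to probe that overrun hypothesis directly (a hedged sketch of my
own, separate from the debug patches mentioned below; the function name is
invented) would be to surround the read buffer with sentinel bytes and
check them after the {{{read()}}}:

```cpp
// Sketch: pad the target buffer with 0xAA guard bytes on both sides.
// If the kernel ever copies more than `size` bytes, the trailing guard
// region changes and the function reports it.
#include <cstddef>
#include <fcntl.h>
#include <unistd.h>

bool urandom_read_stays_in_bounds(size_t size) {
    const size_t GUARD = 16;
    unsigned char buf[16 + 256 + 16];   // leading guard | payload | trailing guard
    if (size > 256)
        return false;
    for (size_t i = 0; i < sizeof buf; ++i)
        buf[i] = 0xAA;                  // fill everything with the sentinel

    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return false;
    ssize_t n = read(fd, buf + GUARD, size);
    close(fd);
    if (n != static_cast<ssize_t>(size))
        return false;

    // Any disturbed sentinel byte would indicate an out-of-bounds write.
    for (size_t i = 0; i < GUARD; ++i)
        if (buf[i] != 0xAA || buf[sizeof buf - 1 - i] != 0xAA)
            return false;
    return true;
}
```

Running this in a tight loop while the entropy pool is drained would at
least rule the simple-overrun explanation in or out.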
midnightmagic: could you please run the tests with the
https://github.com/zooko/pycryptopp/commits/debug-netbsd-rsa-2 patches
applied? Thanks!
--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1924#comment:14>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage
More information about the tahoe-lafs-trac-stream
mailing list