[tahoe-dev] some questions about tahoe-lafs

Callme Whatiwant nejucomo at gmail.com
Tue Jul 23 18:57:39 UTC 2013


On Wed, Jul 17, 2013 at 8:56 PM, Avi Freedman <freedman at freedman.net> wrote:
>
> Hideo, I am a padawan in the LAFS realm but re #1, that was one of my
> first questions - why not have the client be able to talk to webdav,
> S3, SWIFT, and other systems directly so no logic was needed server side?
>

I like this idea, and I've heard Zooko and Brian Warner discussing an
even more basic "simple REST storage interface" which would go along
with those other standard interfaces.

However, there's a crucial service that storage nodes provide: they
verify the mapping between the capability and the contents.  I'm
glossing over a detail I don't fully understand, namely the
intermediate data that connects a Cap (which is a secret) to the
stored ciphertext in a verifiably indexed manner.

So in general there's a map from { Cap -> plaintext } and that mapping
is made out of a combination of magical crypto fairy dust plus storage
node indexing and storage.
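
To make that slightly more concrete, here's a rough Python sketch of
the fan-out from one secret.  The tag names are invented, and the real
Tahoe-LAFS derivation (tagged SHA-256d hashes, etc.) differs in its
details:

    import hashlib

    def tagged_hash(tag, data):
        # Domain-separated hashing, loosely modeled on Tahoe's
        # tagged hashes; the tag keeps derived values independent.
        return hashlib.sha256(tag + b":" + data).digest()

    def cap_to_key_and_index(cap_secret):
        # One secret fans out into both halves of the map: a
        # decryption key (the crypto fairy dust) and a public
        # storage index (what the storage nodes index by).
        key = tagged_hash(b"encryption-key", cap_secret)
        storage_index = tagged_hash(b"storage-index", key)[:16]
        return key, storage_index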

A storage node which does not validate the intermediate indexing
derived from the Cap secret cannot prevent anyone from overwriting
the ciphertext with junk.  For immutable caps, the required
verification is that the ciphertext matches an indexing value derived
from the immutable Cap.  For mutable caps, the required verification
is that the ciphertext matches an indexing value derived from the
mutable *write* cap, *and* that updates to the ciphertext are
versioned by the holder of the write cap.  (The versioning prevents
rollback attacks, where someone who remembers an old ciphertext, and
cannot decrypt it, replays it later to convince a storage node to
overwrite a newer revision.)
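
In code, the two server-side checks might look something like this
sketch.  I'm treating the immutable index as a plain content hash and
hand-waving the signature check as a boolean, both of which are
simpler than the real scheme:

    import hashlib

    def check_immutable_write(storage_index, ciphertext):
        # Refuse junk: the ciphertext must actually hash to the
        # index it claims to live under.
        return hashlib.sha256(ciphertext).digest()[:16] == storage_index

    def check_mutable_write(stored_seqnum, new_seqnum, signature_ok):
        # The update must be signed by the write-cap holder AND be
        # strictly newer than what we already hold; the seqnum
        # comparison is what defeats the replay/rollback attack.
        return signature_ok and new_seqnum > stored_seqnum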

So I believe that integrity and confidentiality could *still* be
provided by "dumb" storage nodes, like a WebDAV server, because the
client always verifies the integrity of the ciphertext; but we would
lose some availability properties and become more vulnerable to
attacks such as denial of service by overwriting shares with garbage,
or rollback by replaying old ciphertexts.
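
Here's a sketch of what a client-side read from such a dumb store
could look like.  For illustration I'm pretending a cap is just an
(index, expected hash) pair, whereas a real immutable cap carries a
key plus hashes pinning the ciphertext:

    import hashlib

    def read_from_dumb_store(store, cap):
        index, expected_hash = cap
        ciphertext = store.get(index)  # e.g. a plain WebDAV GET
        if hashlib.sha256(ciphertext).digest() != expected_hash:
            # We detect the garbage, so integrity holds -- but the
            # data is still gone, which is the availability loss.
            raise IOError("share failed its integrity check")
        return ciphertext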


The last I heard about the "HTTP plan", the desire is to simplify the
storage-node requirements as much as possible and define a standard
HTTP interface.  That standard would have to require some
cryptographic verifications, so it couldn't simply be WebDAV, S3, and
the like.  I still believe this would be beneficial, however, because
it would start to break the many useful features of Tahoe-LAFS out
into decoupled, reusable components, and it would make alternative
implementations easier.
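
Nothing like this is specified yet, but just to give a flavor of what
I mean by "some cryptographic verifications": a verifying PUT handler
for immutable shares could be as small as this (the endpoint shape is
entirely hypothetical):

    import hashlib

    def handle_put(path, body, shares):
        # Hypothetical endpoint: PUT /immutable/<hex storage index>.
        # Unlike WebDAV or S3, the server re-derives the index from
        # the body and refuses a mismatch.
        index = path.rsplit("/", 1)[-1]
        if hashlib.sha256(body).hexdigest()[:32] != index:
            return 400  # Bad Request: body does not match index
        shares[index] = body
        return 201  # Created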


> Right now I think you have to do FUSE mounts of S3 or webdav or something
> else and run the storage node on top of that but longer term I think
> being able to support no-software-install standard-protocol backend
> nodes as an option would be great for LAFS.

That would probably have poor performance, *but* it would preserve the
important storage-side integrity checks I described above.

Of course, Zooko just sent email about the "cloud branch", which will
be much more efficient than this approach.  I'd say a storage node
running on top of a FUSE mount of cloud storage is potentially viable
in the short term (at the obvious cost of efficiency and integration
complexity).

>
> Avi
>

Nathan

>> Hi,
>
> <snip>
>
>> 1) which clouds are supported to act as nodes? (mega.co.nz? skydrive?
>> gdrive? dropbox?)
>
>> Thank you for your time.
>>
>> Cheers
>
> _______________________________________________
> tahoe-dev mailing list
> tahoe-dev at tahoe-lafs.org
> https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev

