TC&C 2014-10-02, Synopsis

Zancas Dearana zancas at leastauthority.com
Thu Oct 2 20:49:10 UTC 2014


attending:  amiller, daira, juan, vitalik, nathan, za, zooko

Zooko started with a Layered Architecture perspective on tahoe using bits of
this presentation:

https://tahoe-lafs.org/~zooko/RealWorldCrypto2014/build/slides/#15

Layers:
~~~~~~~

 0.  Storage
 1.  Network
 2.  Routing
 3.  Agoric (Social/Economic)

note: the agoric layer, at the top, is not yet implemented

Layer 0, Data Storage and Security:
-----------------------------------

 The guiding principle for this layer is that references and authorities are
 unified as "capabilities".

 Data stored in tahoe is either "mutable" or "immutable".

 Both types are associated with a set of 'diminishing' capabilities.  For
 "mutable" data there exists a "write capability", a "read capability", and
 a "verify capability".  These capabilities are ordered from strongest to
 weakest; weaker capabilities are derivable from stronger ones, but not
 vice versa.

 Immutable data does not have write capabilities.
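
As a rough illustration of the ordering, each weaker capability can be
modeled as a one-way hash of the next stronger one.  This is a minimal
sketch only; the labels and key material below are invented, not Tahoe's
actual derivation scheme:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, the digest used for Tahoe capability hashes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Invented key material, for illustration only.
write_cap = b"secret-write-material"          # strongest (mutable data only)
read_cap = sha256d(b"read:" + write_cap)      # derivable from the write cap
verify_cap = sha256d(b"verify:" + read_cap)   # derivable from the read cap

# Deriving a stronger capability from a weaker one would require
# inverting SHA-256, which is the point of the one-way ordering.
```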

Immutable Capability Implementation Details:
''''''''''''''''''''''''''''''''''''''''''''

The following is mainly a summary of the detail in this reference:

 http://eprint.iacr.org/2012/524.pdf

 0. verify capability contents:

    * a sha256d of:

      0. the root of a merkle tree over the ciphertext
      1. the root of a merkle tree over the erasure coded shares

 1. read capability contents:

    0. the verify capability
    1. the symmetric encryption key used to encrypt the file
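
The immutable-capability layout above can be sketched in a few lines.  The
Merkle tree is a toy binary construction and the segments, shares, and key
are placeholders; this is not the real Tahoe encoding:

```python
import hashlib
from typing import List

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves: List[bytes]) -> bytes:
    """Toy binary Merkle tree (duplicates the last node on odd levels)."""
    level = [sha256d(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Placeholder data: ciphertext segments and erasure-coded shares.
ciphertext_segments = [b"seg0", b"seg1", b"seg2"]
shares = [b"share0", b"share1", b"share2", b"share3"]
symmetric_key = b"\x01" * 16  # placeholder encryption key

# Verify cap: sha256d over the two Merkle roots.
verify_cap = sha256d(merkle_root(ciphertext_segments) + merkle_root(shares))
# Read cap: the verify cap plus the symmetric key.
read_cap = (verify_cap, symmetric_key)
```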


Mutable Capability Implementation Details:
''''''''''''''''''''''''''''''''''''''''''

 0. verify capability contents:

    0. a sha256d of the public verification key from the signing key-pair
    1. the root of a merkle tree over the shares

 1. read capability contents:

    0. the verify capability
    1. sha256d of the private signing key from the signing key-pair (see the
       next capability) (you can derive the encryption key from this)

 2. read-write capability contents:

    0. the verify capability
    1. sha256d of the private signing key from the signing key-pair
    2. the AES-CTR encrypted signing key from the signing key-pair
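
The three mutable capabilities above can be sketched the same way.  The key
bytes and ciphertext here are placeholder stand-ins for a real signing
key-pair and its AES-CTR-encrypted private half; only the structure follows
the notes:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Placeholder stand-ins for real cryptographic material.
public_verification_key = b"placeholder public key bytes"
private_signing_key = b"placeholder private key bytes"
encrypted_signing_key = b"placeholder AES-CTR ciphertext of the signing key"
share_merkle_root = b"\x00" * 32

# Verify cap: digest of the public key, plus the share Merkle root.
verify_cap = (sha256d(public_verification_key), share_merkle_root)
# Read cap: verify cap plus a digest of the private signing key
# (the encryption key is derivable from that digest).
read_cap = (verify_cap, sha256d(private_signing_key))
# Write cap: read cap contents plus the encrypted signing key itself.
write_cap = (verify_cap, sha256d(private_signing_key), encrypted_signing_key)
```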


 An interesting property is that an RSA signature authenticates the clear
 text after decryption, to prove the write secret was held by the
 encrypter.

  Daira notes: "A discrete log system should be used for this because of
  its efficiency."


Layer 1, Network Layer:
-----------------------

 An overview of the fundamental topology of a tahoe grid:


https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/network-and-reliance-topology.svg

 The most common interface to tahoe:


https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/frontends/webapi.rst


* Nathan: Note that caps in tahoe depend on the network layer in some
  cases.  Is there a layering conflation/violation here?

* Daira:  Current interfaces between the crypto and network layers are not
  cleanly separated. Write enablers formerly derived from storage_server
  TUBID, now it's simply a shared secret.  Also formerly lease renewal IDs.

I am unsure whether the answer to Nathan's question was:

  * "No, capabilities are now generated independently of networking layer
data."

OR:

  * "Yes, though we've removed the conflation from write-enablers, some
    issues still remain."


There are plenty of problems with the tahoe network protocol.

Layer 1a, Cloud Backend:
------------------------

Daira: The bottom 2 nodes depicted in the network-and-reliance-topology.svg
are part of the cloud-backend, which interfaces with standard cloud storage
provider APIs, to use them as tahoe storage nodes.

https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/specifications/backends/raic.rst

Note the bottom two nodes in the network-and-reliance-topology.svg.

Layer 2, DHT/routing:
---------------------

All-to-all:
'''''''''''

Every client attempts to be aware of every server's state.

Server selection:
'''''''''''''''''

Server selection is based on a random permutation of the servers, computed
as a function of the data being transferred.  This is called rendezvous
hashing.  Brian started using it 7 years ago.
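
A minimal sketch of rendezvous hashing as described above; the server IDs
and the choice of SHA-256 are illustrative, not necessarily Tahoe's exact
scheme:

```python
import hashlib

def rendezvous_permutation(servers, storage_index: bytes):
    """Order servers by H(server_id || storage_index).  Each object gets
    its own permutation, and every client computes the same one without
    any coordination (a.k.a. highest-random-weight hashing)."""
    return sorted(
        servers,
        key=lambda sid: hashlib.sha256(sid + storage_index).digest(),
    )

servers = [b"server-a", b"server-b", b"server-c", b"server-d"]
order1 = rendezvous_permutation(servers, b"file-1")
order2 = rendezvous_permutation(servers, b"file-2")
# Different objects typically map to different server orderings, but the
# ordering for a given object is stable across clients and over time.
```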


Layer 3, Agoric:
----------------

This term was coined by Mark Samuel Miller.

Amiller: Are there examples of projects using only Layer 0?
Zooko:   No. People should/could innovate on the DHT layer, and keep the
network layer.


Accounting:
'''''''''''

 Problem:

  When a client asks to up/down-load, there's no current tracking of
  transactions.

 Solution:

  Each storageserver keeps a record of which requests have been made.
  As a function of requests (or other factors), storageservers implement
  economic policies.
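
A hypothetical sketch of the per-request record keeping described above;
the byte quota is an invented example of an "economic policy", not anything
Tahoe actually implements:

```python
from collections import defaultdict

class StorageServerLedger:
    """Per-client byte counters a storage server could consult when
    deciding whether to honor a request."""

    def __init__(self, quota_bytes: int):
        self.quota_bytes = quota_bytes
        self.used = defaultdict(int)  # client_id -> bytes stored

    def record_upload(self, client_id: str, nbytes: int) -> bool:
        """Record an upload request; refuse it once the quota is hit."""
        if self.used[client_id] + nbytes > self.quota_bytes:
            return False  # example policy: reject over-quota uploads
        self.used[client_id] += nbytes
        return True

ledger = StorageServerLedger(quota_bytes=1000)
first = ledger.record_upload("alice", 600)   # accepted
second = ledger.record_upload("alice", 600)  # rejected: would exceed quota
```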

  Nathan:   There is certainly not a clean separation from layer 2 here.

  Daira:  How was this done at MojoNation?
  Zooko:  Believed that the Agoric layer was the failing of MojoNation.
          Had a rolling second-price auction:

           - on request:  start a timer
           - if the timer runs out:  honor the request
           - elif another request comes in:  do something else
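
The timer-driven scheme above might be sketched like this.  The bidding
interface and the second-price charging rule are extrapolations from the
notes, not MojoNation's actual code:

```python
import heapq

class RollingSecondPriceAuction:
    """Hold each request for a short window; if competing bids arrive
    before the timer expires, serve the highest bidder, who pays the
    second-highest price (the classic second-price rule)."""

    def __init__(self):
        self.bids = []  # max-heap via negated prices

    def on_request(self, client_id: str, price: int) -> None:
        heapq.heappush(self.bids, (-price, client_id))

    def on_timer_expired(self):
        """Timer ran out: honor the best pending request."""
        if not self.bids:
            return None
        _, winner = heapq.heappop(self.bids)
        # Winner is charged the runner-up's bid (or 0 with no competitor).
        charge = -self.bids[0][0] if self.bids else 0
        self.bids.clear()
        return winner, charge

auction = RollingSecondPriceAuction()
auction.on_request("alice", 5)
auction.on_request("bob", 3)
winner, charge = auction.on_timer_expired()  # alice wins, pays 3
```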

  Nathan: What about simpler accounting just for simple use-cases?

   https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Ostrom


How does IPFS actually work?
----------------------------

Name:
'''''
Juan: Originally galactic filesystem.  But GFS was taken.

So now it's the InterPlanetary File System (IPFS)

Motivation:
'''''''''''

Juan wanted to come up with a different way of doing static file
distribution.

Specifically he wanted to share really large scientific datasets across
network:

 git+bittorrent

The files of interest were too large for git, so it seemed that rolling
checksums, or splitting large file into smaller chunks would be a good
approach.
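
A toy sketch of splitting a large file into content-addressed chunks.
Fixed-size splitting stands in for the rolling-checksum chunking mentioned
above, and the "root" hash is a stand-in for a real Merkle-DAG node:

```python
import hashlib

def chunk_and_address(data: bytes, chunk_size: int = 4):
    """Split a blob into fixed-size chunks, address each chunk by its
    hash, then hash the address list into a single root identifier."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    addresses = [hashlib.sha256(c).hexdigest() for c in chunks]
    root = hashlib.sha256("".join(addresses).encode()).hexdigest()
    return addresses, root

addrs, root = chunk_and_address(b"some large dataset bytes")
# Identical content always yields identical chunk addresses, so a chunk
# shared between two files needs to be stored and fetched only once.
```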

A bunch of things fell out:   merkle-DAG, merkle-tree.

Address every file using unique addresses, content addressable.

Designed to be used by web-browsers.

A globally viewable resource runs afoul of browser same-origin policies.

Solution: Only execute files that are linked to from a prefix of the path.

Content Addressed P2P filesystem, connect all computers on planet.

DNS(name) --> IP..   not content addressed.

bittorrent -- git -- IPFS

network -- routing -- transport (TCPfirst) --

naming--mdag <--> same as LAFS

IPFS routing:
'''''''''''''

PUT and GET values, and peers

Disorganized Snippets:
''''''''''''''''''''''

bittorrent: easy to start sharing data

Nathan:  I really like the name.
Daira:   How does IPFS do mutability, versioning...  ?


Next Week, Proof of Stake Algorithms in Cryptocurrencies, 2014-10-09:
---------------------------------------------------------------------

Links for Proof of Stake

http://eprint.iacr.org/2014/452.pdf
https://docs.google.com/a/buterin.com/document/d/1Bwmy-WZxSXNTaPBgY92kJi2hi_Aa7ro5HfkgUYoRc-Y/edit
https://download.wpsoftware.net/bitcoin/alts.pdf
https://docs.google.com/a/leastauthority.com/document/d/1irOyVlKll6XDKp_oOx1UZGNaqI8ao7ETRgEIepUBh4c/edit#heading=h.q84gnbxqtt6x


Week After That, IPFS 2014-10-16:
---------------------------------

IPFS Link

http://static.benet.ai/t/ipfs.pdf

