[tahoe-lafs-trac-stream] [tahoe-lafs] #959: tahoe-lafs objects

tahoe-lafs trac at tahoe-lafs.org
Thu Jul 4 17:21:22 UTC 2013


#959: tahoe-lafs objects
-------------------------+-------------------------------------------------
     Reporter:  warner   |      Owner:  nobody
         Type:           |     Status:  new
  enhancement            |  Milestone:  2.0.0
     Priority:  major    |    Version:  1.6.0
    Component:  unknown  |   Keywords:  objects validation backward-
   Resolution:           |  compatibility forward-compatibility revocation
Launchpad Bug:           |
-------------------------+-------------------------------------------------

New description:

 When Zooko and I did a run-through of our upcoming RSA talk at the
 "friam" captalk meeting (12-feb-2010), Carl Hewitt asked the
 question "what would it take to turn this Tahoe file/directory
 graph into a graph of '''objects'''?". We generally understood
 "objects" to mean "bundle of state and behavior", like in
 object-oriented programming, whereas Tahoe's current file/directory
 objects are just inert state (with any behavior coming from the
 Tahoe client node that's processing it).

 This question prompted a lot of deep thinking around the table.
 There is a very juicy idea lurking in this, but we all
 metaphorically went off to separate corners to meditate on it.

 Norm Hardy expressed his subsequent thoughts here:
 http://cap-lore.com/BigStore/Tahoe.html .

 Zooko, when asked a day later on IRC, mentioned these:

  1. we should make tahoe dirs extensible as suggested by someone
  2. we should have a meeting of the minds with friam especially
     Norm to understand how "opaque object" stuff can be implemented
     just by making the gateway be the security (and availability ?)
     domain for your opaque object.

 The idea that came to me (Brian) was:

  * suppose we stored three things in a Tahoe file
   * a numerically-indexed list of childcaps (the "C-list")
   * an arbitrary chunk of serialized state
   * a chunk of code written in some confineable language (E or
     secure javascript), or perhaps an immutable reference to some
     external code file, shared among many objects
  * Some subset of these three things might be mutable, or maybe
    they'd all be immutable. Some filecap points to this collection.
  * when a Tahoe client node loads this object, it runs the code and
    gives it access to:
    * the serialized state
    * the objects referenced by the childcaps (but not the caps
      themselves)
  * the object receives any webapi request messages aimed at its
    filecap, processes those requests itself, then can update its
    state and/or return a response
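 The proposed three-part container could be sketched like this (a
 minimal illustration only: the {{{StoredObject}}} name, the JSON state,
 and the cap strings are all invented for the example, not a real Tahoe
 format):

```python
from dataclasses import dataclass

# Hypothetical sketch of the proposed storage format. Cap strings and
# field layout are illustrative, not an actual Tahoe-LAFS structure.
@dataclass
class StoredObject:
    clist: list   # numerically-indexed childcaps (the "C-list")
    state: bytes  # arbitrary chunk of serialized state
    code: str     # confineable code, or an immutable cap to shared code

obj = StoredObject(
    clist=["URI:CHK:aaaa", "URI:DIR2:bbbb"],
    state=b'{"counter": 0}',
    code="URI:CHK:code-cap",
)
```

 Some filecap would point at this whole collection; whether each field
 is mutable is an open design question.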

 Much of the post-Carl's-question discussion was about how to
 implement an "opaque boundary", which I interpreted to mean hiding
 the childcaps from the confined code that gets run. The code would
 be able to reference {{{childcap[0]}}} and send it messages, but
 it wouldn't be allowed to know the actual childcap string (thus
 helping the child maintain its own privacy).
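 One way to picture that opaque boundary: the client node hands the
 confined code a proxy per C-list slot, so it can send messages through
 slot 0 without ever seeing the cap string. All names below are
 hypothetical, and in Python the closure is only notionally opaque;
 a genuinely confineable language would enforce this for real:

```python
# Sketch: confined code receives OpaqueChild proxies, not cap strings.
class OpaqueChild:
    def __init__(self, resolve):
        self._resolve = resolve  # closure holding the cap

    def send(self, message):
        return self._resolve(message)

def make_proxies(clist, dispatch):
    # dispatch(cap, message) is performed by the trusted client node
    return [OpaqueChild(lambda m, cap=cap: dispatch(cap, m)) for cap in clist]

caps = ["URI:CHK:secret-a", "URI:CHK:secret-b"]
children = make_proxies(caps, lambda cap, m: (len(cap), m))
# Confined code may call children[0].send(...) but has no attribute
# that exposes the underlying childcap string.
```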

 I'm not sure where to go with these ideas, but they smell powerful.
 One direction is a forwards-compatibility thing: with a
 sufficiently general runtime environment for the bundled code, it
 could be used to implement dirnodes, add-only collections,
 revocable forwarders, all sorts of stuff that we haven't invented
 yet. Those fancy things could work on Tahoe clients that were
 written before the fancy thing was invented because they'd be
 implemented by portable code that would come along with the object
 being stored.

 Our current dirnode actions (get child, add child, rename, list,
 delete) could probably be implemented this way (with some
 additional layer to hide new childcaps from the embedded code,
 maybe an extra webapi service which adds childcaps to the C-list
 and only informs the code about the new index).
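 That "extra webapi service" might look something like the following
 sketch, where the trusted layer appends a childcap to the C-list and
 the embedded code learns only the resulting index (class and method
 names are invented for illustration):

```python
# Hypothetical index-only childcap service: the embedded dirnode code
# never sees the cap string, only the C-list index it was assigned.
class CListService:
    def __init__(self):
        self._clist = []

    def add_childcap(self, cap):
        """Called by the trusted layer; returns the index the embedded
        code may use to reference the new child."""
        self._clist.append(cap)
        return len(self._clist) - 1

svc = CListService()
idx = svc.add_childcap("URI:CHK:new-child")
# The embedded code might then record {"name.txt": idx} in its
# serialized state, without ever handling the cap itself.
```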

 This whole thing falls into the category of "mobile code", except
 that instead of a behavior-laden object moving directly from one
 machine to another, it's being stored in the grid and waking up
 again later (in one or many places). These objects would have
 control over their internal state (subject to the behavior of any
 client node that was allowed to host one of them). Isolation
 between these objects would be provided by the client nodes.

 Something to brainstorm about, at any rate.

--

Comment (by nejucomo):

 **Summary:** A "C-list + blob" feature is a useful stepping-stone for both
 "live objects" and "arbitrary dag structures".

 Note: As I write this, I haven't read all of the comments yet...

 Responding to warner:

 > suppose we stored three things in a Tahoe file
 >  * a numerically-indexed list of childcaps (the "C-list")
 >  * an arbitrary chunk of serialized state
 >  * a chunk of code written in some confineable language (E or
 >    secure javascript), or perhaps an immutable reference to some
 >    external code file, shared among many objects

 With only the first two of these bullets, the storage model becomes
 "arbitrary DAGs (Directed Acyclic Graphs)" instead of only "files or
 directories".  This is an interesting direction, even outside of a "live
 object code" feature.

 The C-list would have the same confidentiality properties as current
 directory references, except that its entries are unassociated with a
 filename; instead, each has a unique index for a given version of the
 cap contents.  (Thus verification and repair would work in a similar
 fashion, although I'm less certain about how updates are implemented.)

 Simple "monolithic" directories (similar to SDMF directories) could be
 implemented on top of this "C-list + blob" format to promote "our own dog
 food" cuisine, which would help work out bugs and usage issues.  This
 implementation would have a blob which links file names and other link
 metadata to the C-list.
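 As a concrete sketch of that layering (the JSON blob layout here is
 invented purely for illustration, not a proposed wire format): the blob
 maps child names and metadata to C-list indices, and resolving a name
 is an index lookup:

```python
import json

# Hypothetical monolithic directory on top of "C-list + blob": the blob
# binds names (plus link metadata) to C-list indices.
clist = ["URI:CHK:file-a", "URI:DIR2:subdir-b"]
blob = json.dumps({
    "readme.txt": {"index": 0, "metadata": {"mtime": 1262304000}},
    "subdir":     {"index": 1, "metadata": {}},
}).encode()

def get_child_cap(blob_bytes, clist, name):
    # Resolve a name to its childcap via the blob's index field.
    entry = json.loads(blob_bytes)[name]
    return clist[entry["index"]]
```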

 It's easy to imagine other data structures: sets, queues, trees whose edge
 labels are not filenames, etc...  (In fact, a queue could have an empty
 blob, if we trust the writer to maintain the C-list order "correctly".  A
 set is even simpler and has an empty blob and the C-list order is
 ignored.)

 Of course, those data structure examples don't address distributed
 consistency issues, so I don't mean to imply desirable properties such as
 append-only or "eventually consistent ordering" or the like.  See #796 for
 a discussion of some of those features.

 Consider also, these proposed distinct (abstract) webapi calls:

 * {{{get_raw(readcap) → C_list_and_blob_bytes}}}
 * {{{get_blob(readcap) → blob_bytes}}}
 * {{{get_clist(readcap) → C_list}}}
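 A minimal sketch of those three calls, assuming (purely for
 illustration) that the container serializes as JSON and using an
 in-memory dict as a stand-in for the grid; the real wire format and a
 {{{put_raw}}} counterpart are undecided:

```python
import json

_store = {}  # readcap -> serialized container (stand-in for the grid)

def put_raw(readcap, clist, blob):
    _store[readcap] = json.dumps({"clist": clist, "blob": blob})

def get_raw(readcap):   # -> C_list_and_blob_bytes
    return _store[readcap].encode()

def get_blob(readcap):  # -> blob_bytes
    return json.loads(_store[readcap])["blob"]

def get_clist(readcap): # -> C_list
    return json.loads(_store[readcap])["clist"]

put_raw("URI:CHK:demo", ["URI:CHK:child0"], "<html>...</html>")
```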

 Now, the blob could be {{{html}}} and {{{javascript}}}, so that a
 {{{GET}}} to the {{{get_blob}}} api will load that into a browser, and the
 javascript can then use the {{{get_clist}}} api to access references.
 Some pros/cons of this particular example:

 * Pros:
   * This is approaching a "live capabilities objects" feature.
   * It keeps the layering fairly distinct - lafs doesn't know about
     "live object code" very much, outside of standard http+browser tech.
   * Because the "C-list + blob" feature is a separate layer, this does
     not preclude other approaches to live objects, and it does not
     complicate other uses of "C-list + blob".
 * Cons:
   * This example is inefficient compared to a more direct
     implementation of "live objects".
     * Example: a webapi which knows the blob is javascript with a
       special interface could inject the C-list into the script in a
       well-defined convention prior to responding to an http request.
       Thus, there'd only be one http request/response.  (This would not
       use the proposed API above.)

 I haven't yet figured out how to quarantine the javascript to get
 object-capability-style access control in this scheme.  However, that
 problem seems similar to the object-capability restriction of
 javascript in current lafs.

-- 
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/959#comment:12>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage

