devchat notes 18-Oct-2016

Brian Warner warner at lothar.com
Tue Oct 18 21:40:28 UTC 2016


Attendees: warner, meejah, liz, dawuud, str4d

Main topics:

* Summit venue reserved (Nov 8+9 in San Francisco): details at:
  https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Summit2016
  * if you can make it (or maybe make it), please add your name to the
    wiki, and add your favorite topics for discussion
* 1.12:
  * warner will tag alpha1 today
  * --listen=i2p might not make it; str4d will look into it
  * should announce 1.12a1 to tor-dev too, maybe twisted-dev
* Brainstorming about more use cases and applications we can build on
  top of tahoe technology


Detailed notes:

  * Summit Venue reserved: Mechanics Institute Library
  * should tag 1.12a1 today
  * things that might not make 1.12:
    * --listen=i2p
    * tor client+server node: connect to self directly, not through tor
  * tor integration tests?
    * meejah will look into Tor failures
    * mark "sometimes fail" tests as "expected failures"
  * magic-folder integration test failures?
    * maybe move integration tests to a separate travis job
  * should we stop doing OS-X on travis? rely on the buildbot for OS-X
    coverage?
  * PR368: dawuud will close, doesn't really work on OS-X yet
  * PR363 and 365 are the same thing, rebased-to-master vs
    merge-in-from-master
    * needs more work, implementation is not properly deterministic
  * PR361: meejah will update test
  * announce a1 to tor-dev too, advertise the new tor/i2p support
    * maybe twisted-python list too
  * applications on top of tahoe: think of ideas! we'll discuss at summit
  * https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Summit2016
    * add topics!


Started Brainstorming etc.

  * tor use-case for tahoe is basically drop-box (i.e. share stuff w/
    each other)
    * they set up introducer + storage nodes
    * have tool to create invitation code which gives them (see the
      invitation-payload sketch after this list):
      * introducer etc
      * write-cap for some shared directory (and/or magic-folder?)
    * for server-side: a "set up a grid" GUI (e.g. on S3/whatever)
    * instead of the above: basically leastauthority S4 (but w/ invite
      codes)
  * disallow/allow storage servers (ties in to above too)
  * allow/disallow clients too (e.g. if they stop paying, leave org, etc)
  * "accounting stuff" too (e.g. if a user is using too much space)
  * other deployment application:
    * "windows registry key" (for example) might say "hey, if you're
      tahoe, go to this authorization server"
    * i.e. for corporate env.
    * gets back from authorization server: config information basically
  * also: might want an administrator to be able to provide an installer
    w/ embedded JSON (i.e. the stuff you get from the wormhole)
  * perhaps we want notion of "grid administrator"
    * chooses servers (i.e. which storage-servers can store shares)
    * authorizes clients (i.e. which clients can connect to which servers)
  * existing (~5-year-old) accounting branch (see the account-object
    sketch after this list)
    * builds accounts into tahoe
    * client can connect, ask to be associated w/ account
    * if they have an account object
      * can do basic API calls
      * account-object can reject reads, writes etc
      * leases are tied to the accounts
      * clients can talk to the storage-servers, asking about accounts
  * write down use-cases/applications
    * e.g. "the Tor use-case"
    * "enterprise" use-case
    * "S4" use-case
    * etc.
  * warner: "auto setup" type tool (i.e. can create "Stuff" on AWS,
given your creds)
    * so: is there value in having tahoe talk "directly" to S3,
backblaze, etc (i.e. no storage servers)
    * means: less admin work (i.e. don't have to have "a storage server"
running out there somewhere)
    * means: lose all the accounting-stuff
    * (also implies you lose agoric stuff)
    * str4d asks: why not just have a storage-node running locally, too
(i.e. and *it* talks to S3, etc)
    * leasedb implications
  * "native storage server": tried to maintain interface boundary so
that an "s3 storage server" or "backblaze storage server" can use this
  * (i.e. where the "tahoe native storage server" API becomes just *one*
of the options)
  * review of the "3 populations" (from 2 weeks ago):
    * people who will pay for the storage (no ideological cares). Will
      pay S3, backblaze, etc. Not going to use personal storage servers
      nor "agoric stuff"
    * "self sufficient" people, who don't-really-like cloud providers.
      Will use friendnets, might use "agoric stuff"
    * "free-loaders", people who will only ever use a free service
  * (meejah) why not have your client run a storage-node that knows how
    to S3/backblaze etc?
    * only works for a single node system?
    * if storage server burns down, what do you need to "get stuff back"?
    * (i.e. re-install tahoe in single-storage mode, give it AWS creds +
      readcap for your backup)
    * write-enablers won't work
    * store all data "in the cloud" (str4d): as in, maybe have a second
      bucket that you use for "tahoe purposes" (i.e. put private key,
      lease-db, etc)
    * (meejah) what if you store it all "in the grid" -> i.e. can you
      get all your config back with only an S3 cred and readcap for the
      "storage server config" (originally mutable dir)
    * str4d does something essentially similar for i2p grids
  * essentially, the "storage server-less" thing is an attempt to have
    everything "in the cloud" *without* putting the actual storage
    server in the cloud.
    * talking about leases and where/how to back up the lease db in
      above cases
    * e.g. troll /s3/leases/<share> when deciding "shall I delete this
      share?" (see the lease-lookup sketch after this list)
    * would be expensive to ask question "total of all shares Alice is
      storing?"
      * i.e. could this be cached? that is, "real" leases are on S3 but
        you have a local one too
    * in "fire burns computer down" case you can still get the lease-db
      back (but slow-ish)
  * "how can we get rid of the storage servers?" or "why *not* just use
S3 etc?"
    * counterpoint: running a computer in the cloud is easy
    * is only Amazon "flaky servers that might go away"
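
A minimal sketch of what the invitation payload discussed above (the
"embedded JSON" / "stuff you get from the wormhole") might contain. The
field names and the make_invitation helper are hypothetical
illustrations, not an existing tahoe API:

    import json

    def make_invitation(introducer_furl, dircap):
        """Bundle the grid parameters a new client needs into one JSON blob.

        Illustration only: the real invitation format (if any) would be
        defined by the invite tool discussed above, e.g. delivered over
        a magic-wormhole code.
        """
        payload = {
            "introducer": introducer_furl,       # where to find the grid
            "shared-directory": dircap,          # write-cap for the shared dir / magic-folder
            "nickname-hint": "friends-dropbox",  # purely cosmetic
        }
        return json.dumps(payload)

    # Example (fake values):
    # make_invitation("pb://tubid@host:port/swissnum", "URI:DIR2:xxxx:yyyy")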

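A rough sketch of the account-object idea from the old accounting
branch discussed above; the class and method names here are made up for
illustration and are not the branch's actual code:

    class Account:
        """Hypothetical per-client account object on a storage server.

        The server hands one of these to a connected client; reads,
        writes, and leases go through it, so a grid administrator can
        reject operations or enforce quotas per account.
        """
        def __init__(self, account_id, quota_bytes):
            self.account_id = account_id
            self.quota_bytes = quota_bytes
            self.bytes_used = 0

        def allow_write(self, size):
            # e.g. reject writes from clients who stopped paying or left the org
            return self.bytes_used + size <= self.quota_bytes

        def add_lease(self, storage_index, size):
            if not self.allow_write(size):
                raise PermissionError("quota exceeded for %s" % self.account_id)
            # leases are tied to the account, so "how much is Alice
            # storing?" becomes a per-account lookup
            self.bytes_used += size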

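A backend-interface sketch for the "interface boundary" idea above,
where the native disk backend is just one implementation and S3 or
Backblaze backends are others. The ShareBackend interface and method
names below are assumptions for illustration, not the actual tahoe
storage-server API:

    from abc import ABC, abstractmethod

    class ShareBackend(ABC):
        """Hypothetical boundary between storage-server logic and
        wherever the share bytes actually live."""

        @abstractmethod
        def put_share(self, storage_index, shnum, data):
            ...

        @abstractmethod
        def get_share(self, storage_index, shnum):
            ...

    class DiskBackend(ShareBackend):
        """The 'tahoe native storage server' case: shares on local disk."""
        def __init__(self, basedir):
            self.basedir = basedir
        def put_share(self, storage_index, shnum, data):
            ...  # write to <basedir>/shares/<storage_index>/<shnum>
        def get_share(self, storage_index, shnum):
            ...  # read the same path back

    class S3Backend(ShareBackend):
        """Same interface, but shares go straight to an S3 bucket."""
        def __init__(self, bucket):
            self.bucket = bucket
        def put_share(self, storage_index, shnum, data):
            ...  # e.g. PUT to s3://<bucket>/shares/<storage_index>/<shnum>
        def get_share(self, storage_index, shnum):
            ...  # GET the same key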

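A lease-lookup sketch for the "troll /s3/leases/<share>" idea above:
keep the authoritative lease records in the bucket, rebuild a local
cache for the expensive aggregate questions, and accept that recovery
after the "fire burns computer down" case is slow. The bucket name and
the leases/<storage_index>/<account_id> key layout are assumptions:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-tahoe-grid"   # hypothetical bucket holding shares and leases

    def share_has_live_lease(storage_index):
        """Consult the authoritative lease records in S3 before deleting a share."""
        resp = s3.list_objects_v2(
            Bucket=BUCKET,
            Prefix="leases/%s/" % storage_index,   # assumed key layout
        )
        return resp.get("KeyCount", 0) > 0

    def rebuild_lease_cache():
        """Slow path: re-derive the local lease-db by walking every lease key.

        Expensive (one LIST call per page of keys), which is why a
        question like "total of all shares Alice is storing" wants a
        local cache on top of the S3 copies.
        """
        cache = {}
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=BUCKET, Prefix="leases/"):
            for obj in page.get("Contents", []):
                # key assumed to look like leases/<storage_index>/<account_id>
                _, storage_index, account_id = obj["Key"].split("/", 2)
                cache.setdefault(account_id, set()).add(storage_index)
        return cache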