devchat 23-Aug-2016
Brian Warner
warner at lothar.com
Wed Aug 24 00:28:48 UTC 2016
Attendees: daira, dawuud, liz, meejah, warner, zooko
* cloud-backend branch: Daira hasn't had time to get it rebased/updated
yet. We decided to give it one more day, then if it isn't ready to
land, we won't hold up the release for it.
* Meejah was able to make the magic-folder tests faster and more
reliable (1.5s instead of 30s), via #2806 and PR313. In the process,
we noticed a `defer.setDebugging(True)` call that was accidentally
left in place, which is known to slow tests down by about 2x. We've
added a tox check to make sure it doesn't accidentally come back.
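For flavor only (this is not the actual check that landed), such a
guard can be as small as a script, run from the code-checks
environment, that greps the source tree:

  # hypothetical check-debugging script: fail if defer.setDebugging(True)
  # ever reappears in the source tree; the script name and paths are
  # assumptions, not the real check.
  import os
  import re
  import sys

  PATTERN = re.compile(r"setDebugging\(\s*True\s*\)")

  def main(root="src"):
      bad = []
      for dirpath, _dirs, files in os.walk(root):
          for name in files:
              if not name.endswith(".py"):
                  continue
              path = os.path.join(dirpath, name)
              if PATTERN.search(open(path).read()):
                  bad.append(path)
      for path in bad:
          print("found setDebugging(True) in %s" % path)
      return 1 if bad else 0

  if __name__ == "__main__":
      sys.exit(main(*sys.argv[1:]))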
* We discussed a "watch-mutable" feature, which would enable things like
magic-folders to efficiently monitor a mutable file for changes. After
sketching out a good API and sequence of implementation steps
(starting with polling, then using some pubsub mechanism), Daira
discovered that we already wrote up a ticket for it last year, #2555.
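As a sketch of the polling phase only (every name here is made up;
#2555 is where the real API discussion lives), a watcher could poll
the mutable's version and fire a callback when it changes:

  # Hypothetical polling-based watcher. `filenode` stands in for a
  # mutable file object that offers some cheap way to read its current
  # version.
  from twisted.internet import task

  class MutableWatcher(object):
      """Poll a mutable file, invoke a callback when its version changes."""
      def __init__(self, reactor, filenode, on_change, interval=30):
          self._filenode = filenode
          self._on_change = on_change
          self._last_version = None
          self._interval = interval
          self._loop = task.LoopingCall(self._poll)
          self._loop.clock = reactor

      def start(self):
          return self._loop.start(self._interval, now=True)

      def stop(self):
          if self._loop.running:
              self._loop.stop()

      def _poll(self):
          # get_current_version() is a placeholder for whatever "cheap
          # version probe" the real API exposes; a later phase would swap
          # this loop for a pubsub notification from the servers.
          d = self._filenode.get_current_version()
          def _compare(version):
              changed = (self._last_version is not None
                         and version != self._last_version)
              self._last_version = version
              if changed:
                  self._on_change(version)
          d.addCallback(_compare)
          return d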
* "tox -e code-checks" was added, to run the checks that were previously
only available as a Makefile target. Travis now runs this on each
commit. It does not yet signal an error when check-interfaces.py
reports classes which do not implement every method of their claimed
interfaces: we need to fix the code to do that first, then we can mark
violations as errors.
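For reference, the underlying verification is the kind of thing
zope.interface already provides; here is a toy standalone example (not
the check-interfaces.py code itself) of catching a class that omits a
declared method:

  # Toy example: the interface and classes are invented for illustration.
  from zope.interface import Interface, implementer
  from zope.interface.verify import verifyClass
  from zope.interface.exceptions import (BrokenImplementation,
                                         BrokenMethodImplementation)

  class IStorageBackend(Interface):
      def get_shares(storage_index):
          """Return the shares held for this storage index."""
      def advise_corrupt_share(storage_index, shnum):
          """Record a corruption report."""

  @implementer(IStorageBackend)
  class PartialBackend(object):
      def get_shares(self, storage_index):
          return []
      # advise_corrupt_share() is missing, so verification should fail.

  try:
      verifyClass(IStorageBackend, PartialBackend)
  except (BrokenImplementation, BrokenMethodImplementation) as e:
      print("interface violation: %s" % e)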
* The magic-folders (manual) smoke test needs a way to wait for a tahoe
client (in a subprocess) to establish connections to storage servers.
Meejah said it currently parses log files to watch for connections. We
discussed adding a JSON-parseable WAPI interface to report
storage-server connections. #2133 is related. Warner thought maybe it
shouldn't be specifically a representation of the Welcome page: rather
we should add a /servers?t=json endpoint that would only talk about
servers. Perhaps a CLI command like "tahoe wait-until-connected" would
be useful.
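Nothing exists yet, but the smoke test could then do something like
the following (the /servers?t=json URL and the response shape are pure
assumptions about an endpoint we have not built):

  # Hypothetical smoke-test helper: poll a (not-yet-existing) JSON page
  # until the client reports at least `needed` connected storage
  # servers, or give up after `timeout` seconds.
  import json
  import time
  import urllib2  # Python 2, to match the Tahoe of this era

  def wait_until_connected(node_url, needed, timeout=60, poll=1.0):
      deadline = time.time() + timeout
      while time.time() < deadline:
          try:
              data = json.load(urllib2.urlopen(node_url + "/servers?t=json"))
              # response shape is a guess: a list of servers, each with
              # a boolean "connected" flag
              up = sum(1 for s in data.get("servers", [])
                       if s.get("connected"))
              if up >= needed:
                  return True
          except (urllib2.URLError, ValueError):
              pass  # node not listening yet, or page not ready
          time.sleep(poll)
      return False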
* PR302 (connections.yaml static announcements): we're going to try to
land the existing patch, *then* go through the code and stop using
TubIDs as server IDs. We still need the TubID for get_lease_seed(),
because even though we removed remote_cancel_lease in #1528, we still
need the renew_lease secret. Accounting will replace this with a
client signature.
* What's left for the next release (other than maybe cloud-backend
work)? With a few days of work we can probably get Tor working by
default. Other than that it's mostly magic-folders.
* We discussed accounting use cases, and what it would take to make the
economics of a "one-true-grid" system work (aka the "agoric" model).
* Zooko wants to minimize the decisions that the humans must make.
Prices, server selection, and client acceptance should all be decided
by the storage or client agent. Ideally the server operator just
decides how much space to allocate to tahoe, and watches their
BTC/ETH/ZEC balance go up, and runs it until it's no longer fun.
Ideally clients just supply some funds and then ask to store files,
and the client agent finds a good price (or apologizes and gives up).
* We need to punish bad servers just enough that there will be a
sufficient supply of good servers.
* Maybe clients should pay only a small amount at upload, and pay more
at download time. Zooko noted that this highlights a disconnect
between cost and value: it costs the server X to accept a share, Y to
maintain it, and the client realizes value Z when downloading it. The
costs are borne well before the value is obtained, making it
non-trivial to decide on a suitable price to charge.
* As a development strategy, we could refrain from predicting
scalability problems, and just build something. Then, once we see how
it fails, we'll be motivated to fix it. Start by assuming servers
are greedy but 1: honest/reliable (they'll give back the share when
you ask for it), 2: non-colluding on prices, and 3: rational.
* Maybe make clients willing to pay up to the 75th percentile of all
the prices they see, and limit how fast that threshold can grow over
time. Any server which
advertises too high a price just won't get used. The idea is that
servers might crank up their prices, but they have to do it slowly,
and clients (humans) will have time to see the changes and react.
* To retain the downloader's ability to find shares efficiently, the
share-placement algorithm is: compute the permuted ring, then filter
out servers whose price is above the 75th percentile. If we assume
that most servers will advertise acceptable prices, most will be
eligible, so the permuted-ring order stays mostly accurate (a sketch
of this filter appears after this list).
* Maybe be willing to pay slightly more for servers at the start of the
permuted ring, in the "ideal" locations.
* Maybe servers should increase their price each time they make a
sale, then decrease it when time passes without a sale, like TCP
window management (also sketched after this list).
* We may need some randomness to provide stability: if prices are
quantized, and everybody converges on a single value, then there will
be a hard edge against further price changes.
* We could make the economic agent into a plugin, or an external
program. Tahoe would ask it about each storage request, and it could
return a message (to be sent to the client, to their economic agent)
to ask for payment. The goal would be for Tahoe to know nothing about
bitcoin/etc, and to enable rapid experimentation with pricing
approaches (a minimal hook is sketched below).
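To make the placement idea above concrete, here is a rough sketch
(nothing like this exists in the tree; the hash and the percentile
rule are placeholders) of filtering the permuted ring by a price
cutoff:

  # Sketch of the price-filtered placement discussed above. `servers`
  # is a list of (server_id, advertised_price) pairs, with ids as byte
  # strings; the ring is the usual ordering by hash(server_id + SI).
  import hashlib
  import math

  def permuted_order(servers, storage_index):
      def key(entry):
          server_id, _price = entry
          return hashlib.sha256(server_id + storage_index).digest()
      return sorted(servers, key=key)

  def percentile(values, fraction):
      # nearest-rank percentile: good enough for a sketch
      ordered = sorted(values)
      rank = max(1, int(math.ceil(fraction * len(ordered))))
      return ordered[rank - 1]

  def eligible_servers(servers, storage_index, max_fraction=0.75):
      cutoff = percentile([price for _sid, price in servers], max_fraction)
      ring = permuted_order(servers, storage_index)
      # most servers should survive the filter, so the downloader's
      # usual permuted-ring search still finds shares near the expected
      # places
      return [sid for sid, price in ring if price <= cutoff]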
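The sale-driven price adjustment is similarly small to state; the
factors and the notion of an 'idle period' below are arbitrary:

  # Sketch of the TCP-ish pricing rule discussed above: nudge the price
  # up on every sale, let it drift back down when a period passes with
  # no sales. Factors and the floor are made-up numbers.
  class PriceSetter(object):
      def __init__(self, initial_price, raise_factor=1.05,
                   decay_factor=0.98, floor=1):
          self.floor = floor
          self.price = max(floor, initial_price)
          self.raise_factor = raise_factor
          self.decay_factor = decay_factor

      def on_sale(self):
          # demand at this price: ask for a bit more next time
          self.price *= self.raise_factor

      def on_idle_period(self):
          # no recent sales: drift back down, but never below the floor
          self.price = max(self.floor, self.price * self.decay_factor)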
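And the economic-agent plugin could be as thin as a single hook that
the storage server consults on each request; the interface below is
entirely invented, just to show how little Tahoe itself would need to
understand:

  # Entirely hypothetical plugin hook: Tahoe asks the agent about each
  # storage request and relays whatever opaque message the agent wants
  # delivered to the client's own agent (a payment request, say). Tahoe
  # never learns what the message means.
  from zope.interface import Interface, implementer

  class IEconomicAgent(Interface):
      def evaluate(peer_id, request_kind, size):
          """Decide whether to serve this request.

          Returns (accept, message): `accept` is a bool, `message` is
          an opaque bytestring (possibly empty) to relay to the
          client's economic agent.
          """

  @implementer(IEconomicAgent)
  class FreeStorageAgent(object):
      """Default agent: accept everything, ask for nothing."""
      def evaluate(self, peer_id, request_kind, size):
          return (True, b"")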
The existing accounting work includes some performance assumptions. We
use public keys to identify clients, because we figured it'd be
necessary to aggregate any payments or permissions by user. There might
be ways to make payments so cheap that we could just include a coin
with every remote (RIStorageServer) API call. If that's true, we
wouldn't really need accounts ("I don't care who you are, I just care
that I get paid").
cheers,
-Brian