Tahoe LAFS + cryptocurrency compensation system
bin.echo at gmail.com
Mon Apr 28 08:21:37 UTC 2014
Here is my simple idea to tie a zero knowledge automated
cryptocurrency compensation system into Tahoe LAFS.
Say Alice has a blob of data she wants to store. Alice advertises to
the network and finds a partner willing to take her blob of data. (The
network is a grid of Tahoe LAFS Storage Servers)
Bob accepts the blob of data and stores it, knowing that he will get
paid in the future for space he wasn't using in the first place.
The blob of data has no identifying information other than a UUID,
which is something simple like the SHA256 hash of its data chunk. Bob
forgets about who he got the data from and technically, Alice could
forget who she sent it to also. It really doesn't matter as long as
Alice remembers the UUID for the data she might want back some day.
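Concretely, both sides could compute that identifier the same way. A
minimal sketch, assuming the UUID is literally the SHA256 hex digest of
the blob's bytes:

    import hashlib

    def blob_uuid(chunk: bytes) -> str:
        # Content-derived identifier: the SHA256 hex digest of the chunk.
        return hashlib.sha256(chunk).hexdigest()

    # Alice and Bob can each compute this independently from the bytes alone.
    uuid = blob_uuid(b"...erasure block bytes...")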
To stop Alice from spamming the network with bogus advertisements,
Alice must pay a nominal fee to be able to advertise her blob of data.
The interesting thing is that Bob doesn't get the tokens from that.
They get consumed by the block chain and redistributed as transaction
fees. Alice also has to transmit her blobs of data, and since she is
most likely on a consumer connection, sending data is expensive for
her. She has no rational reason to be a bad actor.
A day passes. Alice checks in on her data by offering a bounty to the
block chain to pay for the last day's storage: "If you have this data
blob, grab this bounty." The challenge is issued as a cryptocoin
payment where the private key behind the receiving address is nothing
more than a hash of the blob's data chunk concatenated with a nonce.
The cryptocurrency would be extended to allow for
transaction comments detailed enough to carry the extra metadata
required by this scheme. In this case, the transaction comment would
name the UUID of the blob the payment is meant for and the nonce,
which is the challenge.
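A rough sketch of how Alice might build such a challenge. The key
derivation is the interesting part; turning the private key into an
address and constructing the actual payment are coin-specific and only
hinted at in comments. The 16-byte nonce and the JSON comment format
are assumptions of mine, not features of any existing coin:

    import hashlib, json, os

    def challenge_privkey(chunk: bytes, nonce: bytes) -> bytes:
        # Private key = SHA256(chunk || nonce): anyone holding the chunk
        # and the nonce can recreate it.
        return hashlib.sha256(chunk + nonce).digest()

    def make_bounty(chunk: bytes) -> dict:
        nonce = os.urandom(16)                  # fresh per challenge
        priv = challenge_privkey(chunk, nonce)
        # Coin-specific step omitted: priv -> public key -> address,
        # then pay the bounty amount to that address.
        comment = json.dumps({
            "uuid": hashlib.sha256(chunk).hexdigest(),  # names the blob
            "nonce": nonce.hex(),                       # the challenge
        })
        return {"privkey": priv, "tx_comment": comment}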
Bob sees the bounty on the block chain and realizes he has the named
blob of data. He hashes the blob's data chunk together with the nonce
to derive the private key. If he has been a good steward, the data is
still intact and his key is correct. He claims the bounty by sweeping
the entire balance of the bounty out of the address.
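Bob's side could look roughly like this; again only the key recovery is
shown, and scanning the chain plus building the sweep transaction are
assumed to be handled by the coin's own tooling:

    import hashlib, json

    def respond_to_bounty(tx_comment: str, stored_chunks: dict):
        # stored_chunks maps uuid -> chunk bytes held on Bob's disk.
        meta = json.loads(tx_comment)
        chunk = stored_chunks.get(meta["uuid"])
        if chunk is None:
            return None                          # not a blob we hold
        if hashlib.sha256(chunk).hexdigest() != meta["uuid"]:
            return None                          # data damaged: no valid claim
        nonce = bytes.fromhex(meta["nonce"])
        # Same derivation as Alice's; sweep the bounty with this key.
        return hashlib.sha256(chunk + nonce).digest()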
Alice sees that her bounty was claimed from the block chain, so she
increments "confidence" for that blob and waits a week.
Alice checks in on her data again by offering a bounty to the block
chain to pay for the last week's storage.
And so it repeats. This could be tuned so that each time Alice's
confidence in a particular blob's steward increases, the payments get
further apart up until they are about a month apart. That would reduce
chatter on the block chain. And since LAFS uses 3-of-10 encoding for
data by default, Alice can afford to be a bit lazy checking up on her
erasure blocks.
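A toy version of that back-off, just to make the tuning concrete (the
doubling schedule and the 30-day cap are arbitrary choices of mine):

    def next_check_interval(confidence: int) -> int:
        # Days until the next bounty: 1, 2, 4, 8, 16, then capped near a month.
        return min(1 << confidence, 30)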
After some time, if Alice sees the bounty was not claimed, she revokes
the payment and re-transmits the blob (and, I would suggest, modifies
the UUID, perhaps by salting the payload). The old data UUID
could be marked as MIA.
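Re-uploading under a fresh identifier could be as simple as prepending
a random salt before chunking, so the new UUID no longer matches the
one marked MIA. A sketch, with the salt size and placement as
assumptions:

    import hashlib, os

    def resalt_blob(payload: bytes):
        # Returns the salted copy plus its new content-derived UUID.
        salted = os.urandom(16) + payload      # the salt changes the hash
        return salted, hashlib.sha256(salted).hexdigest()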
If Bob doesn't get payments from Alice on the expected schedule, after
some reasonable amount of time he just deletes her data.
Alice has no reason to get behind on payments because re-uploading her
data would be a hassle for her. Bob has no reason to delete the data
too fast because he'd rather get paid for storing it than have to
re-download it.
I realize this could have problems with scalability, which is why it
needs a new block chain optimized for high volumes of transactions
(fast block time and large block size) and hopefully self-pruning,
like the Mini Rolling Block Chain concept.
After the initial distribution via mining, the tokens would be
generated using the "Proof of Stake" scheme with an inflation roughly
matching Kryder's Law. Proof of Stake makes sense as most of the
clients would always be online anyway. And their CPUs would have
plenty of real work to do actually verifying data integrity via the
hash challenges.
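As a back-of-the-envelope illustration of issuance tracking Kryder's
Law: if cost per GB falls at some yearly rate, the same spending buys
proportionally more storage, so the supply could grow by the same
factor to keep tokens-per-GB roughly stable. The 40%/year figure below
is only an assumed placeholder:

    def yearly_issuance(supply: float, kryder_decline: float = 0.40) -> float:
        # Cost/GB falling by 40% means 1/(1-0.40) = ~1.67x the GB per unit of
        # spend, so grow the token supply by that same proportion.
        return supply * (1.0 / (1.0 - kryder_decline) - 1.0)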
The clients and servers would need to be able to address erasure
blocks on disk individually for this scheme to work. I'm not familiar
enough with the internals of Tahoe LAFS to know if it can do that
already.
The token system and the storage are loosely coupled. The "Controller"
could be run as a separate process, as its job is simply to make sure
that accounts are settled.
Tokens do not have a direct relationship to storage. By that I mean, a
token does not equal a fixed amount of storage. Price finding would
need to happen through some mechanism. My suggestion is that the
Controller software have some form of bidding mechanism, where all
the peers could participate in price finding. The challenge would be
to balance parameters to avoid any sort of "race to the bottom" or
other tragedy of the commons.
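One very naive shape the bidding could take is a periodic clearing of
stewards' asks against uploaders' bids, purely as an illustration of
price finding (none of this exists in Tahoe-LAFS today):

    def clearing_price(bids, asks):
        # bids/asks are prices per GB-day; match highest bids to lowest asks.
        bids = sorted(bids, reverse=True)
        asks = sorted(asks)
        price = None
        for b, a in zip(bids, asks):
            if b < a:
                break                    # no more mutually acceptable trades
            price = (b + a) / 2          # split the difference
        return price

    # Example: clearing_price([5, 4, 3], [2, 4, 6]) -> 4.0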
One interesting side effect of this system is that price really
doesn't matter too much for casual users who are "trading" disk space
as they will always receive about the same number of tokens as they
pay out themselves.
In theory, every single blob could have its own individual contracted
storage price and the Controller would track it all. Again, the trick
will be to balance things so as not to lock users into contracts
for too long, while allowing renegotiation that doesn't end up
seeming like data being held hostage by stewards. (preferably in some
kind of automated way)
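The bookkeeping for that might be little more than one record per UUID
inside the Controller; the field names here are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class BlobContract:
        uuid: str                # SHA256 of the stored chunk
        price_per_day: float     # tokens owed per day of storage
        confidence: int          # successful challenges so far
        next_check_day: int      # when the next bounty is due
        renegotiate_after: int   # day after which either side may re-bid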
Obviously there are a lot of details missing but those are the broad strokes.
There are a lot of really lofty DAO ideas out there. The idea here is
to adapt "good enough" known technologies into something that is
greater than the sum of its parts.
I'm interested to hear what people think. Is there anything obvious I
have missed?