Welcome to the Tahoe-LAFS Weekly News (TWN). Tahoe-LAFS is a secure, distributed storage system. View TWN on the web or subscribe to TWN. If you would like to view the "new and improved" TWN, complete with pictures, please take a look.
"On behalf of the entire team, I'm pleased to announce the 1.12.0 release of Tahoe-LAFS.
Tahoe-LAFS is a reliable, encrypted, decentralized storage system with "provider-independent security", meaning that not even the operators of your storage servers can read or alter your data without your consent. See http://tahoe-lafs.readthedocs.org/en/latest/about.html for a one-page explanation of its unique security and fault-tolerance properties.
With Tahoe-LAFS, you distribute your data across multiple servers. Even if some of the servers fail or are taken over by an attacker, the entire file store continues to function correctly, preserving your privacy and security. You can easily share specific files and directories with other people.
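To put a number on that fault tolerance: with Tahoe's default 3-of-10 erasure coding, a file is split into 10 shares and any 3 of them suffice to reconstruct it. The sketch below computes the resulting availability; the 3-of-10 defaults are real, but the per-server availability figure is just an assumed input for illustration.

```python
from math import comb

def survival_probability(k, n, p):
    """Probability that at least k of n shares survive, when each share's
    server is independently available with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

k, n = 3, 10   # Tahoe-LAFS defaults: any 3 of 10 shares reconstruct the file
p = 0.9        # assumed per-server availability (illustrative only)

print(f"storage expansion factor: {n / k:.2f}x")
print(f"file available with probability {survival_probability(k, n, p):.7f}")
```

Even with servers that are each only 90% available, the file as a whole is available with very high probability, at the cost of storing 10/3 of the original size.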
The 1.12.0 code is available from the usual places:
All tarballs, and the Git release tag, are signed by the Tahoe-LAFS Release Signing Key (fingerprint E34E 62D0 6D0E 69CF CA41 79FF BDE0 D31D 6866 6A7A), available for download from https://tahoe-lafs.org/downloads/tahoe-release-signing-gpg-key.asc
Full installation instructions are available at:
http://tahoe-lafs.readthedocs.io/en/tahoe-lafs-1.12.0/INSTALL.html
1.12.0 improves Tor/I2P support, enables multiple introducers (or no introducers), allows static server definitions, and adds "Magic Folders", an experimental two-way directory-synchronization tool. It removes some little-used features like the "key-generator" node and the old v1 introducer protocol (v2 has been available since 1.10). Many smaller fixes and changes were made: see the NEWS file for details:
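With static server definitions, a client can be pointed directly at known storage servers, with no introducer at all. The fragment below shows the general shape of such a definition; the file location and key names are recalled from memory rather than taken from the release docs, and every value is a placeholder.

```yaml
# NODEDIR/private/servers.yaml -- static server definition (sketch only;
# consult the 1.12 documentation for the authoritative format)
storage:
  v0-exampleserverid:
    ann:
      nickname: my-storage-server
      anonymous-storage-FURL: pb://examplekey@tcp:storage.example.com:1234/exampleswissnum
```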
https://github.com/tahoe-lafs/tahoe-lafs/blob/0cea91d73706e20dddad13233123375ceeaa7f0a/NEWS.rst
Many thanks to Least Authority Enterprises for sponsoring developer time and contributing the new Magic Folders feature.
This is the sixteenth release of Tahoe-LAFS to be created solely as a labor of love by volunteers. Thank you very much to the team of "hackers in the public interest" who make Tahoe-LAFS possible. Contributors are always welcome to join us at https://tahoe-lafs.org/ and https://github.com/tahoe-lafs/tahoe-lafs .
Brian Warner on behalf of the Tahoe-LAFS team
December 17, 2016 San Francisco, California, USA"
Tahoe-LAFS devchat 13-Dec-2016
I tagged beta1 last night. The plan is to tag the final 1.12.0 release next weekend.
Docker: We automatically generate a Docker image with each commit, which makes it easier for folks (in Docker-capable environments) to run Tahoe without compiling anything. However, the current image tries to keep all of its NODEDIR state inside the container, which is not good Docker practice: containers are ephemeral, so it's easy to lose your rootcaps or stored shares. Exarkun will file some PRs to improve this by keeping the state on the host filesystem (mounted by, but not living inside, the container).
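A minimal sketch of that host-mounted layout, written as a docker-compose fragment; the image name and the NODEDIR path are assumptions for illustration, not the project's published configuration.

```yaml
# docker-compose.yml sketch -- image name and NODEDIR path are assumptions
services:
  tahoe:
    image: tahoelafs/client          # hypothetical image name on Docker Hub
    volumes:
      # keep the NODEDIR (rootcaps, config, stored shares) on the host,
      # so the container itself stays disposable
      - ./tahoe-nodedir:/root/.tahoe
```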
He'll also take a look at our DockerHub presence (https://hub.docker.com/r/tahoelafs/) and make sure we're providing something useful.
This might be aided by landing the PR for #2848, which adds arguments to "tahoe create-client" that set shares.needed/shares.happy/shares.total in the generated tahoe.cfg (as opposed to editing tahoe.cfg after node creation). It's kind of last-minute, but the PR is pretty small, so I think we can safely land it.
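For reference, the encoding parameters those arguments would populate live in the [client] section of tahoe.cfg. The values shown are the project defaults; the exact command-line flag spellings in the PR are not reproduced here.

```ini
[client]
# erasure-coding parameters: any `needed` of `total` shares reconstruct a
# file; an upload is "happy" if shares land on at least `happy` servers
shares.needed = 3
shares.happy = 7
shares.total = 10
```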
OS-X: our buildbot creates OS-X .dmg packages with each commit (see https://tahoe-lafs.org/source/tahoe-lafs/tarballs/OS-X-packages/), which put a binary in /usr/bin/tahoe (but maybe you need to be an admin to run it?). The package includes a .app application (with icon and stuff), but it doesn't actually do anything. So these "packages" aren't exactly useful.
We're going to leave this builder in place for now and let it create a 1.12 package, but after 1.12 is done we'll dismantle it and replace it with Cypher's "frozen binaries" tooling. He has a buildbot (https://buildbot.gridsync.io/waterfall) which generates single-file executables for both OS-X and Windows; that sounds like the preferred way to distribute Tahoe until we get a full, real GUI application (which he is also working on). After 1.12 is done, we'll work to merge this buildbot in with our main one (#2729), possibly taking over the worker machines too (having the Tahoe org host them instead of using Cypher's personal machines, and/or using our Appveyor config to build some). Then we'll distribute these executables on the main web site next to the source tarballs. We might also manually generate executables for 1.12 and copy them into place.
Windows: We've got no packaging changes for Windows: I think we're only offering "pip install" and some extra instructions. Post-1.12 we'll add frozen binaries.
We need to remember to send the final release announcement to tor-dev, or maybe tor-talk, to let the Tor community know of our new integration features, and solicit feedback. We know of some Tor and I2P "community grids", and we need to make sure their maintainers know about the release, but they probably do already.
We noticed that GitHub automatically generates source-tree tarballs (via "git archive"), and on other projects this sometimes causes confusion. We declared the signed sdist tarballs/wheels on PyPI to be the "official" release artifact, rather than GitHub's auto-tarballs. But our release checklist will include copying the official tarballs to GitHub's "releases" page, so anyone who sees the auto-tarball will also see the (signed) real tarballs, to reduce confusion.
We talked about more "productized" deployments, catching Cypher and Corbin up on discussions we had at the Summit in November. Cypher is working on a deployable GUI as a Least Authority project, and Corbin is building a commercial service around Tahoe, so both are really interested in where we go with Accounting and node provisioning.
Some use cases we discussed:
Cypher's prototype uses a Magic-Wormhole -based provisioning flow: clients launch the app, which asks them to get a wormhole code from their admin. The payload delivered via this code provides the introducer.furl and encoding settings. In the future, this could also transfer pubkeys or certificates that authorize the new client to consume space on the storage servers (which might be locally-hosted machines, or cloud storage, but are under the control of the same admin).
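A sketch of what such a wormhole-delivered provisioning payload might contain; the field names and values below are hypothetical, not a format the prototype is known to use.

```json
{
  "introducer": "pb://examplekey@tcp:introducer.example.com:1234/exampleswissnum",
  "shares-needed": 3,
  "shares-happy": 7,
  "shares-total": 10
}
```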
Corbin's work is likely to depend on a better Helper, to reduce cost and improve performance. We currently only have a Helper for immutable uploads, and it's been neglected for several years. In 2017 we hope to give some love to the Helper, adding the immutable-download side, and then their mutable counterparts.
One interesting question is how storage authority should be handled: in one approach, all storage authority comes from the client, which delegates a small portion of it (restricted to a specific file, for a limited time) to the Helper. In another approach, the Helper can pose as anyone it likes, but notifies storage servers of the account label that should be applied to any shares being uploaded.
At the Summit we discussed a "full agoric" approach, in which clients learn about servers from some sort of "yellowpages" directory, decide individually which ones they like, establish accounts with them, deposit some BTC to get started, and then upload shares. I still think that's a cool thing, but most of the use cases we looked at today wouldn't use it (they'd want more curated selection of storage servers, and in many of them the payment is coming from a central admin rather than individual clients).
#2848: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2848
#2729: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2729
Magic-Wormhole: http://magic-wormhole.io/
The Tahoe-LAFS Weekly News is published once a week by The Tahoe-LAFS Software Foundation. President and Treasurer: Peter Secor. Scribes: Patrick "marlowe" McDonald, Zooko Wilcox-O'Hearn.
Send your news stories to marlowe@antagonism.org - submission deadline: Monday night.