<div dir="ltr">Hi Avi!<div><br><div class="gmail_extra"><div class="gmail_quote">On Fri, Jun 28, 2013 at 9:56 PM, Avi Freedman <span dir="ltr"><<a href="mailto:freedman@freedman.net" target="_blank">freedman@freedman.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Dear Comrade Nathan,<br>
<br>
My first thought was that one could publish hashes of readcaps into DNS,<br>
where the DNS response would be the introducer for the cluster. But...<br>
With the introducer furl I think they could upload as well as retrieve.<br>
<br></blockquote><div><br></div><div style>Oh, I like this idea!</div><div style><br></div><div style>I believe it's true that introducers have only a single furl, and that it grants access both to register a node and to retrieve the list of nodes.</div>
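To make the DNS idea concrete, here's a minimal Python sketch of how a client might derive a DNS name from a hash of a readcap; the zone name `grid-lookup.example` and the convention of publishing the introducer furl in a TXT record at that name are purely hypothetical illustrations, not anything Tahoe-LAFS implements today.

```python
import hashlib

# Hypothetical zone where readcap-hash records would be published.
LOOKUP_DOMAIN = "grid-lookup.example"

def lookup_name_for_readcap(readcap: str) -> str:
    """Derive a DNS name from a hash of a readcap.

    The idea: a TXT record at this name would hold the introducer
    furl for the cluster that can serve the cap. Publishing only a
    hash keeps the readcap itself out of DNS.
    """
    digest = hashlib.sha256(readcap.encode("ascii")).hexdigest()
    # Truncate so the label fits well within DNS's 63-octet limit.
    return f"{digest[:32]}.{LOOKUP_DOMAIN}"
```

A client holding `URI:CHK:...` would compute this name, resolve the TXT record, and connect to the returned introducer to locate storage servers.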
<div style><br></div><div style>Having two separate furls for registration versus lookup seems like low-hanging fruit to me, although I wonder how that interacts with the accounting feature set. This wouldn't prevent uploads, because the storage servers make that decision (and currently they always grant new uploads, AFAIK).</div>
<div style><br></div><div style><br></div><div style>So a complementary feature to that strategy is to make a "multi-introducer-aware" web interface, and perhaps to have "global caps" which include an introducer furl in the filesystem cap. If there's also a "registration separate from lookup" two-furl feature, then this new "global cap" scheme would only rely on the lookup furl (regardless of whether a cap is read-only or read/write).</div>
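As a sketch of what such a "global cap" might look like, here is one made-up encoding in Python. The `lafs-global:` prefix and the percent-encoding convention are my own illustrative assumptions, not an existing Tahoe-LAFS format: the lookup furl is simply bundled in front of an ordinary cap so a client can find the grid.

```python
from urllib.parse import quote, unquote

def make_global_cap(lookup_furl: str, cap: str) -> str:
    # Hypothetical "global cap": bundle a lookup-only introducer furl
    # with an ordinary filesystem cap. Percent-encode the furl so the
    # cap's own colons remain unambiguous delimiters.
    return f"lafs-global:{quote(lookup_furl, safe='')}:{cap}"

def parse_global_cap(global_cap: str) -> tuple[str, str]:
    scheme, furl_quoted, cap = global_cap.split(":", 2)
    if scheme != "lafs-global":
        raise ValueError("not a global cap")
    return unquote(furl_quoted), cap
```

A client would parse out the lookup furl, contact that introducer for the server list, and then use the embedded cap normally.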
<div><br></div><div style>I recall Brian Warner describing something like this, or something similar involving separate federated grids, as one of the various alternatives to the "global grid". Brian, can you clarify your design brainstorms along these lines?</div>
<div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
We've been looking at something related for Havenco, which is getting<br>
ready to launch LAFS and S3 bucketed storage using private nodes per<br>
customer (to solve the lack of accounting).<br></blockquote><div><br></div><div style>Exciting! I'll keep an eye out for future announcements.</div><div style><br></div><div style> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
One question that's come up is how users could share LAFS-stored<br>
data without giving away the keys to their cluster (for uploads).<br>
<br>
We haven't implemented it yet but it'd seem pretty simple to have an<br>
nginx proxy that sat on a public port, accepted caps using the same URL<br>
as the tahoe-lafs web server:<br>
<br>
<a href="http://x.y.z.q:3456/file/URI%3ACHK%3Acbb4d3bb6dgiqwiygidqolabve%3Ag6jf2rutbf3pzeltxytm5tbf3f3xu2hhj2yrbnn4vcw2nvrrs4va%3A3%3A10%3A4720/@@named=/tahoe-test" target="_blank">http://x.y.z.q:3456/file/URI%3ACHK%3Acbb4d3bb6dgiqwiygidqolabve%3Ag6jf2rutbf3pzeltxytm5tbf3f3xu2hhj2yrbnn4vcw2nvrrs4va%3A3%3A10%3A4720/@@named=/tahoe-test</a><br>
<br>
That just proxies to a local tahoe-lafs web server bound to localhost.<br>
<br>
Then you wind up sharing a URL instead of a cap.<br>
<br></blockquote><div><br></div><div style>As Leif mentioned, that's the goal of lafs-rpg (which is basically just an nginx configuration template).</div><div style><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Adding in basic auth would be pretty simple as well if desired, though<br>
in the LAFS religion that would be heresy (sorry, not sure if you believe<br>
in religion, Comrade).<br>
<br></blockquote><div><br></div><div style>Not at all. The beauty of caps is that it's convenient to implement other access control mechanisms on top of them. For example, a "Web Drive" product might be a thin layer between users and LAFS storage which maps their user credentials to LAFS caps internally to express particular access controls.</div>
<div style><br></div><div> </div><div style>[snip...]</div><div style><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Complexity could be added with having a DNS db of cap <-> cluster<br>
public-facing web server options. If there was interest we could<br>
build and run something like that, at least to the level of millions<br>
of caps. Doing so for billions+ would need some of the economic<br>
incentives to which you were referring.<br>
<br></blockquote><div><br></div><div style>One issue I see with this DNS service is that it's centrally administered (or else its utility is lessened). If done right, though, I like how usable it could be.</div>
<div style> </div><div style>[snip...]</div><div style> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Avi<br>
(a part-time Havenco bit janitor)<br>
<div class="im HOEnZb"><br>
> The time has come to shed our conspiratorial pretense of being nothing but<br>
> small disparate bands of neighborly do-gooders sharing storage with their<br>
> friends. It is time to reveal to the world our true conquest of world<br>
> domination and announce our intent to create The One Grid to Rule Them All!<br>
<br>
</div><div class="im HOEnZb">> I personally want to be able to email or tweet or inscribe on papyrus a URL<br>
> containing a read cap, and anyone who sees that and has Tahoe-LAFS version<br>
> Glorious Future installed should have a reasonable chance to retrieve the<br>
> content.<br>
><br>
<br>
</div><div class="HOEnZb"><div class="h5">> Regards,<br>
> Comrade Nathan<br>
> Grid Universalist<br>
<br>
</div></div></blockquote></div><br></div></div></div>