[tahoe-dev] Questions

Jason Wood jwood275 at googlemail.com
Thu May 13 06:30:12 PDT 2010


>
> Message: 3
> Date: Wed, 12 May 2010 15:32:06 -0600
> From: "Zooko O'Whielacronx" <zookog at gmail.com>
> Subject: Re: [tahoe-dev] Questions
> To: tahoe-dev at allmydata.org
>
> You ask good questions. First you asked for some features that we
> already have, then you asked for a couple of things that we don't have
> but hopefully will soon...
>

That's good news. The features I mentioned would be great to have but aren't
100% necessary, and the answers have led me to a couple of further
questions...


>
> On Wed, May 12, 2010 at 2:24 PM, Jason Wood <jwood275 at googlemail.com>
> wrote:
> > This keeps getting better and better! That is exactly what I was hoping
> > to hear!
> > Ok, so it seems to do everything I need, now for a couple of questions
> > about "nice to have" features...
> > I believe I can set up a grid to consist of nodes on a LAN and WAN as
> > part of the same storage cluster. So, if I had 3 locations each with 5
> > storage nodes, could I configure the grid to ensure a file is written to
> > each location so that I could handle all servers at a particular location
> > going down?
>
> This is #467 and #573. In my mind these tickets are super-important:
> they are among the most frequently requested features from new users
> or from people considering using Tahoe-LAFS. I imagine they could open
> up new use cases for Tahoe-LAFS. On the other hand, the users who have
> requested this so far haven't stuck around and kept requesting it with
> increasingly precise comments on the tickets, so I'm not 100% sure we
> would be doing it right. Or maybe it will turn out to be something that
> people don't need as much as they thought they would.
>

That would be extremely useful. If there is anything I can do to help with
this, whether testing or working out use cases, please let me know. It would
be an instant sell for me at this point!


>
> > And finally, is it possible to modify a mutable file by "patching" it?
> > So, if I have a file stored and I want to update a section of the file
> > in the middle, is that possible or would the file need to be downloaded,
> > patched and re-uploaded? I think I'm asking a lot here and I already
> > have a plan to work around it but as the system seems to do everything
> > else I need I figured it was worth asking.
>
> It currently downloads the whole mutable file, holds the whole thing
> in your RAM, patches it, then uploads the whole thing again. Bleah.
>
> Documentation of this performance issue:
>
> http://tahoe-lafs.org/trac/tahoe-lafs/browser/docs/performance.txt
>
> Ticket: #393
>
> This is a GSoC project featuring the excellent Kevan Carstensen in the
> role of student and the excellent Brian Warner in the role of mentor!
>
> So I fully expect this to be greatly improved by the Tahoe-LAFS v1.8
> release in August of this year. :-)
>

That's not a huge issue as I would only be storing reasonably small files,
but it would be very useful to have.
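
For context, the cycle Zooko describes looks roughly like this through the
local web gateway. A minimal sketch in Python, assuming the default gateway
port 3456; the write cap below is only a placeholder:

    import urllib.request

    GATEWAY = "http://127.0.0.1:3456"   # default local web gateway (assumed)
    WRITECAP = "URI:SSK:xxxx:yyyy"      # placeholder mutable-file write cap
    url = GATEWAY + "/uri/" + WRITECAP

    # 1. download the whole mutable file
    data = bytearray(urllib.request.urlopen(url).read())

    # 2. patch a section in the middle, entirely on the client
    data[1024:1028] = b"ABCD"

    # 3. re-upload the whole thing; this is the step MDMF (#393) should shrink
    req = urllib.request.Request(url, data=bytes(data), method="PUT")
    urllib.request.urlopen(req)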

>
> > Ok, so it seems to do everything I need, now for a couple of questions
> > about "nice to have" features...
> >
> > I believe I can set up a grid to consist of nodes on a LAN and WAN as
> > part of the same storage cluster. So, if I had 3 locations each with 5
> > storage nodes, could I configure the grid to ensure a file is written to
> > each location so that I could handle all servers at a particular
> > location going down?
>
> Ah, no, not directly. We have tickets about that one (#467, #302), but
> it's deeper than it looks and we haven't come to a conclusion on how to
> build it.
>
> The current system will try to distribute the shares as widely as
> possible, using a different pseudo-random permutation for each file, but
> it is completely unaware of server properties like "location". If you
> have more free servers than shares, it will only put one share on any
> given server, but you might wind up with more shares in one location
> than the others.
>
> For example, if you have 15 servers in three locations A:1/2/3/4/5,
> B:6/7/8/9/10, C:11/12/13/14/15, and use the default 3-of-10 encoding,
> your worst case is winding up with shares on 1/2/3/4/5/6/7/8/9/10 and
> not using location C at all. The most *likely* case is that you'll wind up
> with 3 or 4 shares in each location, but there's nothing in the system
> to enforce that: it's just shuffling all the servers into a ring,
> starting at 0, and assigning shares to servers around and around the
> ring until all the shares have a home.
>
> There's some math we could do to estimate the probability of things like
> this, but I'd have to dust off a stats textbook to remember what it is.
> (actually, since 15-choose-10 is only 3003, I'd write a little python
> program to simulate all the possibilities, and then count the results).
>
> [brian spends 5 minutes writing the script, attached]
>
> Ok, so the possibilities are:
>
>  (3, 3, 4) 1500
>  (2, 4, 4) 750
>  (2, 3, 5) 600
>  (1, 4, 5) 150
>  (0, 5, 5) 3
>  sum =    3003
>
> So you've got a 50% chance of the ideal distribution, and a 1/1000
> chance of the worst-case distribution.
>

Thanks for the explanation. It's unlikely a whole location would go down, but
it would be fantastic to know that everything would still work if the worst
happened.
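
To make those numbers easy to reproduce, here is a minimal sketch of the kind
of enumeration Brian describes (not his attached script), assuming 15 servers
split 5/5/5 across locations A/B/C and at most one share per server:

    from itertools import combinations
    from collections import Counter

    N_SHARES = 10                   # default 3-of-10 encoding: 10 shares

    def location(server):           # servers 0-4 = A, 5-9 = B, 10-14 = C
        return server // 5

    # With more free servers than shares, walking the per-file shuffled ring
    # puts one share on each of the first 10 servers, so every 10-of-15
    # subset is equally likely; just enumerate all C(15,10) = 3003 of them.
    tally = Counter()
    for chosen in combinations(range(15), N_SHARES):
        per_loc = [sum(1 for s in chosen if location(s) == loc)
                   for loc in range(3)]
        tally[tuple(sorted(per_loc))] += 1

    for dist, count in sorted(tally.items(), key=lambda kv: -kv[1]):
        print(dist, count)          # (3, 3, 4) 1500 ... (0, 5, 5) 3
    print("sum =", sum(tally.values()))

This reproduces the table above, which is where the roughly 50% (1500/3003)
and 1-in-1000 (3/3003) figures come from.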

Does this negate the advantage of having the storage nodes use RAID-5/6?
Would it make sense to just use RAID-0 and let Tahoe-LAFS deal with the
redundancy?


> > And finally, is it possible to modify a mutable file by "patching" it?
> > So, if I have a file stored and I want to update a section of the file
> > in the middle, is that possible or would the file need to be downloaded,
> > patched and re-uploaded? I think I'm asking a lot here and I already
> > have a plan to work around it but as the system seems to do everything
> > else I need I figured it was worth asking.
>
> Not at present. We've only implemented "Small Distributed Mutable Files"
> (SDMF) so far, which have the property that the whole file must be
> downloaded or uploaded at once. We have plans for "medium" MDMF files,
> which will fix this. MDMF files are broken into segments (default size
> is 128KiB), and you only have to replace the segments that are dirtied
> by the write, so changing a single byte would only require the upload of
> N/k*128KiB or about 440KiB for the default 3-of-10 encoding.
>
> Kevan Carstensen is spending his summer implementing MDMF, thanks to the
> sponsorship of Google Summer Of Code. Ticket #393 is tracking this work.
>
>
If this does happen, I would be very interested in testing and helping out
in any way I can.
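
As a rough sanity check on the "about 440KiB" figure, the cost of a one-byte
change under MDMF is just one dirty segment expanded by the N/k encoding
overhead; a quick back-of-the-envelope:

    k, n = 3, 10                # default encoding: shares.needed / shares.total
    segment_kib = 128           # default MDMF segment size from Brian's reply
    print(n / k * segment_kib)  # ~426.7 KiB uploaded to change a single byte

That lands in the same ballpark as the figure quoted above.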

More questions:

Are links stored in the same way that files are? So if a storage node
containing a link to a file goes down, will that link exist on another node?

Can there be more than one introducer node? The documentation seems to
suggest there can be only one, but that would be a single point of failure,
wouldn't it?

Can there be more than one storage folder on a storage node? So if a storage
server contains 3 drives without RAID, can it use all 3 for storage?

And finally, are there any large companies relying on Tahoe-LAFS at present?
I'm trying to sell this to the powers that be, and if I can drop some names I
would stand a much better chance!

Thanks,

Jason