[tahoe-dev] Tahoe on large filesystems

Jan-Benedict Glaw jbglaw at lug-owl.de
Fri Feb 4 02:51:27 PST 2011


Hi!

In preparation for some nodes, I've been thinking a bit about using
huge disks and huge filesystems with Tahoe. With today's technology,
you can cheaply buy a bunch of 2TB HDDs and stuff them into a box.
Even worse, you'd use external cabling (USB/FireWire), which could
easily be ripped off (compared to internally placed cables).

However, with disks that large, you also hit statistical effects,
which make it even harder to put them into RAIDs. Consider a 2TB
filesystem on a 2TB disk. Sooner or later, you will face a read (or
even write) error, which will easily result in at least a read-only
filesystem. For reading other shares, that's not much of a problem,
but you also instantly lose a huge *writeable* area.

So with disks that large, do you use a small number of large
partitions/filesystems (or even only one), or do you cut them down
into, say, 10 filesystems of 200GB each, starting a separate Tahoe
node for each filesystem? Or do you link the individual filesystems
into the storage directory?

Running something like 10 Tahoe nodes on one physical HDD would
create another problem: what if all (or most) of a file's shares get
stored on that single HDD, all of them being lost in a single drive
crash?
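To put a rough number on that worry, here is a small sketch (my own,
not part of Tahoe) that treats share placement as uniform random
choice of distinct nodes and computes the hypergeometric probability
that a single-disk crash destroys a file under the default 3-of-10
encoding. Real Tahoe placement is not uniform, so take this only as a
ballpark illustration; `p_file_lost` and its parameters are made up
for this example.

```python
from math import comb

def p_file_lost(grid_nodes, colocated, total_shares=10, needed=3):
    """Probability that a single-disk crash destroys a file, assuming
    total_shares shares land on distinct, uniformly random nodes out of
    grid_nodes, of which `colocated` nodes live on the crashing disk.
    The file dies when fewer than `needed` shares survive, i.e. when
    more than total_shares - needed shares sat on that disk
    (hypergeometric tail)."""
    denom = comb(grid_nodes, total_shares)
    lost = 0
    for on_disk in range(total_shares - needed + 1, total_shares + 1):
        if on_disk > colocated:
            break
        lost += comb(colocated, on_disk) * \
                comb(grid_nodes - colocated, total_shares - on_disk)
    return lost / denom

# Example: a 20-node grid where 10 "nodes" are really one physical
# HDD, default 3-of-10 encoding.
print(p_file_lost(20, 10))   # roughly a 1% chance per file
```

With all 10 nodes on the doomed disk and only 10 nodes in the grid,
the function returns 1.0, as expected: every share is on the one disk.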

MfG, JBG

-- 
      Jan-Benedict Glaw      jbglaw at lug-owl.de              +49-172-7608481
Signature of:               The real problem with C++ for kernel modules is:
the second  :                                 the language just sucks.
                                                   -- Linus Torvalds