[tahoe-dev] Tahoe on large filesystems
Greg Troxel
gdt at ir.bbn.com
Fri Feb 4 05:23:05 PST 2011
Jan-Benedict Glaw <jbglaw at lug-owl.de> writes:
> However, with disks that large you also hit statistical effects,
> which make it even harder to put them into RAIDs. Consider a 2TB
> filesystem on a 2TB disk: sooner or later you will face a read (or
> even write) error, which can easily leave you with a read-only
> filesystem. For reading other shares that's not much of a problem,
> but you also instantly lose a huge *writeable* area.
>
> So with disks that large, do you use a small number of large
> partitions/filesystems (or even just one)? Do you cut the disk into,
> say, 10 filesystems of 200GB each and start a separate tahoe node
> for each filesystem? Or do you link the individual filesystems into
> the storage directory?
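To put a rough number on "sooner or later": assuming the commonly
quoted consumer-drive spec of one unrecoverable read error per 1e14
bits, a full pass over a 2TB disk reads 2e12 * 8 = 1.6e13 bits, so
each complete read has roughly a 1 - exp(-0.16) ~= 15% chance of
hitting at least one unreadable sector.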
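On the last option (linking filesystems into the storage directory):
assuming Tahoe's usual on-disk layout, where shares live under
NODEDIR/storage/shares/<two-character base32 prefix>/, you could
pre-create the prefix buckets on the small filesystems and symlink
them into a single node's shares directory. A minimal sketch -- the
mount points and paths are made up, and the alphabet is my
understanding of Tahoe's lowercase RFC 3548 base32:

    import itertools
    import os

    MOUNTS = ["/mnt/fs%d" % i for i in range(10)]   # ten 200GB filesystems
    SHARES = "/var/tahoe/storage/shares"            # this node's shares dir

    ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"

    os.makedirs(SHARES, exist_ok=True)
    # Round-robin the 1024 two-character prefix buckets across the mounts.
    for i, pair in enumerate(itertools.product(ALPHABET, repeat=2)):
        prefix = "".join(pair)
        target = os.path.join(MOUNTS[i % len(MOUNTS)], "shares", prefix)
        link = os.path.join(SHARES, prefix)
        os.makedirs(target, exist_ok=True)
        if not os.path.lexists(link):               # don't clobber existing dirs
            os.symlink(target, link)

Run it before starting the node; the storage server should follow the
symlinks transparently when it creates and reads buckets. The obvious
cost is that losing one filesystem silently drops about a tenth of the
node's buckets, instead of taking one whole node offline where the
grid would at least notice a down server.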
Or do you use a filesystem that can work around bad blocks natively,
and then run tahoe on top of that?
I would think ZFS can do this, but I don't actually know that.
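For what it's worth, my understanding is that ZFS checksums every
block, and given some redundancy -- a mirror or raidz vdev, or even
"zfs set copies=2 <dataset>" on a single disk -- it can repair bad
blocks when they are read or during a "zpool scrub", rather than
remounting the whole filesystem read-only over one bad sector.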