[tahoe-dev] best practice for wanting to setup multiple tahoe instances on a single node
Greg Troxel
gdt at ir.bbn.com
Wed Jan 11 00:30:39 UTC 2012
Jimmy Tang <jcftang at gmail.com> writes:
> I was just wondering if there are best practice recommendations for
> setting up storage nodes? As far as I understand the recommended way
> is to setup one instance per node with one big partition on the node.
> What about setting up multiple instances of tahoe storage nodes per
> partition on one machine, in a possible scenario where I have 150 TB
> of space on a machine but can only make a bunch of 16 TB partitions.
> I ask this because we have a few machines at work right now with this
> kind of setup and I'm kinda pushing for using tahoe-lafs as a possible
> backend storage system, possibly with irods sitting on top to manage
> the data (yet to be decided).
I think the key point is the difference between the redundancy you
actually have and the redundancy that tahoe perceives - it seems
dangerous to have 20 nodes that appear independent but are all actually
on the same box. If they are on 20 physical disks that are independent
enough that, should the box fail, you can reconstitute all of the nodes,
it might be ok, but it still seems subject to correlated failures.
If there were a way to have the nodes express their correlation groups,
and the share placement be aware of this, then I think it might be ok.
But we don't have that yet, and thus it seems to me that if you want the
survivability properties tahoe gives you, you should run only one node
per computer. And perhaps only one node per site, if you want that kind
of redundancy.
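To put a rough number on that correlation risk, here is a small
illustrative sketch (the failure probabilities are made up, and 3-of-10
is used only as an example encoding): with genuinely independent nodes,
a file survives as long as at least k of its N shares do, but with all
N shares on one box, the file is only as safe as that one box.

```python
from math import comb

def p_file_survives(k, n, p_node_fail):
    """Probability that at least k of n independently failing shares survive."""
    p_live = 1.0 - p_node_fail
    return sum(comb(n, i) * p_live**i * p_node_fail**(n - i)
               for i in range(k, n + 1))

# 3-of-10 encoding, each storage node failing independently with
# probability 0.05 (an arbitrary illustrative figure).
independent = p_file_survives(3, 10, 0.05)

# Same 10 "nodes" all on one box: the file survives only if the box does,
# so the erasure coding buys essentially nothing.
correlated = 1.0 - 0.05

print(f"10 independent nodes: {independent:.9f}")
print(f"10 nodes on one box:  {correlated:.9f}")
```

The first number is very close to 1 while the second is just the box's
own survival probability, which is the gap the share-placement logic
cannot see when the nodes only look independent.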
Have you measured performance? I have the impression that if you don't
value the decentralized, distributed, resilient nature of tahoe, it's
not a good choice.
> As a side question, as we expand the number of nodes, I would probably
> want to change the k-of-n settings. Would the migration method to
> newer k-of-n parameters be copy-and-delete within the grid to
> rebalance the data? Actually, how does one rebalance the system as
> k-of-n changes? I think this feature has been discussed in the past
> but it hasn't really been looked at.
The N is part of each share and of the cap itself (and maybe the k as
well), so I think it's hard to do what you want.
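For anyone planning such a migration: the encoding parameters are set
in the client's tahoe.cfg and apply only to new uploads. Existing files
keep the k-of-N they were encoded with, and since the N lives in the
cap, migrating really does mean downloading each file and re-uploading
it under the new parameters, which produces new caps. A sketch of the
relevant config section (values illustrative):

```ini
[client]
# Encoding parameters for *new* uploads only; existing shares keep the
# k-of-N they were created with.
shares.needed = 3    # k: shares required to reconstruct a file
shares.happy = 7     # distinct servers that must each hold a share
shares.total = 10    # N: total shares produced per file
```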