[tahoe-lafs-trac-stream] [tahoe-lafs] #699: rebalance during repair or upload
tahoe-lafs
trac at tahoe-lafs.org
Sun Oct 28 01:08:38 UTC 2012
#699: rebalance during repair or upload
-------------------------+-------------------------------------------------
Reporter:  zooko         | Owner:
Type:      defect        | Status:     new
Priority:  major         | Milestone:  eventually
Component: code-peerselection | Version:  1.4.1
Resolution:              | Keywords:   upload repair preservation test
Launchpad Bug:           |             anti-censorship
-------------------------+-------------------------------------------------
Comment (by amontero):
I can't help with why -i doesn't work for you. Anyway, you can just omit
the
{{{#!sh
-i "n"
}}}
part, since it's only meant to make "no" the default answer when prompting.
Thanks for the Python script. I'll keep it at hand for when the time comes
for me to learn Python :)
For now, any of those scripts can do a "dirty rebalancing".
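In case it helps picture the crude approach, here is a minimal sketch of a
repair loop fired at regular intervals. {{{tahoe deep-check --repair}}} is
the real CLI command; the "backups:" alias and the daily interval are just
assumptions of mine:
{{{#!python
import subprocess
import time

REPAIR_INTERVAL = 24 * 60 * 60  # once a day; tune to taste (assumption)

while True:
    # deep-check walks the whole directory tree under the alias and
    # repairs any unhealthy file it finds along the way
    subprocess.call(["tahoe", "deep-check", "--repair", "--add-lease",
                     "backups:"])
    time.sleep(REPAIR_INTERVAL)
}}}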
As a workaround, I think a "hold no more than Z shares" setting on each
server would make this easier. Just firing repairs at regular intervals
would eventually create shares on all servers. Any server already holding
the desired number of shares would simply refuse further shares of that
file, so there would be no need for pruning. This way, we could easily tune
how much "responsibility" each node accepts for each file. Newly arriving
servers would eventually catch up as repairs run.
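A minimal sketch of the acceptance rule I have in mind, assuming a helper
that already knows which shares of a file the server holds. All names here
are made up for illustration; the real tahoe-lafs storage-server code is
organized differently:
{{{#!python
MAX_SHARES_PER_FILE = 2  # "Z": the per-server cap, e.g. from tahoe.cfg

def accept_shares(held_sharenums, requested_sharenums,
                  cap=MAX_SHARES_PER_FILE):
    """Return the subset of requested share numbers this server takes.

    A server already holding `cap` shares of a file accepts no more, so
    repeated repairs spill shares onto the remaining servers and no
    pruning pass is needed afterwards.
    """
    held = set(held_sharenums)
    room = max(0, cap - len(held))
    new = [s for s in requested_sharenums if s not in held]
    return new[:room]

# The server holds shares {0, 5} of a file; a repair offers 0, 1 and 2.
# With Z=2 there is no room left, so nothing new is accepted:
print(accept_shares({0, 5}, [0, 1, 2]))  # -> []
}}}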
I'm interested in developer feedback about this path. If it's easy enough
for a Python novice, I could take a stab at it. I tried months ago but
couldn't find my way through the code, so implementation advice is very
welcome.
A future rebalancer would be much smarter, but in the meantime I think this
approach would cover some use cases.
--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/699#comment:16>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage