Changes between Version 1 and Version 2 of Ticket #2123
Timestamp: 2013-11-30T20:48:51Z
Ticket #2123 – Description
Line 27 (unmodified):

=== Proposed solution ===

Line 28 (modified):

v1: Add a server-side configuration option in storage nodes to make them gently reject holding shares in excess of k. This would address space wasting. Also, since the local node would refuse to store an extra/unneeded share, a new node arriving would get the remaining share at repair time to fulfill the desired N.

v2: Add a server-side configuration option in storage nodes to make them gently reject holding shares in excess of k. This would address space wasting. Also, since the local storage node would refuse to store an extra/unneeded share, a new storage node arriving would get the remaining share at repair time to fulfill the desired N, thus achieving/increasing replication.

Line 30 (modified):

v1: Current/future placement improvements can't be relied on to achieve this easily, and since it looks like more of a [storage] server-side policy, it's unlikely, as far as I'm currently able to understand share placement, that it could be achieved with a sufficient guarantee.

v2: Current/future placement improvements can't be relied on to achieve this easily, and since it looks like more of a [storage] server-side policy, it's unlikely. At least, as far as I'm currently able to understand share placement now, or how that could even be achieved with a sufficient guarantee (sometimes this feels like quantum physics to me). I think it's too much to rely on upload clients' behavior/decisions, as they will have a very limited knowledge window of the whole grid, IMO.

Line 32 (unmodified):

Apart from the described use case, this setting would be useful in other scenarios where the storage node operator should exercise some control for other reasons.
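To make the proposal concrete, here is a minimal sketch of the accept/reject decision such an option could drive. Everything in it is illustrative: the function name, its signature, and the `max_shares` setting are hypothetical, not part of Tahoe-LAFS's actual storage-server API or tahoe.cfg. The underlying point is that with k-of-N erasure coding any k distinct shares suffice to reconstruct a file, so a single server holding more than k shares of the same file spends disk space without improving that file's availability.

```python
def filter_incoming_shares(held_sharenums, offered_sharenums, max_shares):
    """Decide which offered shares of one file this server should accept.

    held_sharenums: share numbers of this storage index already stored locally
    offered_sharenums: share numbers an uploader or repairer is offering
    max_shares: hypothetical server-side per-file cap, e.g. the encoding parameter k
    """
    held = set(held_sharenums)
    accepted = set()
    for sharenum in sorted(offered_sharenums):
        if sharenum in held:
            continue  # duplicate of a share already held; nothing new to store
        if len(held) + len(accepted) >= max_shares:
            break  # gently reject the excess; repair can place it on another server
        accepted.add(sharenum)
    return accepted


# Example: a 3-of-10 file. The server already holds shares 0 and 1, and an
# uploader offers shares 1, 2, and 3. With max_shares = k = 3 the server
# accepts only share 2; share 3 is left for some other server to hold.
print(filter_incoming_shares({0, 1}, {1, 2, 3}, 3))  # -> {2}
```

This matches the repair-time behavior the v2 description aims at: a repairer that later finds fewer than N distinct shares on the grid would regenerate the rejected ones onto newly arrived servers, increasing replication instead of piling redundant shares onto one node.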