Changes between Version 1 and Version 2 of Ticket #2123


Timestamp: 2013-11-30T20:48:51Z
Author: amontero

  • Ticket #2123 – Description

    v1 v2
    26 26
    27 27 === Proposed solution ===
    28    Add server-side configuration option in storage nodes to make them gently reject holding shares in excess of k. This would address space wasting. Also, since local node would refuse to store an extra/unneeded share. A new node arriving would get the remaining share at repair time to fulfill the desired N.
       28 Add a server-side configuration option in storage nodes to make them gently reject holding shares in excess of k. This would address space wasting. Also, since the local storage node would refuse to store an extra/unneeded share, a new storage node arriving would get the remaining share at repair time to fulfill the desired N, thus achieving/increasing replication.
    29 29
    30    Current/future placement improvements can't be relied on to be able to achieve this easily and, since look like it's more of a [storage] server-side policy, it's unlikely as I'm currenly able to understand share placement that could even be achieved with minimal/enough guarantee.
       30 Current/future placement improvements can't be relied on to achieve this easily and, since it looks like more of a [storage] server-side policy, it's unlikely they could. At least, as far as I'm currently able to understand share placement, or how that could even be achieved with a minimal/sufficient guarantee (sometimes this gets quantum-physics to me). I think it's too much to rely on upload clients' behavior/decisions, since they will have a very limited knowledge window of the whole grid, IMO.
    31 31
    32 32 Apart from the described use case, this setting would be useful in other scenarios where the storage node operator should exercise some control for other reasons.
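
To illustrate the proposed behavior, here is a minimal sketch of the kind of acceptance check such a storage-node option might enable. It is an assumption-laden illustration: the names (SHARE_DIR, MAX_LOCAL_SHARES, accept_share) and the per-file share layout are hypothetical, not actual Tahoe-LAFS configuration keys or APIs.

{{{#!python
import os

# Hypothetical settings; not real Tahoe-LAFS configuration keys.
SHARE_DIR = "/var/tahoe/storage/shares"  # where this node keeps shares
MAX_LOCAL_SHARES = 3  # "k": gently reject shares beyond this count per file

def shares_held(storage_index: str) -> int:
    """Count how many shares of a given file this node already holds."""
    bucket = os.path.join(SHARE_DIR, storage_index)
    return len(os.listdir(bucket)) if os.path.isdir(bucket) else 0

def accept_share(storage_index: str, sharenum: int) -> bool:
    """Decide whether to accept an incoming share.

    Rejection is not an error: the uploader can place the share on
    another server, and a repairer can later hand it to a newly
    arrived node, which is the behavior the ticket proposes.
    """
    if shares_held(storage_index) >= MAX_LOCAL_SHARES:
        return False  # decline the extra/unneeded share, saving space
    return True
}}}

The idea, per the ticket, is that a node configured this way wastes no space on redundant shares, while repair naturally migrates the surplus shares to new servers as they join the grid.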