Changes between Version 1 and Version 10 of Ticket #671


Timestamp:
2013-07-04T17:21:16Z
Author:
zooko
Comment:

  • Ticket #671

    • Property Status changed from new to assigned
    • Property Cc frederik.braun+tahoe@… added
    • Property Owner set to davidsarah
    • Property Milestone changed from 1.6.0 to 1.11.0
    • Property Keywords usability statistics sftp added
  • Ticket #671 – Description

    We used to have a {{{sizelimit}}} option which would do a recursive examination of the storage directory at startup and calculate approximately how much disk space was used, and refuse to accept new shares if the disk space would exceed the limit. #34 shows when it was implemented. It was later removed because it took a long time (about 30 minutes) on allmydata.com storage servers, during which the servers remained unavailable to clients, and because it was replaced by the {{{reserved_space}}} configuration, which was very fast and satisfied the requirements of the allmydata.com storage servers.

    v1:  This ticket is to reintroduce {{{sizelimit}}} because [http://allmydata.org/pipermail/tahoe-dev/2009-March/001493.html some users want it].  This might mean that the storage server doesn't start serving clients until it finishes the disk space inspection at startup.
    v10: This ticket is to reintroduce {{{sizelimit}}} because [//pipermail/tahoe-dev/2009-March/001493.html some users want it].  This might mean that the storage server doesn't start serving clients until it finishes the disk space inspection at startup.

    Note that {{{sizelimit}}} would impose a maximum limit on the amount of space consumed by the node's {{{storage/shares/}}} directory, whereas {{{reserved_space}}} imposes a minimum limit on the amount of remaining available disk space. In general, {{{reserved_space}}} can be implemented by asking the OS for filesystem stats, whereas {{{sizelimit}}} must be implemented by tracking the node's own usage and accumulating the sizes over time.
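    The contrast described above can be sketched in Python. This is an illustrative sketch only, not Tahoe-LAFS code: {{{used_space}}} and {{{accepts_share}}} are hypothetical helpers, while the {{{sizelimit}}} and {{{reserved_space}}} parameters mirror the configuration options discussed in this ticket.

```python
import os
import shutil

def used_space(storage_dir):
    """Recursively sum file sizes under storage_dir.

    This is the slow scan that sizelimit requires: on a large storage
    directory it can take many minutes, which is why the original
    implementation was removed.
    """
    total = 0
    for dirpath, _dirnames, filenames in os.walk(storage_dir):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished mid-scan; skip it
    return total

def accepts_share(storage_dir, share_size, sizelimit=None, reserved_space=0):
    """Decide whether to accept a new share under both policies.

    sizelimit caps the node's own usage (needs the recursive scan above),
    while reserved_space keeps a floor of free disk (one cheap OS call).
    """
    if sizelimit is not None and used_space(storage_dir) + share_size > sizelimit:
        return False
    free = shutil.disk_usage(storage_dir).free  # fast filesystem-stats query
    return free - share_size >= reserved_space
```

    A real implementation would avoid re-scanning on every request by doing the walk once at startup and then accumulating share sizes incrementally, which is exactly the bookkeeping the last paragraph alludes to.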