[tahoe-lafs-trac-stream] [tahoe-lafs] #671: bring back sizelimit (i.e. max consumed, not min free)

tahoe-lafs trac at tahoe-lafs.org
Thu Oct 25 20:49:55 UTC 2012


#671: bring back sizelimit (i.e. max consumed, not min free)
--------------------------------+---------------------------------------
     Reporter:  zooko           |      Owner:
         Type:  defect          |     Status:  new
     Priority:  major           |  Milestone:  eventually
    Component:  code-nodeadmin  |    Version:  1.3.0
   Resolution:                  |   Keywords:  usability statistics sftp
Launchpad Bug:                  |
--------------------------------+---------------------------------------

Description:

 We used to have a {{{sizelimit}}} option which did a recursive
 examination of the storage directory at startup, calculated
 approximately how much disk space was used, and refused to accept new
 shares if accepting them would push usage over the limit.  #34 shows
 when it was implemented.  It was later removed because the startup scan
 took a long time -- about 30 minutes on the allmydata.com storage
 servers -- during which the servers remained unavailable to clients,
 and because it was replaced by the {{{reserved_space}}} configuration,
 which was very fast and which satisfied the requirements of the
 allmydata.com storage servers.
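
 As a rough illustration (this is a sketch, not the code that was
 removed), a startup scan of this kind amounts to walking
 {{{storage/shares/}}} and summing file sizes, which is why it scaled
 badly with the number of stored shares:

{{{
#!python
import os

def measure_used_space(sharedir):
    """Approximate the disk space consumed under storage/shares/.

    One full pass over every share file is what made the old sizelimit
    scan take on the order of 30 minutes on large servers.
    """
    total = 0
    for dirpath, _dirnames, filenames in os.walk(sharedir):
        for name in filenames:
            try:
                # getsize() reports file length; allocated blocks may
                # differ slightly, hence "approximately".
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # a share vanished while we were walking
    return total
}}}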

 This ticket is to reintroduce {{{sizelimit}}} because
 [http://allmydata.org/pipermail/tahoe-dev/2009-March/001493.html some
 users want it].  This might mean that the storage server doesn't start
 serving clients until it finishes the disk space inspection at startup.

 Note that {{{sizelimit}}} would impose a maximum limit on the amount of
 space consumed by the node's {{{storage/shares/}}} directory, whereas
 {{{reserved_space}}} imposes a minimum limit on the amount of remaining
 available disk space. In general, {{{reserved_space}}} can be implemented
 by asking the OS for filesystem stats, whereas {{{sizelimit}}} must be
 implemented by tracking the node's own usage and accumulating the sizes
 over time.
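
 A minimal sketch of that difference (illustrative names only, not
 Tahoe's actual API): {{{reserved_space}}} needs a single filesystem
 stats call, while {{{sizelimit}}} needs a running total that the node
 maintains itself:

{{{
#!python
import os

def remaining_space(storedir):
    # reserved_space-style check: the OS reports how much space is
    # still free on the filesystem, no matter who consumed the rest.
    s = os.statvfs(storedir)  # POSIX; Windows needs a different call
    return s.f_frsize * s.f_bavail

def will_accept_share(storedir, consumed_bytes, reserved_space, sizelimit):
    # reserved_space: keep a minimum amount of free space on the disk.
    if remaining_space(storedir) <= reserved_space:
        return False
    # sizelimit: cap the space consumed by this node's own shares.
    # consumed_bytes must come from the node's own bookkeeping (initial
    # scan plus increments as shares are written); the OS cannot supply it.
    if sizelimit is not None and consumed_bytes >= sizelimit:
        return False
    return True
}}}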

 To close this ticket, you do *not* need to implement some sort of
 interleaving of inspecting disk space and serving clients.

 To close this ticket, you MUST NOT implement any sort of automatic
 deletion of shares to get back under the sizelimit if you find yourself
 over it (for example, because the user lowered the sizelimit after you
 had already filled it to the previous maximum).  You SHOULD, however,
 log some sort of warning message if you detect this condition.
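
 In other words, the over-limit case should only be reported, roughly
 along these lines (hypothetical names; the stdlib logging module is
 used here for brevity rather than Tahoe's own logging):

{{{
#!python
import logging

log = logging.getLogger("tahoe.storage")

def note_over_sizelimit(consumed_bytes, sizelimit):
    # If the node finds itself already over the (possibly newly lowered)
    # sizelimit, it warns and stops accepting new shares, but it never
    # deletes existing shares to shrink back under the limit.
    if consumed_bytes > sizelimit:
        log.warning("consumed %d bytes but sizelimit is %d bytes; "
                    "refusing new shares (existing shares are kept)",
                    consumed_bytes, sizelimit)
}}}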

--

Comment (by davidsarah):

 Our current plan is to support this using the
 [https://github.com/davidsarah/tahoe-lafs/blob/666-accounting/docs/specifications/leasedb.rst leasedb].
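
 If the leasedb records a per-share {{{used_space}}} value (treat the
 exact schema below as an assumption based on the linked spec, not a
 statement about shipped code), the total could then come from a single
 SQL query instead of a directory walk:

{{{
#!python
import sqlite3

def total_used_space(leasedb_path):
    # Assumed schema: a `shares` table with a `used_space` column.
    db = sqlite3.connect(leasedb_path)
    try:
        (total,) = db.execute(
            "SELECT COALESCE(SUM(used_space), 0) FROM shares").fetchone()
        return total
    finally:
        db.close()
}}}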

-- 
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/671#comment:5>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage

