#464 new task

evaluate different share-storage schemes

Reported by: warner
Owned by:
Priority: major
Milestone: undecided
Component: code-storage
Version: 1.1.0
Keywords: performance statistics scalability backend
Cc:
Launchpad Bug:

Description

Our current share-storage scheme is simple and easy to understand, but it may cause performance problems at some point. It would be nice to know how fast it actually runs, and how some alternative schemes would compare.

So the task is:

  • look at an existing storage server to get a count and size distribution of its files (just a histogram of file sizes); a scanning sketch appears after this list
  • look at the logs to get a traffic mix: what percentage of operations are reads vs writes, and what percentage of the reads are for shares that the server has (rather than ones that the server is missing)
  • use this information to create a tool that uses a StorageServer instance to build a similar share directory of configurable size: we should be able to create 1GB, 10GB, 100GB, or 1TB of shares with a size distribution similar to a real store (a population sketch follows the list)
  • use the traffic-mix information to create a tool that queries the StorageServer instance with the same traffic characteristics that real servers see, at a configurable rate: simulate 10 average clients, 100 clients, 1000 clients, etc.
  • measure the performance of the server:
    • how long the queries take (milliseconds per query; look at the mean, median, and 90th percentile; a stats helper is sketched after this list)
    • kernel-level disk I/O stats: blocks per second; see whether we can count seeks per second
    • space consumed (as measured by 'df') vs the total size of the shares that were written: measure the filesystem overhead, including minimum block size and extra directories (a statvfs sketch follows the list)
    • the filesystem type (ext3, xfs, reiser, etc) must be recorded with each measurement, along with the storage scheme in use
  • evaluate other filesystem types
  • evaluate other storage schemes:
    • the current scheme is 2-level: ab/abcdef../SHNUM
    • try 3-level, maybe up to 10-level (a configurable-depth mapping is sketched after this list)
    • pack small shares for different SIs into one file and use an offset table to locate each share (a pack-file sketch follows the list)
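
A minimal sketch of the scanning step, assuming shares are stored as ordinary files somewhere under the server's share root (the power-of-two bucketing and the command-line interface are illustrative choices, not part of the ticket):

{{{#!python
import os
import sys
from collections import Counter

def size_histogram(share_root):
    """Walk share_root, bucketing file sizes into power-of-two bins."""
    histogram = Counter()
    total = 0
    for dirpath, dirnames, filenames in os.walk(share_root):
        for name in filenames:
            size = os.path.getsize(os.path.join(dirpath, name))
            histogram[size.bit_length()] += 1  # bucket b holds sizes < 2**b
            total += 1
    return total, histogram

if __name__ == "__main__":
    total, histogram = size_histogram(sys.argv[1])
    print("total files:", total)
    for bucket in sorted(histogram):
        print("< %d bytes: %d" % (2 ** bucket, histogram[bucket]))
}}}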
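A sketch of the population step, writing dummy shares directly into the current 2-level layout rather than through a StorageServer instance (the 26-character base32 storage index, the single share number, and the direct-to-disk approach are all assumptions made for illustration):

{{{#!python
import os
import random

BASE32 = "abcdefghijklmnopqrstuvwxyz234567"

def populate(share_root, total_bytes, sizes):
    """Write dummy shares, with sizes sampled from the measured list
    'sizes', until roughly total_bytes have been written, using the
    current 2-level ab/abcdef../SHNUM layout."""
    written = 0
    while written < total_bytes:
        si = "".join(random.choice(BASE32) for _ in range(26))
        size = random.choice(sizes)
        sharedir = os.path.join(share_root, si[:2], si)
        os.makedirs(sharedir, exist_ok=True)
        with open(os.path.join(sharedir, "0"), "wb") as f:
            f.write(os.urandom(size))  # random data defeats fs compression
        written += size
    return written
}}}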
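For the per-query latency numbers, a small stdlib-only helper that reduces a list of timings to the three requested statistics (the nearest-rank percentile rule is one reasonable choice among several):

{{{#!python
import math
import statistics

def latency_stats(samples_ms):
    """Return mean, median, and 90th-percentile latency in milliseconds."""
    ordered = sorted(samples_ms)
    rank = max(0, math.ceil(0.90 * len(ordered)) - 1)  # nearest-rank p90
    return {
        "mean": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "p90": ordered[rank],
    }
}}}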
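The space-overhead measurement can be read off with os.statvfs, which reports the same numbers 'df' does; sampled before and after a population run, the delta divided by the logical bytes written gives the overhead factor (sketch, Unix-only):

{{{#!python
import os

def fs_consumed(path):
    """Bytes consumed on the filesystem containing 'path', as df sees it."""
    st = os.statvfs(path)
    return (st.f_blocks - st.f_bfree) * st.f_frsize

# usage sketch:
#   before = fs_consumed(share_root)
#   logical = populate(share_root, 10**9, sizes)
#   overhead = (fs_consumed(share_root) - before) / float(logical)
}}}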
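The depth comparison only changes how a storage index maps to a directory path, so one configurable mapping function covers 2-level through 10-level (the function name and the two-characters-per-level split are illustrative assumptions):

{{{#!python
import os

def share_path(share_root, si, levels=2, sharenum=0):
    """Map a base32 storage index to a share file path. levels=2
    reproduces the current ab/abcdef../SHNUM scheme; larger values
    insert additional two-character prefix directories."""
    prefixes = [si[2 * i:2 * i + 2] for i in range(levels - 1)]
    return os.path.join(share_root, *(prefixes + [si, str(sharenum)]))

# share_path("/storage", "abcdefgh...", levels=3)
#   -> /storage/ab/cd/abcdefgh.../0
}}}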
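Finally, a sketch of the pack-small-shares idea with a trivial header-plus-offset-table layout; the on-disk format used here (16-byte storage indexes, fixed-width table entries) is invented purely for illustration, not a proposal for the real format:

{{{#!python
import struct

ENTRY = "!16sQQ"   # 16-byte SI, 8-byte offset, 8-byte length
COUNT = "!I"

def write_pack(path, shares):
    """shares: list of (si_bytes, data). Writes a count, a fixed-size
    offset table, then the concatenated share data."""
    table = struct.calcsize(COUNT) + len(shares) * struct.calcsize(ENTRY)
    with open(path, "wb") as f:
        f.write(struct.pack(COUNT, len(shares)))
        offset = table
        for si, data in shares:
            f.write(struct.pack(ENTRY, si, offset, len(data)))
            offset += len(data)
        for si, data in shares:
            f.write(data)

def read_share(path, wanted_si):
    """Scan the offset table for wanted_si; one seek plus one read."""
    entry_size = struct.calcsize(ENTRY)
    with open(path, "rb") as f:
        (count,) = struct.unpack(COUNT, f.read(struct.calcsize(COUNT)))
        for _ in range(count):
            si, offset, length = struct.unpack(ENTRY, f.read(entry_size))
            if si == wanted_si:
                f.seek(offset)
                return f.read(length)
    return None
}}}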

When we're done with this, we should have a good idea about how many simultaneous clients our existing scheme can handle before we run out of disk bandwidth (or seek bandwidth), at which point we'll need to switch to something more sophisticated.

Change History (2)

comment:1 Changed at 2010-02-11T03:42:38Z by davidsarah

  • Keywords performance statistics scalability added

comment:2 Changed at 2010-03-31T16:42:47Z by davidsarah

  • Keywords backend added