Version 2 (modified by warner, at 2007-09-09T00:09:46Z)
Some basic notes on performance:
Memory Footprint
The MemoryFootprint page has more specific information.
The [http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_memstats.html munin graph] shows our static memory footprint (starting a node but not doing anything with it) to be about 24MB. Uploading one file at a time raises the node to about 29MB. (We only process one segment at a time, so peak memory consumption is reached once the file is a few MB in size and does not grow beyond that.) Uploading multiple files at once would increase this.
Network Speed
test results
With a 3-server testnet in colo and an uploading node at home (on a DSL line that gets about 150kBps upstream and has a 14ms ping time to colo), 0.5.1-34 takes 820ms-900ms per 1kB file uploaded (80-90s for 100 files, 819s for 1000 files).
'scp' of 3.3kB files (simulating expansion) takes 8.3s for 100 files and 79s for 1000 files, 80ms each.
Doing the same uploads locally on my laptop (both the uploading node and the storage nodes are local) takes 46s for 100 1kB files and 369s for 1000 files.
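The per-file figures above are derived by dividing total elapsed wall-clock time by file count. A hypothetical harness for that measurement (not the script actually used):

```python
import time

def seconds_per_file(upload_one, files):
    """Upload each file sequentially and return the mean wall-clock
    time per file (total elapsed time divided by file count)."""
    start = time.monotonic()
    for f in files:
        upload_one(f)
    return (time.monotonic() - start) / len(files)
```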
Roundtrips
The 0.5.1 release requires about 9 roundtrips for each share it uploads. The upload algorithm sends data to all shareholders in parallel, but these 9 phases are done sequentially. The phases are:
- allocate_buckets
- send_subshare (once per segment)
- send_plaintext_hash_tree
- send_crypttext_hash_tree
- send_subshare_hash_trees
- send_share_hash_trees
- send_UEB
- close
- dirnode update
We need to keep the send_subshare calls sequential (to keep our memory footprint down), and we need a barrier between the close and the dirnode update (for robustness and clarity), but the others could be pipelined. 9*14ms=126ms, which accounts for about 15% of the measured upload time.
Doing steps 2-8 in parallel (using the attached pipeline-sends.diff patch) does indeed seem to bring the time-per-file down from 900ms to about 800ms, although the results aren't conclusive.
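The latency argument can be illustrated with a toy model (not Tahoe's actual upload code, which uses Foolscap/Twisted): each phase costs one round trip, so sequential sends pay the RTT once per phase, while pipelined sends pay it roughly once in total.

```python
import time
from concurrent.futures import ThreadPoolExecutor

RTT = 0.014  # the 14ms colo ping time from the measurements above

def send(phase):
    time.sleep(RTT)  # stand-in for one network round trip
    return phase

def sequential(phases):
    # one full round trip per phase: total latency is len(phases) * RTT
    return [send(p) for p in phases]

def pipelined(phases):
    # fire all sends at once and wait for the batch: roughly one RTT total
    with ThreadPoolExecutor(max_workers=len(phases)) as pool:
        return list(pool.map(send, phases))
```

With seven pipelineable phases at 14ms each, this model predicts saving on the order of 100ms per file, consistent with the observed drop from ~900ms to ~800ms.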
Storage Servers
ext3 (on tahoebs1) refuses to create more than 32000 subdirectories in a single parent directory. In 0.5.1, this appears as a limit on the number of buckets (one per storage index) that any StorageServer can hold. A simple nested directory structure will work around this; the following code would let us manage 33.5G shares:

    import os
    from idlib import b2a

    os.path.join(b2a(si[:2]), b2a(si[2:4]), b2a(si))
This limitation is independent of problems of memory use and lookup speed. Once the number of buckets is large, the filesystem may take a long time (and multiple disk seeks) to determine if a bucket is present or not. The provisioning page suggests how frequently these lookups will take place, and we can compare this against the time each one will take to see if we can keep up or not. If and when necessary, we'll move to a more sophisticated storage server design (perhaps with a database to locate shares).
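A bucket lookup against such a nested layout might look like the following sketch (assumed helper names, not Tahoe's real storage code; for simplicity the two-character prefixes here are sliced from the already-encoded storage index string rather than from the raw bytes as above):

```python
import os

def bucket_path(base, si_b2a):
    """Nest buckets two levels deep so no single directory approaches
    ext3's 32000-subdirectory limit; si_b2a is the encoded storage index."""
    return os.path.join(base, si_b2a[:2], si_b2a[2:4], si_b2a)

def bucket_exists(base, si_b2a):
    # one path lookup instead of scanning a huge flat directory
    return os.path.isdir(bucket_path(base, si_b2a))
```

Note that nesting fixes only the subdirectory-count limit; the lookup-speed question discussed above remains, since each `bucket_exists` call may still cost multiple disk seeks on a cold cache.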
Attachments (1)
- pipeline-sends.diff (1.5 KB), added by warner at 2007-09-09T00:10:23Z: patch to pipeline the hash-sends during upload