Some basic notes on performance:

== Memory Footprint ==

[wiki:MemoryFootprint The MemoryFootprint page] has more specific information.

The [http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_memstats.html munin graph] shows our static memory footprint on a 32-bit machine (starting a node but not doing anything with it) to be about 24MB. Uploading one file at a time gets the node to about 29MB. (We only process one segment at a time, so peak memory consumption occurs when the file is a few MB in size and does not grow beyond that.) Uploading multiple files at once would increase this.

We also have a [http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_memstats_64.html 64-bit graph], which currently shows a disturbingly large static footprint. We've determined that simply importing a few of our support libraries (such as Twisted) accounts for most of this expansion, before the node is even started. The cause is still being investigated: we can think of plenty of reasons for the 64-bit footprint to be 2x the 32-bit one, but the results show something closer to 6x.

== Network Speed ==

=== Test Results ===

Using a 3-server testnet in colo and an uploading node at home (on a DSL line that gets about 78kBps upstream and has a 14ms ping time to colo), 0.5.1-34 takes 820ms-900ms per 1kB file uploaded (80-90s for 100 files, 819s for 1000 files). 'scp' of 3.3kB files (simulating the 3.3x expansion) takes 8.3s for 100 files and 79s for 1000 files, about 80ms each. Doing the same uploads locally on my laptop (both the uploading node and the storage nodes are local) takes 46s for 100 1kB files and 369s for 1000 files.

Small files seem to be limited by a per-file overhead; large files are limited by the link speed. The munin [/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_speedstats_delay.html delay graph] and [/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_speedstats_rate.html rate graph] show these A+B*size numbers (per-file delay and transfer rate) for a node in colo and a node behind a DSL line. The [/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_speedstats_delay_rtt.html delay*RTT graph] shows the per-file delay as a multiple of the average round-trip time between the client node and the testnet. Much of the work done to upload a file involves waiting for messages to make a round trip, so expressing the per-file delay in units of RTT helps to compare the observed performance against the predicted value.

=== Roundtrips ===

The 0.5.1 release requires about 9 roundtrips for each share it uploads. The upload algorithm sends data to all shareholders in parallel, but these 9 phases are done sequentially. The phases are:

 1. allocate_buckets
 2. send_subshare (once per segment)
 3. send_plaintext_hash_tree
 4. send_crypttext_hash_tree
 5. send_subshare_hash_trees
 6. send_share_hash_trees
 7. send_UEB
 8. close
 9. dirnode update

We need to keep the send_subshare calls sequential (to keep our memory footprint down), and we need a barrier between the close and the dirnode update (for robustness and clarity), but the others could be pipelined. At 9*14ms = 126ms, these roundtrips account for about 15% of the measured upload time. Doing steps 2-8 in parallel (using the attached pipeline-sends.diff patch) does indeed seem to bring the time-per-file down from 900ms to about 800ms, although the results aren't conclusive.

With the pipeline-sends patch, my uploads take A+B*size time, where A is 790ms and B is 1/23.4kBps. 3.3/B (i.e. 3.3 * 23.4 ~= 77kBps) matches the speed that basic 'scp' gets, which ought to be my upstream bandwidth.
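As a rough worked example of this model (a sketch only: the constants are the fitted values quoted above, and the names are illustrative, not part of the Tahoe code):

{{{
# Sketch of the A + B*size upload-time model, using the fitted values from the
# pipeline-sends measurements above. Names here are illustrative only.

A = 0.790              # seconds of fixed per-file overhead (mostly roundtrips)
B = 1.0 / 23400.0      # seconds per plaintext byte (23.4 kBps effective rate)
EXPANSION = 3.3        # FEC expansion: bytes on the wire per plaintext byte
# implied raw upstream rate: EXPANSION / B ~= 77 kBps, close to the 78kBps DSL line

def predicted_upload_seconds(size_in_bytes):
    """Predicted time to upload a single file of the given plaintext size."""
    return A + B * size_in_bytes

for size in (1000, 100*1000, 10*1000*1000):
    print("%9d bytes: %6.1f s" % (size, predicted_upload_seconds(size)))
}}}

For a 1kB file this predicts about 0.83s, close to the ~800ms measured with the patch applied; for large files the A term vanishes into the noise and the B term (i.e. the link speed) dominates.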
This suggests that the main limitation to upload speed is the constant per-file overhead and the FEC expansion factor.

== Storage Servers ==

ext3 (on tahoebs1) refuses to create more than 32000 subdirectories in a single parent directory. In 0.5.1, this appears as a limit on the number of buckets (one per storage index) that any StorageServer can hold. A simple nested directory structure will work around this; the following code would let us manage 33.5G shares (see #150):

{{{
import os
from idlib import b2a

os.path.join(b2a(si[:2]), b2a(si[2:4]), b2a(si))
}}}

This limitation is independent of problems of memory use and lookup speed. Once the number of buckets is large, the filesystem may take a long time (and multiple disk seeks) to determine whether a bucket is present or not. The provisioning page suggests how frequently these lookups will take place, and we can compare this against the time each one will take to see whether we can keep up or not. If and when necessary, we'll move to a more sophisticated storage server design (perhaps with a database to locate shares). I was unable to measure a consistent slowdown resulting from having 30000 buckets in a single storage server.
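For concreteness, here is a self-contained sketch of how a storage server might do that presence check under the nested layout (base64.b32encode stands in for idlib.b2a, whose alphabet may differ, and the function names and storagedir argument are hypothetical):

{{{
import os
from base64 import b32encode

def _b2a(data):
    # stand-in for idlib.b2a: base-32 without padding, lowercased
    return b32encode(data).decode("ascii").rstrip("=").lower()

def bucket_path(storagedir, si):
    # two prefix levels derived from the storage index spread the buckets
    # across many directories, so no single directory holds them all
    return os.path.join(storagedir, _b2a(si[:2]), _b2a(si[2:4]), _b2a(si))

def has_bucket(storagedir, si):
    # each presence check is a stat() of the nested path; this is the lookup
    # whose per-bucket cost the paragraph above is concerned with
    return os.path.isdir(bucket_path(storagedir, si))
}}}

The nesting only changes where a bucket lives on disk; whether that stat() stays fast enough at scale is still the open question discussed above.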