(See also the copious notes and data about the performance of older versions of Tahoe-LAFS, archived at Performance/Old.)

In late 2010 Kyle Markley did some benchmarking of what were then the release candidates for Tahoe-LAFS v1.8.0. This helped us catch two major performance regressions in Brian's New Downloader and helped make Tahoe-LAFS v1.8.0 an excellent release (see epic ticket #1170 for mind-numbing details). Kyle also contributed his benchmarking scripts (written in Perl), but to my knowledge nobody has yet tried to re-use them.

 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-July/004776.html
 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-July/004781.html
 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-July/004782.html
 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-July/004838.html
 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-August/004853.html
 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-August/004985.html
 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-August/005007.html
 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-August/005011.html
 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-September/005155.html
 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-September/005162.html

We also experimented with different segment sizes and immutable-uploader pipeline depths. The results tentatively confirmed that the current segment size (128 KiB) and immutable-uploader pipeline depth (50,000 bytes) performed better on both of Kyle's networks than any of the alternatives he tried.

Along the way Terrell Russell did some benchmarking and contributed a bash script which I used several times during the process:

 * http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1170#comment:81

At about the same time Nathan Eisenberg of Atlas Networks ran a couple of manual benchmarks:

 * http://tahoe-lafs.org/pipermail/tahoe-dev/2010-August/004929.html

François Deppierraz has also run a few benchmarks. (I can't find a link to his results.)

What we ''really'' want, of course, is automated benchmarks that run at regularly scheduled intervals, or whenever a new patch is committed to revision control, or both. Ideally these would run on dedicated hardware, or at least on virtualized hardware with a fairly consistent load from other tenants, so that the measurements would not pick up too much noise from other people's activity. As you can see on Performance/Old, we used to have such an automated setup, including graphs of the resulting performance.
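
As a rough illustration of the kind of measurement such a periodically-run setup could collect, here is a minimal sketch (in Python; this is ''not'' Kyle's Perl script or Terrell's bash script) that times `tahoe put` and `tahoe get` through the CLI for a few file sizes and prints throughput. The file sizes and output format are arbitrary choices, and it assumes a running Tahoe-LAFS client node with the `tahoe` command on the PATH.

{{{
#!python
# Hypothetical minimal benchmark harness: times "tahoe put" and "tahoe get"
# through the CLI for a few file sizes and prints rough throughput numbers.
# Assumes a running Tahoe-LAFS client node and "tahoe" on the PATH.

import os
import subprocess
import tempfile
import time

SIZES = [1 * 2**20, 10 * 2**20, 100 * 2**20]   # 1 MiB, 10 MiB, 100 MiB

def timed(*argv):
    """Run a command, returning (elapsed seconds, decoded stdout)."""
    start = time.time()
    out = subprocess.check_output(argv)
    return time.time() - start, out.decode("utf-8")

for size in SIZES:
    # Upload incompressible random data so compression can't skew the numbers.
    src = tempfile.NamedTemporaryFile(suffix=".bench", delete=False)
    src.write(os.urandom(size))
    src.close()

    up_secs, cap = timed("tahoe", "put", src.name)   # "tahoe put" prints the file cap
    cap = cap.strip()

    dst = src.name + ".down"
    down_secs, _ = timed("tahoe", "get", cap, dst)

    print("%9d bytes: up %7.2f s (%8.1f KiB/s)  down %7.2f s (%8.1f KiB/s)"
          % (size, up_secs, size / up_secs / 1024.0,
             down_secs, size / down_secs / 1024.0))

    os.unlink(src.name)
    os.unlink(dst)
}}}

Something along these lines, run on a cron-like schedule and with the results logged over time, would give the consistent longitudinal data that the one-off manual benchmarks above could not.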