[tahoe-dev] segment size, RTT and throughput

Vladimir Arseniev vladimira at aport.ru
Wed Mar 28 22:37:12 UTC 2012


We're puzzled, and would appreciate comments.

Using the grid that I've described, we've looked at the impact of
multiple simultaneous uploads. We simply forked multiple "tahoe put"
processes (see the sketch below the table). Using the default segment
size and pipeline depth, we get:

 #    Helper upload (KB/s)   Push (KB/s)   Shares pushed    Files linked
 1             70                200        10                1
 3            220                240        10 (per file)     3
 4            190                250        10 (per file)     4
>4           <190               <200        10 (per file)     not all
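
For reference, here is a minimal sketch of how we fork the puts. The
local file names and the "tahoe:" alias are made up for illustration;
only the "tahoe put" invocation itself is as we ran it:

    import subprocess

    # Hypothetical test files; in our runs these were 1MB or 10MB each.
    files = ["file%02d.dat" % i for i in range(20)]

    # Fork one "tahoe put" per file, writing each under the same
    # (hypothetical) "tahoe:" alias, then wait for all of them.
    procs = [subprocess.Popen(["tahoe", "put", f, "tahoe:%s" % f])
             for f in files]
    for p in procs:
        p.wait()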

The client (an Ubuntu VM with one CPU, 512MB of memory and 1.3GB of
swap) does not cope well with more than five simultaneous uploads.
Although all shares of all files do get uploaded, some of the files
don't get linked into the grid directory, and their upload operations
are missing from "Recent Uploads and Downloads". Still, even with 20
simultaneous 1MB uploads, the client recovered and completed the last
10 successfully. That's impressive!
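
In case it's useful, this is roughly how we check which files actually
got linked (reusing the files list from the sketch above; the "tahoe:"
alias is again hypothetical):

    # List the grid directory and diff it against what we uploaded.
    listing = subprocess.check_output(["tahoe", "ls", "tahoe:"])
    linked = listing.decode("utf-8").split()
    missing = [f for f in files if f not in linked]
    print("not linked: %s" % ", ".join(missing))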

We've also started to look at the impact of increasing the segment
size (client.py) and the pipeline depth (layout.py), and we're puzzled.
For 10MB files, increasing the segment size to 1MB and the pipeline to
40MB doesn't accelerate uploads to the helper (60KB/s), but it does
accelerate pushing from the helper to the storage nodes (250KB/s). Why
might that be?
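
The mental model we've been using is the usual bandwidth-delay-product
bound: a sender that keeps at most W bytes in flight over a path with
round-trip time RTT can't exceed W/RTT, no matter how fast the link is.
A toy calculation (the 100ms RTT and 1000KB/s link are hypothetical,
just to show the shape of the bound; the ~50KB window is our reading of
the default pipeline in layout.py, so treat it as an assumption):

    # Throughput ceiling for a window-limited sender:
    #   throughput <= min(link_bandwidth, window / RTT)
    def ceiling_kbps(window_bytes, rtt_s, link_kbps):
        return min(link_kbps, window_bytes / 1024.0 / rtt_s)

    # Windows spanning a small default-ish pipeline (~50KB), one 1MB
    # segment, and our 40MB pipeline; RTT and link speed are made up.
    for window in (50 * 1024, 1024 * 1024, 40 * 1024 * 1024):
        kbps = ceiling_kbps(window, 0.1, 1000.0)
        print("window %8d B -> at most %.0f KB/s" % (window, kbps))

If the push path were window-limited like this, widening the pipeline
should speed it up, which matches what we see. It doesn't explain why
the helper-upload rate stays flat, though, hence our puzzlement.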


