[tahoe-dev] [tahoe-lafs] #1264: Performance regression for large values of K
tahoe-lafs
trac at tahoe-lafs.org
Sun Nov 28 08:59:58 UTC 2010
#1264: Performance regression for large values of K
------------------------------+---------------------------------------------
     Reporter:  francois      |          Owner:  francois
         Type:  defect        |         Status:  new
     Priority:  major         |      Milestone:  soon
    Component:  code-network  |        Version:  1.8.0
   Resolution:                |       Keywords:  performance regression download
Launchpad Bug:                |
------------------------------+---------------------------------------------
Comment (by zooko):
Replying to [comment:13 francois]:
> Oh! Sorry, the previous profile was indeed generated by
> {{{twistd --debug}}}, my bad! Here's a new one.
Did this one take about half as much time to download the 10 MB file? The
profiler reports 130 seconds for this one (which it incorrectly labels "CPU
seconds" when it is really wall-clock time), and it reported about 240
seconds for the last one. Hey, in fact, does this mean that almost all of
the reported performance regression is due to François accidentally turning
on debug mode? :-)
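(Aside: that "CPU seconds" label comes from {{{pstats}}}, which prints it
regardless of which timer the profiler actually used, and cProfile's default
timer is wall-clock. A minimal sketch of the distinction, using a
hypothetical {{{work()}}} function as a stand-in for the download:)
{{{
import cProfile, pstats, time

def work():
    # hypothetical stand-in: like this download, it mostly waits
    time.sleep(1.0)

# Default timer is wall-clock, yet pstats still labels it "CPU seconds":
p = cProfile.Profile()
p.runcall(work)
pstats.Stats(p).print_stats()       # ~1.0 "CPU seconds" (really wall time)

# With a real CPU-time timer, the same run shows almost no CPU use:
p = cProfile.Profile(time.clock)    # time.clock is CPU time on Unix/Python 2
p.runcall(work)
pstats.Stats(p).print_stats()       # ~0.0 seconds
}}}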
In any case, here are the listings ordered by cumulative time and by
internal time, showing all the functions that took more than 1% of the
total time:
{{{
         8948704 function calls (8908705 primitive calls) in 129.678 CPU seconds

   Ordered by: cumulative time
   List reduced from 1277 to 64 due to restriction <64>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000  129.678  129.678 base.py:1046(run)
        1    0.032    0.032  129.678  129.678 base.py:1051(mainLoop)
     1705    0.065    0.000  110.454    0.065 selectreactor.py:93(doSelect)
     1705  101.247    0.059  101.373    0.059 {select.select}
     1705    0.049    0.000   19.137    0.011 base.py:666(runUntilCurrent)
     1173    0.055    0.000   18.963    0.016 eventual.py:18(_turn)
     6428    0.270    0.000   17.554    0.003 share.py:188(loop)
     6428    0.170    0.000   16.151    0.003 share.py:239(_do_loop)
     6428    0.357    0.000    9.349    0.001 share.py:709(_send_requests)
51318/30610    0.784    0.000    9.008    0.000 defer.py:314(_runCallbacks)
     4939    0.052    0.000    8.991    0.002 log.py:38(callWithLogger)
     4939    0.061    0.000    8.932    0.002 log.py:33(callWithContext)
     4939    0.033    0.000    8.804    0.002 context.py:58(callWithContext)
     4939    0.054    0.000    8.766    0.002 context.py:32(callWithContext)
     4939    0.047    0.000    8.695    0.002 selectreactor.py:144(_doReadOrWrite)
25616/20408    0.089    0.000    8.362    0.000 defer.py:233(callback)
25622/20414    0.105    0.000    8.247    0.000 defer.py:291(_startRunCallbacks)
5231/5177    0.043    0.000    7.703    0.001 defer.py:84(maybeDeferred)
     5049    0.030    0.000    7.472    0.001 referenceable.py:406(callRemote)
     5049    0.208    0.000    7.398    0.001 referenceable.py:418(_callRemote)
     4828    0.022    0.000    7.087    0.001 share.py:741(_send_request)
     2495    0.045    0.000    6.566    0.003 tcp.py:345(doRead)
     5050    0.032    0.000    6.500    0.001 banana.py:183(send)
     5050    0.087    0.000    6.468    0.001 root.py:92(send)
     2423    0.026    0.000    6.351    0.003 tcp.py:116(doRead)
     5121    0.787    0.000    6.090    0.001 banana.py:191(produce)
     1891    0.037    0.000    5.503    0.003 banana.py:633(dataReceived)
     1891    0.868    0.000    5.464    0.003 banana.py:701(handleData)
   155707    1.606    0.000    3.148    0.000 abstract.py:161(write)
     8028    0.144    0.000    3.058    0.000 share.py:268(_get_satisfaction)
    23231    0.338    0.000    3.057    0.000 hashtree.py:298(needed_hashes)
51317/45429    0.245    0.000    2.934    0.000 defer.py:172(addCallbacks)
     6428    0.239    0.000    2.831    0.000 share.py:560(_desire)
    23231    0.518    0.000    2.716    0.000 hashtree.py:128(needed_for)
     5629    0.105    0.000    2.706    0.000 banana.py:1049(handleClose)
    40405    0.343    0.000    2.609    0.000 banana.py:485(sendToken)
     5049    0.096    0.000    2.515    0.000 call.py:652(receiveClose)
    44426    0.900    0.000    2.475    0.000 log.py:144(msg)
    60873    0.329    0.000    2.220    0.000 banana.py:22(int2b128)
     2263    0.087    0.000    1.890    0.001 abstract.py:90(doWrite)
     5049    0.017    0.000    1.807    0.000 call.py:673(_done)
     5049    0.041    0.000    1.790    0.000 call.py:55(complete)
   150076    0.508    0.000    1.742    0.000 hashtree.py:118(sibling)
     2263    0.019    0.000    1.732    0.001 tcp.py:364(writeSomeData)
     2032    0.012    0.000    1.723    0.001 tcp.py:140(doWrite)
     2032    0.011    0.000    1.581    0.001 tcp.py:152(writeSomeData)
     2032    1.552    0.001    1.552    0.001 {built-in method send}
    61062    0.312    0.000    1.392    0.000 spans.py:25(__init__)
   155628    0.536    0.000    1.283    0.000 tcp.py:256(startWriting)
    44426    0.848    0.000    1.117    0.000 log.py:227(add_event)
    36748    0.421    0.000    1.091    0.000 spans.py:46(add)
}}}
{{{
         8948704 function calls (8908705 primitive calls) in 129.678 CPU seconds

   Ordered by: internal time
   List reduced from 1277 to 64 due to restriction <64>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1705  101.247    0.059  101.373    0.059 {select.select}
   155707    1.606    0.000    3.148    0.000 abstract.py:161(write)
     2032    1.552    0.001    1.552    0.001 {built-in method send}
    44426    0.900    0.000    2.475    0.000 log.py:144(msg)
}}}
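(For anyone who wants to regenerate listings like the two above from a saved
profile, a sketch along these lines should work; the dump filename
{{{profile.out}}} is hypothetical, e.g. from {{{twistd --savestats
--profiler=cProfile --profile=profile.out}}}:)
{{{
import pstats

s = pstats.Stats("profile.out")    # hypothetical dump file
s.strip_dirs()                     # shorten paths, e.g. share.py:188(loop)
s.sort_stats("cumulative")
s.print_stats(64)                  # first listing: "Ordered by: cumulative time"
s.sort_stats("time")
s.print_stats(64)                  # second listing: "Ordered by: internal time"
}}}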
We can see the share.py functions in the cumulative listing, showing that
we spent 13.5% of the total time (17.554 s of 129.678 s) inside share.py's
{{{loop()}}}, but other than that there aren't any suspicious CPU users.
The process spends most of its time (101.373 s, about 78%) blocked in
{{{select.select}}} waiting for the network, which is as it should be.
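A quick back-of-the-envelope check of those two figures against the
listings above:
{{{
total = 129.678            # wall-clock seconds for the whole run
print(17.554 / total)      # share.py loop() cumtime  -> ~0.135 (13.5%)
print(101.373 / total)     # select.select cumtime    -> ~0.782 (78%)
}}}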
--
Ticket URL: <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1264#comment:14>
tahoe-lafs <http://tahoe-lafs.org>
secure decentralized storage