#20 closed defect (fixed)
unit tests take too long
| Reported by: | zooko | Owned by: | warner |
|---|---|---|---|
| Priority: | major | Milestone: | eventually |
| Component: | code | Version: | unknown |
| Keywords: | test performance | Cc: | |
| Launchpad Bug: | | | |
Description (last modified by warner)
The best way to fix this would be to optimize the code under test so that it passes the tests faster.
The second best way would be to optimize the tests so that they test the same thing but faster.
The third best way would be to reduce the "replications, redundancy, random stabs in the dark" and so forth, and the tests of scaling. (This cries out for another category of test that reports on scaling behavior...)
The fourth best way would be to judiciously prune tests which have been superseded by more comprehensive tests and are now redundant.
Change History (34)
comment:1 Changed at 2007-05-02T23:34:23Z by warner
comment:2 Changed at 2007-05-04T07:44:43Z by warner
curiously, the 'dapper' buildslave just went from taking 40+ minutes to run the test suite down to 12 minutes. I'm not quite sure how it helped, but the proximate change was to remove the .pyc files before running the tests: the build process (that whole setup.py install --prefix/--root thing) seems to put bogus source file names in the .pyc files, messing up stack traces and figleaf output.
For at least one test run (on the edgy buildslave), I used 'strace' to watch the test's progress, and I saw a whole boatload of futex() syscalls interspersed with failed attempts to open (or maybe just access()) bogus .py filenames that started with /lib . It feels like something was trying to find its source code, failing, and not caching the failure. There weren't a lot of stack traces taking place at the time (it was in the middle of test_system, somewhere between send_block_hashes and close_buckets), at least none that made it into the logs.
Oh! But, if Deferred debugging were turned on, that would capture a stack trace inside every single Deferred, both at creation and at invocation. If it were trying to open a source file for those stack traces, that would be a lot of path searching and failed open() calls.
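The cost is easy to demonstrate without Twisted at all: capturing a formatted stack trace on every callback (roughly the extra work defer.setDebugging(True) triggers at Deferred creation and invocation) is far more expensive than just passing a result through. A minimal sketch, with hypothetical callback functions standing in for Deferred internals:

```python
import timeit
import traceback

def plain_callback(result):
    # normal path: just hand the result through
    return result

def debug_callback(result):
    # with debugging on, a stack trace is captured per invocation,
    # which can also trigger source-file lookups for every frame
    stack = traceback.format_stack()
    return result, stack

n = 10000
plain = timeit.timeit(lambda: plain_callback(42), number=n)
debug = timeit.timeit(lambda: debug_callback(42), number=n)
print(f"plain: {plain:.4f}s  with stack capture: {debug:.4f}s")
```

On any machine the stack-capturing variant is dramatically slower, which is consistent with the factor-of-2 slowdown observed for the whole suite.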
Huh, look at that. Removing the defer.setDebugging(True) from test_introducer.py reduces the time to run the unit tests on my laptop from 1m56s to 55s, a factor of 2. Guess I should fix that.
comment:3 Changed at 2007-05-04T07:57:10Z by warner
- Resolution set to fixed
- Status changed from new to closed
wow. the dapper buildslave, our slowest, was taking half an hour to run the unit test suite yesterday. After removing defer.setDebugging() (and dealing with the .pyc problem in #35), the dapper buildslave just finished a test run in two minutes. Cool.
I think I can close this ticket now. When we add enough tests for these two minutes to become 10, let's revisit the ideas presented above.
comment:4 Changed at 2008-12-08T22:41:28Z by zooko
- Resolution fixed deleted
- Status changed from closed to reopened
Okay, nowadays it takes about 20 minutes to run our tests on the Dapper buildslave. More troublesome for me is that it takes about 5 minutes to run on my own personal workstation. Hopefully we'll implement #217 (DSA-based mutable files -- small URLs, fast file creation), which should help a bit. Also, could we use DSA instead of RSA for foolscap tub ids? Or even unencrypted foolscap with randomly-generated tub ids?
comment:5 Changed at 2008-12-08T23:01:34Z by warner
I've considered building a NotQuiteSystemTestMixin framework which would use real storage servers but not real Tubs, bypassing the serialization and network-roundtrip parts of the framework. I figure that would cut out a lot of the time spent in system-ish tests.
We're using a lot more SystemTestMixin-style tests these days, partially because they're easier to set up (fewer "fake" objects), and partially because the system is basically complete (I wrote a lot of unit tests for, say, upload, before there was any code to perform download). The SystemTestMixin takes a second or two just to get initialized and establish all the connections.
comment:6 Changed at 2008-12-10T04:26:11Z by warner
Oh, so source:src/allmydata/test/test_mutable.py LessFakeClient is basically the framework I was thinking of. It uses real StorageServer instances but doesn't use Foolscap or an introducer. It can create a client and 10 connected storage servers in 7 milliseconds, whereas the usual SystemTestMixin takes 1.4 seconds to construct 5 client+servers and wait for them to connect. Most upload/download operations should be faster too.
comment:7 Changed at 2009-01-10T15:20:24Z by zooko
On my main workstation, if it happens to be loaded down with other work, it can take more than 15 minutes to run all the tahoe tests. This interferes with my workflow.
Also I see that the cygwin buildslave took about 8 hours to run all the steps of its builder yesterday.
comment:8 Changed at 2009-12-13T03:52:08Z by davidsarah
- Keywords test performance added
- Priority changed from minor to major
This still definitely takes too long (almost 9 minutes just now). It's quite disruptive.
Also, it would be helpful to document on the Dev page how to run a subset of the tests (I did work out how to do this but I've already forgotten the incantation).
comment:9 Changed at 2009-12-14T02:05:18Z by zooko
davidsarah, where would such documentation live? I thought it would be better in the docs directory than on the wiki, but examining both places I didn't see a natural place to describe it. Anyway, the documentation should include a link to http://twistedmatrix.com/trac/wiki/TwistedTrial plus the specific tip that trial allmydata.test.test_module.TestClass.test_func is the way to run just one test function, and likewise for other prefixes of that test identifier.
comment:10 Changed at 2009-12-17T23:47:00Z by warner
FYI, "trial $TESTNAME" doesn't work, because it doesn't set up PYTHONPATH and PATH and whatnot. I use make quicktest TEST=$TESTNAME. Using make test TEST=$TESTNAME does something similar, but also does a rebuild first, which adds maybe 10 seconds and an awful lot of noise (but works right after a source checkout, whereas quicktest requires an explicit build first).
python setup.py test -s $TESTNAME is equivalent to make test TEST=$TESTNAME, since the latter is implemented in terms of the former.
We could probably add a "developers.txt" file to the docs/ directory. This hint could live there, along with a how-is-the-code-organized overview and maybe a description of the various Makefile/setup.py commands that are available. OTOH, I usually just look for these descriptions in the Makefile and setup.py, so putting them elsewhere seems a bit weird.
comment:11 Changed at 2010-10-27T05:39:25Z by zooko
An interesting next-step would be to turn on the profiler feature of trial and see which Python functions are taking the most time. :-)
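Even without trial's profiler support, the same information can be pulled out of a suspect code path with the stdlib profiler. A minimal sketch (slow_setup is a hypothetical stand-in for an expensive test fixture, not a real Tahoe function):

```python
import cProfile
import io
import pstats

def slow_setup():
    # hypothetical stand-in for an expensive test fixture
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
slow_setup()
profiler.disable()

# print the top 5 functions sorted by cumulative time
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

The cumulative-time column is the one to sort by when hunting for slow test fixtures, since it charges callees' time back to the setup function that invoked them.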
comment:12 Changed at 2011-07-21T19:44:29Z by davidsarah
- Milestone set to eventually
comment:13 Changed at 2012-04-23T22:32:06Z by zooko
The first person who submits a patch that reduces runtime of the full unit test suite, on Brian's laptop, by 20%, without causing any code to be less covered by tests, will receive 50 ⓑ from me. (Current market value, about $250.00.)
Offer good until I change my mind, so hurry up.
comment:14 Changed at 2012-04-23T22:40:04Z by zooko
Oh wait, did I say reduce by 20%? I changed my mind. To get my 50ⓑ, you have to make it 5X faster; in other words, reduce it to 20% of the current total runtime on Brian's laptop.
Note that Brian's laptop has some kind of performance problem with disk I/O, such that unit tests take about 24 minutes on there, compared to these runtimes on our buildslaves:
- https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/lucid-amd64/builds/58 (12 minutes)
- https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/Atlas%20ubuntu%20natty/builds/65/steps/test (15 minutes)
- https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/Ruben%20Fedora/builds/65 (13 minutes)
- https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/Kyle%20OpenBSD%20amd64/builds/72 (10 minutes)
So you might be able to win by figuring out where our unit tests are relying on disk I/O and replacing those parts of the unit tests with in-memory fake objects. That would almost certainly not lose coverage over any line of code.
I probably won't change my mind again for at least a few days, so get crackin'!
comment:15 Changed at 2012-04-23T23:10:56Z by zooko
In the unlikely and happy circumstance that more than one person posts a winning patch, the order that they appear as attachments to this ticket will determine who has priority. Also, if someone else wins, but you post a patch which is both original (you didn't copy it from the winner) and has some independent value (it contains code which we might like to adopt even after we adopted the winner's), then I might consider giving you a consolation prize of Bitcoin.
comment:16 follow-ups: ↓ 20 ↓ 21 Changed at 2012-04-24T02:04:39Z by davidsarah
I won't have time to work on this ticket, but here are some ideas:
- create _trial_temp in a tmpfs.
- run the tests in parallel (obviously this will only help on multicore machines).
To test the first of these, I created a tmpfs using sudo mount -t tmpfs -o size=330m tmpfs _trial_temp, and hacked Twisted Trial so that it would just use this existing directory rather than trying to delete and recreate it.
Timings for time bin/tahoe debug trial on my machine with _trial_temp on disk:

    real 14m40.738s  user 7m34.270s  sys 0m30.600s

Corresponding timings with _trial_temp on a tmpfs:

    real 9m21.444s  user 7m16.930s  sys 0m26.130s
Notice that the user CPU time is only slightly reduced, but the real time is reduced by ~36%, consistent with a significant improvement in I/O wait time. The machine was under memory pressure and I would expect the results to be even better if it hadn't been.
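The ~36% figure checks out; working it through with the numbers from the timings above:

```python
def to_seconds(minutes, seconds):
    return minutes * 60 + seconds

on_disk = to_seconds(14, 40.738)   # real time with _trial_temp on disk
on_tmpfs = to_seconds(9, 21.444)   # real time with _trial_temp on tmpfs

reduction = (on_disk - on_tmpfs) / on_disk
print(f"real-time reduction: {reduction:.1%}")  # about 36%
```

The user CPU times (7m34s vs 7m17s) barely move, so nearly all of the saved 5m19s of wall-clock time is I/O wait eliminated by the tmpfs.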
The 330 MiB tmpfs could be smaller if the test suite were optimized to not write as much (or to delete files created for tests that pass).
comment:17 Changed at 2012-04-24T02:08:45Z by davidsarah
Sigh, should have thought of the tmpfs idea in the hour between comment:13 and comment:14 ;-)
comment:18 Changed at 2012-04-24T02:13:33Z by davidsarah
Incidentally, the test run in comment:16 left ~324 MiB of memory being used by the tmpfs, so the approach would obviously need more work to be acceptable for general use.
comment:19 Changed at 2012-05-07T22:30:23Z by warner
FYI, on my work laptop (2010 MacBookPro, with OS-X "Snow Leopard" Filevault non-FDE encrypted home directory), the full test suite (current git master, 8aa690b) takes 27m48s to run, of which 7m23s is "user" and 17m19s is "sys", and leaves 233MiB in _trial_temp (of which 118MiB is allmydata.test.test_mutable, 54MiB is cli, and 5-10MiB are used by system, repairer, dirnode, test_upload, test_download, hung_server, and test_encode).
That feels like an awful lot of data being generated. I'd start by looking at test_mutable and seeing if we can get rid of most of the k=N=255 cases. We probably need to keep one of them to exercise that end of the encoding spectrum, but I really doubt we need to do it for every single test_mutable case.
I'd also walk through the tests that upload data and look at the filesizes they're using. Many are casually creating 10MB files, resulting in 30MB of shares. We really don't need to cover more than a few segments, so 1MB should be plenty, and in most cases we should be able to lower the min-segment-size to cover a multi-segment file with just a few kB.
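To put numbers on that (the 128 KiB default max segment size here is my assumption about the then-current defaults, so treat it as illustrative):

```python
import math

ASSUMED_SEGMENT_SIZE = 128 * 1024  # assumed default max segment size

def num_segments(file_size, segment_size=ASSUMED_SEGMENT_SIZE):
    # number of segments a file of file_size bytes is split into
    return max(1, math.ceil(file_size / segment_size))

# a casually-created 10 MB test file: many segments, lots of share data
print(num_segments(10 * 1024 * 1024))                   # 80 segments
# a 1 MB file still exercises the multi-segment code paths
print(num_segments(1024 * 1024))                        # 8 segments
# lowering the segment size covers multi-segment logic with a few kB
print(num_segments(64 * 1024, segment_size=16 * 1024))  # 4 segments
```

The multi-segment boundary conditions are the same whether the segments are 128 KiB or 16 KiB, which is why shrinking both the file and the segment size preserves coverage while cutting I/O.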
I'd also look into moving most tests to use the common NoNetworkGrid class, and make sure that it's using fake server connections to avoid the SSL spinup time (except for test_system, of course, which needs to exercise the SSL connections too). We have about half a dozen fake-server classes in the test suite, partially because I was too lazy to improve an existing one and decided to create new ones instead. So there's probably some cleanup there that would help.
comment:20 in reply to: ↑ 16 Changed at 2012-10-23T22:04:11Z by davidsarah
Replying to davidsarah:
- run the tests in parallel (obviously this will only help on multicore machines).
See http://twistedmatrix.com/trac/ticket/1784. Summary: you'll be able to do
bin/tahoe debug trial -j NUMBER_OF_PROCESSES
when that is released in the next version of Twisted. You'll need a lot of memory though!
comment:21 in reply to: ↑ 16 ; follow-up: ↓ 22 Changed at 2012-10-23T22:22:13Z by davidsarah
Replying to davidsarah:
To test the first of these, I created a tmpfs using sudo mount -t tmpfs -o size=330m tmpfs _trial_temp, and hacked Twisted Trial so that it would just use this existing directory rather than trying to delete and recreate it.
A better way to do that is using --temp-directory: trial will still attempt to delete the directory you specify, so it can't directly be the tmpfs mount, but that's OK as long as it is underneath the tmpfs mount:
    mkdir -p tmp
    sudo mount -t tmpfs -o size=330m tmpfs tmp
    bin/tahoe debug trial --rterrors --temp-directory=tmp/_trial_temp
When done:
    sudo umount tmp
    rmdir tmp
These tests:
    allmydata.test.test_runner.BinTahoe.test_import_in_repl
    allmydata.test.test_runner.BinTahoe.test_the_right_code
will spuriously fail because the current directory is not as expected by the test.
comment:22 in reply to: ↑ 21 Changed at 2012-10-23T23:57:06Z by davidsarah
Replying to davidsarah:
A better way to do that is using --temp-directory...
Pull request for a "make quickertest" target is at https://github.com/tahoe-lafs/tahoe-lafs/pull/17. It includes a fix for the test_runner failures.
Note that, like "make quicktest", "make quickertest" assumes that it is run in a Tahoe-LAFS source distribution that is already built. Use "make quickertest TEST=<testname>" to run a specific test.
comment:23 Changed at 2012-10-23T23:59:23Z by davidsarah
- Keywords review-needed added
- Owner changed from somebody to warner
- Status changed from reopened to new
- Version set to unknown
comment:24 Changed at 2012-10-24T00:05:09Z by davidsarah
- Milestone changed from eventually to 1.10.0
comment:25 Changed at 2012-10-25T01:19:19Z by David-Sarah Hopwood <david-sarah@…>
In edc1f5f67fb34734:
comment:26 Changed at 2012-10-30T17:21:28Z by davidsarah
- Keywords review-needed removed
comment:27 Changed at 2012-10-30T17:21:51Z by davidsarah
- Milestone changed from 1.10.0 to eventually
comment:28 Changed at 2016-03-16T08:49:19Z by warner
- Description modified (diff)
I noticed today that the full test suite takes 30 minutes on my (slow) linux box. Part of the problem is the size of the files created by the tests: the _trial_temp directory has 340MB of data at the end of the run, which is kind of excessive.
The worst offender is test_mutable, which creates 132MB. test_cli makes 76MB. The next heaviest ones are test_system, test_download, test_dirnode, test_upload, and test_repairer, which create 12MB-14MB each. There's another 6 that produce 5-10MB, and the rest are 3MB or less.
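A quick way to reproduce this breakdown after a run is to total up the per-test subdirectories of _trial_temp; a stdlib-only sketch:

```python
import os

def dir_sizes(root):
    """Map each top-level entry under root to its total size in bytes."""
    sizes = {}
    for entry in os.listdir(root):
        path = os.path.join(root, entry)
        total = 0
        if os.path.isdir(path):
            # sum every file in the subtree
            for dirpath, _dirs, files in os.walk(path):
                for name in files:
                    total += os.path.getsize(os.path.join(dirpath, name))
        elif os.path.isfile(path):
            total = os.path.getsize(path)
        sizes[entry] = total
    return sizes

def report(root="_trial_temp"):
    # biggest offenders first
    for entry, size in sorted(dir_sizes(root).items(), key=lambda kv: -kv[1]):
        print(f"{size / 1e6:8.1f} MB  {entry}")
```

Running report() after the suite finishes gives the same kind of test_mutable/test_cli ranking described above, without needing du.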
Note: now that we're on tox, the fastest cycle time for a specific test (when you're doing a tight edit-test-coverage-repeat loop) is something like:
- .tox/py27/bin/trial allmydata.test.test_foo.Bar.test_blah
Or:
- source .tox/py27/bin/activate
- coverage run `which trial` allmydata.test.test_foo.Bar.test_blah
That won't do anything extra (installs, updates, etc). I'm no longer concerned about having a make quicktest or the like: virtualenvs are good enough for me now.
comment:29 Changed at 2016-08-04T19:48:13Z by warner
I'm working on a branch that fixes a bunch of this. test_mutable uses smaller uploaded files, and stops exercising the N=255 case, which reduces the time by a factor of 7. test_cli uses a grid with k=N=1, which reduces it by 50%. Smaller speedups were obtained by using k=N=1 on all other NoNetworkGrid tests that don't actually exercise multiple shares.
test_system is still pretty bad, I haven't yet tried to tackle it.
comment:30 Changed at 2016-08-06T00:17:21Z by Brian Warner <warner@…>
In 594dd26/trunk:
comment:31 Changed at 2016-12-23T13:44:32Z by rvs
Related: #2845
comment:32 Changed at 2020-01-13T19:45:20Z by exarkun
The test suite is faster now. How fast is fast enough? I don't know. I would love it if I could run the whole test suite locally in zero seconds. That would certainly be fast enough. It's probably not practical, though.
What I can do is run the unit test suite locally on my 4 core laptop in about 120 seconds. This is no doubt largely due to hardware improvements since this ticket was originally filed. However, whatever the reason, 120 seconds doesn't seem absurdly excessive, particularly considering that comments on this ticket above are discussing run times of 1000-2000 seconds.
We should certainly remain open to further speedups but I think this ticket has served its purpose. Future speedups can fall under the umbrella of some other ticket with a narrower focus.
I decline the Bitcoin offered by Zooko above for observing this current state of affairs (and anyway I don't have Brian's laptop to verify the result).
comment:33 Changed at 2020-01-14T20:23:47Z by exarkun
- Resolution set to fixed
- Status changed from new to closed
comment:34 Changed at 2020-01-15T19:56:17Z by warner
thanks for closing this!
FWIW, tox -e py2 on my current Debian/buster laptop currently spends 276 seconds in trial (and fails test_util.FileUtil.test_create_long_path for some reason).
One source of improvement would be the test_encode.Encode.test_send_NNN tests, which seek to cover fencepost issues in both the size of the plaintext (relative to the segment size) and the number of segments (relative to the power-of-two size of the block hash tree).
At the moment these tests use 100 shares each, while we could probably get the same test coverage by using 10 shares per test.
On the other hand, test_encode appears to take about 1% of the total test time on my system. test_system takes 4%.
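For reference, the fencepost on the block-hash-tree side is the padding of the tree to a power-of-two number of leaves; a sketch of that rounding (my illustration, not the actual code in allmydata.hashtree):

```python
def hashtree_leaves(num_segments):
    """Leaves in a Merkle tree padded up to the next power of two."""
    leaves = 1
    while leaves < num_segments:
        leaves *= 2
    return leaves

# segment counts just below, at, and just above a power of two hit
# different tree shapes -- the boundaries test_send_NNN aims at
for n in (7, 8, 9):
    print(n, "segments ->", hashtree_leaves(n), "leaves")
```

Those boundary tree shapes depend only on the segment count, not on the number of shares, which is why cutting from 100 shares to 10 should preserve the fencepost coverage.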