wiki:HowToWriteTests

create a test file

Choose a name for your test file. We'll use test_fname.py:

touch src/allmydata/test/test_fname.py
python -m twisted.trial allmydata.test.test_fname

Okay, so that was boring because there are no tests in the file. Add these contents:

from testtools.matchers import (
    Never,
)
from .common import (
    TestCase,
)

class T(TestCase):
    def test_a(self):
        # Never() is a matcher that matches nothing, so this assertion
        # always fails -- handy for proving that the test actually runs.
        self.assertThat("a", Never())

Now run it!
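python -m twisted.trial allmydata.test.test_fname

This time trial reports one failing test. That is expected: Never() matches nothing, so test_a always fails. The failure is your proof that trial found the file and ran the test in it. As you write real tests, replace the Never() assertion with assertions about real behavior.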

no mocks policy

You may notice that some existing tests use the mock module. We are phasing out this style of testing (see https://nedbatchelder.com/blog/201206/tldw_stop_mocking_start_testing.html and https://martinfowler.com/articles/mocksArentStubs.html); please do not use it in new tests. Instead, factor units and interfaces so that mocks are not required. If that is not possible, implement the units so that they don't need to be mocked and can be used directly in the test suite. If even that is not possible, implement a verified fake of the required interface and use the fake in the test suite, as in the sketch below.
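To make the "verified fake" idea concrete, here is a minimal sketch. None of these names (RealStore, FakeStore, and so on) exist in the codebase; they are hypothetical stand-ins. The trick is to write the tests once, against the interface, and run them against both the real implementation and the fake. A fake that passes the same tests as the real thing has earned the word "verified", and other tests can then depend on it instead of a mock.

from .common import (
    TestCase,
)

class RealStore(object):
    # The real implementation; imagine this talking to actual storage.
    def __init__(self):
        self._backend = {}
    def put(self, key, value):
        self._backend[key] = value
    def get(self, key):
        return self._backend[key]

class FakeStore(object):
    # An in-memory fake of the same interface: fast, no dependencies.
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class StoreTestsMixin(object):
    # One body of tests, written against the interface only.
    def test_roundtrip(self):
        store = self.make_store()
        store.put(b"key", b"value")
        self.assertEqual(store.get(b"key"), b"value")

class RealStoreTests(StoreTestsMixin, TestCase):
    def make_store(self):
        return RealStore()

class FakeStoreTests(StoreTestsMixin, TestCase):
    # If FakeStore passes exactly the tests RealStore passes, it is
    # "verified", and other test modules may safely use it.
    def make_store(self):
        return FakeStore()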

code coverage

Now install Ned Batchelder's "coverage" tool and run your test with code coverage, like this:

python -m coverage run -m twisted.trial allmydata.test.test_fname

This does the same thing as running the tests without coverage, printing a report of what happened as each test ran, but it also writes out a data file named .coverage.<something> in the current directory. Run the following commands to combine the data files and produce nice HTML pages:

python -m coverage combine
python -m coverage html

That will produce a directory named htmlcov. Open htmlcov/index.html in a web browser to view the report.
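One more knob worth knowing about: coverage.py records branch-level coverage (which the next section refers to) only when asked. If the project's .coveragerc does not already enable it, you can pass the --branch flag yourself:

python -m coverage run --branch -m twisted.trial allmydata.test.test_fname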

using code coverage results

This is important: we do not treat code coverage numbers as a litmus test (like "aim to have 90% of lines covered"). We hardly even treat them as a scalar measure of goodness: 91% code coverage is not necessarily better than 90% code coverage. Maybe the alternative was to remove some (covered) lines of code that were not necessary, which would have produced a worse "code coverage" number but a better codebase. Finally, note that even 100% branch-level coverage does not mean your tests exercise all the ways the code can run. There can be data-dependent bugs, such as a divide-by-zero error, or a code path that sets one variable to a value inconsistent with another variable. Such bugs can escape the test suite even though every line and every branch of the code is executed by it.
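As a hypothetical illustration (this code is not from the codebase): the tests below execute every line and every branch of mean_of_ratios, including the empty-list branch and the loop body, yet they never supply a zero denominator, so the latent ZeroDivisionError is never exercised.

from .common import (
    TestCase,
)

def mean_of_ratios(pairs):
    # Average of numerator/denominator over a list of (n, d) pairs.
    total = 0.0
    for numerator, denominator in pairs:
        total += numerator / denominator  # crashes if denominator == 0
    return total / len(pairs) if pairs else 0.0

class MeanOfRatiosTests(TestCase):
    def test_typical(self):
        self.assertEqual(mean_of_ratios([(1.0, 2.0), (3.0, 4.0)]), 0.625)
    def test_empty(self):
        self.assertEqual(mean_of_ratios([]), 0.0)

A coverage report would show 100% for this code, but it cannot tell you that (1.0, 0.0) is a legal-looking input that crashes the function; only thinking about the function can.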

So what do we use it for? It is a lens through which to view your code and your test code. You should look at the code coverage results and think about what it says about your tests. Think about “what could go wrong” in this function — where bugs could be in this function or a future version of it — and whether the current tests would catch those bugs. Both authors of patches and reviewers of patches should look at the code coverage results, and see if they indicate important holes in the tests.

Code coverage displays turn out to be very handy for showing you facts about your tests and your code that you didn't know.

turning on verbose logging during unit tests

trunk/docs/logging.rst#log-messages-during-unit-tests

further reading

http://twistedmatrix.com/documents/current/core/howto/testing.html

Last modified 2021-09-15T10:44:29Z