[tahoe-lafs-trac-stream] [Tahoe-LAFS] #2297: improve precision of coverage reports by including coverage by subprocesses
Tahoe-LAFS
trac at tahoe-lafs.org
Mon Jan 13 20:23:57 UTC 2020
#2297: improve precision of coverage reports by including coverage by subprocesses
------------------------+---------------------------------
Reporter: daira | Owner:
Type: defect | Status: new
Priority: normal | Milestone: undecided
Component: code | Version: 1.10.0
Resolution: | Keywords: coverage subprocess
Launchpad Bug: |
------------------------+---------------------------------
Comment (by exarkun):
I think it *should* be possible to eliminate most subprocess use in the
test suite. The tests that run subprocesses are error-prone and slow, on
top of being a problem for coverage measurement.
*All* code should be covered by unit tests in-process. It may be
necessary to *also* have some tests which run a small number of very
simple subprocesses in order to test real code that does subprocess
management, but those tests should not exercise any other application
logic.
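As a rough sketch of the shape such a test could take (the child here is
just `python -c` exiting with a known code, so it is fast, deterministic,
and runs no application logic):

    import subprocess
    import sys
    import unittest

    class TrivialChildProcessTests(unittest.TestCase):
        # Sketch: exercise process management against a trivial child
        # instead of a full Tahoe-LAFS node.

        def test_exit_code_is_observed(self):
            # The child only exits with a known code; the assertion is
            # about how the parent observes the child, not about any
            # application logic.
            proc = subprocess.run(
                [sys.executable, "-c", "import sys; sys.exit(3)"],
                capture_output=True,
            )
            self.assertEqual(proc.returncode, 3)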
Also, despite the information in the ticket description, measuring
subprocess coverage is *hard*. When the subprocesses run by the tests
have coverage.py turned on, coverage.py is slow enough that many tests
can't complete within the various deadlines imposed by different test
runners and CI systems. All of that code could *probably* be sped up so
this isn't a problem, but I would much rather see the effort put into
removing the overuse of subprocesses in the test suite.
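For reference, the mechanism coverage.py itself documents for measuring
subprocesses looks roughly like this (a sketch, not something the unit
test suite currently wires up):

    # .coveragerc
    [run]
    parallel = True

    # sitecustomize.py (or a .pth file) importable by every child process:
    import coverage
    coverage.process_startup()

Every child process must also inherit COVERAGE_PROCESS_START in its
environment, pointing at that .coveragerc; afterwards "coverage combine"
merges the per-process data files into a single report.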
Also, since this ticket was created, a new suite of "integration" tests
has been added which *does* measure coverage on the subprocesses it
launches. This coverage isn't integrated into the coverage report
generated by the unit test runs, but it could be (or it could be reported
alongside it).
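Merging them would presumably amount to pointing coverage.py at both sets
of data files. A rough sketch using the coverage API, assuming both runs
used "parallel = True" so every process left a .coverage.* data file
behind (paths here are made up for illustration):

    import coverage

    cov = coverage.Coverage()
    # Combine every .coverage.* file found in these directories into one
    # data set, then write a single merged report.
    cov.combine([".", "integration/"])
    cov.save()
    cov.load()
    cov.report(include=["src/allmydata/*"])
    cov.html_report(directory="coverage-merged")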
However, it's never been entirely clear to me what the value of subprocess
coverage measurement is. Tests that run a whole Tahoe-LAFS child process
execute huge amounts of code, but they generally don't assert the
*correctness* of much of that code. All a coverage report can tell you
is what was *executed*. Coverage measurement of one single test that runs
a child process will show you thousands or tens of thousands of lines of
covered code. Sure, it was executed, but is it right? That one test
hardly tells you.
--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2297#comment:4>
Tahoe-LAFS <https://Tahoe-LAFS.org>
secure decentralized storage