From trac at tahoe-lafs.org Tue Dec 9 16:31:25 2025
From: trac at tahoe-lafs.org (Tahoe-LAFS)
Date: Tue, 09 Dec 2025 16:31:25 -0000
Subject: [tahoe-lafs-trac-stream] [Tahoe-LAFS] #4188: Test results are harder to read on GitHub Actions
Message-ID: <040.2c1817e3dbb4001636e98ae944eeaf67@tahoe-lafs.org>

#4188: Test results are harder to read on GitHub Actions
---------------------+---------------------------
 Reporter:  sajith   |          Owner:
     Type:  defect   |         Status:  new
 Priority:  normal   |      Milestone:  undecided
Component:  unknown  |        Version:  n/a
 Keywords:           |  Launchpad Bug:
---------------------+---------------------------
 When some tests fail on !GitHub Actions CI, it is tedious to figure out
 exactly which tests failed and why.  This is somewhat easier on CircleCI,
 because there is a tab that shows which tests failed, with some details.

 For example, see the tests tab on:
 https://app.circleci.com/pipelines/github/tahoe-lafs/tahoe-lafs/5362/workflows/d9bcd1bc-a039-425e-aed9-f0697961532b/jobs/91649/tests

 Now compare that with:
 https://github.com/tahoe-lafs/tahoe-lafs/actions/runs/19959845121/job/57237405534

 With !GitHub Actions, our options are to manually grep through the raw
 log, or to run `gh run view --log-failed --job=57237405534`.  (Note that
 `gh` is the !GitHub CLI.)

 It looks like there is a third option: it should be possible to get
 something similar to CircleCI's tests tab on !GitHub Actions using
 https://github.com/dorny/test-reporter.

 On CircleCI we set `SUBUNITREPORTER_OUTPUT_PATH: "test-results.subunit2"`
 and run `twisted.trial --reporter=subunitv2-file ...`, then convert the
 output with `subunit2junitxml.exe --output-to=test-results.xml
 test-results.subunit2`, which gets stored in a
 [https://circleci.com/docs/guides/test/collect-test-data/
 store_test_results] step.  I suppose we could follow similar steps with
 GHA.
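 For the record, here is a rough, untested sketch of what those steps
 might look like as a GHA workflow fragment.  The step names, the trial
 target, and the `java-junit` reporter value are guesses on my part; the
 exact inputs should be double-checked against dorny/test-reporter's
 README before relying on any of this:

 {{{
 # Hypothetical workflow steps, mirroring what we do on CircleCI.
 - name: Run tests
   env:
     SUBUNITREPORTER_OUTPUT_PATH: "test-results.subunit2"
   # "allmydata" here stands in for whatever trial target the real job uses.
   run: python -m twisted.trial --reporter=subunitv2-file allmydata

 - name: Convert subunit2 output to JUnit XML
   if: always()   # convert even when tests failed; that is the point
   run: subunit2junitxml --output-to=test-results.xml test-results.subunit2

 - name: Publish test report
   uses: dorny/test-reporter@v1
   if: always()
   with:
     name: Test results
     path: test-results.xml
     reporter: java-junit   # generic JUnit XML, per the action's docs
 }}}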
--
Ticket URL: 
Tahoe-LAFS 
secure decentralized storage

From trac at tahoe-lafs.org Tue Dec 9 16:31:42 2025
From: trac at tahoe-lafs.org (Tahoe-LAFS)
Date: Tue, 09 Dec 2025 16:31:42 -0000
Subject: [tahoe-lafs-trac-stream] [Tahoe-LAFS] #4188: Test results are harder to read on GitHub Actions
In-Reply-To: <040.2c1817e3dbb4001636e98ae944eeaf67@tahoe-lafs.org>
References: <040.2c1817e3dbb4001636e98ae944eeaf67@tahoe-lafs.org>
Message-ID: <055.a03b1b47c38f7fca5a5e14812bd858f1@tahoe-lafs.org>

#4188: Test results are harder to read on GitHub Actions
-------------------------+-----------------------
     Reporter:  sajith   |        Owner:  sajith
         Type:  defect   |       Status:  new
     Priority:  normal   |    Milestone:  undecided
    Component:  unknown  |      Version:  n/a
   Resolution:           |     Keywords:
Launchpad Bug:           |
-------------------------+-----------------------
Changes (by sajith):

 * owner:   => sajith

--
Ticket URL: 
Tahoe-LAFS 
secure decentralized storage

From trac at tahoe-lafs.org Tue Dec 9 17:28:54 2025
From: trac at tahoe-lafs.org (Tahoe-LAFS)
Date: Tue, 09 Dec 2025 17:28:54 -0000
Subject: [tahoe-lafs-trac-stream] [Tahoe-LAFS] #4189: Integration tests are failing
Message-ID: <040.429ad45cfdfebdab3d77a53dffc8f2fa@tahoe-lafs.org>

#4189: Integration tests are failing
---------------------+---------------------------
 Reporter:  sajith   |          Owner:  sajith
     Type:  defect   |         Status:  new
 Priority:  normal   |      Milestone:  undecided
Component:  unknown  |        Version:  n/a
 Keywords:           |  Launchpad Bug:
---------------------+---------------------------
 All integration tests are failing for the same reason, it seems.  See
 https://github.com/tahoe-lafs/tahoe-lafs/actions/runs/19959845121/ for an
 example.

 This is the error:

 {{{
 integration: commands[0]> py.test --timeout=1800 --coverage -s -v integration
 ImportError while loading conftest '/home/runner/work/tahoe-lafs/tahoe-lafs/integration/conftest.py'.
 integration/conftest.py:308: in 
     @pytest.fixture(scope='session')
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 .tox/integration/lib/python3.11/site-packages/_hypothesis_pytestplugin.py:453: in _ban_given_call
     return _orig_call(self, function)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
 E   pytest.PytestRemovedIn9Warning: Marks applied to fixtures have no effect
 E       See docs: https://docs.pytest.org/en/stable/deprecations.html#applying-a-mark-to-a-fixture-function
 }}}

 The actual cause seems to be the two `@pytest.mark.skipif` marks applied
 to test fixtures, though, which pytest has deprecated, per
 https://docs.pytest.org/en/stable/deprecations.html#applying-a-mark-to-a-fixture-function.

--
Ticket URL: 
Tahoe-LAFS 
secure decentralized storage
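 The fix the pytest deprecation docs point at is to move the condition
 into the fixture body and call `pytest.skip()` there instead of stacking
 a mark on the fixture.  A minimal sketch of the pattern (the fixture
 name and the skip condition below are made up for illustration, not the
 actual ones in our conftest.py):

 {{{
 import sys

 import pytest


 # Deprecated pattern -- the mark is silently ignored, and pytest now
 # warns with PytestRemovedIn9Warning when it sees it:
 #
 #     @pytest.mark.skipif(sys.platform == "win32", reason="POSIX only")
 #     @pytest.fixture(scope="session")
 #     def some_fixture():
 #         ...

 # Replacement pattern: perform the check inside the fixture itself.
 @pytest.fixture(scope="session")
 def some_fixture():
     if sys.platform == "win32":  # hypothetical condition
         pytest.skip("POSIX-only fixture")
     return "fixture value"  # placeholder for the real fixture value
 }}}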