ecu-tests/docs/03_reporting_and_metadata.md


Reporting and Metadata: How your docs show up in reports

This document describes how test documentation is extracted and rendered into the HTML report, and what appears in JUnit XML.

What the plugin does

File: conftest_plugin.py

  • Hooks into pytest_runtest_makereport to parse the test's docstring (see the sketch after this list)
  • Extracts the following fields:
    • Title
    • Description
    • Requirements
    • Test Steps
    • Expected Result
  • Attaches them as user_properties on the test report
  • Customizes the HTML results table to include Title and Requirements columns
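A minimal sketch of such a hook (illustrative only; the real conftest_plugin.py parses all five fields, including the multi-line ones, and may differ in detail):

import pytest

FIELDS = ("Title:", "Requirements:")

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when != "call":
        return
    doc = item.function.__doc__ or ""
    # Simplified parsing of single-line fields; the plugin also captures
    # Description, Test Steps and Expected Result blocks.
    for line in doc.splitlines():
        stripped = line.strip()
        for field in FIELDS:
            if stripped.startswith(field):
                key = field.rstrip(":").lower()
                report.user_properties.append((key, stripped[len(field):].strip()))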

Docstring format to use

"""
Title: Short, human-readable test name

Description: What this test proves and why it matters.

Requirements: REQ-001, REQ-00X

Test Steps:
1. Describe the first step
2. Next step
3. etc.

Expected Result:
- Primary outcome
- Any additional acceptance criteria
"""

What appears in reports

  • HTML (reports/report.html):
    • Title and Requirements appear as columns in the table
    • Other fields are carried in the report payload and can be surfaced as additional columns with small plugin changes (see Extensibility below)
  • JUnit XML (reports/junit.xml):
    • Standard test results and timing
    • Note: By default, the XML is compact and does not include custom properties; if you need properties in XML, we can extend the plugin to emit a custom JUnit format or produce an additional JSON artifact for traceability.
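Both files are produced by a normal test run; with pytest-html installed the flags look like this (the paths may instead be preset via addopts in pytest.ini):

pytest --html=reports/report.html --self-contained-html --junitxml=reports/junit.xml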

Open the HTML report on Windows PowerShell:

start .\reports\report.html

Related artifacts written by the plugin:

  • reports/requirements_coverage.json — requirement → test nodeids map and unmapped tests
  • reports/summary.md — compact pass/fail/error/skip totals, environment info
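A quick way to inspect the coverage map after a run, assuming a top-level mapping of requirement IDs to lists of nodeids (check the actual file for the exact key names):

import json

with open("reports/requirements_coverage.json") as fh:
    coverage = json.load(fh)

# Assumed layout: {"REQ-001": ["tests/test_x.py::test_y", ...], ...}
# plus an entry listing tests that map to no requirement.
for requirement, nodeids in sorted(coverage.items()):
    print(f"{requirement}: {len(nodeids)} test(s)")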

To generate separate HTML/JUnit reports for unit vs non-unit test sets, use the helper script:

./scripts/run_two_reports.ps1

Parameterized tests and metadata

When using @pytest.mark.parametrize, each parameter set is treated as a distinct test case with its own nodeid, e.g.:

tests/test_babylin_wrapper_mock.py::test_babylin_master_request_with_mock_wrapper[wrapper0-True]
tests/test_babylin_wrapper_mock.py::test_babylin_master_request_with_mock_wrapper[wrapper1-False]

Metadata handling:

  • The docstring on the test function is parsed once per case; the same Title/Requirements are attached to each parameterized instance.
  • Requirement mapping (coverage JSON) records each parameterized nodeid under the normalized requirement keys, enabling fine-grained coverage.
  • In the HTML table, you will see a row per parameterized instance with identical Title/Requirements but differing nodeids (and potentially differing outcomes if parameters influence behavior).
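For reference, a simplified parametrized test of this shape (not the real BabyLIN test; names and values are illustrative):

import pytest

@pytest.mark.parametrize("payload, rx_present", [(b"\x12", True), (b"\x13", False)])
def test_master_request_example(payload, rx_present):
    """
    Title: Master request handling (example)

    Requirements: REQ-00X
    """
    # Each (payload, rx_present) pair gets its own nodeid, report row and
    # coverage entry, while the Title/Requirements above are shared.
    assert isinstance(rx_present, bool)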

Markers

Markers are declared in pytest.ini and used via @pytest.mark.<name> in tests. They also appear in the HTML payload for each test (as user properties) and can be added as a column with a small change if desired.
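For example, assuming a marker named unit is declared in pytest.ini:

import pytest

# Assumes pytest.ini declares the marker, e.g.:
#   markers =
#       unit: fast tests that need no hardware
@pytest.mark.unit
def test_frame_checksum_roundtrip():
    assert True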

Extensibility

  • Add more columns to HTML by updating pytest_html_results_table_header/row (sketched below)
  • Persist full metadata (steps, expected) to a JSON file after the run for audit trails
  • Populate requirement coverage map by scanning markers and aggregating results
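A sketch of an extra column, assuming pytest-html 4.x where header/row cells are HTML strings (older versions use py.xml objects; the property key shown is illustrative):

def pytest_html_results_table_header(cells):
    cells.insert(2, "<th>Expected Result</th>")

def pytest_html_results_table_row(report, cells):
    props = dict(report.user_properties)
    cells.insert(2, f"<td>{props.get('expected result', '')}</td>")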

Runtime properties (record_property) and the rp helper fixture

Beyond static docstrings, you can attach dynamic key/value properties during a test.

  • Built-in: record_property("key", value) in any test
  • Convenience: use the shared rp fixture which wraps record_property and also prints a short line to captured output for quick scanning.

Example usage:

def test_example(rp):
    rp("device", "mock")
    rp("tx_id", "0x12")
    rp("rx_present", True)

Where they show up:

  • HTML report: expand a test row to see a Properties table listing all recorded key/value pairs
  • Captured output: look for lines like [prop] key=value emitted by the rp helper
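A minimal sketch of the rp fixture, assuming it simply wraps record_property (the shared fixture in conftest may do more):

import pytest

@pytest.fixture
def rp(record_property):
    """Record a key/value property and echo it for quick scanning."""
    def _rp(key, value):
        record_property(key, value)
        print(f"[prop] {key}={value}")
    return _rp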

Suggested standardized keys across suites live in docs/15_report_properties_cheatsheet.md.