# Pytest Plugin: Reporting & Traceability Overview

This guide explains the custom pytest plugin in `conftest_plugin.py` that enriches reports with business-facing metadata and builds requirements traceability artifacts.

## What it does

- Extracts metadata (Title, Description, Requirements, Test Steps, Expected Result) from test docstrings and markers.
- Attaches this metadata as `user_properties` on each test report.
- Adds custom columns (Title, Requirements) to the HTML report.
- Produces two artifacts under `reports/` at the end of the run (a run example follows this list):
  - `requirements_coverage.json`: a traceability matrix mapping requirement IDs to test nodeids, plus unmapped tests.
  - `summary.md`: a compact summary of results suitable for CI dashboards or PR comments.

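For context, a run that produces the HTML report and both artifacts can be launched from Python as below. This assumes pytest-html is installed and that `conftest_plugin.py` is loaded via a `conftest.py`; the flag values are illustrative:

```python
import sys

import pytest

# Equivalent to: pytest --html=reports/report.html --self-contained-html
# The --html flags come from pytest-html; this plugin additionally writes
# reports/requirements_coverage.json and reports/summary.md at the end of the run.
sys.exit(pytest.main(["--html=reports/report.html", "--self-contained-html"]))
```
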
## Inputs and sources

- Lines in the test docstring prefixed with (see the example after this list):
  - `Title:` a one-line title
  - `Description:` free-form text until the next section
  - `Requirements:` comma- or space-separated tokens such as `REQ-001`, `req_002`
  - `Test Steps:` a numbered list (1., 2., 3., ...)
  - `Expected Result:` free-form text
- Pytest markers on tests: `@pytest.mark.req_001` and similar are normalized to `REQ-001`.

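For illustration, a test written in this format might look like the sketch below. The test name, requirement IDs, and steps are made up for the example; only the prefixed docstring sections and the `req_*` marker matter to the plugin:

```python
import pytest


# req_* markers are custom; register them (e.g. in pytest.ini) to avoid warnings.
@pytest.mark.req_001
def test_login_rejects_bad_password():
    """
    Title: Login rejects an invalid password
    Description: Submitting a wrong password must not create a session
        and must return a generic error message.
    Requirements: REQ-001, req_002
    Test Steps:
        1. Open the login page.
        2. Submit a valid username with an invalid password.
        3. Check the response.
    Expected Result: The request is rejected and no session cookie is set.
    """
    assert True  # placeholder body for the example
```
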
## Normalization logic

Requirement IDs are normalized to the canonical form `REQ-XXX`:

- `req_001` → `REQ-001`
- `REQ-1` / `REQ-001` / `REQ_001` → `REQ-001`

This ensures consistent keys in the coverage JSON and the HTML report.

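The exact implementation lives in `conftest_plugin.py`; a minimal sketch of the rule described above (the helper name and regex are illustrative, not the plugin's actual code) could look like this:

```python
import re
from typing import Optional

# Accepts req_001, REQ-1, REQ_001, ... and emits the canonical zero-padded form.
_REQ_PATTERN = re.compile(r"^req[-_]?(\d+)$", re.IGNORECASE)


def normalize_requirement_id(token: str) -> Optional[str]:
    """Return REQ-XXX for a requirement-like token, or None otherwise."""
    match = _REQ_PATTERN.match(token.strip())
    if match is None:
        return None
    return f"REQ-{int(match.group(1)):03d}"


assert normalize_requirement_id("req_001") == "REQ-001"
assert normalize_requirement_id("REQ-1") == "REQ-001"
```
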
## Hook call sequence

Below is the high-level call sequence of relevant plugin hooks during a typical run:

```mermaid
sequenceDiagram
    autonumber
    participant Pytest
    participant Plugin as conftest_plugin
    participant FS as File System

    Pytest->>Plugin: pytest_configure(config)
    Note right of Plugin: Ensure ./reports exists

    Pytest->>Plugin: pytest_collection_modifyitems(session, config, items)
    Note right of Plugin: Track all collected nodeids for unmapped detection

    loop For each test phase
        Pytest->>Plugin: pytest_runtest_makereport(item, call)
        Note right of Plugin: hookwrapper
        Plugin-->>Pytest: yield to get report
        Plugin->>Plugin: parse docstring & markers
        Plugin->>Plugin: attach user_properties (Title, Requirements, ...)
        Plugin->>Plugin: update _REQ_TO_TESTS, _MAPPED_TESTS
    end

    Pytest->>Plugin: pytest_terminal_summary(terminalreporter, exitstatus)
    Plugin->>Plugin: compile stats, coverage map, unmapped tests
    Plugin->>FS: write reports/requirements_coverage.json
    Plugin->>FS: write reports/summary.md
```

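The hookwrapper step in the loop above typically has the shape sketched here. This is a simplified stand-in, not the plugin's actual code: it only reads `req_*` markers and uses illustrative `user_properties` key names, while the real plugin also parses the docstring sections:

```python
import pytest

# Module-level state named in the diagram.
_REQ_TO_TESTS: dict = {}
_MAPPED_TESTS: set = set()


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield                      # let pytest build the report first
    report = outcome.get_result()
    if report.when != "call":            # annotate only the main test phase
        return
    # Normalize req_* markers to REQ-XXX; the real plugin also merges in the
    # Requirements listed in the docstring.
    reqs = sorted(
        f"REQ-{int(m.name.split('_', 1)[1]):03d}"
        for m in item.iter_markers()
        if m.name.lower().startswith("req_") and m.name.split("_", 1)[1].isdigit()
    )
    for req in reqs:
        report.user_properties.append(("Requirements", req))
        _REQ_TO_TESTS.setdefault(req, []).append(item.nodeid)
        _MAPPED_TESTS.add(item.nodeid)
```
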
## HTML report integration

- `pytest_html_results_table_header`: inserts Title and Requirements columns.
- `pytest_html_results_table_row`: fills in values from `report.user_properties`.

The pytest-html plugin reads `user_properties` to render the extra metadata per test row, as sketched below.

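A minimal sketch of the two hooks, assuming the string-based cell API of pytest-html 4.x and illustrative `user_properties` key names:

```python
def pytest_html_results_table_header(cells):
    # Insert the extra column headers after the built-in columns.
    cells.insert(2, "Title")
    cells.insert(3, "Requirements")


def pytest_html_results_table_row(report, cells):
    # user_properties is a list of (name, value) pairs attached earlier;
    # dict() keeps the last value per key, which is enough for this sketch.
    props = dict(report.user_properties)
    cells.insert(2, props.get("Title", ""))
    cells.insert(3, props.get("Requirements", ""))
```
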
## Artifacts

- `reports/requirements_coverage.json` (example shape below)
  - `generated_at`: ISO timestamp
  - `results`: counts of passed/failed/skipped/etc.
  - `requirements`: map of `REQ-XXX` to an array of test nodeids
  - `unmapped_tests`: tests with no requirement mapping
  - `files`: relative locations of key artifacts
- `reports/summary.md`
  - Human-readable summary with counts and quick artifact links

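For orientation, the coverage payload might look like the following before it is serialized to `reports/requirements_coverage.json`; all values are made up, and only the keys mirror the fields listed above:

```python
import json
from datetime import datetime, timezone

example_payload = {
    "generated_at": datetime.now(timezone.utc).isoformat(),
    "results": {"passed": 10, "failed": 1, "skipped": 2},
    "requirements": {
        "REQ-001": ["tests/test_login.py::test_login_rejects_bad_password"],
    },
    "unmapped_tests": ["tests/test_misc.py::test_helper"],
    "files": {"summary": "reports/summary.md"},
}
print(json.dumps(example_payload, indent=2))
```
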
## Error handling

Artifact writes are wrapped in try/except to avoid failing the test run if the filesystem is read-only or unavailable. Any write failure is logged to the terminal.

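A sketch of that guarded write; the helper name is illustrative, while `terminalreporter.write_line` is the standard pytest reporter call:

```python
import json
from pathlib import Path


def _write_json_safely(path: Path, payload: dict, terminalreporter) -> None:
    # Best-effort write: a read-only or missing filesystem must not fail the run.
    try:
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(payload, indent=2), encoding="utf-8")
    except OSError as exc:
        terminalreporter.write_line(f"could not write {path}: {exc}")
```
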
## Extensibility ideas

- Add more normalized marker families (e.g., `capability_*`, `risk_*`).
- Emit CSV or Excel output in addition to JSON/Markdown.
- Include per-test durations and flakiness stats in the summary.
- Support a `--requirement` CLI filter that selects tests by normalized requirement ID (sketched below).

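As an example of the last idea, a `--requirement` filter could be prototyped roughly as follows. The option name and marker convention mirror the sections above, but none of this exists in the plugin today:

```python
def pytest_addoption(parser):
    parser.addoption(
        "--requirement",
        action="store",
        default=None,
        help="Run only tests mapped to this normalized requirement ID, e.g. REQ-001",
    )


def pytest_collection_modifyitems(config, items):
    wanted = config.getoption("--requirement")
    if not wanted:
        return
    selected, deselected = [], []
    for item in items:
        # Normalize req_* markers the same way as above (e.g. req_001 -> REQ-001).
        reqs = {
            f"REQ-{int(m.name.split('_', 1)[1]):03d}"
            for m in item.iter_markers()
            if m.name.lower().startswith("req_") and m.name.split("_", 1)[1].isdigit()
        }
        (selected if wanted in reqs else deselected).append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected
```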