# Unit Testing Guide
This guide explains how the project's unit tests are organized, how to run them (with and without markers), and how coverage is generated; it also offers tips for writing effective tests.
## Why unit tests?
- Fast feedback without hardware
- Validate contracts (config loader, frames, adapters, flashing scaffold)
- Keep behavior stable as the framework evolves
## Test layout

- `tests/unit/` — pure unit tests (no hardware, no external I/O)
  - `test_config_loader.py` — config precedence and defaults
  - `test_linframe.py` — `LinFrame` validation
  - `test_babylin_adapter_mocked.py` — BabyLIN adapter error paths with a mocked SDK wrapper
  - `test_hex_flasher.py` — flashing scaffold against a stub LIN interface
- `tests/plugin/` — plugin self-tests using `pytester`
  - `test_conftest_plugin_artifacts.py` — verifies JSON coverage and summary artifacts
- `tests/` — existing smoke/mock/hardware tests
## Markers and selection

A `unit` marker is provided for easy selection (see the registration sketch after this list):

- By marker (recommended):

  ```powershell
  pytest -m unit -q
  ```

- By path:

  ```powershell
  pytest tests\unit -q
  ```

- Exclude hardware tests:

  ```powershell
  pytest -m "not hardware" -v
  ```
## Coverage

Coverage is enabled by default via `addopts` in `pytest.ini`:

```text
--cov=ecu_framework --cov-report=term-missing
```

You'll see a summary with missing lines directly in the terminal. To disable coverage locally, override `addopts` on the command line:

```powershell
pytest -q -o addopts=""
```

(Optional) To produce an HTML coverage report, add `--cov-report=html` and open `htmlcov/index.html`.
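For example, a one-off run that also writes the HTML report might look like this (`-m unit` is optional; `start` assumes Windows PowerShell):

```powershell
pytest -m unit --cov=ecu_framework --cov-report=html
start .\htmlcov\index.html
```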
## Writing unit tests
- Prefer small, focused tests
- For BabyLIN adapter logic, inject `wrapper_module` with the mock:

  ```python
  from ecu_framework.lin.babylin import BabyLinInterface
  from vendor import mock_babylin_wrapper as mock_bl

  lin = BabyLinInterface(wrapper_module=mock_bl)
  lin.connect()
  # exercise send/receive/request
  ```
- To simulate specific SDK signatures, use a thin shim (see `_MockBytesOnly` in `tests/test_babylin_wrapper_mock.py`).
- Include a docstring with Title/Description/Requirements/Steps/Expected Result so the reporting plugin can extract metadata (this also helps the HTML report); see the example after this list.
- When testing the plugin itself, use the `pytester` fixture to generate a temporary test run and validate that artifacts exist and contain the expected entries (a minimal sketch also follows below).
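As an illustration of the docstring convention, here is a hypothetical unit test; the import path, the `LinFrame` constructor signature, and the requirement ID are assumptions, not the project's actual API:

```python
import pytest

from ecu_framework.lin import LinFrame  # hypothetical import path


@pytest.mark.unit
def test_linframe_rejects_out_of_range_id():
    """
    Title: LinFrame ID validation
    Description: LinFrame must reject frame IDs outside the LIN range 0x00-0x3F.
    Requirements: REQ-LIN-001 (illustrative requirement ID)
    Steps: Construct a LinFrame with frame_id=0x40.
    Expected Result: ValueError is raised.
    """
    with pytest.raises(ValueError):
        LinFrame(frame_id=0x40, data=b"\x00")
```

And a minimal `pytester` sketch, assuming the plugin writes its artifacts under `reports/` (the generated test and the final assertion are illustrative):

```python
pytest_plugins = ["pytester"]  # enables the pytester fixture


def test_run_produces_artifacts(pytester):
    # Generate a throwaway test file and run it in an isolated session.
    pytester.makepyfile(
        """
        def test_ok():
            assert True
        """
    )
    result = pytester.runpytest("-q")
    result.assert_outcomes(passed=1)
    # Then assert on whatever artifacts the plugin writes, e.g.:
    # assert (pytester.path / "reports").exists()
```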
## Typical commands (Windows PowerShell)
- Run unit tests with coverage:

  ```powershell
  pytest -m unit -q
  ```

- Run only the plugin self-tests:

  ```powershell
  pytest tests\plugin -q
  ```

- Run the specific plugin artifact test (verifies HTML/JUnit, summary, and coverage JSON under `reports/`):

  ```powershell
  python -m pytest tests\plugin\test_conftest_plugin_artifacts.py -q
  ```

- Run all non-hardware tests with verbose output:

  ```powershell
  pytest -m "not hardware" -v
  ```

- Open the HTML report:

  ```powershell
  start .\reports\report.html
  ```

- Generate two separate reports (unit vs non-unit):

  ```powershell
  ./scripts/run_two_reports.ps1
  ```
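The two-report script is project-specific; conceptually it might resemble the following sketch (the `--html` flags assume `pytest-html` is installed; see the real script for its actual behavior):

```powershell
# Hypothetical sketch only; see scripts/run_two_reports.ps1 for the real logic.
pytest -m unit --html=reports\report_unit.html --self-contained-html
pytest -m "not unit" --html=reports\report_nonunit.html --self-contained-html
```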
## CI suggestions

- Run `-m unit` and `tests/plugin` on every PR
- Optionally run mock integration/smoke tests on PRs
- Run the hardware test matrix on a nightly or on-demand basis (`-m "hardware and babylin"`)
- Publish artifacts from `reports/`: HTML/JUnit/coverage JSON/summary MD
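As one possible starting point, a PR job on GitHub Actions (an assumption; adapt to your CI system) could look like:

```yaml
# Hypothetical workflow sketch; names, versions, and paths are illustrative.
name: tests
on: [pull_request]
jobs:
  unit:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest -m unit -q
      - run: pytest tests/plugin -q
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: reports
          path: reports/
```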
## Troubleshooting

- Coverage not showing: ensure `pytest-cov` is installed (see `requirements.txt`) and the `pytest.ini` `addopts` include `--cov`.
- Import errors: activate the venv and reinstall requirements.
- Plugin artifacts missing under `pytester`: verify tests write to `reports/` (our plugin creates the folder automatically in `pytest_configure`; a sketch of that pattern follows).
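The folder-creation pattern referenced above generally looks like this; the body below is an illustrative sketch, not the plugin's actual code:

```python
from pathlib import Path


def pytest_configure(config):
    # Make sure the artifact folder exists before any test tries to write to it.
    Path("reports").mkdir(parents=True, exist_ok=True)
```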