# Using the ECU Test Framework
This guide shows common ways to run the test framework: from fast local mock runs to full hardware loops, CI, and Raspberry Pi deployments. All commands are written for Windows PowerShell.
## Prerequisites
- Python 3.x and a virtual environment
- Dependencies installed (see `requirements.txt`)
- Optional: BabyLIN SDK files placed under `vendor/` as described in `vendor/README.md` when running hardware tests
## Configuring tests
- Configuration is loaded from YAML files and can be selected via the environment variable `ECU_TESTS_CONFIG`.
- See `docs/02_configuration_resolution.md` for details and examples.
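Conceptually, resolution boils down to reading `ECU_TESTS_CONFIG` and parsing the YAML file it points to. A minimal sketch (the function name and default path are illustrative, not the framework's actual code):

```python
import os
from pathlib import Path

import yaml  # PyYAML


def load_test_config(default: str = "config/mock.yml") -> dict:
    """Resolve the config path from ECU_TESTS_CONFIG, falling back to a default."""
    path = Path(os.environ.get("ECU_TESTS_CONFIG", default))
    return yaml.safe_load(path.read_text(encoding="utf-8"))
```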
Example PowerShell:

```powershell
# Use a mock-only config for fast local runs
$env:ECU_TESTS_CONFIG = ".\config\mock.yml"

# Use a hardware config with BabyLIN SDK wrapper
$env:ECU_TESTS_CONFIG = ".\config\hardware_babylin.yml"
```
Quick try with provided examples:

```powershell
# Point to the combined examples file
$env:ECU_TESTS_CONFIG = ".\config\examples.yaml"

# The 'active' section defaults to the mock profile; run non-hardware tests
pytest -m "not hardware" -v

# Edit 'active' to the babylin profile (or point to babylin.example.yaml) and run hardware tests
```
## Running locally (mock interface)
Use the mock interface to develop tests quickly without hardware:
```powershell
# Run all mock tests with HTML and JUnit outputs (see pytest.ini defaults)
pytest

# Run only smoke tests (mock) and show progress
pytest -m smoke -q

# Filter by test file or node id
pytest tests\test_smoke_mock.py::TestMockLinInterface::test_mock_send_receive_echo -q
```
What you get:

- Fast execution, deterministic results
- Reports in `reports/` (HTML, JUnit, coverage JSON, CI summary)
Open the HTML report on Windows:

```powershell
start .\reports\report.html
```
## Running on hardware (BabyLIN SDK wrapper)
- Place SDK files per `vendor/README.md`.
- Select a config that defines `interface.type: babylin`, `sdf_path`, and `schedule_nr`.
- Markers allow restricting the run to hardware tests.
```powershell
# Example environment selection
$env:ECU_TESTS_CONFIG = ".\config\babylin.example.yaml"

# Run only hardware tests
pytest -m "hardware and babylin"

# Run the schedule smoke only
pytest tests\test_babylin_hardware_schedule_smoke.py -q
```
Tips:

- If multiple devices are attached, update your config to select the desired port (future enhancement) or keep only one connected.
- On timeout, tests often accept `None` to avoid flakiness; increase timeouts if your bus is slow.
- Master request behavior: the adapter prefers `BLC_sendRawMasterRequest(channel, id, length)`; it falls back to the bytes variant or a header+receive strategy as needed (see the sketch below). The mock covers both forms.
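The fallback chain could look roughly like this. Only `BLC_sendRawMasterRequest` is named in this guide; the other attribute names and the `receive` call are placeholders for whatever the adapter actually exposes:

```python
def send_master_request(adapter, channel: int, frame_id: int, payload: bytes):
    """Illustrative fallback order; not the adapter's actual code."""
    # Preferred: length-based raw master request (documented above)
    if hasattr(adapter, "BLC_sendRawMasterRequest"):
        return adapter.BLC_sendRawMasterRequest(channel, frame_id, len(payload))
    # Hypothetical bytes variant
    if hasattr(adapter, "send_raw_master_request_bytes"):
        return adapter.send_raw_master_request_bytes(channel, frame_id, payload)
    # Last resort: send the header, then poll for a response (hypothetical names);
    # a None return on timeout is tolerated by many tests, as noted above
    adapter.send_header(channel, frame_id)
    return adapter.receive(channel, timeout=1.0)
```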
## Selecting tests with markers
Markers in use:

- `smoke`: quick confidence tests
- `hardware`: needs a real device
- `babylin`: targets the BabyLIN SDK adapter
- `req_XXX`: requirement mapping (e.g., `@pytest.mark.req_001`)
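On a test, the markers simply stack as decorators. A minimal illustration (the test name, body, and `mock_lin` fixture are made up):

```python
import pytest


@pytest.mark.smoke
@pytest.mark.req_001
def test_echo_roundtrip(mock_lin):  # `mock_lin` is a hypothetical fixture
    mock_lin.send(0x10, b"\x01\x02")
    assert mock_lin.receive(timeout=0.5) is not None
```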
Examples:

```powershell
# Only smoke tests (mock + hardware smoke)
pytest -m smoke

# Requirements-based selection (docstrings and markers are normalized)
pytest -k REQ-001
```
## Enriched reporting
- HTML report includes custom columns (Title, Requirements)
- JUnit XML is written for CI
- `reports/requirements_coverage.json` maps requirement IDs to tests and lists unmapped tests
- `reports/summary.md` aggregates key counts (pass/fail/etc.)
See `docs/03_reporting_and_metadata.md` and `docs/11_conftest_plugin_overview.md`.
To verify the reporting pipeline end-to-end, run the plugin self-test:

```powershell
python -m pytest tests\plugin\test_conftest_plugin_artifacts.py -q
```
To generate two separate HTML/JUnit reports (unit vs non-unit):

```powershell
./scripts/run_two_reports.ps1
```
## Writing well-documented tests
Use a docstring template so the plugin can extract metadata:

```python
"""
Title: <short title>
Description:
    <what the test validates and why>
Requirements: REQ-001, REQ-002
Test Steps:
    1. <step one>
    2. <step two>
Expected Result:
    <succinct expected outcome>
"""
```
Tip: for runtime properties in reports, prefer the shared `rp` fixture (a wrapper around `record_property`) and use the standardized keys from `docs/15_report_properties_cheatsheet.md`, as in the sketch below.
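A minimal usage sketch (the property key and values are illustrative; see the cheatsheet for standardized names):

```python
def test_supply_voltage(rp):
    measured_v = 12.1  # placeholder for a real measurement
    rp("measured_voltage_v", measured_v)  # surfaces in the enriched reports
    assert 11.5 <= measured_v <= 12.5
```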
## Continuous Integration (CI)
- Run `pytest` with your preferred markers in your pipeline.
- Publish artifacts from `reports/` (HTML, JUnit, coverage JSON, `summary.md`).
- Optionally parse `requirements_coverage.json` to power dashboards and gates (see the sketch below).
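For example, a gate script could fail the pipeline when tests lack requirement mappings. This assumes the JSON contains an `unmapped_tests` list, which you should verify against your actual file:

```python
import json
import sys
from pathlib import Path

coverage = json.loads(Path("reports/requirements_coverage.json").read_text(encoding="utf-8"))
unmapped = coverage.get("unmapped_tests", [])  # key name is an assumption
if unmapped:
    print(f"{len(unmapped)} tests lack a requirement mapping:", *unmapped, sep="\n  ")
    sys.exit(1)
```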
Example PowerShell (local CI mimic):

```powershell
# Run smoke tests and collect reports
pytest -m smoke --maxfail=1 -q
```
## Raspberry Pi / Headless usage
- Follow `docs/09_raspberry_pi_deployment.md` to set up a venv and a systemd service.
- For a golden image approach, see `docs/10_build_custom_image.md`.
Running tests headless via systemd typically involves:

- A service that sets `ECU_TESTS_CONFIG` to a hardware YAML
- Running `pytest -m "hardware and babylin"` on boot or via a timer
## Troubleshooting quick hits
- ImportError for `BabyLIN_library`: verify placement under `vendor/` and native library presence.
- No BabyLIN devices found: check USB connection, drivers, and permissions.
- Timeouts on receive: increase `timeout` or verify schedule activity and SDF correctness.
- Missing reports: ensure `pytest.ini` includes the HTML/JUnit plugins and that the custom plugin is loaded.
## Power supply (Owon) hardware test
Enable `power_supply` in your config and set the serial port, then run the dedicated test or the quick demo script:
```powershell
copy .\config\owon_psu.example.yaml .\config\owon_psu.yaml
# Edit the COM port in .\config\owon_psu.yaml or set values in config\test_config.yaml

pytest -k test_owon_psu_idn_and_optional_set -m hardware -q
python .\vendor\Owon\owon_psu_quick_demo.py
```
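At its core, the identification check is a standard SCPI `*IDN?` query over serial. A minimal sketch with pyserial (the port name and baud rate are assumptions; use the values from your `owon_psu.yaml`):

```python
import serial  # pyserial

# "COM3" and 115200 baud are placeholders, not the framework's defaults
with serial.Serial("COM3", baudrate=115200, timeout=1) as port:
    port.write(b"*IDN?\n")
    print(port.readline().decode(errors="replace").strip())
```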
See also `docs/14_power_supply.md` for details and troubleshooting.