# Using the ECU Test Framework

This guide shows common ways to run the test framework, from fast local mock runs to full hardware loops, CI, and Raspberry Pi deployments. Commands are written for Windows PowerShell.

## Prerequisites

- Python 3.x and a virtual environment
- Dependencies installed (see `requirements.txt`)
- For MUM hardware: Melexis `pylin` and `pymumclient` Python packages on `PYTHONPATH` (see `vendor/automated_lin_test/install_packages.sh`), plus a reachable MUM (default IP `192.168.7.2`)
- For BabyLIN (legacy) hardware: SDK files placed under `vendor/` as described in `vendor/README.md`

## Configuring tests

- Configuration is loaded from YAML files and can be selected via the environment variable `ECU_TESTS_CONFIG`.
- See `docs/02_configuration_resolution.md` for details and examples.

Example PowerShell:

```powershell
# Use a mock-only config for fast local runs
$env:ECU_TESTS_CONFIG = ".\config\mock.yml"

# Use a hardware config with the MUM (current default)
$env:ECU_TESTS_CONFIG = ".\config\mum.example.yaml"

# Use a hardware config with the BabyLIN SDK wrapper (legacy)
$env:ECU_TESTS_CONFIG = ".\config\babylin.example.yaml"
```

Quick try with the provided examples:

```powershell
# Point to the combined examples file
$env:ECU_TESTS_CONFIG = ".\config\examples.yaml"

# The 'active' section defaults to the mock profile; run non-hardware tests
pytest -m "not hardware" -v

# Edit 'active' to the mum or babylin profile (or point to mum.example.yaml /
# babylin.example.yaml) and run hardware tests
```
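
To make the profile-switching idea concrete, a combined examples file could be organized roughly like this. This is an illustrative sketch only; the actual layout of `config/examples.yaml` is authoritative and is documented in `docs/02_configuration_resolution.md`:

```yaml
# Illustrative sketch — check config/examples.yaml for the real structure.
active: mock          # switch to mum or babylin for hardware runs

mock:
  interface:
    type: mock

mum:
  interface:
    type: mum
    host: 192.168.7.2
```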

## Running locally (mock interface)

Use the mock interface to develop tests quickly without hardware:

```powershell
# Run all mock tests with HTML and JUnit outputs (see pytest.ini defaults)
pytest

# Run only smoke tests (mock) and show progress
pytest -m smoke -q

# Filter by test file or node id
pytest tests\test_smoke_mock.py::TestMockLinInterface::test_mock_send_receive_echo -q
```

What you get:

- Fast execution, deterministic results
- Reports in `reports/` (HTML, JUnit, coverage JSON, CI summary)

Open the HTML report on Windows:

```powershell
start .\reports\report.html
```

## Running on hardware (MUM — current default)

1) Install Melexis `pylin` and `pymumclient` (see `vendor/automated_lin_test/install_packages.sh`; on Windows, point `pip` at a wheel or extend `PYTHONPATH` to the Melexis IDE site-packages).
2) Make sure the MUM is reachable: `ping 192.168.7.2`.
3) Select a config that defines `interface.type: mum` plus `host`/`lin_device`/`power_device`.

```powershell
$env:ECU_TESTS_CONFIG = ".\config\mum.example.yaml"

# Run only the MUM-marked hardware tests
pytest -m "hardware and mum" -v

# Run a single MUM test by file
pytest tests\hardware\test_e2e_mum_led_activate.py -q
```
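
Putting the keys from step 3 together, such a config might look like the sketch below. The nesting and placeholder values are assumptions; `config/mum.example.yaml` is the authoritative reference:

```yaml
# Sketch of the keys discussed above — see config/mum.example.yaml for the
# real structure and values. The frame_lengths mapping shape is illustrative.
interface:
  type: mum
  host: 192.168.7.2
  lin_device: ...        # MUM LIN device identifier (placeholder)
  power_device: power_out0
  frame_lengths:
    ALM_Status: 4
    ALM_Req_A: 8

power_supply:
  enabled: false         # the Owon PSU is independent of the MUM
```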

Tips:

- The MUM owns ECU power on `power_out0`; it powers up automatically in `connect()` and powers down on `disconnect()`. The Owon PSU is independent and can be left disabled (`power_supply.enabled: false`).
- The MUM is master-driven: `lin.receive(id)` requires a frame ID. The default `frame_lengths` covers ALM_Status (4 B) and ALM_Req_A (8 B); add others in YAML when you need slave-published frames at non-standard lengths.
- For BSM-SNPD diagnostic frames (service ID 0xB5), use `lin.send_raw(bytes)`: it routes through the transport layer's `ld_put_raw`, which uses the LIN 1.x **Classic** checksum. `send()` uses the Enhanced checksum, which the firmware rejects for these frames.
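
The Classic/Enhanced distinction boils down to whether the protected identifier (PID) is included in the inverted, carry-wrapping sum. A minimal sketch of the standard LIN checksum arithmetic (not code from this repo):

```python
def _lin_checksum(data: bytes) -> int:
    """Inverted 8-bit sum with carry wrap-around, per the LIN spec."""
    total = 0
    for byte in data:
        total += byte
        if total > 0xFF:
            total -= 0xFF  # fold the carry back in
    return (~total) & 0xFF

def classic_checksum(data: bytes) -> int:
    """LIN 1.x Classic checksum: computed over the data bytes only."""
    return _lin_checksum(data)

def enhanced_checksum(pid: int, data: bytes) -> int:
    """LIN 2.x Enhanced checksum: the protected identifier is included."""
    return _lin_checksum(bytes([pid]) + data)
```

This is why a frame built with the wrong variant fails verification on the receiving side: the two sums differ for any non-zero PID.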

## Running on hardware (BabyLIN SDK wrapper — legacy)

1) Place SDK files per `vendor/README.md`.
2) Select a config that defines `interface.type: babylin`, `sdf_path`, and `schedule_nr`.
3) Use markers to restrict the run to hardware tests.

```powershell
$env:ECU_TESTS_CONFIG = ".\config\babylin.example.yaml"

# Run only hardware tests
pytest -m "hardware and babylin"

# Run the schedule smoke test only
pytest tests\test_babylin_hardware_schedule_smoke.py -q
```

Tips:

- If multiple devices are attached, keep only one connected; selecting a specific port via config is a planned enhancement.
- On timeout, tests often accept `None` to avoid flakiness; increase timeouts if your bus is slow.
- Master request behavior: the adapter prefers `BLC_sendRawMasterRequest(channel, id, length)` and falls back to the bytes variant or a header+receive strategy as needed. The mock covers both forms.
- `interface.schedule_nr: -1` defers schedule start to the test code (useful when a test wants to pick a specific schedule by name via `lin.start_schedule("CCO")`).
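
The master-request fallback chain can be pictured with a small sketch. Only `BLC_sendRawMasterRequest` is a real SDK name taken from the tip above; the other attribute names are hypothetical stand-ins, and the repo's adapter implements the real logic:

```python
def send_master_request(api, channel: int, frame_id: int, data: bytes):
    """Illustrative try-then-fallback chain (not the repo's adapter code).
    Only BLC_sendRawMasterRequest is a real SDK name; the rest are hypothetical."""
    # Preferred: the (channel, id, length) variant
    fn = getattr(api, "BLC_sendRawMasterRequest", None)
    if fn is not None:
        return fn(channel, frame_id, len(data))
    # Fallback: a bytes-taking variant (hypothetical name)
    fn = getattr(api, "send_master_request_bytes", None)
    if fn is not None:
        return fn(channel, frame_id, data)
    # Last resort: send the header, then read the response separately (hypothetical)
    api.send_header(channel, frame_id)
    return api.receive(channel, frame_id)
```

The mock interface exercising "both forms" means a test double can expose either entry point and the adapter still works.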

## Selecting tests with markers

Markers in use:

- `smoke`: quick confidence tests
- `hardware`: needs real device (any LIN master)
- `mum`: targets the Melexis Universal Master adapter (current default)
- `babylin`: targets the legacy BabyLIN SDK adapter
- `unit`: pure unit tests (no hardware, no external I/O)
- `req_XXX`: requirement mapping (e.g., `@pytest.mark.req_001`)
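
For `-m` selection to work without warnings, these markers are typically registered in `pytest.ini`. An illustrative registration is shown below; the repo's actual `pytest.ini` is authoritative:

```ini
# Illustrative marker registration — check the repo's pytest.ini.
[pytest]
markers =
    smoke: quick confidence tests
    hardware: needs a real device (any LIN master)
    mum: targets the Melexis Universal Master adapter
    babylin: targets the legacy BabyLIN SDK adapter
    unit: pure unit tests (no hardware, no external I/O)
```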

Examples:

```powershell
# Only smoke tests (mock + hardware smoke)
pytest -m smoke

# Requirements-based selection (docstrings and markers are normalized)
pytest -k REQ-001
```

## Enriched reporting

- HTML report includes custom columns (Title, Requirements)
- JUnit XML written for CI
- `reports/requirements_coverage.json` maps requirement IDs to tests and lists unmapped tests
- `reports/summary.md` aggregates key counts (pass/fail/etc.)

See `docs/03_reporting_and_metadata.md` and `docs/11_conftest_plugin_overview.md`.

To verify the reporting pipeline end-to-end, run the plugin self-test:

```powershell
python -m pytest tests\plugin\test_conftest_plugin_artifacts.py -q
```

To generate two separate HTML/JUnit reports (unit vs non-unit):

```powershell
./scripts/run_two_reports.ps1
```

## Writing well-documented tests

Use a docstring template so the plugin can extract metadata:

```python
"""
Title: <short title>

Description:
<what the test validates and why>

Requirements: REQ-001, REQ-002

Test Steps:
1. <step one>
2. <step two>

Expected Result:
<succinct expected outcome>
"""
```

Tip: For runtime properties in reports, prefer the shared `rp` fixture (wrapper around `record_property`) and use standardized keys from `docs/15_report_properties_cheatsheet.md`.

## Continuous Integration (CI)

- Run `pytest` with your preferred markers in your pipeline.
- Publish artifacts from `reports/` (HTML, JUnit, coverage JSON, `summary.md`).
- Optionally parse `requirements_coverage.json` to power dashboards and gates.
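
As one example of such a gate, a CI step could fail the build when requirements are left unmapped. The JSON field name (`"requirements"`) below is an assumption about the file's schema, so adjust it to match the actual report:

```python
import json

def coverage_gate(path: str, required: list[str]) -> list[str]:
    """Return the required requirement IDs that have no mapped test.
    The 'requirements' field name is assumed, not verified against the repo."""
    with open(path, encoding="utf-8") as fh:
        report = json.load(fh)
    mapping = report.get("requirements", {})
    return [req for req in required if not mapping.get(req)]
```

A pipeline could then exit non-zero whenever the returned list is non-empty.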

Example PowerShell (local CI mimic):

```powershell
# Run smoke tests and collect reports
pytest -m smoke --maxfail=1 -q
```

## Raspberry Pi / Headless usage

- Follow `docs/09_raspberry_pi_deployment.md` to set up a venv and systemd service
- For a golden image approach, see `docs/10_build_custom_image.md`

Running tests headless via systemd typically involves:

- A service that sets `ECU_TESTS_CONFIG` to a hardware YAML
- Running `pytest -m "hardware and mum"` (or `"hardware and babylin"`) on boot or via a timer
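
A minimal unit along those lines could look like the sketch below. Every path here is an assumption about your deployment layout; `docs/09_raspberry_pi_deployment.md` is authoritative:

```ini
# /etc/systemd/system/ecu-tests.service — illustrative; all paths are assumptions
[Unit]
Description=ECU hardware test run
After=network-online.target

[Service]
Type=oneshot
Environment=ECU_TESTS_CONFIG=/opt/ecu-tests/config/mum.yaml
WorkingDirectory=/opt/ecu-tests
ExecStart=/opt/ecu-tests/.venv/bin/pytest -m "hardware and mum"

[Install]
WantedBy=multi-user.target
```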

## Troubleshooting quick hits

- ImportError for `pylin` / `pymumclient`: install the Melexis packages (`vendor/automated_lin_test/install_packages.sh`); the MUM adapter raises a clear error pointing at this script.
- "interface.host is required when interface.type == 'mum'": set `interface.host` in the YAML.
- MUM unreachable: `ping 192.168.7.2` and check the USB-RNDIS link.
- ImportError for `BabyLIN_library`: verify placement under `vendor/` and that the native library is present.
- No BabyLIN devices found: check the USB connection, drivers, and permissions.
- Timeouts on receive: increase `timeout` or verify schedule activity and SDF correctness.
- Missing reports: ensure `pytest.ini` includes the HTML/JUnit plugins and that the custom plugin is loaded.

## Power supply (Owon) hardware test

Enable `power_supply` in your config and set the serial port, then run the dedicated test or the quick demo script.

```powershell
copy .\config\owon_psu.example.yaml .\config\owon_psu.yaml
# Edit the COM port in .\config\owon_psu.yaml or set values in config\test_config.yaml

pytest -k test_owon_psu_idn_and_optional_set -m hardware -q
python .\vendor\Owon\owon_psu_quick_demo.py
```

See also: `docs/14_power_supply.md` for details and troubleshooting.