Using the ECU Test Framework
This guide shows common ways to run the test framework: from fast local mock runs to full hardware loops, CI, and Raspberry Pi deployments. Commands are shown for Windows PowerShell.
Prerequisites
- Python 3.x and a virtual environment
- Dependencies installed (see `requirements.txt`)
- For MUM hardware: Melexis `pylin` and `pymumclient` Python packages on `PYTHONPATH` (see `vendor/automated_lin_test/install_packages.sh`) plus a reachable MUM (default IP `192.168.7.2`)
- For BabyLIN (legacy) hardware: SDK files placed under `vendor/` as described in `vendor/README.md`
Configuring tests
- Configuration is loaded from YAML files and selected via the environment variable `ECU_TESTS_CONFIG`.
- See `docs/02_configuration_resolution.md` for details and examples.
Example PowerShell:
# Use a mock-only config for fast local runs
$env:ECU_TESTS_CONFIG = ".\config\mock.yml"
# Use a hardware config with the MUM (current default)
$env:ECU_TESTS_CONFIG = ".\config\mum.example.yaml"
# Use a hardware config with the BabyLIN SDK wrapper (legacy)
$env:ECU_TESTS_CONFIG = ".\config\babylin.example.yaml"
Quick try with provided examples:
# Point to the combined examples file
$env:ECU_TESTS_CONFIG = ".\config\examples.yaml"
# The 'active' section defaults to the mock profile; run non-hardware tests
pytest -m "not hardware" -v
# Edit 'active' to the mum or babylin profile (or point to mum.example.yaml /
# babylin.example.yaml) and run hardware tests
Running locally (mock interface)
Use the mock interface to develop tests quickly without hardware:
# Run all mock tests with HTML and JUnit outputs (see pytest.ini defaults)
pytest
# Run only smoke tests (mock) and show progress
pytest -m smoke -q
# Filter by test file or node id
pytest tests\test_smoke_mock.py::TestMockLinInterface::test_mock_send_receive_echo -q
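As a sketch of what the mock gives you, here is a stand-in interface with echo semantics, mirroring what a test like `test_mock_send_receive_echo` exercises. The class and method bodies are illustrative assumptions, not the framework's actual mock.

```python
class MockLinInterface:
    """Illustrative mock: echoes the last payload written to each frame ID."""
    def __init__(self):
        self._frames = {}

    def send(self, frame_id, data):
        self._frames[frame_id] = bytes(data)

    def receive(self, frame_id, timeout=1.0):
        # Deterministic: returns immediately, or None if nothing was sent.
        return self._frames.get(frame_id)

def test_mock_send_receive_echo():
    lin = MockLinInterface()
    lin.send(0x10, b"\x01\x02\x03\x04")
    assert lin.receive(0x10) == b"\x01\x02\x03\x04"
```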
What you get:
- Fast execution, deterministic results
- Reports in `reports/` (HTML, JUnit, coverage JSON, CI summary)
Open the HTML report on Windows:
start .\reports\report.html
Running on hardware (MUM — current default)
- Install Melexis `pylin` and `pymumclient` (see `vendor/automated_lin_test/install_packages.sh`; on Windows, point `pip` at a wheel or extend `PYTHONPATH` to the Melexis IDE site-packages).
- Make sure the MUM is reachable: `ping 192.168.7.2`.
- Select a config that defines `interface.type: mum` plus `host`/`lin_device`/`power_device`.
$env:ECU_TESTS_CONFIG = ".\config\mum.example.yaml"
# Run only the MUM-marked hardware tests
pytest -m "hardware and mum" -v
# Run a single MUM test by file
pytest tests\hardware\test_e2e_mum_led_activate.py -q
Tips:
- The MUM owns ECU power on `power_out0`; it powers up automatically in `connect()` and powers down on `disconnect()`. The Owon PSU is independent and can be left disabled (`power_supply.enabled: false`).
- The MUM is master-driven: `lin.receive(id)` requires a frame ID. The default `frame_lengths` covers ALM_Status (4 B) and ALM_Req_A (8 B); add others in YAML when you need slave-published frames at non-standard lengths.
- For BSM-SNPD diagnostic frames (service ID 0xB5), use `lin.send_raw(bytes)`: it routes through the transport layer's `ld_put_raw`, which uses the LIN 1.x Classic checksum. `send()` uses the Enhanced checksum, and the firmware will reject these frames.
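The checksum difference behind the last tip can be made concrete. The helper below is illustrative only (not the framework's code): a LIN checksum is an 8-bit sum with carry wraparound, inverted at the end; Classic sums the data bytes only, while Enhanced also folds in the protected ID, which is why `send()` produces bytes the firmware rejects for these diagnostic frames.

```python
def lin_checksum(data, pid=None):
    """LIN checksum: Classic sums data only; Enhanced (pid given) folds in the PID."""
    total = pid if pid is not None else 0
    for b in data:
        total += b
        if total > 0xFF:        # carry wraps around (add-with-carry)
            total -= 0xFF
    return (~total) & 0xFF      # invert and truncate to one byte
```

For the same payload, `lin_checksum(data)` (Classic) and `lin_checksum(data, pid=...)` (Enhanced) generally differ, so the two send paths are not interchangeable.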
Running on hardware (BabyLIN SDK wrapper — legacy)
- Place SDK files per `vendor/README.md`.
- Select a config that defines `interface.type: babylin`, `sdf_path`, and `schedule_nr`.
- Markers allow restricting runs to hardware tests.
$env:ECU_TESTS_CONFIG = ".\config\babylin.example.yaml"
# Run only hardware tests
pytest -m "hardware and babylin"
# Run the schedule smoke only
pytest tests\test_babylin_hardware_schedule_smoke.py -q
Tips:
- If multiple devices are attached, update your config to select the desired port (future enhancement) or keep only one connected.
- On timeout, tests often accept None to avoid flakiness; increase timeouts if your bus is slow.
- Master request behavior: the adapter prefers `BLC_sendRawMasterRequest(channel, id, length)`; it falls back to the bytes variant or a header+receive strategy as needed. The mock covers both forms.
- `interface.schedule_nr: -1` defers schedule start to the test code (useful when a test wants to pick a specific schedule by name via `lin.start_schedule("CCO")`).
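The deferred-schedule and tolerant-receive tips combine into a common pattern, sketched below. The `StubLin` class, the frame ID, and the `timeout` keyword are assumptions for illustration; in a real test you would use the framework's `lin` fixture and call `pytest.skip(...)` instead of returning `None`.

```python
class StubLin:
    """Illustrative stand-in for the adapter (not the real class)."""
    def start_schedule(self, name):
        self.schedule = name            # on hardware: starts the named schedule
    def receive(self, frame_id, timeout=2.0):
        return None                     # simulate a quiet or slow bus

def probe_status(lin, frame_id=0x10, timeout=2.0):
    """Start the 'CCO' schedule, then read one frame; None means timeout."""
    lin.start_schedule("CCO")           # schedule name taken from the tip above
    frame = lin.receive(frame_id, timeout=timeout)
    if frame is None:
        return None                     # in pytest: pytest.skip("bus quiet")
    return frame
```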
Selecting tests with markers
Markers in use:
- `smoke`: quick confidence tests
- `hardware`: needs a real device (any LIN master)
- `mum`: targets the Melexis Universal Master adapter (current default)
- `babylin`: targets the legacy BabyLIN SDK adapter
- `unit`: pure unit tests (no hardware, no external I/O)
- `req_XXX`: requirement mapping (e.g., `@pytest.mark.req_001`)
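A test combining these markers might look like the following; the test name and body are illustrative, and `req_001` mirrors the `@pytest.mark.req_001` example from the marker list.

```python
import pytest

@pytest.mark.smoke
@pytest.mark.req_001
def test_led_basic():
    # Illustrative body; a real test would exercise the LIN interface.
    assert True
```

Such a test is then selectable with `pytest -m smoke` or `pytest -m "smoke and not hardware"`.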
Examples:
# Only smoke tests (mock + hardware smoke)
pytest -m smoke
# Requirements-based selection (docstrings and markers are normalized)
pytest -k REQ-001
Enriched reporting
- HTML report includes custom columns (Title, Requirements)
- JUnit XML written for CI
- `reports/requirements_coverage.json` maps requirement IDs to tests and lists unmapped tests
- `reports/summary.md` aggregates key counts (pass/fail/etc.)
See docs/03_reporting_and_metadata.md and docs/11_conftest_plugin_overview.md.
To verify the reporting pipeline end-to-end, run the plugin self-test:
python -m pytest tests\plugin\test_conftest_plugin_artifacts.py -q
To generate two separate HTML/JUnit reports (unit vs non-unit):
./scripts/run_two_reports.ps1
Writing well-documented tests
Use a docstring template so the plugin can extract metadata:
"""
Title: <short title>
Description:
<what the test validates and why>
Requirements: REQ-001, REQ-002
Test Steps:
1. <step one>
2. <step two>
Expected Result:
<succinct expected outcome>
"""
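To illustrate how such a template can be consumed, here is a minimal extraction sketch. The real parsing lives in the conftest plugin; this regex-based helper and its field handling are assumptions for illustration.

```python
import re

def extract_metadata(docstring):
    """Pull Title and Requirements out of a docstring following the template."""
    meta = {"title": None, "requirements": []}
    m = re.search(r"^\s*Title:\s*(.+)$", docstring, re.MULTILINE)
    if m:
        meta["title"] = m.group(1).strip()
    m = re.search(r"^\s*Requirements:\s*(.+)$", docstring, re.MULTILINE)
    if m:
        meta["requirements"] = [r.strip() for r in m.group(1).split(",")]
    return meta
```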
Tip: For runtime properties in reports, prefer the shared `rp` fixture (a wrapper around `record_property`) and use the standardized keys from docs/15_report_properties_cheatsheet.md.
Continuous Integration (CI)
- Run `pytest` with your preferred markers in your pipeline.
- Publish artifacts from `reports/` (HTML, JUnit, coverage JSON, summary.md).
- Optionally parse `requirements_coverage.json` to power dashboards and gates.
Example PowerShell (local CI mimic):
# Run smoke tests and collect reports
pytest -m smoke --maxfail=1 -q
Raspberry Pi / Headless usage
- Follow `docs/09_raspberry_pi_deployment.md` to set up a venv and systemd service
- For a golden-image approach, see `docs/10_build_custom_image.md`
Running tests headless via systemd typically involves:
- A service that sets `ECU_TESTS_CONFIG` to a hardware YAML
- Running `pytest -m "hardware and mum"` (or `"hardware and babylin"`) on boot or via a timer
Troubleshooting quick hits
- ImportError for `pylin`/`pymumclient`: install the Melexis packages (`vendor/automated_lin_test/install_packages.sh`); the MUM adapter raises a clear error pointing at this script.
- "interface.host is required when interface.type == 'mum'": set `interface.host` in YAML.
- MUM unreachable: `ping 192.168.7.2`; check the USB-RNDIS link.
- ImportError for `BabyLIN_library`: verify placement under `vendor/` and that the native library is present.
- No BabyLIN devices found: check the USB connection, drivers, and permissions.
- Timeouts on receive: increase `timeout` or verify schedule activity and SDF correctness.
- Missing reports: ensure `pytest.ini` includes the HTML/JUnit plugins and that the custom plugin is loaded.
Power supply (Owon) hardware test
Enable `power_supply` in your config and set the serial port, then run the dedicated test or the quick demo script.
copy .\config\owon_psu.example.yaml .\config\owon_psu.yaml
# edit COM port in .\config\owon_psu.yaml or set values in config\test_config.yaml
pytest -k test_owon_psu_idn_and_optional_set -m hardware -q
python .\vendor\Owon\owon_psu_quick_demo.py
See also: docs/14_power_supply.md for details and troubleshooting.