# Using the ECU Test Framework

This guide shows common ways to run the test framework, from fast local mock runs to full hardware loops, CI, and Raspberry Pi deployments. Commands are shown for Windows PowerShell.

## Prerequisites

  • Python 3.x and a virtual environment
  • Dependencies installed (see requirements.txt)
  • Optional: BabyLIN SDK files placed under vendor/ as described in vendor/README.md when running hardware tests

## Configuring tests

  • Configuration is loaded from YAML files and can be selected via the environment variable ECU_TESTS_CONFIG (a resolution sketch follows this list).
  • See docs/02_configuration_resolution.md for details and examples.
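
In practice the lookup is simple: read the environment variable, fall back to a default, and load the YAML file. A minimal sketch, assuming PyYAML is available; the `load_config` helper and the default path are illustrative, not the framework's actual code:

```python
import os

import yaml  # PyYAML


def load_config(default_path: str = "config/mock.yml") -> dict:
    """Resolve the active config file from ECU_TESTS_CONFIG (illustrative helper)."""
    path = os.environ.get("ECU_TESTS_CONFIG", default_path)
    with open(path, "r", encoding="utf-8") as fh:
        return yaml.safe_load(fh)


cfg = load_config()
print(cfg.get("interface", {}).get("type"))  # e.g. "mock" or "babylin"
```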

Example PowerShell:

```powershell
# Use a mock-only config for fast local runs
$env:ECU_TESTS_CONFIG = ".\config\mock.yml"

# Use a hardware config with BabyLIN SDK wrapper
$env:ECU_TESTS_CONFIG = ".\config\hardware_babylin.yml"
```

Quick try with the provided examples:

```powershell
# Point to the combined examples file
$env:ECU_TESTS_CONFIG = ".\config\examples.yaml"
# The 'active' section defaults to the mock profile; run non-hardware tests
pytest -m "not hardware" -v
# Edit 'active' to the babylin profile (or point to babylin.example.yaml) and run hardware tests
```

## Running locally (mock interface)

Use the mock interface to develop tests quickly without hardware:

```powershell
# Run all mock tests with HTML and JUnit outputs (see pytest.ini defaults)
pytest

# Run only smoke tests (mock) and show progress
pytest -m smoke -q

# Filter by test file or node id
pytest tests\test_smoke_mock.py::TestMockLinInterface::test_mock_send_receive_echo -q
```

What you get:

  • Fast execution, deterministic results
  • Reports in reports/ (HTML, JUnit, coverage JSON, CI summary)

Open the HTML report on Windows:

```powershell
start .\reports\report.html
```

## Running on hardware (BabyLIN SDK wrapper)

  1. Place SDK files per vendor/README.md.
  2. Select a config that defines interface.type: babylin, sdf_path, and schedule_nr.
  3. Use markers to restrict the run to hardware tests.

```powershell
# Example environment selection
$env:ECU_TESTS_CONFIG = ".\config\babylin.example.yaml"

# Run only hardware tests
pytest -m "hardware and babylin"

# Run the schedule smoke only
pytest tests\test_babylin_hardware_schedule_smoke.py -q
```

Tips:

  • If multiple devices are attached, update your config to select the desired port (future enhancement) or keep only one connected.
  • On timeout, tests often accept None to avoid flakiness; increase timeouts if your bus is slow.
  • Master request behavior: the adapter prefers BLC_sendRawMasterRequest(channel, id, length); it falls back to the bytes variant or a header+receive strategy as needed. The mock covers both forms (the fallback order is sketched below).
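
The following is only a conceptual sketch of that fallback order. Apart from BLC_sendRawMasterRequest, the function and method names are illustrative placeholders, not the adapter's actual code:

```python
def send_master_request(sdk, channel, frame_id, length, data=None):
    """Illustrative fallback order for a master request (not the real adapter)."""
    # Preferred: length-based raw master request, as noted above.
    try:
        return sdk.BLC_sendRawMasterRequest(channel, frame_id, length)
    except (AttributeError, TypeError):
        pass
    # Fallback 1: a bytes-based variant of the call (hypothetical name).
    if data is not None and hasattr(sdk, "BLC_sendRawMasterRequestBytes"):
        return sdk.BLC_sendRawMasterRequestBytes(channel, frame_id, bytes(data))
    # Fallback 2: send only the header, then collect the response separately
    # (send_header / receive_frame are hypothetical helpers).
    sdk.send_header(channel, frame_id)
    return sdk.receive_frame(channel, frame_id, timeout=1.0)
```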

## Selecting tests with markers

Markers in use:

  • smoke: quick confidence tests
  • hardware: needs real device
  • babylin: targets the BabyLIN SDK adapter
  • req_XXX: requirement mapping (e.g., @pytest.mark.req_001)

Examples:

```powershell
# Only smoke tests (mock + hardware smoke)
pytest -m smoke

# Requirements-based selection (docstrings and markers are normalized)
pytest -k REQ-001
```
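
In test code these markers are ordinary pytest marks, so `-m` and `-k` selection works on exactly these names. A minimal sketch (the test body is a placeholder):

```python
import pytest


@pytest.mark.smoke
@pytest.mark.req_001
def test_quick_confidence_check():
    """Title: Quick confidence check

    Requirements: REQ-001
    """
    assert 1 + 1 == 2  # placeholder body
```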

## Enriched reporting

  • HTML report includes custom columns (Title, Requirements)
  • JUnit XML written for CI
  • reports/requirements_coverage.json maps requirement IDs to tests and lists unmapped tests
  • reports/summary.md aggregates key counts (pass/fail/etc.)

See docs/03_reporting_and_metadata.md and docs/11_conftest_plugin_overview.md.

To verify the reporting pipeline end-to-end, run the plugin self-test:

```powershell
python -m pytest tests\plugin\test_conftest_plugin_artifacts.py -q
```

To generate two separate HTML/JUnit reports (unit vs non-unit):

```powershell
./scripts/run_two_reports.ps1
```

## Writing well-documented tests

Use a docstring template so the plugin can extract metadata:

"""
Title: <short title>

Description:
    <what the test validates and why>

Requirements: REQ-001, REQ-002

Test Steps:
    1. <step one>
    2. <step two>

Expected Result:
    <succinct expected outcome>
"""

## Continuous Integration (CI)

  • Run pytest with your preferred markers in your pipeline.
  • Publish artifacts from reports/ (HTML, JUnit, coverage JSON, summary.md).
  • Optionally parse requirements_coverage.json to power dashboards and gates; a minimal parsing sketch follows this list.
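
For example, a CI step could fail the build when tests lack a requirement mapping. The sketch below assumes the JSON exposes an "unmapped_tests" list, in line with the description above; the actual schema may differ:

```python
import json
import sys
from pathlib import Path

# Gate: fail the build if any test has no requirement mapping.
# The "unmapped_tests" key is an assumption about the report's schema.
coverage = json.loads(Path("reports/requirements_coverage.json").read_text(encoding="utf-8"))
unmapped = coverage.get("unmapped_tests", [])
if unmapped:
    print(f"{len(unmapped)} test(s) have no requirement mapping:")
    for test_id in unmapped:
        print(f"  - {test_id}")
    sys.exit(1)
print("All tests are mapped to requirements.")
```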

Example PowerShell (local CI mimic):

```powershell
# Run smoke tests and collect reports
pytest -m smoke --maxfail=1 -q
```

## Raspberry Pi / Headless usage

  • Follow docs/09_raspberry_pi_deployment.md to set up a venv and systemd service
  • For a golden image approach, see docs/10_build_custom_image.md

Running tests headless via systemd typically involves:

  • A service that sets ECU_TESTS_CONFIG to a hardware YAML
  • Running pytest -m "hardware and babylin" on boot or via timer

## Troubleshooting quick hits

  • ImportError for BabyLIN_library: verify that the SDK files are placed under vendor/ and that the native library is present.
  • No BabyLIN devices found: check USB connection, drivers, and permissions.
  • Timeouts on receive: increase timeout or verify schedule activity and SDF correctness.
  • Missing reports: ensure pytest.ini includes the HTML/JUnit plugins and the custom plugin is loaded.