ecu-tests/tests/hardware/mum/test_mum_alm_cases.py
Hosam-Eldin Mostafa 8fa4cf0be1 refactor(tests): layer fixtures by adapter type (mum/psu/babylin)
Restructures tests/hardware/ so that fixture access is controlled by
directory layout — pytest only walks upward through conftest.py files,
so a PSU test physically cannot request fio/alm/nad.

Layout:
- tests/hardware/conftest.py           (unchanged: PSU fixtures)
- tests/hardware/mum/conftest.py       NEW: _require_mum (session autouse),
                                       fio (session), nad (session),
                                       alm (session), _reset_to_off
                                       (function autouse)
- tests/hardware/mum/**                MUM tests + swe5/ + swe6/
- tests/hardware/psu/**                PSU-only tests
- tests/hardware/babylin/**            deprecated BabyLIN E2E

What this removes (was duplicated before):
- 7 verbatim copies of the `fio` fixture
- 6 copies of the `alm` fixture
- 6 copies of the `_reset_to_off` autouse
- 9 inline `if config.interface.type != "mum": pytest.skip(...)` gates

What this changes by design:
- fio / alm / nad scope: module → session. NAD discovery happens once
  per run instead of once per module. The helpers are immutable beyond
  their constructor args, so sharing them is safe; per-test state is
  reset by the autouse `_reset_to_off`.
- test_overvolt.py: `_park_at_nominal` is now `_reset_to_off`, which
  cleanly overrides the conftest's LED-only version (PSU + LED reset).
- test_mum_alm_animation_generated.py keeps a local `_reset_to_off` +
  `_force_off` so its "no AlmTester anywhere" demonstration is preserved
  via fixture override; the local `nad` is also retained because it
  uses the typed `AlmStatus.receive` API.

Docs:
- docs/24_test_wiring.md NEW — describes the three-layer fixture
  topology, lifecycle sequence diagram, helper class wiring, and the
  playbook for adding a new framework component.
- docs/05_architecture_overview.md: add MCF (mum conftest) node to the
  Mermaid diagram + mention it in the components list.
- docs/19_frame_io_and_alm_helpers.md: replace the per-module
  fixture-wiring example with a request-fixtures-by-name snippet plus
  the override pattern.
- Path references swept across docs/02, docs/14, docs/18, docs/20,
  docs/README to point at the new locations.

Verified: pytest --collect-only collects 93 tests with no errors;
30 unit tests and 10 mock-only smoke tests pass; fixture-per-test
output shows PSU tests cannot see fio/alm/nad.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 19:43:09 +02:00

299 lines
12 KiB
Python

"""POC — data-driven ALM_Req_A tests via an :class:`AlmCase` dataclass.
A single test function (:func:`test_alm`) is parametrized over a list
of :class:`AlmCase` instances. Each instance carries:
- identity & reporting metadata (id, title, description, requirements,
severity, story, tags)
- inputs to ``ALM_Req_A`` (RGB, intensity, mode, update, duration,
LID range)
- expected outcome (state to reach OR "must not transition", PWM-check
flags, timeouts)
- a ``run(fio, alm, rp)`` method that executes the case end-to-end
Compared with :mod:`test_mum_alm_animation` (one ``def`` per case):
- **Adding a new case is one Python literal**, not a new function +
duplicated boilerplate.
- The shape of every case is *visible* on the page — easy to scan
a coverage matrix at a glance.
- Cross-cutting changes (e.g. "all cases should also assert the
measured NTC is plausible") happen in one place, the runner.
- Trade-off: less freedom for a single case to do something
one-of-a-kind. When a case needs custom behaviour the dataclass
can be subclassed, or that case stays as a hand-written
``def test_xyz`` in the original file.
The fixtures and the docstring-derived metadata mirror what
``test_mum_alm_animation.py`` does — this is purely a re-arrangement
of the same domain logic. Per-case identity/severity attributes are
recorded via ``rp(...)`` so they show up in the JUnit XML and the
HTML report's metadata columns.
"""
from __future__ import annotations

import time
from dataclasses import dataclass, field
from typing import Optional

import pytest

from frame_io import FrameIO
from alm_helpers import (
    AlmTester,
    LED_STATE_OFF, LED_STATE_ANIMATING, LED_STATE_ON,
    STATE_POLL_INTERVAL, STATE_TIMEOUT_DEFAULT,
)

pytestmark = [pytest.mark.ANM]
# ╔══════════════════════════════════════════════════════════════════════╗
# ║ AlmCase — attributes + methods that describe ONE test scenario ║
# ╚══════════════════════════════════════════════════════════════════════╝
@dataclass
class AlmCase:
    """One end-to-end ALM_Req_A scenario.

    Attribute groups (matching the four-phase pattern):

    Identity         : ``id``, ``title``, ``description``,
                       ``requirements``, ``severity``, ``story``,
                       ``tags``
    ALM_Req_A inputs : ``r``, ``g``, ``b``, ``intensity``, ``update``,
                       ``mode``, ``duration``, ``lid_from``, ``lid_to``
                       (``None`` LID values default to ``alm.nad``)
    Expectations     : ``expect_transition``, ``expected_led_state``,
                       ``state_timeout_s``, ``check_pwm_comp``,
                       ``check_pwm_wo_comp``
    """
    # ── Identity / reporting ────────────────────────────────────────────
    id: str
    title: str
    description: str
    requirements: list[str] = field(default_factory=list)
    severity: str = "normal"
    story: str = "AmbLightMode"
    tags: list[str] = field(default_factory=list)

    # ── Inputs to ALM_Req_A ─────────────────────────────────────────────
    r: int = 0
    g: int = 0
    b: int = 0
    intensity: int = 0
    update: int = 0
    mode: int = 0
    duration: int = 0
    lid_from: Optional[int] = None  # None → use alm.nad
    lid_to: Optional[int] = None  # None → use alm.nad

    # ── Expected outcome ────────────────────────────────────────────────
    # When True:  wait until ALMLEDState reaches `expected_led_state`.
    # When False: poll for `state_timeout_s` and assert the state never
    #             entered ANIMATING or ON (the "Save / invalid LID"
    #             pattern: the request must be ignored).
    expect_transition: bool = True
    expected_led_state: int = LED_STATE_ON
    state_timeout_s: float = STATE_TIMEOUT_DEFAULT

    # PWM checks only meaningful when expect_transition=True and we
    # reached LED_ON — they validate the rgb_to_pwm calculator output.
    check_pwm_comp: bool = False
    check_pwm_wo_comp: bool = False
    # ── Methods (the four phases live here) ─────────────────────────────
    def record_metadata(self, rp) -> None:
        """Stamp the per-case identity attributes onto the report.

        Recorded as JUnit ``<property>`` entries via the ``rp(...)``
        helper from ``tests/conftest.py``. The HTML report's metadata
        columns pick these up.
        """
        rp("case_id", self.id)
        rp("case_title", self.title)
        rp("case_story", self.story)
        rp("case_severity", self.severity)
        if self.tags:
            rp("case_tags", ", ".join(self.tags))
        if self.requirements:
            rp("case_requirements", ", ".join(self.requirements))

    def send(self, fio: FrameIO, default_nad: int) -> None:
        """Issue ALM_Req_A for this case; resolves None LIDs to ``default_nad``."""
        lid_from = self.lid_from if self.lid_from is not None else default_nad
        lid_to = self.lid_to if self.lid_to is not None else default_nad
        fio.send(
            "ALM_Req_A",
            AmbLightColourRed=self.r,
            AmbLightColourGreen=self.g,
            AmbLightColourBlue=self.b,
            AmbLightIntensity=self.intensity,
            AmbLightUpdate=self.update,
            AmbLightMode=self.mode,
            AmbLightDuration=self.duration,
            AmbLightLIDFrom=lid_from,
            AmbLightLIDTo=lid_to,
        )
    def assert_state(self, alm: AlmTester, rp) -> None:
        """Either wait for the target state, or watch that nothing happens."""
        if self.expect_transition:
            reached, elapsed, history = alm.wait_for_state(
                self.expected_led_state, timeout=self.state_timeout_s
            )
            rp("led_state_history", history)
            rp("on_elapsed_s", round(elapsed, 3))
            assert reached, (
                f"LEDState never reached {self.expected_led_state} "
                f"(history: {history})"
            )
        else:
            deadline = time.monotonic() + self.state_timeout_s
            history: list[int] = []
            while time.monotonic() < deadline:
                st = alm.read_led_state()
                if not history or history[-1] != st:
                    history.append(st)
                time.sleep(STATE_POLL_INTERVAL)
            rp("led_state_history", history)
            assert LED_STATE_ANIMATING not in history, (
                f"State unexpectedly entered ANIMATING: {history}"
            )
            assert LED_STATE_ON not in history, (
                f"State unexpectedly drove LED ON: {history}"
            )
    def assert_pwm(self, alm: AlmTester, rp) -> None:
        """Run whichever PWM assertions the case enabled."""
        if self.check_pwm_comp:
            alm.assert_pwm_matches_rgb(rp, self.r, self.g, self.b)
        if self.check_pwm_wo_comp:
            alm.assert_pwm_wo_comp_matches_rgb(rp, self.r, self.g, self.b)

    def run(self, fio: FrameIO, alm: AlmTester, rp) -> None:
        """Full case execution. Called from the parametrized test body."""
        self.record_metadata(rp)
        rp("rgb_in", (self.r, self.g, self.b))
        rp("intensity", self.intensity)
        rp("mode", self.mode)
        rp("update", self.update)

        self.send(fio, default_nad=alm.nad)
        self.assert_state(alm, rp)

        # PWM checks only meaningful for cases that reach LED_ON
        if (self.expect_transition
                and self.expected_led_state == LED_STATE_ON
                and (self.check_pwm_comp or self.check_pwm_wo_comp)):
            self.assert_pwm(alm, rp)
# ╔══════════════════════════════════════════════════════════════════════╗
# ║ The case matrix ║
# ╚══════════════════════════════════════════════════════════════════════╝
# Each entry is one test row in the report. Adding a new case is just
# appending another AlmCase(...) literal here — no new function body
# needed. Inputs and expectations sit side by side so reviewers can
# scan a coverage matrix at a glance.
ALM_CASES: list[AlmCase] = [
    AlmCase(
        id="VTD_ANM_0001",
        title="Mode 0 — Immediate setpoint reaches LED_ON; PWM matches calculator",
        description=(
            "AmbLightMode=0 jumps directly to the requested colour at "
            "full intensity. ALMLEDState should reach LED_ON quickly "
            "and both PWM frames should match rgb_to_pwm.compute_pwm()."
        ),
        requirements=["REQ-ANM-00001"],
        severity="critical",
        story="AmbLightMode",
        tags=["AmbLightMode", "Mode0", "PWM"],
        r=0, g=180, b=80, intensity=255,
        update=0, mode=0, duration=10,
        expected_led_state=LED_STATE_ON,
        check_pwm_comp=True,
        check_pwm_wo_comp=True,
    ),
    AlmCase(
        id="VTD_LID_0002",
        title="LID broadcast (0x00..0xFF) reaches this node",
        description=(
            "A broadcast LID range should include any NAD; this node "
            "should react and drive the LED ON."
        ),
        requirements=["REQ-LID-00002"],
        severity="normal",
        story="LID range",
        tags=["LID", "Broadcast"],
        r=120, g=0, b=255, intensity=255,
        update=0, mode=0, duration=0,
        lid_from=0x00, lid_to=0xFF,
        expected_led_state=LED_STATE_ON,
    ),
    AlmCase(
        id="VTD_LID_0003",
        title="LID From > To is rejected (no LED change)",
        description=(
            "An ill-formed LID range (From > To) should be ignored; "
            "ALMLEDState must remain at the OFF baseline for the watch "
            "window."
        ),
        requirements=["REQ-LID-00003"],
        severity="normal",
        story="LID range",
        tags=["LID", "Negative"],
        r=255, g=255, b=255, intensity=255,
        update=0, mode=0, duration=0,
        lid_from=0x14, lid_to=0x0A,
        expect_transition=False,
        state_timeout_s=1.0,
    ),
    AlmCase(
        id="VTD_LID_0004",
        title="Update=1 (Save) does not change LED state",
        description=(
            "With AmbLightUpdate=1 the ECU should buffer the command "
            "without executing it; ALMLEDState must remain at OFF."
        ),
        requirements=["REQ-UPDATE-00004"],
        severity="normal",
        story="AmbLightUpdate",
        tags=["AmbLightUpdate", "Save"],
        r=0, g=255, b=0, intensity=255,
        update=1, mode=1, duration=10,
        expect_transition=False,
        state_timeout_s=1.0,
    ),
]
# Fixtures (fio, alm, _reset_to_off) and the MUM gate come from
# tests/hardware/mum/conftest.py.
# ╔══════════════════════════════════════════════════════════════════════╗
# ║ The single parametrized runner ║
# ╚══════════════════════════════════════════════════════════════════════╝
@pytest.mark.parametrize(
    "case",
    ALM_CASES,
    ids=[c.id for c in ALM_CASES],  # nice short IDs in the pytest CLI
)
def test_alm(case: AlmCase, fio: FrameIO, alm: AlmTester, rp):
    """Execute one :class:`AlmCase` end-to-end.

    The body is intentionally a one-liner — every per-case decision
    (which signals to send, what to assert, which PWM checks to run)
    lives on the case object itself. Adding new coverage means
    appending another AlmCase to ALM_CASES; no new test function needed.
    """
    case.run(fio, alm, rp)