ecu-tests/tests/hardware/mum/test_mum_alm_cases.py
Hosam-Eldin Mostafa · 08247f9321
refactor(tests): AlmTester as the single contributor-facing API
Extends ``tests/hardware/alm_helpers.py`` into the full surface that
hardware tests use, so contributors write intent (``alm.send_color``,
``alm.read_led_state``, ``alm.wait_for_led_on``) and never touch
``fio.send("ALM_Req_A", AmbLight…=…)`` or LDF schema details.
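Illustrative before/after (a sketch only — the schema kwargs stay elided
as above, and the RGB values are arbitrary):

    # before — raw FrameIO, coupled to the LDF schema:
    fio.send("ALM_Req_A", ...)   # AmbLight… kwargs elided
    # after — intent through AlmTester:
    alm.send_color(red=255, green=0, blue=0, intensity=255)
    alm.wait_for_led_on()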

What landed:

- AlmTester gains ~16 methods:
    read_nad, read_voltage_status, read_thermal_status, read_nvm_status,
    read_sig_comm_err, read_ntc_kelvin, read_ntc_celsius, read_pwm,
    read_pwm_wo_comp, send_color, send_color_broadcast, save_color,
    apply_saved_color, discard_saved_color, send_config, plus
    wait_for_led_on / wait_for_led_off / wait_for_animating wrappers.
- The six IntEnum classes that ALM tests need (LedState, Mode, Update,
  NVMStatus, VoltageStatus, ThermalStatus) are defined directly in
  alm_helpers.py — tests get them via `from alm_helpers import …`.
- All ALM test files migrated:
    test_mum_alm_animation.py, test_mum_alm_cases.py, test_overvolt.py,
    swe5/test_anm_management.py, swe5/test_com_management.py
    each now go through AlmTester for every common pattern.
- swe6/test_com_management.py: stays on `fio` (these tests probe
  schema features not in the current production LDF and skip when
  the LDF doesn't declare them) — change limited to LedState enum.
- test_mum_alm_animation_generated.py deleted — its "no-AlmTester"
  demonstration loses its point now that AlmTester is the
  recommended path.
- docs/19_frame_io_and_alm_helpers.md reframed: AlmTester is the
  contributor surface; FrameIO is implementation detail. New API
  reference + Cookbook examples + a note that the maintenance pact
  is "LDF changes → AlmTester updates".

Verified: pytest --collect-only collects 87 tests cleanly; 40 unit
+ mock smoke tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-15 01:23:52 +02:00


"""POC — data-driven ALM_Req_A tests via an :class:`AlmCase` dataclass.

A single test function (:func:`test_alm`) is parametrized over a list
of :class:`AlmCase` instances. Each instance carries:

- identity & reporting metadata (id, title, description, requirements,
  severity, story, tags)
- inputs to ``ALM_Req_A`` (RGB, intensity, mode, update, duration,
  LID range)
- expected outcome (state to reach OR "must not transition", PWM-check
  flags, timeouts)
- a ``run(alm, rp)`` method that executes the case end-to-end

Compared with :mod:`test_mum_alm_animation` (one ``def`` per case):

- **Adding a new case is one Python literal**, not a new function +
  duplicated boilerplate.
- The shape of every case is *visible* on the page — easy to scan
  a coverage matrix at a glance.
- Cross-cutting changes (e.g. "all cases should also assert the
  measured NTC is plausible") happen in one place, the runner.
- Trade-off: less freedom for a single case to do something
  one-of-a-kind. When a case needs custom behaviour the dataclass
  can be subclassed, or that case stays as a hand-written
  ``def test_xyz`` in the original file.

The fixtures and the docstring-derived metadata mirror what
``test_mum_alm_animation.py`` does — this is purely a re-arrangement
of the same domain logic. Per-case identity/severity attributes are
recorded via ``rp(...)`` so they show up in the JUnit XML and the
HTML report's metadata columns.
"""
from __future__ import annotations
import time
from dataclasses import dataclass, field
from typing import Optional
import pytest
from alm_helpers import (
    AlmTester,
    LedState,
    STATE_POLL_INTERVAL,
    STATE_TIMEOUT_DEFAULT,
)
pytestmark = [pytest.mark.ANM]
# ╔══════════════════════════════════════════════════════════════════════╗
# ║ AlmCase — attributes + methods that describe ONE test scenario ║
# ╚══════════════════════════════════════════════════════════════════════╝
@dataclass
class AlmCase:
    """One end-to-end ALM_Req_A scenario.

    Attribute groups (matching the four-phase pattern):

    Identity         : ``id``, ``title``, ``description``,
                       ``requirements``, ``severity``, ``story``,
                       ``tags``
    ALM_Req_A inputs : ``r``, ``g``, ``b``, ``intensity``, ``update``,
                       ``mode``, ``duration``, ``lid_from``, ``lid_to``
                       (``None`` LID values default to ``alm.nad``)
    Expectations     : ``expect_transition``, ``expected_led_state``,
                       ``state_timeout_s``, ``check_pwm_comp``,
                       ``check_pwm_wo_comp``
    """
    # ── Identity / reporting ────────────────────────────────────────────
    id: str
    title: str
    description: str
    requirements: list[str] = field(default_factory=list)
    severity: str = "normal"
    story: str = "AmbLightMode"
    tags: list[str] = field(default_factory=list)

    # ── Inputs to ALM_Req_A ─────────────────────────────────────────────
    r: int = 0
    g: int = 0
    b: int = 0
    intensity: int = 0
    update: int = 0
    mode: int = 0
    duration: int = 0
    lid_from: Optional[int] = None  # None → use alm.nad
    lid_to: Optional[int] = None    # None → use alm.nad

    # ── Expected outcome ────────────────────────────────────────────────
    # When True:  wait until ALMLEDState reaches `expected_led_state`.
    # When False: poll for `state_timeout_s` and assert the state never
    #             entered ANIMATING or ON (the "Save / invalid LID"
    #             pattern: the request must be ignored).
    expect_transition: bool = True
    expected_led_state: int = LedState.LED_ON
    state_timeout_s: float = STATE_TIMEOUT_DEFAULT

    # PWM checks only meaningful when expect_transition=True and we
    # reached LED_ON — they validate the rgb_to_pwm calculator output.
    check_pwm_comp: bool = False
    check_pwm_wo_comp: bool = False
    # ── Methods (the four phases live here) ─────────────────────────────
    def record_metadata(self, rp) -> None:
        """Stamp the per-case identity attributes onto the report.

        Recorded as JUnit ``<property>`` entries via the ``rp(...)``
        helper from ``tests/conftest.py``. The HTML report's metadata
        columns pick these up.
        """
        rp("case_id", self.id)
        rp("case_title", self.title)
        rp("case_story", self.story)
        rp("case_severity", self.severity)
        if self.tags:
            rp("case_tags", ", ".join(self.tags))
        if self.requirements:
            rp("case_requirements", ", ".join(self.requirements))

    def send(self, alm: AlmTester) -> None:
        """Issue ALM_Req_A for this case via ``alm.send_color``.

        Unset (``None``) ``lid_from`` / ``lid_to`` resolve to ``alm.nad``
        inside :meth:`AlmTester.send_color` — no explicit fallback needed.
        """
        alm.send_color(
            red=self.r, green=self.g, blue=self.b,
            intensity=self.intensity,
            update=self.update,
            mode=self.mode,
            duration=self.duration,
            lid_from=self.lid_from,
            lid_to=self.lid_to,
        )

    def assert_state(self, alm: AlmTester, rp) -> None:
        """Either wait for the target state, or watch that nothing happens."""
        if self.expect_transition:
            reached, elapsed, history = alm.wait_for_state(
                self.expected_led_state, timeout=self.state_timeout_s
            )
            rp("led_state_history", history)
            rp("on_elapsed_s", round(elapsed, 3))
            assert reached, (
                f"LEDState never reached {self.expected_led_state} "
                f"(history: {history})"
            )
        else:
            deadline = time.monotonic() + self.state_timeout_s
            history: list[int] = []
            while time.monotonic() < deadline:
                st = alm.read_led_state()
                if not history or history[-1] != st:
                    history.append(st)
                time.sleep(STATE_POLL_INTERVAL)
            rp("led_state_history", history)
            assert LedState.LED_ANIMATING not in history, (
                f"State unexpectedly entered ANIMATING: {history}"
            )
            assert LedState.LED_ON not in history, (
                f"State unexpectedly drove LED ON: {history}"
            )

    def assert_pwm(self, alm: AlmTester, rp) -> None:
        """Run whichever PWM assertions the case enabled."""
        if self.check_pwm_comp:
            alm.assert_pwm_matches_rgb(rp, self.r, self.g, self.b)
        if self.check_pwm_wo_comp:
            alm.assert_pwm_wo_comp_matches_rgb(rp, self.r, self.g, self.b)

    def run(self, alm: AlmTester, rp) -> None:
        """Full case execution. Called from the parametrized test body."""
        self.record_metadata(rp)
        rp("rgb_in", (self.r, self.g, self.b))
        rp("intensity", self.intensity)
        rp("mode", self.mode)
        rp("update", self.update)
        self.send(alm)
        self.assert_state(alm, rp)
        # PWM checks only meaningful for cases that reach LED_ON
        if (self.expect_transition
                and self.expected_led_state == LedState.LED_ON
                and (self.check_pwm_comp or self.check_pwm_wo_comp)):
            self.assert_pwm(alm, rp)
# ╔══════════════════════════════════════════════════════════════════════╗
# ║ The case matrix ║
# ╚══════════════════════════════════════════════════════════════════════╝
# Each entry is one test row in the report. Adding a new case is just
# appending another AlmCase(...) literal here — no new function body
# needed. Inputs and expectations sit side by side so reviewers can
# scan a coverage matrix at a glance.
ALM_CASES: list[AlmCase] = [
    AlmCase(
        id="VTD_ANM_0001",
        title="Mode 0 — Immediate setpoint reaches LED_ON; PWM matches calculator",
        description=(
            "AmbLightMode=0 jumps directly to the requested colour at "
            "full intensity. ALMLEDState should reach LED_ON quickly "
            "and both PWM frames should match rgb_to_pwm.compute_pwm()."
        ),
        requirements=["REQ-ANM-00001"],
        severity="critical",
        story="AmbLightMode",
        tags=["AmbLightMode", "Mode0", "PWM"],
        r=0, g=180, b=80, intensity=255,
        update=0, mode=0, duration=10,
        expected_led_state=LedState.LED_ON,
        check_pwm_comp=True,
        check_pwm_wo_comp=True,
    ),
    AlmCase(
        id="VTD_LID_0002",
        title="LID broadcast (0x00..0xFF) reaches this node",
        description=(
            "A broadcast LID range should include any NAD; this node "
            "should react and drive the LED ON."
        ),
        requirements=["REQ-LID-00002"],
        severity="normal",
        story="LID range",
        tags=["LID", "Broadcast"],
        r=120, g=0, b=255, intensity=255,
        update=0, mode=0, duration=0,
        lid_from=0x00, lid_to=0xFF,
        expected_led_state=LedState.LED_ON,
    ),
    AlmCase(
        id="VTD_LID_0003",
        title="LID From > To is rejected (no LED change)",
        description=(
            "An ill-formed LID range (From > To) should be ignored; "
            "ALMLEDState must remain at the OFF baseline for the watch "
            "window."
        ),
        requirements=["REQ-LID-00003"],
        severity="normal",
        story="LID range",
        tags=["LID", "Negative"],
        r=255, g=255, b=255, intensity=255,
        update=0, mode=0, duration=0,
        lid_from=0x14, lid_to=0x0A,
        expect_transition=False,
        state_timeout_s=1.0,
    ),
    AlmCase(
        id="VTD_LID_0004",
        title="Update=1 (Save) does not change LED state",
        description=(
            "With AmbLightUpdate=1 the ECU should buffer the command "
            "without executing it; ALMLEDState must remain at OFF."
        ),
        requirements=["REQ-UPDATE-00004"],
        severity="normal",
        story="AmbLightUpdate",
        tags=["AmbLightUpdate", "Save"],
        r=0, g=255, b=0, intensity=255,
        update=1, mode=1, duration=10,
        expect_transition=False,
        state_timeout_s=1.0,
    ),
]
# Fixtures (fio, alm, _reset_to_off) and the MUM gate come from
# tests/hardware/mum/conftest.py.
# ╔══════════════════════════════════════════════════════════════════════╗
# ║ The single parametrized runner ║
# ╚══════════════════════════════════════════════════════════════════════╝
@pytest.mark.parametrize(
    "case",
    ALM_CASES,
    ids=[c.id for c in ALM_CASES],  # nice short IDs in the pytest CLI
)
def test_alm(case: AlmCase, alm: AlmTester, rp):
    """Execute one :class:`AlmCase` end-to-end.

    The body is intentionally a one-liner — every per-case decision
    (which signals to send, what to assert, which PWM checks to run)
    lives on the case object itself. Adding new coverage means
    appending another AlmCase to ALM_CASES; no new test function needed.
    """
    case.run(alm, rp)
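

# ── Illustrative escape hatch (hypothetical, not in ALM_CASES) ──────────
# The module docstring's "subclass the dataclass" option, sketched out.
# The class name and the plausibility bounds below are invented for
# illustration; ``alm.read_ntc_celsius`` is assumed to behave as
# alm_helpers documents. To activate such a case, add an instance to the
# ALM_CASES literal above (the parametrize list is read at import time).
@dataclass
class _ExampleNtcCase(AlmCase):
    """Run the normal four phases, then sanity-check the NTC reading."""
    ntc_min_c: float = -40.0   # illustrative lower plausibility bound
    ntc_max_c: float = 125.0   # illustrative upper plausibility bound

    def run(self, alm: AlmTester, rp) -> None:
        super().run(alm, rp)
        ntc_c = alm.read_ntc_celsius()
        rp("ntc_celsius", ntc_c)
        assert self.ntc_min_c <= ntc_c <= self.ntc_max_c, (
            f"NTC reading {ntc_c} °C outside "
            f"[{self.ntc_min_c}, {self.ntc_max_c}]"
        )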