Automating WCET and Timing Analysis in CI: Integrating RocqStat with Embedded Toolchains


newworld
2026-01-29

Practical guide to integrate RocqStat and VectorCAST into CI for automated WCET, scheduling checks and timing regression gating.

Stop guessing at timing — run WCET and Timing Analysis inside your CI

If you manage embedded or automotive software, you know the pain: missed deadlines, flaky timing tests on different lab machines, and late-stage surprises when a change inflates a task's worst-case execution time (WCET). In 2026 the problem is only getting harder as vehicles become software-defined and regulatory pressure on timing safety increases. This guide shows how to automate WCET and timing verification in continuous integration using RocqStat and VectorCAST (post-acquisition), with practical pipelines, scripts, and regression strategies you can apply today.

Why this matters in 2026

Late 2025 and early 2026 marked a turning point: Vector Informatik acquired StatInf's RocqStat technology and team, signaling consolidation of timing analysis into mainstream verification toolchains. Vector has stated plans to integrate RocqStat into the VectorCAST ecosystem to create a unified timing-analysis and code-testing environment. For engineering leaders that means tighter workflows but also an opportunity: you can now run WCET estimates, timing verification, and software tests from the same CI pipelines.

Why integrate WCET into CI? Because timing regressions are code regressions. If your CI only runs unit tests, you miss non-functional regressions that break real-time guarantees.

What this guide covers

  • Practical prerequisites for automated WCET and timing analysis
  • How to run RocqStat from CI agents (example GitLab and GitHub Actions workflows)
  • Integrating results with VectorCAST and gating merges on timing thresholds
  • Strategies for multicore, RTOS scheduling and regression testing
  • Advanced tips: caching, reproducible builds, drift detection and reporting

Prerequisites and inputs — what WCET tools need

Before you automate, ensure CI has the deterministic inputs that WCET tools require. WCET is only meaningful with an accurate platform model and reproducible binaries.

  • Reproducible build artifacts: deterministic compiler flags, consistent linker maps and build IDs (ELF/PE/hex). Store compiler and linker versions in your pipeline manifest.
  • Binary and debug info: the ELF or binary plus map files, symbol tables, and optionally DWARF debug info for mapping to source.
  • Platform model: CPU pipeline model, cache parameters, and interrupt/timer models. RocqStat needs these to compute architectural WCET.
  • Task and scheduling model: for system-level timing verification, provide RTOS task priorities, periods, and activation sources (e.g., CAN, OS timers).
  • Test harnesses: VectorCAST test artifacts or unit/integration tests if you run measurement-based validation alongside static WCET.

Reproducible builds — make your binaries deterministic

WCET analysis depends on stable object code. Add these build rules to your CI:

  • Pin compiler and linker versions via container images or toolchain managers.
  • Use -fdata-sections and -ffunction-sections to keep function-level granularity in maps; avoid link-time optimizations (LTO) during WCET baselining unless you can reproduce LTO deterministically.
  • Embed version metadata in build artifacts (e.g., a JSON manifest) so results are traceable to commits and toolchain versions.
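A small helper can emit that JSON manifest at build time. This is a minimal sketch; the field names, the demo artifact, and the toolchain string are illustrative, not a fixed schema:

```python
import hashlib
import json
import os

def sha256_of(path):
    """Stream-hash a build artifact so the manifest pins its exact bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(elf_path, toolchain_version, commit):
    """Collect the traceability fields described above: artifact identity,
    toolchain version, and source commit."""
    return {
        "artifact": os.path.basename(elf_path),
        "sha256": sha256_of(elf_path),
        "toolchain": toolchain_version,
        "commit": commit,
    }

if __name__ == "__main__":
    # demo with a throwaway file; in CI, point this at build/app.elf
    with open("demo.elf", "wb") as f:
        f.write(b"\x7fELF-demo")
    manifest = build_manifest("demo.elf", "gcc-arm 12 (pinned image)", "abc123")
    with open("manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
```

Store `manifest.json` as a pipeline artifact next to the ELF and map file so every WCET report can be traced back to the exact toolchain and commit that produced it.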

Running RocqStat from CI: a practical pipeline

RocqStat exposes CLI primitives you can call from CI agents. Below is a minimal, practical workflow pattern you can adapt to GitLab CI or GitHub Actions. The pattern: build → static analysis & unit tests → RocqStat WCET → report & gate.

Pipeline stages

  1. Build: produce deterministic ELF, map, and symbol files.
  2. Unit tests: run VectorCAST or existing unit test frameworks.
  3. WCET analysis: run RocqStat in a container with the platform model and generate a machine-readable report (JSON/XML).
  4. Compare: compute delta vs baseline, generate JUnit/SARIF, store artifacts and optionally fail the pipeline if timing regression exceeds thresholds.

Example: GitLab CI job for RocqStat

stages:
  - build
  - test
  - wcet

build_job:
  stage: build
  image: registry.example.com/toolchains/gcc-arm:12
  script:
    - make all
    - nm --defined-only build/app.elf > build/symbols.txt
  artifacts:
    paths:
      - build/app.elf
      - build/app.map

unit_test:
  stage: test
  image: registry.example.com/vectorcast:latest
  script:
    - vectorcast-cli run --project tests/vcast_project
  artifacts:
    when: always
    paths:
      - test-results/

wcet_analysis:
  stage: wcet
  image: registry.example.com/rocqstat:2026.1
  script:
    - rocqstat analyse --binary build/app.elf --map build/app.map --platform models/stm32f7.yaml --output build/wcet.json
    - python tools/compare_wcet.py --baseline baselines/wcet_baseline.json --current build/wcet.json --threshold 0.05
  artifacts:
    paths:
      - build/wcet.json
      - build/wcet-report.html

Notes: replace the image tag with your internal container that bundles RocqStat and the required licensing. The compare_wcet.py script computes the percent change per function and exits non-zero when any regression exceeds the 5% threshold, which fails the job.

Gating merges on timing thresholds

Enforce timing budgets by failing CI when a regulated threshold is exceeded. Practical thresholds:

  • Hard gate: fail merge if WCET increases by more than X% (e.g., 2–5%) or by absolute time exceeding available slack.
  • Soft alert: open a ticket or annotate the MR with the WCET delta for small deviations but don’t block (useful during aggressive refactors).

Implement gating by converting RocqStat output to a JUnit XML or custom exit code. Use the compare script above to set CI exit codes.
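One way to do the JUnit conversion is a small generator that emits a failure element for each function whose WCET delta breaches the threshold. The input tuple format here is an assumption, not RocqStat's actual schema:

```python
import xml.etree.ElementTree as ET

def wcet_deltas_to_junit(deltas, threshold):
    """Render per-function WCET deltas as a JUnit testsuite so the CI UI
    shows timing regressions next to functional test failures.
    deltas: list of (function_name, fractional_wcet_change)."""
    suite = ET.Element("testsuite", name="wcet-regression",
                       tests=str(len(deltas)))
    failures = 0
    for name, pct in deltas:
        case = ET.SubElement(suite, "testcase", classname="wcet", name=name)
        if pct > threshold:
            failures += 1
            fail = ET.SubElement(case, "failure",
                                 message=f"WCET increased by {pct:.1%}")
            fail.text = f"{name}: +{pct:.1%} exceeds the {threshold:.0%} budget"
    suite.set("failures", str(failures))
    return ET.tostring(suite, encoding="unicode")

# hypothetical deltas: one benign change, one regression past a 5% budget
report = wcet_deltas_to_junit(
    [("can_rx_task", 0.01), ("brake_ctrl", 0.08)], threshold=0.05)
```

Point your CI's JUnit report collector at the generated file and the timing failures appear in the merge request UI like any other failing test.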

Integrating VectorCAST + RocqStat

Following Vector's acquisition of RocqStat, expect tighter integration between VectorCAST and timing analysis. Practical steps today:

  • Run VectorCAST unit and integration tests in CI first to validate functional correctness.
  • Feed VectorCAST test harnesses and coverage info into RocqStat when using hybrid measurement/static approaches. Coverage can help prioritize functions for dynamic testing and model calibration.
  • If your VectorCAST instance supports plugin hooks, call RocqStat after a successful VectorCAST run, attach WCET artifacts to the VectorCAST test report, and present timing regressions together with failing functional tests.

Check Vector's roadmap and licensing notes after the RocqStat acquisition — vendor packaging may change. The acquisition is an opportunity to rationalize toolchains across teams.

Scheduling and system-level timing verification

WCET is a per-task worst-case metric. You also need system-level scheduling analysis (response time analysis, blocking due to resources, interrupts) to prove tasks meet deadlines under chosen policies (Fixed Priority, EDF, etc.).

  • Collect inputs: WCET per runnable, task activation models, priorities, shared resource locking protocols.
  • Run RTA: integrate a scheduler analysis tool (or use VectorCAST extensions) that consumes WCETs to compute Worst-Case Response Times (WCRTs).
  • Automate: run WCRT analysis in CI and fail when any task's WCRT exceeds its deadline.

For RTOS-specific behavior (preemption thresholds, priority inheritance), ensure your scheduler model in CI accurately reflects production configuration. Keep the model under version control with the codebase.
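For the fixed-priority case, the standard response-time recurrence (WCRT R_i = C_i + Σ ⌈R_i/T_j⌉·C_j over higher-priority tasks j) is straightforward to automate in CI. The sketch below is generic RTA, not tied to any vendor tool, and assumes deadlines equal to periods:

```python
import math

def response_times(tasks):
    """tasks: list of (C, T) pairs sorted by descending priority
    (index 0 is highest). Returns worst-case response times under
    preemptive fixed-priority scheduling, or None if any task
    misses its deadline (taken equal to its period)."""
    wcrts = []
    for i, (C, T) in enumerate(tasks):
        R = C
        while True:
            # interference from all higher-priority tasks at horizon R
            R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
            if R_next > T:
                return None          # deadline missed: fail the CI stage
            if R_next == R:
                break                # fixed point reached
            R = R_next
        wcrts.append(R)
    return wcrts

print(response_times([(1, 4), (2, 6), (3, 12)]))  # → [1, 3, 10]
```

Feed the per-task WCETs from the analysis stage in as the C values and fail the pipeline when the function returns None.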

Multicore and shared resource complexities

The multicore systems now common in vehicles complicate WCET analysis: cache contention, bus arbitration, and shared accelerators create interference that single-core models do not capture. Strategies:

  • Start conservative: use conservative interference bounds for early CI gating.
  • Hybrid analysis: combine static per-core WCET with measurement-based interference tests (load generators) in CI lab stages.
  • Model evolution: maintain explicit models of shared resources and update them with calibration runs; keep calibration data as CI artifacts.
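A conservative interference bound can be as simple as inflating the single-core WCET by a per-core factor. The linear model and the 0.3 factor below are illustrative assumptions to be replaced by your own calibration data:

```python
def bounded_wcet(per_core_wcet, n_cores, contention_factor=0.3):
    """Conservative multicore bound under an assumed linear interference
    model: each additional contending core may inflate WCET by at most
    `contention_factor` (calibrate this value from lab measurement runs)."""
    return per_core_wcet * (1 + contention_factor * (n_cores - 1))

# gate CI against the inflated bound until calibrated models are available
print(bounded_wcet(200.0, 4))  # → 380.0
```

This over-approximates on purpose: early gating with a pessimistic bound is safer than gating with an optimistic model you have not yet validated.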

Measurement-based validation in CI labs

Static WCET is necessary but not always sufficient. Use target hardware labs and measurement harnesses when possible:

  • Schedule nightly hardware runs that exercise critical paths under stress (cache, DMA, bus load).
  • Correlate static estimates with measured execution times to detect model drift.
  • Keep a rolling baseline of measured maxima and flag divergences automatically.
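The drift check in the last bullet can be a small comparison between the rolling measured maxima and the static estimates. The function names and the 10% margin here are illustrative:

```python
def check_drift(static_wcet, measured_maxima, margin=0.10):
    """Flag functions where the measured maximum approaches or exceeds the
    static WCET estimate, which suggests a stale platform model.
    static_wcet / measured_maxima: dicts of function -> time (same unit).
    Returns a list of (name, measured/static ratio) flagged for review."""
    flagged = []
    for name, est in static_wcet.items():
        meas = measured_maxima.get(name)
        if meas is None:
            continue
        ratio = meas / est
        # a measurement within `margin` of the bound (or above it) means
        # the static model no longer comfortably dominates observations
        if ratio >= 1.0 - margin:
            flagged.append((name, round(ratio, 3)))
    return flagged

print(check_drift({"isr_can": 100.0, "ctrl_loop": 500.0},
                  {"isr_can": 95.0, "ctrl_loop": 320.0}))
```

Run this after every nightly lab stage and open a ticket for each flagged function rather than failing the build, since the right fix is usually model recalibration, not a code change.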

Reporting, traceability and auditability

For safety audits and traceability, produce human- and machine-readable artifacts from your CI runs:

  • Human: detailed HTML reports per run with function-level WCETs and trace links to source lines.
  • Machine: JSON or XML artifacts that include tool versions, platform model hash, and baseline commit IDs for automated comparison.
  • Audit trails: store artifacts in immutable storage linked to the CI run ID, and export summaries to your test management or ALM tools.
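A sketch of such a machine-readable artifact, with the platform model hashed for traceability. The field names and version strings are assumptions, but recording a model hash and baseline commit is what makes automated comparison trustworthy:

```python
import hashlib
import json

def model_fingerprint(model_path):
    """SHA-256 of the platform model file; recording it in the run artifact
    ties every WCET result to the exact model that produced it."""
    with open(model_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# illustrative platform model and run summary
with open("stm32f7.yaml", "w") as f:
    f.write("cpu: cortex-m7\nicache_kb: 16\ndcache_kb: 16\n")

summary = {
    "tool": "rocqstat 2026.1",                 # exact tool version
    "platform_model": "stm32f7.yaml",
    "platform_model_sha256": model_fingerprint("stm32f7.yaml"),
    "baseline_commit": "abc123",               # commit the baseline came from
}
print(json.dumps(summary, indent=2))
```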

Regression testing strategies

Treat WCET as a first-class regression metric:

  • Continuous baseline: maintain a baseline per branch/build-target. When a PR is created, compute delta vs baseline and fail or mark depending on policy.
  • Function-level annotations: allow developers to mark functions as timing-sensitive so CI prioritizes analysis on those hotspots.
  • Noise filtering: handle small, non-actionable changes via statistical thresholds and persistent change detection (e.g., only alert after N consecutive increases).
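The persistent-change rule in the last bullet can be implemented with a per-function streak counter; the class name and the default of three consecutive increases are illustrative choices:

```python
class PersistentIncreaseDetector:
    """Alert only after N consecutive WCET increases for the same function,
    filtering one-off noise from analysis or measurement jitter."""

    def __init__(self, n_consecutive=3):
        self.n = n_consecutive
        self.streaks = {}

    def observe(self, name, previous, current):
        """Record one baseline-to-current comparison; return True
        once the increase streak reaches the alert threshold."""
        if current > previous:
            self.streaks[name] = self.streaks.get(name, 0) + 1
        else:
            self.streaks[name] = 0   # any non-increase resets the streak
        return self.streaks[name] >= self.n

det = PersistentIncreaseDetector(n_consecutive=3)
history = [100, 101, 102, 103]       # three consecutive small increases
alerts = [det.observe("ctrl_loop", a, b)
          for a, b in zip(history, history[1:])]
print(alerts)  # → [False, False, True]
```

Persist the streak state as a CI artifact (or in your ALM tool) between runs so the counter survives across pipelines.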

Performance, cost and CI runner sizing

WCET tools can be compute-heavy, especially with detailed microarchitectural modeling. Reduce cost without sacrificing fidelity:

  • Cache analysis inputs and intermediate results keyed by binary hash, so unchanged code is not re-analyzed on every run.
  • Stage the work: lightweight WCET checks on PRs, full analyses nightly or on merge to main.
  • Run long analyses on dedicated, appropriately sized runners so they do not block every pipeline.

Security and licensing considerations

Post-acquisition tool licensing may change. Actions to take now:

  • Inventory current RocqStat/VectorCAST licenses and discuss CI/automation use cases with vendors.
  • Design CI so the license server is reachable only from authorized runners; rotate access keys and use network ACLs.
  • When you containerize tools, avoid baking licenses into images; mount license files at runtime or use secure environment variables.

Advanced strategies — model calibration, differential analysis, and AI assistance

As toolchains evolve in 2026, advanced techniques are practical in CI:

  • Model calibration: use measurement runs to tune cache and pipeline parameters automatically and store calibrated models in the repo.
  • Differential analysis: run lightweight WCET checks for PRs and full analyses for nightly or merge-to-main pipelines.
  • AI-assisted triage: use ML to cluster timing regressions by code paths and suggest likely root causes (cache-unfriendly loops, inlined library changes).

Checklist: deployable CI integration plan

  1. Pin toolchain and container images and add to CI manifest.
  2. Version and store platform models and mapping to hardware revisions.
  3. Implement deterministic build flags and produce symbol/map files as artifacts.
  4. Run VectorCAST/unit tests, then RocqStat, then scheduler/RTA in CI stages.
  5. Compare WCET to baseline, produce JUnit/SARIF and fail merge on threshold breach.
  6. Archive artifacts and link results to issue trackers for regressions.

Case study (short): a tier‑1 automotive ECU project

Context: an ECU running an AUTOSAR Classic stack with mixed-critical tasks on a single-core Cortex‑R. Team goals: reduce manual timing reviews and avoid late-stage failures.

What they did:

  • Containerized RocqStat + VectorCAST with license server access restricted to internal CI runners.
  • Built a nightly lab stage to run target measurement-based validation under controlled interference patterns.
  • Implemented a 3% soft threshold on PRs (annotate and notify) and a 1% hard threshold for main branch merges.
  • Result: timing-related regressions were detected earlier, reducing engineering rework by a measurable margin over six months.

Practical scripts and helpers

Example compare_wcet.py (high level):

#!/usr/bin/env python3
"""Compare current WCET results against a baseline; exit non-zero on regression."""
import argparse
import json
import sys

parser = argparse.ArgumentParser()
parser.add_argument('--baseline', required=True)
parser.add_argument('--current', required=True)
parser.add_argument('--threshold', type=float, default=0.05)
args = parser.parse_args()

with open(args.baseline) as f:
    base = json.load(f)
with open(args.current) as f:
    cur = json.load(f)

# index baseline functions by name, then find the maximum percent increase
base_by_name = {fn['name']: fn for fn in base['functions']}
max_inc = 0.0
for func in cur['functions']:
    b = base_by_name.get(func['name'])
    if b and b['wcet'] > 0:
        inc = (func['wcet'] - b['wcet']) / b['wcet']
        max_inc = max(max_inc, inc)

print(f"max_inc={max_inc:.2%}")
if max_inc > args.threshold:
    print('WCET regression detected')
    sys.exit(2)
print('OK')
sys.exit(0)

Persist the baseline under source control and update it via a formal review when changes are approved.

Final notes and 2026 outlook

By integrating RocqStat into CI and combining it with VectorCAST-driven functional verification, automotive and embedded teams can close the gap between functional correctness and timing safety. Expect vendor toolchains to merge more tightly following the 2026 consolidation trend. The key success factor is reproducibility: deterministic builds, versioned platform models, and automated regression policies.

Actionable takeaways

  • Automate WCET as part of CI — treat timing as code-sensitive and gate merges on regressions.
  • Make builds reproducible and store map/symbol artifacts for analysis traceability.
  • Combine static and measurement-based approaches to validate tool models and detect drift.
  • Use staged pipelines (lightweight per-PR checks, full nightly analyses) to balance speed and fidelity.

Call to action

Ready to stop timing surprises? Start by adding a deterministic build job and a minimal RocqStat CI stage using the example pipeline above. If you need a checklist tailored to your hardware platform (single-core, multicore, AUTOSAR), contact your tools vendor or internal verification team and evolve the baseline into an enforceable policy.


Related Topics

#embedded #ci/cd #testing

