WCET and CI: Adding Timing Analysis to Embedded Software Pipelines (Using RocqStat/VectorCAST)

dev tools
2026-02-01
9 min read

Embed WCET and timing checks into CI to avoid late-stage timing regressions—practical VectorCAST + RocqStat patterns and CI examples for safety-critical projects.

Stop late surprises: add WCET and timing analysis to your embedded CI pipelines

Timing regressions and unpredictable execution latency are a top source of late-stage rework in safety-critical embedded projects. As systems become more software-defined and multicore, teams must move worst-case execution time (WCET) analysis from a manual, post-integration task into the CI pipeline. This article shows a pragmatic, engineer-first path to embedding WCET and timing verification in CI using Vector's toolchain, whose capabilities now extend further following the January 2026 acquisition of StatInf's RocqStat technology.

The key takeaway (read first)

  • Shift-left WCET: run static WCET estimation and measurement-based timing checks during feature branches and merge requests, not after integration.
  • Unify test artifacts: collect VectorCAST test results and RocqStat WCET reports in CI artifacts and fail builds on timing regressions.
  • Make timing part of gating: define deterministic thresholds and allow controlled regressions with traceable requirements links.

Why this matters in 2026

Late 2025 and early 2026 saw two reinforcing trends: vehicle and avionics software grew more centralized, and regulators pushed stricter verification expectations for timing in projects governed by ISO 26262 and DO-178C. Vector Informatik's acquisition of StatInf's RocqStat in January 2026 signals vendor consolidation around integrated timing and verification workflows. That makes it practical to treat WCET results as first-class CI artifacts alongside unit and integration test results.

Vector's roadmap now aims to provide a unified environment for timing analysis, WCET estimation and software testing — reducing the manual handoffs that produce timing surprises.

Two complementary approaches to timing verification

There's no single silver bullet for timing. Use a combination of:

  • Static WCET analysis (RocqStat): produces safe upper bounds from code and platform models.
  • Measurement-based timing (instrumentation, hardware traces): validates assumptions and catches environment-induced latencies.

Integrated toolchains—VectorCAST for verification and RocqStat for WCET—let you combine these methods in CI: static analysis for gating, measurement as supporting evidence, and traceability to requirements. Tie these artifacts into your broader observability & cost-control practice so tests and WCET runs are discoverable and actionable.

CI integration patterns (practical)

Below are three patterns you can adopt depending on team maturity and tool access.

Pattern A — Developer feedback loop (fast)

  1. Run unit tests with VectorCAST on developer machines or as a lightweight CI job.
  2. Run a fast WCET estimate (static, conservative) with RocqStat on changed modules only.
  3. Fail the MR if WCET exceeds an agreed threshold or if growth > X%.

This pattern prevents obvious timing regressions early without full platform modeling. It pairs well with improved local tooling and developer hygiene guidance such as hardening local toolchains to reduce accidental regressions introduced by auxiliary scripts or test harnesses.
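A minimal sketch of the step-3 gate, assuming per-module WCET estimates are exported to a simple JSON map ({"module": wcet_us, ...}); the file layout, module names, and the 10% / 50,000 µs limits are placeholders to adapt to your RocqStat output and project policy, not a tool-defined format:

#!/usr/bin/env python3
"""MR-level WCET gate: fail on an absolute limit or excessive growth vs baseline."""
import json
import sys

MAX_GROWTH_PCT = 10.0    # agreed per-module growth budget
HARD_LIMIT_US = 50000    # absolute limit in microseconds


def main(baseline_path: str, current_path: str) -> int:
    baseline = json.load(open(baseline_path))
    current = json.load(open(current_path))
    failures = []
    for module, wcet_us in current.items():
        if wcet_us > HARD_LIMIT_US:
            failures.append(f"{module}: {wcet_us} us exceeds hard limit {HARD_LIMIT_US} us")
        old = baseline.get(module)
        if old and wcet_us > old * (1 + MAX_GROWTH_PCT / 100.0):
            growth = 100.0 * (wcet_us - old) / old
            failures.append(f"{module}: +{growth:.1f}% vs baseline ({old} -> {wcet_us} us)")
    for message in failures:
        print("WCET gate failed:", message)
    return 1 if failures else 0   # non-zero exit fails the MR job


if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))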

Pattern B — Merge gating and release verification (balanced)

  1. On merge to main: full VectorCAST regression tests + RocqStat static WCET on complete binaries.
  2. Run measurement harnesses (QEMU or lab HIL) in nightly jobs for statistical timing.
  3. Publish combined report—static bound, measured distribution, and justification—attached to release artifacts.
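One way to produce that combined report is a small post-processing step that merges the static bound with the measured distribution into a single JSON artifact. A sketch, assuming one measured execution time (in microseconds) per line of a plain text file; the output field names are illustrative, not a defined schema:

#!/usr/bin/env python3
"""Merge a static WCET bound and measured execution times into one report."""
import json
import statistics
import sys


def build_report(static_bound_us, samples):
    ordered = sorted(samples)
    p99_us = ordered[int(0.99 * (len(ordered) - 1))]
    return {
        "static_wcet_us": static_bound_us,
        "measured": {
            "count": len(ordered),
            "max_us": ordered[-1],
            "p99_us": p99_us,
            "mean_us": statistics.mean(ordered),
        },
        # Headroom between the measured worst case and the static bound.
        "headroom_pct": 100.0 * (static_bound_us - ordered[-1]) / static_bound_us,
    }


if __name__ == "__main__":
    bound_us = float(sys.argv[1])                                   # static WCET bound
    samples = [float(line) for line in open(sys.argv[2]) if line.strip()]
    json.dump(build_report(bound_us, samples), sys.stdout, indent=2)

Attach the resulting JSON to the release artifacts alongside the RocqStat and VectorCAST reports so reviewers see both numbers in one place.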

Pattern C — Safety-certification path (rigorous)

  1. Model target execution platform (caches, pipelines, context switch costs) for RocqStat.
  2. Use VectorCAST for code coverage, structural tests and traceability to requirements.
  3. Use hardware-in-the-loop runs, instruction-level traces, and trace-based timing models to close the loop for certification artifacts.

Concrete CI examples

Below are runnable examples for common CI systems. The command names and CLI flags are representative—adjust them to your VectorCAST/RocqStat installation and licensing. The key is the flow: build -> test -> WCET -> artifact -> gate.

GitLab CI: stage-based pipeline

stages:
  - build
  - test
  - wcet

build:
  stage: build
  script:
    - export CC=arm-none-eabi-gcc
    - make -j$(nproc)
  artifacts:
    paths: [build/output.bin, build/map.txt]

test:
  stage: test
  script:
    - vectorcastcli create-project --name MyProject --source src/ --build build/
    - vectorcastcli run-all --project MyProject --report-format junit
  artifacts:
    when: always
    paths: [vectorcast/reports/*.xml]

wcet:
  stage: wcet
  script:
    - rocqstat-cli analyze --binary build/output.bin --platform models/target.json --output wcet/report.html --timeout 30m
    - python tools/parse_wcet.py wcet/report.html --threshold 50000  # microseconds
  artifacts:
    when: always
    paths: [wcet/report.html]
  allow_failure: false

Notes:

  • Use a small timeout for MR jobs; run heavy, complete analyses on merge/main or nightly.
  • parse_wcet.py should return a non-zero exit code when the threshold is violated so that the job fails.
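A minimal parse_wcet.py could look like the following sketch. The regular expression is a placeholder: RocqStat's report format is installation-specific, so adapt the extraction to whatever machine-readable output your version produces (a JSON or XML export, where available, is preferable to scraping HTML).

#!/usr/bin/env python3
"""Threshold gate for a WCET report (illustrative; adapt the extraction)."""
import argparse
import re
import sys

# Placeholder pattern: assumes the report contains text like "WCET: 42000 us".
WCET_PATTERN = re.compile(r"WCET[^0-9]*([0-9]+(?:\.[0-9]+)?)\s*us", re.IGNORECASE)


def extract_wcet_us(report_text):
    match = WCET_PATTERN.search(report_text)
    if not match:
        sys.exit("Could not find a WCET value in the report")
    return float(match.group(1))


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("report")
    parser.add_argument("--threshold", type=float, required=True,
                        help="maximum allowed WCET in microseconds")
    args = parser.parse_args()
    wcet = extract_wcet_us(open(args.report, encoding="utf-8").read())
    print(f"WCET = {wcet} us, threshold = {args.threshold} us")
    # A non-zero exit code makes the CI job fail on a violation.
    sys.exit(1 if wcet > args.threshold else 0)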

GitHub Actions: PR gating with dependent jobs

name: Timing CI
on:
  pull_request:
    branches: [ main ]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make
      - name: Upload build output
        uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: build/
      - name: Run VectorCAST
        run: vectorcastcli run-all --project MyProject --format junit
      - name: Upload test results
        uses: actions/upload-artifact@v4
        with:
          name: vc-reports
          path: vectorcast/reports/

  wcet-check:
    runs-on: ubuntu-latest
    needs: build-and-test
    steps:
      - uses: actions/checkout@v4
      - name: Download build output
        uses: actions/download-artifact@v4
        with:
          name: build-output
          path: build/
      - name: Run RocqStat
        run: |
          rocqstat-cli analyze --binary build/output.bin \
            --platform models/target.json --output wcet/result.json
      - name: Verify threshold
        run: python tools/check_wcet_json.py wcet/result.json 50000

Jenkins (Declarative): fail-fast merge gating

pipeline {
  agent any
  stages {
    stage('Build') { steps { sh 'make -j8' } }
    stage('VectorCAST') { steps { sh 'vectorcastcli run-all --project MyProject' } }
    stage('RocqStat') {
      steps {
        sh 'rocqstat-cli analyze --binary build/output.bin --platform models/target.json --output wcet/report.xml'
        sh 'python tools/check_wcet_xml.py wcet/report.xml 50000'
      }
    }
  }
  post {
    always { archiveArtifacts artifacts: 'wcet/*.xml, vectorcast/reports/*.xml', fingerprint: true }
  }
}

Practical tips for reliable and fast WCET in CI

  • Cache model artifacts: platform models (cache layout, pipeline timings) are heavy to build—cache them per platform and only change on platform updates.
  • Incremental analysis: run full-system WCET nightly; run per-module quick checks on PRs.
  • Parallelize: split static analysis across functions or object files when the tool supports it.
  • Use regression thresholds: absolute thresholds and percent-change both have value. Use percent-change to detect unexpected growth and absolute for safety limits.
  • Instrument for measurement: include optional runtime instrumentation to collect execution traces when tests run in QEMU or HIL; use these to refine RocqStat models. Consider lab management and cloud/hybrid scheduling patterns similar to edge-first scheduling for cloud HIL farms.
  • Version and trace: attach WCET reports to the commit and link them to requirements in your traceability matrix (VectorCAST supports these links). Good artifact hygiene also ties into overall observability and cost tracking.
  • Testing variants: test multiple build configurations (O0 vs O2, with/without LTO) and platform behavior (single core vs multi-core) if they will be used in production.

Handling non-determinism and measurement noise

Measurement-based timings vary. Use these techniques:

  • Statistical thresholds: require that the observed p99 stays below the static WCET, or specify an acceptable probability of exceedance (see the sketch after this list).
  • Controlled environment: use pinned CPU, disabled hypervisors, and isolated lab hardware. Record environment metadata alongside reports.
  • Reproducibility: include seed values for randomized scheduling in your test harness, and store them in CI artifacts.
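For the first point, a nightly job can compute the empirical p99 over the collected measurements and compare it to the static bound. A minimal sketch, assuming one measured execution time (in microseconds) per line of a plain text file:

#!/usr/bin/env python3
"""Check the measured p99 against the static WCET bound (nightly gate)."""
import math
import sys


def percentile(samples, q):
    """Nearest-rank percentile (q in [0, 1]) of the sample list."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(q * len(ordered)) - 1)
    return ordered[rank]


if __name__ == "__main__":
    static_wcet_us = float(sys.argv[1])
    samples = [float(line) for line in open(sys.argv[2]) if line.strip()]
    p99_us = percentile(samples, 0.99)
    print(f"measured p99 = {p99_us} us, static WCET = {static_wcet_us} us")
    # A measurement at or above the static bound contradicts the analysis;
    # fail the job so the platform model or the code change gets reviewed.
    sys.exit(1 if p99_us >= static_wcet_us else 0)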

Aligning with safety standards

For ISO 26262, DO-178C or IEC 61508 projects, timing evidence must be traceable, auditable and conservative. Use VectorCAST for requirements traceability and test coverage metrics; use RocqStat to generate conservative WCET bounds and documented assumptions (cache model, interrupt costs). In 2026, certification authorities increasingly expect a combined evidence package that includes static WCET plus measurement-based validation.

Advanced strategies for multicore and mixed-criticality systems

Multicore introduces interference; plain single-core WCET is insufficient. Advanced strategies:

  • Interference modeling: capture shared resource contention (memory bus, caches) in your RocqStat platform model.
  • Temporal isolation: use RTOS features (time partitioning) and validate isolation with stress tests in CI.
  • Compositional WCET: compute per-core bounds and combine with scheduling analysis for system-level guarantees.
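As a deliberately simplified illustration of the compositional idea (not a sound analysis), a system-level budget check might inflate per-core bounds with a modeled interference delay and compare the result against a cycle budget. Every number and task name below is hypothetical:

#!/usr/bin/env python3
"""Toy compositional budget check; a real multicore analysis needs a
validated interference model and proper scheduling/response-time analysis."""

# Hypothetical per-core task bounds in microseconds (from per-core WCET analysis).
CORE_TASK_WCETS_US = {
    "core0": {"brake_ctrl": 1800, "sensor_fusion": 3200},
    "core1": {"diagnostics": 2500, "logging": 900},
}
# Modeled worst-case delay per contended shared-resource access, and access counts.
BUS_DELAY_PER_ACCESS_US = 0.4
SHARED_ACCESSES = {"brake_ctrl": 120, "sensor_fusion": 800, "diagnostics": 300, "logging": 50}
CYCLE_BUDGET_US = 10000   # end-to-end budget for one control cycle


def inflated_wcet_us(task, base_us):
    """Per-core bound plus modeled contention on the shared bus."""
    return base_us + SHARED_ACCESSES.get(task, 0) * BUS_DELAY_PER_ACCESS_US


if __name__ == "__main__":
    worst_core_us = max(
        sum(inflated_wcet_us(task, wcet) for task, wcet in tasks.items())
        for tasks in CORE_TASK_WCETS_US.values()
    )
    print(f"worst per-core inflated total: {worst_core_us:.1f} us (budget {CYCLE_BUDGET_US} us)")
    assert worst_core_us <= CYCLE_BUDGET_US, "cycle budget exceeded"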

Example: gating rule and acceptance policy

Here’s a pragmatic gating policy you can adapt:

  • MR-level quick static WCET: fail if predicted WCET > baseline + 10% or > hard limit.
  • Merge/main: full WCET + measurement-based verification pass within 95% of static WCET.
  • Nightly: run full-system analysis and record trend data. If trend shows steady growth for 3 consecutive nights, automatically open a ticket and block the next release. Use lightweight audits to keep pipeline complexity in check — a pattern described in stack-audit guides like Strip the Fat.
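The nightly trend rule can be automated over the recorded history. A sketch, where the JSON history format and the ticketing hook are assumptions to adapt to your tracker:

#!/usr/bin/env python3
"""Detect sustained nightly WCET growth from a recorded history.

Assumes a JSON list of chronological entries such as
[{"date": "2026-02-01", "wcet_us": 41200}, ...].
"""
import json
import sys

CONSECUTIVE_NIGHTS = 3


def growing_streak(history):
    """True if WCET increased on each of the last CONSECUTIVE_NIGHTS nights."""
    if len(history) < CONSECUTIVE_NIGHTS + 1:
        return False
    tail = [entry["wcet_us"] for entry in history[-(CONSECUTIVE_NIGHTS + 1):]]
    return all(later > earlier for earlier, later in zip(tail, tail[1:]))


if __name__ == "__main__":
    nightly = json.load(open(sys.argv[1]))
    if growing_streak(nightly):
        print(f"WCET grew for {CONSECUTIVE_NIGHTS} consecutive nights")
        # Placeholder: call your issue tracker's API here to open a ticket
        # and flag the next release as blocked.
        sys.exit(1)
    print("No sustained WCET growth detected")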

Case study: integrating VectorCAST and RocqStat in a mid-sized ECU project

Context: 120kLOC ECU, mixed-criticality tasks, dual-core MCU. The team implemented the following:

  1. Baseline: engineered platform model for RocqStat including cache and bus arbitration.
  2. CI: GitLab pipeline with per-PR quick WCET and nightly full WCET.
  3. Traces: vector trace capture on HIL for validating worst-case scenarios; traces fed to RocqStat to refine pessimism.
  4. Outcomes: early detection reduced late-stage WCET rework by 60% and reduced certification lead time by 3 months.

Key lesson: the cost to run WCET in CI was quickly offset by fewer late-cycle fixes and more predictable release schedules. Track CI runtime and costs as part of your platform observability: teams have reduced surprise spend by adopting the practices in observability & cost-control.

Operational recommendations

  • Start small: add quick static checks for hot paths first.
  • Automate artifacts: publish WCET reports and attach to Jira/systems-of-record for traceability.
  • Measure cost: track runtime of WCET jobs and tune schedule (PR vs nightly) to control CI costs — perform periodic stack audits similar to Strip the Fat.
  • Train engineers: ensure developers understand platform model assumptions and how code changes affect timing. Improve local developer feedback loops with guidance from local toolchain hardening guides like hardening local JavaScript tooling (for teams that mix web-based tooling into embedded pipelines).

Future predictions (2026+)

Expect tighter vendor integration and cloud-native support for timing tools. With Vector's integration of RocqStat into VectorCAST, teams can expect unified GUIs, shared models, and better automation hooks for CI/CD. Toolchains will add more support for compositional and probabilistic WCET, and cloud-based HIL farms will make measurement-based validation easier to schedule in CI — patterns that echo the move toward edge-first and cloud-native scheduling.

Common pitfalls and how to avoid them

  • Pitfall: Treating measurement as proof. Fix: Always combine static WCET with measurement-based validation and document assumptions.
  • Pitfall: Running full WCET on every PR. Fix: Use incremental checks and reserve full analyses for merge/main and nightly.
  • Pitfall: Ignoring platform model drift. Fix: Version platform models and trigger model reviews on hardware or toolchain changes.

Checklist to ship

  1. Define WCET acceptance thresholds per requirement.
  2. Integrate VectorCAST unit and integration tests into CI.
  3. Integrate RocqStat static WCET jobs; automate pass/fail rules.
  4. Configure measurement harnesses for nightly validation.
  5. Archive WCET reports and link to requirements for certification evidence.

Conclusion — make timing verification routine, not exceptional

In safety-critical projects, timing must be visible, testable and gated. With Vector's acquisition of RocqStat in January 2026 and the trend toward integrated verification platforms, it's now practical and cost-effective to operationalize WCET in CI. Start with quick static checks on PRs, graduate to full-system nightly analyses, and use combined static+measurement evidence for certification. Doing this prevents late surprises, speeds certification, and makes releases predictable.

Actionable next step: add a lightweight RocqStat job to your PR pipeline that runs a conservative per-module analysis and fails on >10% WCET growth. Use the examples above to scaffold the CI job.

Call to action

Ready to stop timing surprises? Start by adding a PR-level RocqStat check and a nightly full-system analysis in your CI. If you want a hands-on workshop or a templated pipeline for VectorCAST + RocqStat integration, contact our engineering team at dev-tools.cloud for a tailored implementation plan and CI templates that match your toolchain and certification needs.


Related Topics

#embedded #CI #safety

dev tools

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
