Regulation (EU) 2024/1689 — Enforcement begins August 2026

The EU AI Act requires traceable AI. Ctrl AI makes it enforceable.

Starting August 2026, high-risk AI systems in the EU must demonstrate transparency, traceability, and human oversight. Ctrl AI provides enforceable Controls with full reasoning traces — compliant by design, not by afterthought.

Key enforcement dates

The EU AI Act is enforced in phases. The high-risk AI requirements — the ones that demand traceability and oversight — take effect in August 2026.

Aug 2024

Entry into Force

The AI Act officially entered into force across the EU.

Feb 2025

Prohibited Practices

Ban on unacceptable-risk AI: social scoring, manipulative AI.

Aug 2025

GPAI Rules

General-purpose AI model obligations and transparency rules.

Aug 2026 (upcoming)

High-Risk AI

Full compliance for high-risk systems. Traceability, oversight, audit trails.

What the AI Act requires for high-risk systems

Articles 9–17 define specific obligations. Here's what they mean in practice — and how Ctrl AI maps to each.

Art. 9

Risk Management

Identify, analyze, and mitigate risks throughout the AI system lifecycle.

How Ctrl AI solves this

Controls define explicit rules with typed inputs/outputs. Risk is managed by structure, not hope.
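As a hypothetical sketch of what "explicit rules with typed inputs/outputs" could mean in practice (these names are illustrative, not Ctrl AI's actual API), a Control can reject invalid data before its rule ever runs:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a Control pairs a rule with declared input and
# output types, so malformed data fails loudly instead of silently.
@dataclass
class Control:
    name: str
    input_type: type
    output_type: type
    rule: Callable

    def run(self, value):
        if not isinstance(value, self.input_type):
            raise TypeError(f"{self.name}: expected {self.input_type.__name__}")
        result = self.rule(value)
        assert isinstance(result, self.output_type), f"{self.name}: bad output"
        return result

# Illustrative rule: a credit-exposure check with an explicit numeric bound.
credit_limit = Control(
    name="max_exposure",
    input_type=float,
    output_type=bool,
    rule=lambda amount: amount <= 50_000.0,
)
```

Calling `credit_limit.run(60_000.0)` returns `False`, while `credit_limit.run("60k")` raises a `TypeError` — the risk is handled by the structure of the Control itself.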

Art. 10

Data Governance

Training and validation data must be relevant, representative, and traceable.

How Ctrl AI solves this

Every Control traces to its source document, section, and page. Data lineage built in.

Art. 11

Technical Documentation

Detailed documentation of the AI system's design, capabilities, and limitations.

How Ctrl AI solves this

Controls are the documentation. Typed I/O, execution rules, scripts, and source refs — always current.

Art. 12

Record-Keeping

Automatic logging of events during AI system operation for traceability.

How Ctrl AI solves this

Every query produces a reasoning trace: which Controls fired, which data accessed, which scripts ran.
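A minimal sketch of what such an Art. 12-style log entry could contain — assumed field names, not Ctrl AI's real schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical trace record: every query appends which Controls fired,
# which data sources were read, and which scripts ran, with a timestamp.
@dataclass
class TraceEntry:
    query: str
    controls_fired: list
    data_accessed: list
    scripts_run: list
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

trace: list[TraceEntry] = []

def log_query(query, controls, data, scripts):
    entry = TraceEntry(query, controls, data, scripts)
    trace.append(entry)
    return entry

entry = log_query(
    "What is the approval limit?",
    controls=["max_exposure"],
    data=["policy.pdf#p12"],
    scripts=["limit_check.py"],
)
```

Because each entry is plain structured data, exporting the trace for an audit is a straightforward serialization step.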

Art. 13

Transparency

AI systems must be sufficiently transparent for users to interpret outputs.

How Ctrl AI solves this

Trust levels on every claim: verified, policy-enforced, synthesized, or AI-generated. No black boxes.
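The four trust levels can be sketched as a simple enumeration, ordered from strongest to weakest guarantee (a hypothetical illustration, not Ctrl AI's implementation):

```python
from enum import Enum

# Hypothetical sketch of the four trust levels described above.
class TrustLevel(Enum):
    VERIFIED = "verified"                # deterministic script output
    POLICY_ENFORCED = "policy-enforced"  # signed-off Control
    SYNTHESIZED = "synthesized"          # pending expert review
    AI_GENERATED = "ai-generated"        # no Control coverage

def tag_claim(text: str, level: TrustLevel) -> dict:
    """Attach a trust level to a claim so users can interpret it."""
    return {"claim": text, "trust": level.value}

claim = tag_claim("Limit is €50,000", TrustLevel.VERIFIED)
```

Tagging every claim this way is what makes Art. 13 transparency checkable: a user (or auditor) can filter answers by how strongly they are backed.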

Art. 14

Human Oversight

AI systems must enable effective oversight by natural persons.

How Ctrl AI solves this

Expert sign-off on Controls. Procedure gates pause for human approval. Humans stay in the loop.

Art. 15

Accuracy & Robustness

AI systems must achieve appropriate levels of accuracy and robustness.

How Ctrl AI solves this

Deterministic scripts produce exact outputs. Guided Controls follow reviewed logic. Tested and signed off.

Art. 17

Quality Management

Implement a quality management system covering the entire AI lifecycle.

How Ctrl AI solves this

Control lifecycle: draft → review → sign-off → monitor. Freshness tracking. Coverage metrics.
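The lifecycle reads naturally as a small state machine — a sketch under assumed transition names, not the product's actual workflow engine:

```python
# Hypothetical sketch: only listed transitions are legal, so a Control
# cannot reach "monitor" without passing review and sign-off first.
TRANSITIONS = {
    "draft": {"review"},
    "review": {"sign-off", "draft"},  # reviewers may send it back
    "sign-off": {"monitor"},
    "monitor": {"review"},            # freshness checks can reopen review
}

def advance(state: str, target: str) -> str:
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

state = "draft"
for step in ("review", "sign-off", "monitor"):
    state = advance(state, step)
```

Encoding the lifecycle as data rather than convention is what lets a quality-management system prove, not just claim, that every live Control was reviewed.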

Risk classification system

The AI Act categorizes systems into four risk levels. Ctrl AI is designed for companies operating in the high-risk category.

Unacceptable Risk

Banned

Social scoring, real-time biometric surveillance, manipulative AI. Prohibited entirely.

High Risk

Strict Requirements

AI in hiring, credit scoring, healthcare, law enforcement. Requires risk management, documentation, human oversight, and conformity assessment.

This is where most enterprise AI operates — and where Ctrl AI provides enforceable compliance.

Limited Risk

Transparency

Chatbots, deepfakes, and emotion recognition systems (except in workplaces and schools, where they are banned). Must disclose that users are interacting with AI.

Minimal Risk

No Requirements

Spam filters, AI in video games, recommendation systems. Free to use with voluntary codes of conduct.

Controls are compliance controls — for AI.

Your auditors already understand controls. SOX controls, COSO controls, operational controls. Ctrl AI extends this concept to AI — making AI governable with the same rigor companies apply to everything else.

Art. 12 Record-Keeping

Reasoning Traces

Every AI decision logged step by step: which Controls fired, which data was accessed, which scripts ran. Export as CSV/PDF for any audit.

Art. 13 Transparency

Trust Levels

Every claim tagged: verified (deterministic), policy-enforced (signed-off), synthesized (pending review), or AI-generated (no coverage). No black boxes.

Art. 14 Human Oversight

Expert Sign-off

Domain experts review and approve Controls. Procedures gate decisions for human approval. Humans oversee AI, not the other way around.

Non-compliance costs

Penalties reach up to €35M or 7% of global annual turnover, whichever is higher.

€35M

Prohibited practices violations (or 7% of global turnover, whichever is higher)

€15M

High-risk system non-compliance (or 3% of global turnover, whichever is higher)

€7.5M

Incorrect information to authorities (or 1% of global turnover, whichever is higher)

Start your compliance journey →

Don't wait for enforcement. Build compliance into your AI now.

From documents to enforceable AI Controls in 30 minutes. Every decision traceable. Every rule signed off. Every answer accountable.

14-day free trial. No credit card required.