Regulation (EU) 2024/1689 Deep Dive

Legislative Context

Preamble & Recitals

The Preamble consists of 180 Recitals that provide the interpretative lens for the Act. Unlike the Articles, Recitals are not legally binding in isolation, but they are critical aids to interpretation by the Court of Justice of the EU (CJEU).

Recitals 1–5: Human-Centric AI

Establishes the dual objective: improving the functioning of the internal market while guaranteeing a high level of protection of health, safety, and fundamental rights. The Act explicitly aligns AI with the Charter of Fundamental Rights of the EU.

Recitals 6–10: Risk-Based Approach

Clarifies that regulation should be proportionate to the risk. It explicitly excludes AI developed exclusively for military, defense, or national security purposes from the scope.

⚖️ Counsel Note: Recital 12 (Definition)

The definition of "AI system" is aligned with the OECD's definition to ensure international interoperability. Critically, it turns on the capacity to "infer" how to generate outputs; simple rules-based systems (e.g., static Excel formulas) are intended to be excluded.

Chapter I (Articles 1–4)

Subject Matter & Scope

Article 2: Scope (The "Brussels Effect")

The Act applies to providers placing AI systems on the market in the EU, regardless of whether they are established in the EU or in a third country. Crucially, it also applies to providers and deployers in third countries where the output produced by the system is used in the Union.

Key Exclusion

AI systems and models specifically developed and put into service for the sole purpose of scientific research and development (Art 2(6)).

Article 3: Definitions

'AI System': A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Art 3(1)).
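
To make the cumulative elements of that definition concrete, here is a minimal screening sketch in Python (the names and structure are hypothetical, not an official test); per the Counsel Note above, a system that does not infer how to generate outputs, such as a fixed rule set, falls outside the definition:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical screening record for the Art 3(1) elements."""
    machine_based: bool           # runs as hardware/software, not a human process
    autonomy: bool                # operates with some level of autonomy
    infers_outputs: bool          # derives outputs from inputs, rather than
                                  # replaying fixed, human-authored rules
    influences_environment: bool  # outputs can affect physical/virtual environments

def is_ai_system(p: SystemProfile) -> bool:
    """Rough screen: all mandatory elements of Art 3(1) must be present.

    Adaptiveness after deployment is optional ("may exhibit"), so it is
    deliberately not a gating condition here.
    """
    return all([p.machine_based, p.autonomy,
                p.infers_outputs, p.influences_environment])

# A static spreadsheet formula: machine-based, but no autonomy or inference.
print(is_ai_system(SystemProfile(True, False, False, True)))  # False
# A deployed ML recommender system: all elements present.
print(is_ai_system(SystemProfile(True, True, True, True)))    # True
```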

Chapter II (Article 5)

Prohibited AI Practices

These practices are deemed to carry an Unacceptable Risk and are banned outright. Exceptions are extremely narrow (e.g., certain law enforcement uses subject to prior judicial or independent administrative authorization).

Subliminal Manipulation

Techniques that materially distort behavior and impair informed decision-making, causing or reasonably likely to cause significant harm.

Exploitation of Vulnerabilities

Targeting age, disability, or social/economic situation.

Social Scoring

Evaluating natural persons over time leading to detrimental treatment in unrelated contexts.

Real-time Remote Biometric ID

Use in publicly accessible spaces for law enforcement (unless strictly necessary, e.g., for a targeted search for a missing person or the prevention of a terrorist attack).

Biometric Categorization

Inferring race, political opinions, trade union membership, religious beliefs, or sexual orientation.

Emotion Recognition

Banned in the workplace and in education institutions (except for medical or safety reasons).

Predictive Policing

Assessing risk of offending based solely on profiling or personality traits.

Untargeted Scraping (Clearview AI)

Building or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

Chapter III (Articles 6–49)

High-Risk AI Systems

Chapter III constitutes the bulk of the compliance burden. High-Risk systems are permitted but subject to strict ex-ante and ex-post obligations.

Classification Criteria (Art 6)

  1. Safety Components: AI used as a safety component in regulated products (toys, cars, medical devices, aviation) under Annex I.
  2. Stand-alone Systems: Listed in Annex III. Includes biometrics, critical infrastructure, education, employment, essential public services, law enforcement, migration, and the administration of justice. (A classification sketch follows the derogation note below.)

Core Obligations (Arts 8–15)

  • Risk Management System (Art 9)
  • Data Governance: training, validation & test sets (Art 10)
  • Technical Documentation (Art 11)
  • Record Keeping / Logging (Art 12)
  • Transparency & Instructions for Use (Art 13)
  • Human Oversight (Art 14)
  • Accuracy, Robustness & Cybersecurity (Art 15)
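
A provider might track these duties internally with a simple checklist structure. The sketch below is hypothetical (the Article-to-duty mapping follows the Act; the tracker itself is an assumption):

```python
# Hypothetical internal tracker mapping each Chapter III duty to its Article.
HIGH_RISK_OBLIGATIONS = {
    "Art 9":  "Risk management system",
    "Art 10": "Data governance (training/validation/test sets)",
    "Art 11": "Technical documentation",
    "Art 12": "Record keeping (automatic logging)",
    "Art 13": "Transparency and instructions for use",
    "Art 14": "Human oversight",
    "Art 15": "Accuracy, robustness and cybersecurity",
}

def open_items(status: dict[str, bool]) -> list[str]:
    """Return the obligations not yet evidenced as satisfied."""
    return [f"{art}: {duty}"
            for art, duty in HIGH_RISK_OBLIGATIONS.items()
            if not status.get(art, False)]

# Example: documentation and logging are done, everything else is open.
print(open_items({"Art 11": True, "Art 12": True}))
```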

The "Derogation" Procedure (Art 6(3))

A system listed in Annex III is NOT high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. This requires a documented assessment showing, for example, that the AI only performs a narrow procedural task, improves the result of a previously completed human activity, merely detects decision-making patterns or deviations, or performs a preparatory task. A system that profiles natural persons is always high-risk.
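
The resulting decision logic can be sketched as follows. This is a deliberately simplified, hypothetical reading of Article 6 (the real assessment is fact-specific and must be documented), with the Annex III derogation conditions collapsed into two flags:

```python
def classify_high_risk(annex_i_safety_component: bool,
                       in_annex_iii: bool,
                       narrow_procedural_task: bool = False,
                       detects_deviations_only: bool = False,
                       profiles_natural_persons: bool = False) -> bool:
    """Simplified Article 6 screen (hypothetical sketch, not legal advice).

    Art 6(1): safety components of Annex I products are high-risk.
    Art 6(2): Annex III systems are high-risk by default.
    Art 6(3): an Annex III system escapes the label if it poses no
    significant risk (e.g., a narrow procedural task, or mere detection
    of decision-making deviations), but profiling of natural persons
    always stays high-risk.
    """
    if annex_i_safety_component:
        return True
    if not in_annex_iii:
        return False
    if profiles_natural_persons:  # Art 6(3), final subparagraph
        return True
    if narrow_procedural_task or detects_deviations_only:
        return False              # derogation applies; document the assessment
    return True

# A CV-screening tool (Annex III: employment) that profiles applicants:
print(classify_high_risk(False, True, profiles_natural_persons=True))  # True
```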

Chapter IV (Article 50)

Transparency Obligations

Deepfakes & Synthetic Content

AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear authentic must be disclosed as artificially generated or manipulated.

Chatbots & AI Interaction

Users must be informed when they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.

Chapter V (Articles 51–56)

General Purpose AI Models (GPAI)

This Chapter regulates foundation models (e.g., GPT-4, Gemini, Claude). It distinguishes between standard GPAI models and those posing Systemic Risk.

All GPAI Models

  • Maintain technical documentation.
  • Comply with EU copyright law.
  • Publish a detailed summary of the training content used.

Systemic Risk Models

Threshold: cumulative training compute greater than 10^25 FLOPs (Art 51(2))

  • Perform model evaluations (red teaming).
  • Assess and mitigate systemic risks.
  • Report serious incidents to the AI Office.
  • Ensure cybersecurity protections.
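
To see what the threshold means in practice, here is a back-of-the-envelope sketch using the common heuristic of roughly 6 FLOPs per parameter per training token. The heuristic and the example figures are assumptions for illustration, not part of the Act:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, Art 51(2) presumption

def estimated_training_flops(params: float, tokens: float) -> float:
    """Community rule of thumb: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: "
      f"{flops > SYSTEMIC_RISK_THRESHOLD}")
# 6 x 70e9 x 15e12 = 6.30e24, i.e. just below the 1e25 presumption
```
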
Chapter VI (Articles 57–63)

AI Regulatory Sandboxes

Member States shall establish at least one AI regulatory sandbox to facilitate the development and testing of innovative AI systems under regulatory supervision before market placement.

Chapter VII (Articles 64–70)

Governance Structure

Enforcement Architecture

EU level: the European AI Office (within the Commission) enforces the GPAI rules and coordinates national bodies.

Member State level: National Competent Authorities enforce the High-Risk AI rules within Member States (e.g., the CNIL in France, the BfDI in Germany).

Chapter XII (Articles 99–101)

Penalties & Fines

€35M or 7% of worldwide annual turnover (whichever is higher)
For Prohibited Practices (Art 5)

€15M or 3% of worldwide annual turnover (whichever is higher)
For breaches of High-Risk AI obligations

€7.5M or 1.5% of worldwide annual turnover (whichever is higher)
For supplying incorrect, incomplete, or misleading information
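
Because each cap takes the higher of the fixed amount and the turnover share (for SMEs, Art 99(6) takes the lower instead), the applicable maximum is a simple max(). A sketch with hypothetical figures:

```python
def max_fine(fixed_eur: float, pct: float, turnover_eur: float) -> float:
    """Art 99 cap for non-SMEs: the higher of the fixed amount
    and the share of worldwide annual turnover."""
    return max(fixed_eur, pct * turnover_eur)

# Hypothetical provider with EUR 2bn worldwide turnover, Art 5 tier:
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```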