EU AI Act Framework

Article 6 classification, technical documentation, and transparency requirements for high-risk AI systems deployed within the European Union

4 Risk Categories · €35M Maximum Penalty · 3 Enforcement Phases · 27 Member States
Article 6: High-Risk Classification
Risk Architecture

The EU AI Act defines "high-risk" AI systems as those that pose a significant risk to health, safety, or fundamental rights. High-risk categories include:
- Biometric identification and emotion recognition
- Critical infrastructure management
- Education and vocational training
- Credit and employment decisions
- Law enforcement and border control
- Access to public services and benefits

Not all AI systems are high-risk: a recommendation engine may be low-risk, while a credit decision system is high-risk. The Article 6 classification exercise is the foundation for all downstream compliance requirements.
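
To make the classification exercise concrete, here is a minimal Python sketch of a first-pass screening against the categories listed above. The category names, the screen_system helper, and the sample systems are illustrative assumptions, not the legal test; the binding criteria are Article 6 together with Annex III, and legal review still confirms each result.

```python
# Hypothetical sketch: first-pass screening of an AI system against the
# high-risk categories summarised above. Category names and the routing
# logic are illustrative only; Article 6 and Annex III are the legal test.

HIGH_RISK_CATEGORIES = {
    "biometric_identification",
    "emotion_recognition",
    "critical_infrastructure",
    "education_and_training",
    "credit_decisions",
    "employment_decisions",
    "law_enforcement",
    "border_control",
    "public_services_access",
}

def screen_system(name: str, use_cases: set[str]) -> dict:
    """Return a provisional classification for one AI system."""
    matched = sorted(use_cases & HIGH_RISK_CATEGORIES)
    return {
        "system": name,
        "provisional_classification": "high-risk" if matched else "not high-risk",
        "matched_categories": matched,
        "needs_legal_review": bool(matched),  # code only flags; counsel confirms
    }

if __name__ == "__main__":
    print(screen_system("loan-scoring-v2", {"credit_decisions", "recommendation"}))
    print(screen_system("product-recommender", {"recommendation"}))
```
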
Article 11: Technical Documentation
Documentation Standards

High-risk AI systems require comprehensive technical documentation covering:
- Training data sources and characteristics
- Model architecture and decision logic
- Performance metrics across demographic groups
- Testing and validation procedures
- Known limitations and failure modes
- Monitoring and update procedures

Documentation must show how the system meets the obligations that follow from its Article 6 risk classification. Organizations cannot rely on unexplainable "black box" models for high-risk applications; explainability is not optional.
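
As an illustration of how these items can be tracked, the sketch below models a documentation file as a simple Python record and reports which required sections are still empty. The TechnicalDocumentation class and its field names are assumptions for illustration, not the Annex IV structure.

```python
# Hypothetical sketch: a record mirroring the documentation items listed
# above, so each high-risk system's file can be checked for gaps.
from dataclasses import dataclass, fields

@dataclass
class TechnicalDocumentation:
    system_name: str
    training_data_sources: str = ""
    model_architecture: str = ""
    performance_by_group: str = ""      # metrics across demographic groups
    testing_and_validation: str = ""
    known_limitations: str = ""
    monitoring_and_updates: str = ""

    def missing_sections(self) -> list[str]:
        """Names of required sections that are still empty."""
        return [f.name for f in fields(self) if getattr(self, f.name) == ""]

doc = TechnicalDocumentation(
    system_name="loan-scoring-v2",
    training_data_sources="2019-2023 loan book, anonymised",
)
print(doc.missing_sections())  # lists every section still to be written
```
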
Article 13: Transparency Information
User Disclosure

High-risk AI systems must disclose:
- That an AI system is being used
- Which decisions the AI makes or influences
- When a human can override AI decisions
- How to appeal an AI decision

Transparency information must be written for end-users, not regulators, explaining in plain language when and how the AI affects them.
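
One way to operationalize this is a plain-language notice template covering the disclosure points above. The wording, the render_notice helper, and the example values below are illustrative assumptions, not regulatory text.

```python
# Hypothetical sketch: rendering the disclosure points above into a
# plain-language notice for end-users. Wording and fields are illustrative.

NOTICE_TEMPLATE = """\
This service uses an AI system.
- What it does: {decision_scope}
- Human override: {override_rule}
- How to appeal: {appeal_channel}
"""

def render_notice(decision_scope: str, override_rule: str, appeal_channel: str) -> str:
    return NOTICE_TEMPLATE.format(
        decision_scope=decision_scope,
        override_rule=override_rule,
        appeal_channel=appeal_channel,
    )

print(render_notice(
    decision_scope="It recommends whether your credit application is approved.",
    override_rule="A credit officer can override any recommendation before a final decision.",
    appeal_channel="Email appeals@example.com within 30 days of the decision.",
))
```
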
Article 14: Human Oversight
Governance Mandate

High-risk AI systems require documented human review procedures covering:
- When human review is required
- Who performs the review (roles and competencies)
- What information is available to the reviewer
- Response time for human decisions
- Appeal procedures for users

Article 14 does not require checking every decision. It requires documenting which decisions need human judgment and establishing protocols for exercising that judgment.
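A protocol like this can be captured as data rather than prose, which makes it easier to audit. The sketch below encodes review rules with a trigger, a reviewer role, the information shown, and a response time; the specific triggers, roles, and deadlines are assumptions for illustration.

```python
# Hypothetical sketch: a human-review protocol as data, covering when
# review happens, who reviews, what they see, and how fast. All values
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ReviewRule:
    trigger: str              # condition requiring human review
    reviewer_role: str        # who performs the review
    information_shown: str    # what the reviewer sees
    response_time_hours: int  # how quickly a decision is due

PROTOCOL = [
    ReviewRule("credit application declined by the model",
               "senior credit officer",
               "model score, top factors, applicant file",
               response_time_hours=48),
    ReviewRule("applicant appeals an automated decision",
               "appeals panel",
               "full case history and original review notes",
               response_time_hours=120),
]

def rules_for(event: str) -> list[ReviewRule]:
    """Return the review rules triggered by an event description."""
    return [rule for rule in PROTOCOL if rule.trigger == event]

print(rules_for("credit application declined by the model"))
```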

Compliance Timeline

Staged enforcement across three phases

Phase 1: February 2025

Prohibited Practices

Prohibited AI practices are banned and AI literacy obligations apply. Organizations must stop any use of unacceptable-risk systems.

Phase 2: August 2025

General-Purpose AI & Governance

General-purpose AI transparency requirements begin. Governance structures, codes of practice, and penalty rules take effect.

Phase 3: August 2026

High-Risk System Requirements

Articles 6–14 become fully applicable to most high-risk systems. Organizations must have technical documentation and human oversight procedures in place and demonstrate ongoing compliance as regulators investigate and impose fines; high-risk AI embedded in regulated products has until August 2027.

Practical Implementation
Roadmap

Organizations should take a phased approach:

Months 1-2: Audit existing AI systems against Article 6 criteria and classify which are high-risk.
Months 3-4: Develop technical documentation for high-risk systems and audit it against Article 11 requirements.
Months 5-6: Implement transparency information for end-users and design human review protocols per Article 14.
Months 7-9: Test and validate compliance and conduct a regulatory readiness audit.

Organizations with 3-5 high-risk systems typically need 6-9 months to reach full compliance.
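
For teams tracking several systems through this roadmap, a small status helper can show each system's next outstanding phase. The phase names mirror the roadmap above; the compliance_status function and the sample systems are illustrative assumptions.

```python
# Hypothetical sketch: tracking roadmap progress per system so the next
# outstanding phase is visible. Phase names mirror the roadmap above.

ROADMAP_PHASES = [
    "classify against Article 6",
    "technical documentation (Article 11)",
    "user transparency (Article 13)",
    "human oversight protocol (Article 14)",
    "readiness audit",
]

def compliance_status(completed_phases: dict[str, set[str]]) -> dict[str, str]:
    """Map each system to its next outstanding phase, or 'ready' if done."""
    status = {}
    for system, done in completed_phases.items():
        remaining = [phase for phase in ROADMAP_PHASES if phase not in done]
        status[system] = remaining[0] if remaining else "ready"
    return status

print(compliance_status({
    "loan-scoring-v2": {"classify against Article 6"},
    "cv-screening": set(),
}))
```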

Prepare for EU AI Act Compliance

Start your Article 6 classification and compliance assessment now. The clock is ticking.