Regulatory Playbook · EU AI Act · SaaS Sector

AI Frameworks and Risk Assessments for SaaS Companies under the EU AI Act

A practitioner's guide to regulatory classification, conformity obligations, and enforcement exposure for software-as-a-service providers deploying artificial intelligence within the European Union.

Effective Date: 2 August 2026 (Full Application)
Key Articles: Art. 6, 9, 16, 25, 51-54
Penalty Ceiling: €35M or 7% Global Turnover

Executive Summary

The EU AI Act (Regulation 2024/1689) establishes the world's first comprehensive horizontal framework for artificial intelligence. For SaaS companies, the Regulation creates a tiered compliance architecture predicated on risk classification, with obligations attaching differentially to "providers" (those placing AI systems on the market) and "deployers" (those using such systems under their authority).

The central question for any SaaS business is whether its AI functionality constitutes a "high-risk AI system" within the meaning of Article 6. If so, the provider faces mandatory conformity assessment, CE marking requirements, technical documentation obligations under Annex IV, and registration in the EU database established under Article 71. Non-compliance attracts administrative fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher.

This playbook provides a structured framework for SaaS counsel and compliance officers to assess regulatory exposure, determine applicable obligations, and implement defensible compliance programmes ahead of the phased enforcement timeline commencing February 2025.

Section I

Scope and Applicability

1.1 Territorial Application

The Regulation applies to providers placing AI systems on the Union market or putting them into service in the Union, irrespective of whether those providers are established within the Union or in a third country (Article 2(1)(a)). It further reaches providers and deployers established in third countries where the output produced by the AI system is used in the Union (Article 2(1)(c)). For SaaS companies headquartered outside the EU, the decisive factor is therefore not corporate domicile but whether the system is placed on the Union market or its output is used within the Union.

This extraterritorial reach mirrors the approach taken under the General Data Protection Regulation and demands that non-EU SaaS providers appoint an authorised representative established in the Union where they place high-risk AI systems on the market (Article 22). The authorised representative must be mandated in writing, with authority to interact with market surveillance authorities and to provide documentation upon request.

1.2 Definition of "AI System"

Article 3(1) defines an AI system as a "machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

For SaaS applications, this definition captures substantially all machine learning functionality, including but not limited to: recommendation engines, predictive analytics modules, natural language processing features, computer vision components, and automated decision-making systems. Traditional rule-based software operating on deterministic logic without inference capability falls outside the definition, though the boundary cases require careful analysis.

1.3 Provider vs. Deployer Classification

The Regulation distinguishes between "providers" (Article 3(3)) and "deployers" (Article 3(4)). A provider is any natural or legal person that develops an AI system or has an AI system developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge. A deployer is any natural or legal person using an AI system under its authority, except where the system is used in the course of a personal non-professional activity.

For SaaS businesses, this classification carries significant implications. Where a SaaS company develops AI functionality and makes it available to enterprise customers, the SaaS company is prima facie the provider. However, where the SaaS offering permits customers to train, fine-tune, or substantially modify the AI system, the customer may assume provider obligations under Article 25's "substantial modification" doctrine. This allocation of responsibility should be addressed expressly in commercial agreements.

Practitioner Note: White-Label and OEM Arrangements

Where a SaaS provider supplies AI systems to distributors, importers, or other persons who place them on the market under their own name or trademark, such downstream actors may be treated as providers and assume the full suite of Article 16 obligations. SaaS companies operating white-label or OEM models must ensure contractual allocation of compliance responsibilities reflects the Regulation's operator classification framework.

Section II

Risk Classification Framework

The Regulation establishes a four-tier risk taxonomy: prohibited practices (Article 5), high-risk systems (Articles 6-7), limited-risk systems subject to transparency obligations (Article 50), and minimal-risk systems subject only to voluntary codes of conduct. For SaaS providers, the critical determination is whether any AI functionality falls within the high-risk classification.

2.1 Prohibited AI Practices

Article 5 prohibits certain AI practices absolutely, with no exemption for SaaS deployment. These include: systems deploying subliminal, manipulative, or deceptive techniques that materially distort behaviour causing significant harm; systems exploiting vulnerabilities of specific groups; social scoring by public authorities; real-time remote biometric identification in publicly accessible spaces (subject to limited law enforcement exceptions); emotion recognition in workplace and educational settings; biometric categorisation inferring sensitive attributes; and untargeted scraping of facial images for facial recognition databases.

SaaS providers must conduct an Article 5 assessment as a threshold matter. Any AI functionality falling within the prohibited categories must be removed or redesigned before the prohibition takes effect on 2 February 2025.

2.2 High-Risk AI Systems: The Article 6 Framework

Article 6 establishes two pathways to high-risk classification. Under Article 6(1), an AI system is high-risk where it is (a) intended to be used as a safety component of a product covered by Union harmonisation legislation listed in Annex I, or is itself such a product, and (b) is required to undergo third-party conformity assessment under that sectoral legislation. This captures AI embedded in machinery, toys, lifts, equipment for explosive atmospheres, radio equipment, pressure equipment, cableway installations, personal protective equipment, appliances burning gaseous fuels, medical devices, in vitro diagnostic devices, civil aviation, motor vehicles, agricultural vehicles, marine equipment, rail systems, and two- or three-wheel vehicles.

Under Article 6(2), an AI system is high-risk where it falls within one of the use-case categories enumerated in Annex III. For SaaS providers, the following Annex III categories warrant particular attention:

Annex III High-Risk Use Cases Relevant to SaaS

1. Biometrics (Annex III, Point 1)

Remote biometric identification systems (excluding verification); biometric categorisation by sensitive attributes; emotion recognition systems.

2. Critical Infrastructure (Annex III, Point 2)

AI as safety components in management and operation of critical digital infrastructure, road traffic, water/gas/heating/electricity supply.

3. Education and Vocational Training (Annex III, Point 3)

AI determining access to education, assigning persons to educational institutions, assessing learning outcomes, monitoring prohibited behaviour during tests.

4. Employment and Worker Management (Annex III, Point 4)

AI for recruitment, candidate screening, interview evaluation, promotion decisions, contract termination, task allocation based on behaviour/traits, and performance monitoring.

5. Access to Essential Services (Annex III, Point 5)

AI evaluating eligibility for public assistance benefits, creditworthiness assessment, risk assessment for life/health insurance pricing, and emergency services dispatch.

6. Law Enforcement (Annex III, Point 6)

Polygraph and similar tools, reliability assessment of evidence, risk of offending/reoffending, profiling during criminal investigation.

7. Migration, Asylum and Border Control (Annex III, Point 7)

Polygraphs, application assessment, risk assessment relating to security/health/irregular migration.

8. Administration of Justice (Annex III, Point 8)

AI assisting judicial authorities in researching and interpreting facts and law and applying the law to concrete facts.

2.3 The Article 6(3) Exception

Article 6(3) provides a critical safe harbour. An AI system falling within Annex III is not considered high-risk where it does not pose a "significant risk of harm to the health, safety or fundamental rights of natural persons," including by not materially influencing the outcome of decision-making. Article 6(3) itself specifies the conditions under which the exception applies: the AI system is intended to perform a narrow procedural task; it improves the result of a previously completed human activity; it detects decision-making patterns or deviations from prior decision-making patterns without replacing or influencing the previously completed human assessment; or it performs a preparatory task to an assessment relevant for the Annex III use cases.

Providers relying on this exception must document their assessment before placing the system on the market, register the system in the EU database pursuant to Article 49(2), and provide the assessment documentation to national competent authorities upon request. The exception cannot apply where the AI system performs profiling of natural persons within the meaning of Article 4(4) GDPR.

2.4 SaaS-Specific Classification Analysis

For SaaS providers, the risk classification exercise requires mapping each AI-enabled feature against the Annex III taxonomy and the Article 6(3) exception criteria. This is not a one-time assessment. Where a SaaS platform permits customers to deploy AI functionality across multiple use cases, the provider must consider whether any reasonably foreseeable deployment falls within Annex III.

Consider a hypothetical SaaS HR platform offering AI-powered candidate screening. The AI analyses CVs, scores candidates against job requirements, and ranks applicants. This functionality falls squarely within Annex III, Point 4(a) ("AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates"). The Article 6(3) exception is unlikely to apply because the AI materially influences hiring decisions and performs profiling.

Contrast this with a SaaS customer support platform deploying AI to route tickets to appropriate agents based on topic classification. While the AI makes operational decisions, it does not fall within any Annex III category and therefore attracts minimal-risk classification, subject only to voluntary codes of practice.

Section III

Provider Obligations for High-Risk Systems

Where a SaaS provider's AI system is classified as high-risk, Article 16 imposes a comprehensive suite of obligations extending across the system lifecycle. These obligations are not merely documentary; they require substantive technical implementation and ongoing operational compliance.

3.1 Risk Management System (Article 9)

Providers must establish, implement, document, and maintain a risk management system comprising a continuous iterative process running throughout the entire lifecycle of the high-risk AI system. The system must include: identification and analysis of known and reasonably foreseeable risks; estimation and evaluation of risks arising from use in accordance with intended purpose and under conditions of reasonably foreseeable misuse; evaluation of risks arising from analysis of data gathered from post-market monitoring; and adoption of appropriate and targeted risk management measures.

The risk management system must ensure that residual risks are judged acceptable when the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. The provider must identify the most appropriate risk management measures, giving due consideration to the effects of measures on risks from intended use and reasonably foreseeable misuse, and reflecting the generally acknowledged state of the art.

3.2 Data Governance (Article 10)

High-risk AI systems using techniques involving the training of AI models must be developed on the basis of training, validation, and testing datasets that meet quality criteria specified in Article 10(2)-(5). Datasets must be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They must have appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the system is intended to be used.

Where special categories of personal data are processed for bias detection and correction purposes, Article 10(5) provides a legal basis, subject to appropriate safeguards including technical limitations on re-use, state-of-the-art security measures, logging, access restrictions, and prohibition on processing for other purposes.

3.3 Technical Documentation (Article 11, Annex IV)

Providers must draw up technical documentation before the high-risk AI system is placed on the market or put into service. The documentation must be kept up-to-date and contain, at a minimum, the elements set out in Annex IV. This includes: general description; detailed description of system elements and development process; information about monitoring, functioning, and control; description of appropriateness of performance metrics; detailed description of risk management system; description of any change made; list of harmonised standards applied; copy of EU declaration of conformity; and detailed description of evaluation and testing procedures.

3.4 Record-Keeping and Logging (Article 12)

High-risk AI systems must be designed and developed with capabilities enabling automatic recording of events (logs) throughout the system's lifetime. Logging capabilities must ensure traceability of system functioning, enable monitoring of operation, and facilitate post-market monitoring. The technical design must ensure that logs record: period of use; reference database against which input data was checked; input data for which the search led to a match; and identification of natural persons involved in verification of results.

3.5 Transparency and Information to Deployers (Article 13)

High-risk AI systems must be designed and developed to ensure their operation is sufficiently transparent to enable deployers to interpret system output and use it appropriately. Providers must supply deployers with instructions for use containing concise, complete, correct, and clear information that is relevant, accessible, and comprehensible. Instructions must include: provider identity and contact details; system characteristics, capabilities, and limitations; changes to the system; integration and installation measures; description of hardware on which the system is intended to operate; and expected lifetime of the system.

3.6 Human Oversight (Article 14)

High-risk AI systems must be designed and developed to be effectively overseen by natural persons during the period in which they are in use. Human oversight must aim to prevent or minimise risks when the system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. The system must be provided with appropriate human-machine interface tools enabling the persons to whom human oversight is assigned to understand the capabilities and limitations of the system, remain aware of the possible tendency of automatically relying or over-relying on the output, correctly interpret the system's output, and decide not to use the system or otherwise disregard, override, or reverse its output.

3.7 Accuracy, Robustness, and Cybersecurity (Article 15)

High-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, and to perform consistently in those respects throughout their lifecycle. The levels of accuracy and relevant metrics must be declared in accompanying instructions for use. The system must be resilient to errors, faults, or inconsistencies that may occur within the system or its environment, and resilient against attempts by unauthorised third parties to alter its use, outputs, or performance by exploiting system vulnerabilities.

3.8 Quality Management System (Article 17)

Providers must put in place a quality management system ensuring compliance with the Regulation, documented in the form of written policies, procedures, and instructions, and including:

  • A strategy for regulatory compliance
  • Techniques, procedures, and systematic actions for design, design control, and design verification
  • Techniques, procedures, and systematic actions for development, quality control, and quality assurance
  • Examination, test, and validation procedures before, during, and after development
  • Technical specifications and standards to be applied
  • Systems and procedures for data management
  • The risk management system
  • The post-market monitoring system
  • Procedures for reporting serious incidents
  • Handling of communication with competent authorities
  • Systems and procedures for record-keeping
  • Resource management
  • An accountability framework setting out the responsibilities of management and staff

Integration with ISO 42001

While not mandated by the Regulation, alignment with ISO/IEC 42001:2023 (AI Management Systems) provides an internationally recognised framework for the Article 17 quality management system. The AI Management System Standard specifies requirements for establishing, implementing, maintaining, and continually improving an AIMS within the context of an organisation. SaaS providers with existing ISO 27001 (Information Security) or ISO 9001 (Quality Management) certifications may find natural extension points for AIMS integration.

Section IV

Deployer Obligations and Contractual Allocation

Article 26 imposes distinct obligations on deployers of high-risk AI systems. While the compliance burden falls primarily on providers, deployers are not mere passive recipients. SaaS providers must understand deployer obligations both to ensure their systems enable compliance and to address allocation of responsibilities in commercial agreements.

4.1 Core Deployer Duties

Deployers must:

  • Assign human oversight to natural persons with the necessary competence, training, and authority
  • Ensure input data is relevant and sufficiently representative
  • Monitor the high-risk AI system's operation and inform the provider of any serious incident or malfunctioning
  • Keep logs automatically generated by the system, to the extent under their control, for a period appropriate to the intended purpose and at least six months
  • Inform natural persons that they are subject to the use of the high-risk AI system, where the system makes, or assists in making, decisions relating to them
  • Register the use in the EU database, where the deployer is a public authority
  • Carry out a fundamental rights impact assessment before putting the system into use, for the categories of deployer discussed in Section 4.2

4.2 Fundamental Rights Impact Assessment

Deployers that are bodies governed by public law, private entities providing public services, or deployers of certain high-risk AI systems (credit scoring, life and health insurance pricing, or risk assessment and pricing for natural persons) must conduct a fundamental rights impact assessment before first use. The assessment must include: description of the deployer's processes; description of the period and frequency of use; categories of natural persons and groups likely to be affected; specific risks of harm likely to impact those categories; description of human oversight measures; and measures in the event of materialisation of identified risks.

4.3 Substantial Modification and Provider Status

Under Article 25, a deployer that modifies a high-risk AI system already placed on the market in a way that alters its compliance with the applicable requirements, or that changes the intended purpose in a way not anticipated by the provider, assumes provider obligations for the modified system. This has significant implications for SaaS deployments involving customer-side fine-tuning, custom model training, or integration modifications.

SaaS providers should consider: (a) whether their platform architecture permits modifications that could constitute "substantial modification"; (b) technical controls limiting customer modification capability; (c) contractual provisions allocating responsibility for modifications; and (d) documentation obligations for customer-initiated changes.

4.4 Contractual Considerations

SaaS agreements should address the following matters arising under the Regulation:

  • Confirmation of provider and deployer classification and respective obligations
  • Warranties regarding conformity assessment completion and CE marking
  • Provision of technical documentation and instructions for use
  • Access to logs and cooperation for deployer's monitoring obligations
  • Notification procedures for serious incidents and system malfunctioning
  • Restrictions on substantial modifications and consequences of breach
  • Cooperation for deployer's fundamental rights impact assessment
  • Indemnification for regulatory fines arising from the other party's non-compliance
  • Audit rights regarding quality management system and post-market monitoring
  • Termination rights for regulatory non-compliance or withdrawal of CE marking

Section V

Conformity Assessment Procedures

5.1 Self-Assessment vs. Notified Body Assessment

Article 43 establishes two conformity assessment pathways. For most high-risk AI systems falling under Annex III (standalone AI systems not embedded in regulated products), providers may conduct conformity assessment through internal control (self-assessment) pursuant to Annex VI. The provider draws up technical documentation, ensures the quality management system is in place, carries out the conformity assessment procedure, draws up the EU declaration of conformity, and affixes the CE marking.

However, for biometric systems (Annex III, Point 1), the pathway depends on harmonised standards. Where the provider has applied harmonised standards (or, where applicable, common specifications) covering the relevant requirements, it may choose between the Annex VI internal control procedure and the Annex VII procedure, under which a notified body assesses the quality management system and the technical documentation. Where such standards do not exist or have not been applied in full, the Annex VII procedure with notified body involvement is mandatory. The notified body must be designated by a Member State and notified to the Commission.

5.2 Annex VI Internal Control Procedure

The internal control procedure under Annex VI requires the provider to verify that the established quality management system is in compliance with the requirements of Article 17; examine the information contained in the technical documentation to assess compliance of the AI system with the relevant essential requirements; and verify that the design and development process of the AI system and its post-market monitoring is consistent with the technical documentation.

5.3 EU Declaration of Conformity

Following successful conformity assessment, the provider must draw up an EU declaration of conformity for each high-risk AI system and keep it at the disposal of national competent authorities for ten years after the system is placed on the market. The declaration must be kept up to date and contain the elements set out in Annex V: provider name and address; statement that the declaration is issued under the provider's sole responsibility; AI system identification; statement of conformity with the Regulation; relevant harmonised standards applied; notified body (if applicable); and place/date of issue.

5.4 CE Marking

The CE marking must be affixed visibly, legibly, and indelibly to the high-risk AI system. Where this is not possible or not warranted on account of the nature of the system, it must be affixed to the packaging or accompanying documentation. For software-only AI systems (including SaaS), the CE marking may be affixed digitally.

5.5 EU Database Registration

Before placing a high-risk AI system on the market, providers must register themselves and their system in the EU database established under Article 71. The registration must include the information specified in Annex VIII, Sections A and B. The Commission is developing the database infrastructure, which must be operational by the high-risk system application date of 2 August 2026.

Section VI

Technical Documentation Requirements

Annex IV specifies the minimum contents of technical documentation for high-risk AI systems. For SaaS providers, preparing compliant technical documentation represents a substantial undertaking requiring input from engineering, product, data science, legal, and compliance functions.

6.1 Annex IV Documentation Elements

  1. General Description: Intended purpose; provider name and version; interaction with hardware/software; versions of software; forms in which the system is placed on the market; description of hardware; user instructions; product photographs and illustrations where relevant.
  2. Detailed Description of System Elements: Development methods and procedures; design specifications and system architecture; description of computational resources; third-party tools and components; data requirements; assessment of human oversight measures; predetermined changes.
  3. Monitoring, Functioning, and Control: Capabilities and limitations; specifications on accuracy, robustness, and cybersecurity; reasonably foreseeable situations that may lead to risks; human oversight measures; input data specifications.
  4. Risk Management System Description: Identification of risks; assessment methodology; risk mitigation measures; residual risk evaluation; testing results.
  5. Lifecycle Changes: Description of any change made to the system during its lifecycle that affects compliance.
  6. Harmonised Standards: List of harmonised standards applied in full or in part; where not applied, description of solutions adopted to meet requirements.
  7. EU Declaration of Conformity Copy.
  8. Evaluation Procedures: Description of post-market monitoring procedures; system to be used for evaluating performance after placing on market.

6.2 Data Governance Documentation

For AI systems developed using machine learning, the technical documentation must include a detailed description of data governance, covering: training methodologies and techniques; training data characteristics (source, size, representativeness); data preparation operations (annotation, labelling, cleaning, enrichment); formulation of assumptions; assessment of data availability, quantity, and suitability; examination for possible biases; identification of relevant data gaps and shortcomings; and measures taken to address them.

6.3 Record Retention

Providers must retain the technical documentation and EU declaration of conformity for ten years after the high-risk AI system is placed on the market or put into service. Logs generated by high-risk AI systems must be retained for a period appropriate to the intended purpose of the system, and for at least six months. The substantial documentation burden requires SaaS providers to establish robust information management systems from the outset of AI system development.
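The two retention floors above can be encoded as simple policy checks. This is a minimal sketch under stated assumptions: the six-month minimum is approximated in days, the function names are the author's own, and actual purge decisions should follow documented retention policy, not this heuristic alone.

```python
from datetime import date, timedelta

TECH_DOC_RETENTION_YEARS = 10   # technical documentation and EU declaration
MIN_LOG_RETENTION_DAYS = 183    # "at least six months", approximated in days

def doc_retention_until(placed_on_market: date) -> date:
    """Ten years from placing on the market, with a 29 February fallback."""
    try:
        return placed_on_market.replace(
            year=placed_on_market.year + TECH_DOC_RETENTION_YEARS)
    except ValueError:  # leap-day edge case
        return placed_on_market.replace(
            year=placed_on_market.year + TECH_DOC_RETENTION_YEARS, day=28)

def log_purge_allowed(log_created: date, today: date) -> bool:
    """A log may only be purged once the six-month floor has elapsed."""
    return today - log_created >= timedelta(days=MIN_LOG_RETENTION_DAYS)
```

A system placed on the market on 2 August 2026 would, on this sketch, carry documentation obligations until 2 August 2036.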

Section VII

General-Purpose AI Model Provisions

Chapter V (Articles 51-56) establishes a distinct regulatory regime for general-purpose AI models (GPAI). For SaaS providers integrating GPAI models (such as large language models, foundation models, or multi-modal systems) into their offerings, these provisions layer additional obligations on top of high-risk system requirements.

7.1 Definition and Scope

A general-purpose AI model is defined under Article 3(63) as an AI model, including where trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and which can be integrated into a variety of downstream systems or applications.

7.2 Provider of GPAI Model Obligations

Providers of GPAI models must, under Article 53: draw up and keep up-to-date technical documentation of the model; draw up and keep up-to-date documentation for downstream providers; put in place a policy to respect Union copyright law, including mechanisms to identify and comply with reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790; and make available a sufficiently detailed summary about the content used for training.

7.3 GPAI Models with Systemic Risk

Article 51 classifies GPAI models as presenting "systemic risk" where they have high impact capabilities. This is presumed where the cumulative amount of computation used for training is greater than 10^25 floating point operations. Providers of GPAI models with systemic risk face enhanced obligations under Article 55: perform model evaluation in accordance with standardised protocols; assess and mitigate possible systemic risks; track and document serious incidents and corrective measures; and ensure an adequate level of cybersecurity protection.
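The 10^25 threshold can be checked against a rough compute estimate. The sketch below uses the common "6 × parameters × tokens" heuristic for dense transformer training compute; that heuristic is an industry approximation, not a method prescribed by the Regulation, and the function names are illustrative.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    # Rough "6ND" heuristic for dense transformer training compute;
    # an approximation only, not the Regulation's measurement method.
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_FLOPS

# Example: a 70B-parameter model trained on 15T tokens gives
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 presumption threshold.
```

On this heuristic, a hypothetical 70B-parameter model trained on 15 trillion tokens would fall just below the presumption, underscoring how close current frontier-adjacent models sit to the systemic-risk line.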

7.4 Downstream Integration Considerations

Where a SaaS provider integrates a third-party GPAI model into a high-risk AI system, Article 25(3) provides that the downstream provider is not considered the provider of the GPAI model itself. However, the SaaS provider remains fully responsible for compliance of the integrated high-risk AI system with all applicable requirements. The SaaS provider must ensure the GPAI model provider has fulfilled its Article 53 documentation obligations and that sufficient information is available to complete the high-risk system's technical documentation.

Practical considerations include: obtaining contractual warranties from GPAI model providers regarding Article 53 compliance; ensuring access to GPAI model technical documentation and training summaries; assessing whether the GPAI model is classified as presenting systemic risk; and building compliance monitoring mechanisms for GPAI model updates or version changes.

Section VIII

Enforcement and Penalty Framework

8.1 Administrative Fines

Article 99 establishes a tiered penalty structure. The maximum fines, in each case the higher of the absolute amount or the turnover percentage, are:

Prohibited AI Practices (Article 5): €35M or 7% of worldwide annual turnover
Breach of Operator Obligations (including high-risk system requirements): €15M or 3% of worldwide annual turnover
Supply of Incorrect, Incomplete, or Misleading Information to Authorities: €7.5M or 1% of worldwide annual turnover

For SMEs (enterprises with fewer than 250 employees and annual turnover not exceeding €50 million) and startups, the Regulation provides for reduced caps, with the lower of the absolute amount or turnover percentage applying.

8.2 Market Surveillance and Enforcement Architecture

Member States must designate national competent authorities and market surveillance authorities (Article 70). The AI Office, established within the Commission (Article 64), exercises the Commission's exclusive powers to supervise and enforce the GPAI model provisions. Market surveillance authorities have extensive investigatory powers including: access to any data, documentation, and information necessary; access to premises, equipment, and AI systems; and power to request explanations from any natural or legal person.

8.3 Corrective Actions and Market Withdrawal

Where market surveillance authorities identify non-compliance, they may require operators to bring the AI system into compliance, prohibit or restrict availability on the market, or require withdrawal or recall. Providers are obligated to take immediate corrective action upon identifying that an AI system is not in conformity and, where a serious risk exists, must immediately inform national authorities.

8.4 Serious Incident Reporting

Providers and deployers must report serious incidents to market surveillance authorities no later than 15 days after becoming aware of the incident. A serious incident means an incident or malfunctioning of an AI system that directly or indirectly leads to death, serious damage to health, serious disruption to critical infrastructure management, serious and irreversible damage to property or the environment, or breach of fundamental rights.
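Tracking the reporting window is a natural candidate for a compliance workflow check. A minimal sketch, assuming the default 15-day window described above (shorter windows apply to certain incident types, which this sketch deliberately omits); the function names are the author's own.

```python
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15  # default window; certain incident types
                            # carry shorter deadlines not modelled here

def report_due_by(awareness_date: date) -> date:
    """Latest permissible reporting date, counting from awareness."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS)

def is_overdue(awareness_date: date, today: date) -> bool:
    """True once the reporting deadline has passed without filing."""
    return today > report_due_by(awareness_date)
```

Wiring such a check into an incident-ticketing system helps ensure the regulatory clock, which runs from awareness rather than from the incident itself, is not missed.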

8.5 Implementation Timeline

Key Compliance Dates

1 August 2024 — Regulation enters into force
2 February 2025 — Prohibitions on unacceptable-risk AI (Article 5) apply
2 August 2025 — GPAI model obligations (Chapter V) apply; penalties become applicable
2 August 2026 — Full application of high-risk AI system requirements (Chapter III); EU database operational
2 August 2027 — Obligations for high-risk AI systems in Annex I (regulated products) apply

Annexure

SaaS Provider Compliance Checklist

Phase 1: Assessment (Immediate)

  • Inventory all AI functionality across SaaS platform
  • Conduct Article 5 prohibited practices assessment
  • Map AI systems against Annex III high-risk categories
  • Evaluate Article 6(3) exception applicability
  • Identify GPAI model integrations and provider obligations
  • Determine provider/deployer classification for each system
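The Phase 1 inventory can start as simply as one structured record per AI feature, exported for review by counsel. The column names below are illustrative assumptions, not prescribed by the Regulation; teams should extend them to match their own platform taxonomy.

```python
import csv
import io

# Hypothetical inventory columns for the Phase 1 assessment.
FIELDS = ["feature", "model_type", "gpai_dependency",
          "annex_iii_candidate", "article_5_flag", "operator_role"]

def write_inventory(rows: list[dict]) -> str:
    """Render the AI feature inventory as CSV for compliance review."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Keeping the inventory in a reviewable, versioned format supports the documentation trail that later phases (gap analysis, technical documentation) will build upon.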

Phase 2: Gap Analysis (Q2 2025)

  • Assess current risk management system against Article 9
  • Review data governance practices against Article 10
  • Evaluate logging capabilities against Article 12
  • Audit human oversight design against Article 14
  • Assess quality management system against Article 17

Phase 3: Documentation (Q3-Q4 2025)

  • Draft technical documentation per Annex IV
  • Prepare instructions for use per Article 13
  • Document risk management methodology and results
  • Establish record-keeping and log retention systems

Phase 4: Conformity (H1 2026)

  • Complete conformity assessment (Annex VI or VII)
  • Draw up EU declaration of conformity
  • Affix CE marking (digital for software)
  • Register in EU database
  • Appoint authorised representative (non-EU providers)

Phase 5: Operationalisation (Ongoing)

  • Implement post-market monitoring system
  • Establish serious incident reporting procedures
  • Train personnel on compliance obligations
  • Update commercial agreements with deployer provisions
  • Establish continuous compliance monitoring


Need Advisory on EU AI Act Compliance?

AMLEGALS maintains dedicated AI regulatory capability advising SaaS providers, technology companies, and institutional deployers on EU AI Act compliance. Our team assists with risk classification analysis, conformity assessment procedures, technical documentation preparation, and commercial agreement structuring.