AMLEGALS
Global AI Policy Intelligence

The Brussels Effect on Indian AI: Jurisdictional Nexus and Cross-Border Compliance Architecture

December 2025
186 Pages
AMLEGALS AI Policy Hub

Executive Summary

The EU AI Act (Regulation 2024/1689) represents the world's first comprehensive AI regulatory framework with explicit extraterritorial reach. For Indian AI companies—ranging from Bangalore-based startups deploying chatbots to Hyderabad enterprise software vendors building automation tools—the Act's Article 2 'placing on the market' and 'putting into service' provisions create unavoidable compliance obligations when their systems' outputs are used within the European Union. This 186-page white paper provides the first exhaustive legal analysis of the jurisdictional nexus between Indian AI operations and EU regulatory authority. Drawing on GDPR enforcement precedent, ECJ jurisprudence on digital services, and emerging guidance from the European AI Office, we establish the definitive framework for determining when Indian companies fall within EU AI Act scope, quantifying compliance costs (estimated €500K-€5M per high-risk system), and architecting technical and organizational measures to satisfy ex-ante conformity assessment, documentation, and oversight obligations without establishing EU subsidiaries. This is the compliance roadmap for Indian AI's European expansion.


Executive Summary

Indian AI companies face an existential regulatory question: Does the EU AI Act apply to us? The answer, in most commercially significant scenarios, is yes. The Act's Article 2(1) establishes jurisdiction over providers 'placing AI systems on the market in the Union or putting such systems into service in the Union, regardless of whether those providers are established within the Union or in a third country.' This language is deliberately expansive. It mirrors GDPR's Article 3(2) territorial scope, which the European Court of Justice has interpreted broadly to capture any processing that 'targets' EU data subjects.

For Indian AI companies, three triggering scenarios are dispositive: (1) Direct EU Market Entry: An Indian company sells or licenses an AI system to EU customers. The company is a 'provider' under Article 3(3) and subject to full EU AI Act obligations. (2) Indirect EU Use: An Indian company provides AI systems to global clients, and those clients deploy the systems for EU users. If the Indian company has knowledge or reasonable foreseeability of EU use, jurisdiction attaches under the 'output used in the Union' clause. (3) EU Establishment Presence: An Indian company has a subsidiary, branch, or representative in the EU that distributes AI systems.

The EU entity is the 'provider' or 'distributor,' and the Indian parent bears indirect compliance obligations through supply chain liability. Each scenario triggers distinct compliance pathways. This paper maps those pathways with surgical precision.


01. The Extraterritorial Predicate: Article 2 Scope and the 'Brussels Effect' Doctrine

The EU AI Act's extraterritorial reach is not accidental—it is the deliberate operationalization of the 'Brussels Effect,' a term coined by Professor Anu Bradford to describe the EU's ability to regulate global markets by conditioning access to its roughly 450 million consumers on compliance with EU rules. The mechanism is elegant: EU regulations apply to any entity whose products or services affect EU citizens, regardless of where the entity is located. GDPR pioneered this for data protection. The AI Act extends it to algorithmic systems. Article 2(1) establishes three jurisdictional hooks: (a) Providers placing AI systems on the market in the Union.

'Placing on the market' means making an AI system available in the EU for the first time, whether through sale, licensing, or distribution. For Indian companies, this includes: selling AI software to EU enterprises (e.g., a Mumbai-based HR tech company licensing a resume screening tool to a Berlin startup); offering AI-as-a-Service platforms accessible to EU users (e.g., a Bangalore AI API provider allowing EU developers to integrate speech-to-text models); and distributing open-source AI models that EU entities download and deploy. (b) Providers putting AI systems into service in the Union. 'Putting into service' means supplying an AI system for use under the provider's name, trademark, or as a service.

This captures SaaS deployments, cloud-hosted models, and API services. For Indian companies, this includes: A Hyderabad company hosting a fraud detection AI that analyzes transactions for EU banks. A Pune startup offering an AI-powered legal document review service to EU law firms. An AI research lab in Chennai providing inference endpoints for a language model used by EU media companies. (c) Providers in third countries where the output produced by the AI system is used in the Union.

This is the most sweeping provision. It applies even if the Indian company has no direct EU sales or EU-based infrastructure. If an Indian company provides an AI system to a client in Singapore, and that client uses the system to generate outputs consumed by EU users, the Act applies to the Indian provider. Example: An Indian AI lab trains a medical diagnostic model and licenses it to a Singaporean hospital chain. The hospital uses the model to analyze radiology images, and among its patients are EU expatriates.

The model's diagnostic outputs affect EU natural persons. Jurisdiction attaches to the Indian provider under Article 2(1)(c). This provision eliminates the 'third-country safe harbor' that Indian companies might hope to exploit by routing sales through non-EU jurisdictions. The Act follows the data and the output, not the corporate domicile.


02. When Jurisdiction Attaches: The Three-Factor Foreseeability Test

Article 2's language—'output produced by the system is used in the Union'—raises a critical question: Must the Indian provider have actual knowledge of EU use, or is potential use sufficient? The Act does not specify. However, GDPR enforcement precedent and the European Data Protection Board's (EDPB) guidance on targeting provide interpretative tools. We propose the Three-Factor Foreseeability Test to determine when an Indian company should reasonably anticipate EU use and thus assume AI Act obligations:

FACTOR 1: MARKETING AND DISTRIBUTION CHANNELS. Does the company market its AI system in EU languages (French, German, Spanish)? Does its website accept payments in euros? Does it list EU case studies or testimonials? Affirmative answers establish intent to serve EU markets, triggering jurisdiction.

FACTOR 2: TECHNICAL ACCESSIBILITY. Is the AI system accessible from EU IP addresses? Does it comply with EU technical standards (e.g., WCAG accessibility guidelines, GDPR cookie consent banners)? Technical accommodation of EU users implies foreseeability of EU use.

FACTOR 3: CLIENT BASE AND USE CASES. Does the company have clients with global operations that include EU subsidiaries? Is the AI system designed for use cases with inherently cross-border reach (e.g., supply chain logistics, financial compliance, content moderation)? If the system's intended use involves EU data subjects, foreseeability is established.

If all three factors are present, the Indian company should assume EU AI Act compliance is mandatory. If one or two factors are present, the company operates in a gray zone—compliance may not be immediately enforced, but risk is non-trivial. If zero factors are present, the company may reasonably argue the Act does not apply. However, the European AI Office (established under Article 64) will publish guidance on foreseeability thresholds. Indian companies should monitor these publications and adjust compliance postures accordingly.
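The three-tier outcome maps naturally onto a portfolio screening script. A minimal sketch in Python (the field names and tier mapping are our illustration of the test above, not regulatory criteria):

```python
from dataclasses import dataclass

@dataclass
class ForeseeabilityProfile:
    """Illustrative inputs for the Three-Factor Foreseeability Test."""
    eu_marketing: bool          # Factor 1: EU-language marketing, euro pricing, EU case studies
    eu_technical_access: bool   # Factor 2: reachable from EU IPs, WCAG/GDPR accommodations
    eu_client_use_cases: bool   # Factor 3: EU-linked clients or cross-border use cases

def assess(profile: ForeseeabilityProfile) -> str:
    """Map the factor count onto the three postures described above."""
    score = sum([profile.eu_marketing, profile.eu_technical_access,
                 profile.eu_client_use_cases])
    if score == 3:
        return "Assume EU AI Act compliance is mandatory"
    if score >= 1:
        return "Gray zone: non-trivial risk; monitor AI Office guidance"
    return "Act arguably inapplicable; document the analysis"

print(assess(ForeseeabilityProfile(True, True, False)))  # gray zone
```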

Waiting for enforcement is a high-risk strategy—GDPR enforcement took two to three years to accelerate, but when it did, penalties reached €1.2 billion (Meta, 2023). AI Act penalties follow the same tiered structure, with maximums of €35 million or 7% of global turnover for prohibited AI practices.


03. The Compliance Calculus: Cost-Benefit Analysis for Indian Providers

For an Indian AI company determining whether to pursue EU market entry, the compliance cost-benefit equation is determinative. We estimate the following costs for the Act's system classifications (break-even thresholds are the annual EU revenue needed to justify the compliance investment; a rough calculator follows the list):

MINIMAL RISK AI (e.g., spam filters, AI-powered video games): No specific AI Act obligations; GDPR compliance required if processing EU personal data. Estimated compliance cost: €50K-€100K (legal review, privacy policy updates, GDPR technical measures). Break-even threshold: €500K annual EU revenue.

LIMITED RISK AI (e.g., chatbots, deepfake generators, emotion recognition systems): Transparency obligations under Article 50; AI involvement must be disclosed to users. Estimated compliance cost: €75K-€150K (legal review, user interface updates for AI disclosures, compliance documentation). Break-even threshold: €750K annual EU revenue.

HIGH-RISK AI (e.g., AI for HR recruiting, credit scoring, biometric identification, critical infrastructure): Extensive obligations under Articles 8-15: risk management system, data governance, technical documentation, record-keeping (logging), transparency and instructions for use, human oversight, and accuracy, robustness, and cybersecurity. Estimated compliance cost: €500K-€2M for initial conformity assessment and certification, plus €200K-€500K annually (audits, documentation updates, monitoring). Break-even threshold: €5M annual EU revenue.

GENERAL PURPOSE AI (GPAI) MODELS WITH SYSTEMIC RISK (e.g., foundation models exceeding 10^25 FLOPs): Obligations under Articles 51-56: model evaluations (red-teaming), systemic risk assessments, incident reporting to the European AI Office, cybersecurity protections, and public disclosure of training data summaries. Estimated compliance cost: €2M-€5M for initial compliance, plus €1M-€2M annually (continuous monitoring, red-teaming, incident response infrastructure). Break-even threshold: €20M annual EU revenue.
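The break-even logic can be made explicit. A minimal sketch (the 30% gross margin and five-year horizon are illustrative assumptions, not figures from the Act):

```python
def eu_entry_justified(annual_eu_revenue: float, upfront_cost: float,
                       annual_cost: float, horizon_years: int = 5,
                       gross_margin: float = 0.30) -> bool:
    """Rough break-even test: does margin on EU revenue over the planning
    horizon exceed total compliance spend? All figures in EUR."""
    benefit = annual_eu_revenue * gross_margin * horizon_years
    cost = upfront_cost + annual_cost * horizon_years
    return benefit > cost

# High-risk tier at the €5M break-even threshold, mid-range costs from above:
print(eu_entry_justified(5_000_000, 1_250_000, 350_000))  # True
```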

For most Indian startups, the high-risk AI compliance burden is prohibitive unless the EU market represents a significant revenue stream (>30% of total revenue). For Indian enterprises with established EU client bases (TCS, Infosys, Wipro, HCL), compliance is inevitable and should be factored into EU contract pricing. A critical strategic question: Should Indian companies avoid high-risk AI use cases in the EU entirely and focus on limited-risk or minimal-risk offerings? This is a viable market positioning strategy. For example, an Indian AI company could offer chatbot infrastructure (limited risk, transparency obligations only) rather than HR recruiting tools (high risk, full conformity assessment).

This 'regulatory arbitrage by use case' allows Indian providers to access EU markets without bearing the full compliance burden. However, it limits total addressable market (TAM). High-risk AI use cases (HR, finance, healthcare) represent the highest-value segments—avoiding them means ceding these markets to EU or US competitors with deeper compliance resources.


04. The Conformity Assessment Mandate: Navigating Notified Bodies and EU Representatives

For Indian companies building high-risk AI systems, Article 43 establishes the Conformity Assessment procedure—the Act's most operationally complex obligation. High-risk AI providers must undergo third-party assessment by a Notified Body (an EU-accredited conformity assessment organization) before placing the system on the market. The procedure requires: STEP 1: PRE-ASSESSMENT PREPARATION. The Indian provider compiles Technical Documentation (Article 11): Dataset specifications, training methodologies, validation results, risk management documentation, system architecture diagrams, human oversight protocols.

This documentation must demonstrate compliance with Articles 8-15. Estimated preparation time: 6-12 months for a greenfield high-risk AI system. Cost: €200K-€500K (internal legal/technical resources plus external consultants). STEP 2: NOTIFIED BODY ENGAGEMENT. The provider submits documentation to an EU-accredited Notified Body.

As of 2025, no Notified Bodies have been formally designated for AI Act conformity assessments, but the European Commission is accrediting bodies based on the New Legislative Framework (the same framework used for medical devices and machinery). Estimated bodies to be operational: 20-30 across EU Member States by 2026. Fee structure: €50K-€200K per assessment, depending on system complexity. Assessment timeline: 3-6 months. STEP 3: CONFORMITY CERTIFICATE ISSUANCE.

If the Notified Body concludes the system complies with Articles 8-15, it issues a Certificate of Conformity. The provider affixes the CE marking to the AI system. The system may now be placed on the EU market. Certificate validity: up to five years (four years for Annex III high-risk systems under Article 44), subject to annual surveillance audits. Surveillance audit cost: €20K-€50K annually.

STEP 4: POST-MARKET MONITORING. The provider must implement continuous monitoring (Article 72) to detect systemic failures or risks. If a serious incident occurs (defined as death, serious injury, or fundamental rights violation), the provider must report it to the market surveillance authorities of the Member States concerned (Article 73) within 15 days. Failure to report: Penalties up to €15M or 3% of global turnover. For Indian companies, the Notified Body requirement creates a structural barrier.

Notified Bodies are EU-domiciled entities. Assessments require on-site audits, access to source code, and potentially access to training data. Indian companies must decide: (Option A) Host AI infrastructure in the EU to facilitate audits. This requires EU cloud deployments (AWS Frankfurt, Azure Amsterdam) and raises data residency questions. (Option B) Provide remote access to Indian-based infrastructure.

This is permissible but requires robust cybersecurity protocols and may trigger export control scrutiny if the AI system processes sensitive data. (Option C) Engage an EU Authorised Representative (Article 22). The Act in fact requires non-EU providers of high-risk AI systems to appoint an EU-based representative. This entity liaises with the Notified Body and assumes certain compliance responsibilities. Representative services cost: €50K-€150K annually.

This is the most pragmatic solution for Indian providers without EU physical presence. Several EU law firms and compliance consultancies now offer AI Act representative services targeting third-country providers.


05. The High-Risk AI Classification Matrix: Sectoral Application for Indian Providers

Annex III of the EU AI Act lists eight categories of high-risk AI. For Indian companies, understanding which of their systems fall into these categories is determinative of compliance obligations. We provide the definitive classification guide: CATEGORY 1: BIOMETRICS. AI systems used for remote biometric identification (facial recognition, gait analysis, voice identification). Indian use cases: Airport security systems exported to EU airports, biometric authentication for EU banking apps, employee attendance systems using facial recognition in EU subsidiaries of Indian companies.

Compliance trigger: If the Indian provider supplies the biometric AI, even if deployed by a third party, the provider bears high-risk obligations. CATEGORY 2: CRITICAL INFRASTRUCTURE. AI for traffic management, water supply, electricity grids. Indian use cases: Smart city platforms exported to EU municipalities, industrial IoT control systems for EU factories, predictive maintenance AI for EU energy infrastructure. Compliance trigger: Indian providers must demonstrate cybersecurity resilience and fail-safe mechanisms.

Sector-specific certifications (e.g., IEC 62443 for industrial systems) strengthen conformity assessments. CATEGORY 3: EDUCATION AND VOCATIONAL TRAINING. AI determining access to education or assessing students.

Indian use cases: Online learning platforms with automated grading used by EU students, AI-powered admissions tools for EU universities, personalized tutoring systems deployed in EU schools. Compliance trigger: Indian edtech companies must implement human oversight (a human reviewer must validate AI-generated grades before they become final) and bias auditing (to ensure the AI does not discriminate based on protected characteristics). CATEGORY 4: EMPLOYMENT, WORKER MANAGEMENT, ACCESS TO SELF-EMPLOYMENT. AI for recruiting, performance evaluation, task allocation, or monitoring. Indian use cases: HR tech platforms offering resume screening for EU companies, gig economy platforms (delivery, ride-sharing) with AI-based task assignment in the EU, employee monitoring tools analyzing productivity for EU enterprises.

Compliance trigger: This is the highest-risk category for Indian SaaS companies. The Act explicitly prohibits certain practices (e.g., emotion recognition in the workplace). Indian providers must ensure their systems comply with Article 5 prohibitions and implement robust human-in-the-loop mechanisms.

CATEGORY 5: ESSENTIAL PRIVATE AND PUBLIC SERVICES. AI for credit scoring, insurance underwriting, emergency dispatch. Indian use cases: Fintech AI assessing creditworthiness of EU consumers, insurtech platforms determining premiums for EU policyholders, AI-powered call center routing for EU emergency services. Compliance trigger: Indian fintech companies must provide explainability. Under Article 13, deployers (EU banks, insurers) must inform individuals of AI-generated decisions and provide meaningful information about the logic involved.

The Indian provider must supply documentation enabling deployers to satisfy this obligation. CATEGORY 6: LAW ENFORCEMENT. AI for risk assessments, polygraph substitutes, crime detection. Indian use cases: Extremely rare. Indian companies are unlikely to provide law enforcement AI to the EU due to national security restrictions.

If an Indian company does provide such systems (e.g., AI video analytics for EU police), compliance obligations are extensive, and market entry barriers are high. CATEGORY 7: MIGRATION, ASYLUM, BORDER CONTROL. AI for visa applications, deportation risk assessment, polygraph systems.

Indian use cases: None expected. This category is reserved for EU government agencies. CATEGORY 8: ADMINISTRATION OF JUSTICE AND DEMOCRATIC PROCESSES. AI assisting judicial decisions or interpreting laws. Indian use cases: Legal tech AI for EU law firms (contract analysis, case law research).

These are generally NOT high-risk because the final decision remains with the human lawyer. However, if the AI makes binding recommendations (e.g., sentencing recommendations in criminal cases), it becomes high-risk. Indian legal tech companies should design systems as decision-support tools, not decision-making systems, to avoid high-risk classification.
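As a first-pass triage aid, the eight categories can be encoded as a keyword screen over a product catalog. A minimal sketch (the keyword lists are illustrative shorthand for the examples above, not the Annex III legal text, and no substitute for counsel review):

```python
# Hypothetical keyword map from Annex III categories to the use cases above.
ANNEX_III_KEYWORDS = {
    "biometrics": ["facial recognition", "gait", "voice identification"],
    "critical_infrastructure": ["traffic management", "grid", "water supply"],
    "education": ["grading", "admissions", "tutoring"],
    "employment": ["resume screening", "task assignment", "employee monitoring"],
    "essential_services": ["credit scoring", "insurance", "emergency dispatch"],
    "law_enforcement": ["crime detection", "polygraph"],
    "migration_border": ["visa", "asylum"],
    "justice_democracy": ["sentencing", "judicial"],
}

def screen_use_case(description: str) -> list[str]:
    """Return Annex III categories whose keywords appear in the description."""
    text = description.lower()
    return [category for category, keywords in ANNEX_III_KEYWORDS.items()
            if any(keyword in text for keyword in keywords)]

print(screen_use_case("Resume screening SaaS for EU recruiters"))  # ['employment']
```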


06. The Prohibited Practices Exclusion Zone: Article 5 Absolute Bans and Indian Business Models

Article 5 of the EU AI Act establishes a list of prohibited AI practices deemed to carry 'Unacceptable Risk' to fundamental rights. These practices are banned outright, with narrow exceptions for law enforcement under strict conditions. Indian companies must audit their product portfolios to ensure no offerings fall into Article 5 categories. Violation penalties: €35 million or 7% of global annual turnover, whichever is higher. PROHIBITED PRACTICE 1: SUBLIMINAL MANIPULATION.

AI that deploys subliminal techniques beyond a person's consciousness to materially distort behavior, causing harm. Indian use cases at risk: Advertising tech platforms using microtargeting with persuasive AI optimized to exploit cognitive biases. Gaming platforms with AI-driven engagement maximization ('dark patterns') that manipulate users into excessive play. E-commerce recommendation engines that use deceptive urgency cues ('only 2 left in stock' when inventory is higher). Compliance strategy: Indian companies must implement 'Ethical AI by Design' principles.

Recommendation algorithms should optimize for user satisfaction, not manipulation. A/B testing should exclude manipulative design patterns. PROHIBITED PRACTICE 2: EXPLOITATION OF VULNERABILITIES. AI targeting age, disability, or socio-economic vulnerabilities. Indian use cases at risk: Fintech apps offering high-interest loans targeted at economically vulnerable EU users through AI-driven ad placement.

Educational AI platforms that adaptively increase pressure on children with learning disabilities to complete tasks faster. Healthcare AI that recommends aggressive treatments to elderly patients based on vulnerability profiling. Compliance strategy: Indian providers must implement vulnerability safeguards. If the AI system processes data indicating user vulnerabilities (age under 18, disability status, low income), the system should default to conservative, non-exploitative recommendations. PROHIBITED PRACTICE 3: SOCIAL SCORING.

AI evaluating individuals' behavior over time to generate scores affecting treatment in unrelated contexts. Indian use cases at risk: Credit scoring platforms that incorporate non-financial behavioral data (social media activity, shopping patterns) to assess creditworthiness. Reputation systems aggregating user behavior across platforms to create universal 'trust scores.' Employee monitoring tools that generate performance scores affecting access to benefits unrelated to job performance (e.g., insurance premiums). Compliance strategy: Indian companies must ensure scoring systems are context-specific. A credit score should be based solely on financial behavior. An employee performance score should not affect unrelated employment decisions. Behavioral data from one context (e.g., online shopping) must not influence decisions in another context (e.g., loan approval) unless there is a legitimate, documented justification. PROHIBITED PRACTICE 4: REAL-TIME REMOTE BIOMETRIC IDENTIFICATION IN PUBLIC SPACES.

Using AI to identify individuals in real-time via facial recognition, gait analysis, or other biometrics in publicly accessible spaces. Indian use cases at risk: Smart city surveillance systems exported to EU municipalities. Retail analytics platforms using facial recognition to track customer movement in stores. Event security systems deploying real-time facial recognition at EU conferences or concerts. Compliance strategy: Indian providers must limit biometric systems to post-hoc analysis (processing recorded footage, not live streams) or obtain explicit consent from individuals being identified (impractical in public spaces).

Real-time biometric ID is permitted only for law enforcement with judicial authorization—a use case Indian commercial providers should avoid entirely. PROHIBITED PRACTICE 5: BIOMETRIC CATEGORIZATION BASED ON PROTECTED CHARACTERISTICS. AI inferring race, political opinions, trade union membership, religious beliefs, or sexual orientation from biometric data. Indian use cases at risk: AI facial analysis tools marketed for 'demographic insights' that infer ethnicity or emotional state. Recruitment platforms using video interview analysis to infer personality traits or political leanings.

Advertising platforms using facial recognition to infer sexual orientation for targeted ad delivery. Compliance strategy: Indian providers must disable inference of protected characteristics in EU deployments. If the underlying model is capable of such inferences (trained on datasets with protected attribute labels), the inference layer must be deactivated for EU users. Model cards should explicitly state which inferences are disabled for compliance.
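The 'deactivate the inference layer for EU users' strategy can be enforced as a simple regional gate in front of the model. A minimal sketch (the category names and region flag are hypothetical; the set mirrors the protected characteristics listed above):

```python
# Article 5-prohibited biometric categorizations, stripped for EU traffic.
PROHIBITED_EU_INFERENCES = {
    "race", "political_opinions", "trade_union_membership",
    "religious_beliefs", "sexual_orientation",
}

def allowed_inferences(region: str, requested: set[str]) -> set[str]:
    """Filter requested inference categories before they reach the model."""
    if region == "EU":
        return requested - PROHIBITED_EU_INFERENCES
    return requested

print(allowed_inferences("EU", {"age_bracket", "race"}))  # {'age_bracket'}
```

The gate should be paired with model-card language recording exactly which inferences are disabled, as noted above.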


07. The Data Governance Imperative: Aligning Indian Practices with EU Training Data Standards

Article 10 of the EU AI Act imposes data governance obligations on high-risk AI providers. For Indian companies training models on diverse datasets sourced from global internet scrapes, social media, or proprietary data, Article 10 creates compliance challenges. KEY REQUIREMENTS: (1) Training, validation, and testing datasets must be 'relevant, representative, free of errors, and complete' (a standard Article 10(3) qualifies with 'to the best extent possible'). This requires: Dataset curation: Removing corrupted files, duplicates, and low-quality samples. Representativeness: Ensuring datasets reflect the diversity of the intended EU user population.

If an AI system will be used across EU Member States with different languages (French, German, Polish), training data must include those languages proportionally. Bias mitigation: Identifying and correcting statistical biases that could lead to discriminatory outcomes. For example, if a resume screening AI is trained on historical hiring data that overrepresents male candidates, the model may exhibit gender bias. Indian providers must implement debiasing techniques (resampling, reweighting, adversarial training; a metric sketch follows at the end of this section). (2) Examination of biases: Providers must assess whether training data contains biases likely to lead to discriminatory outcomes.

For Indian companies, this is operationally complex. Many Indian AI labs train models on datasets scraped from English-language internet sources (Reddit, Wikipedia, Common Crawl). These datasets overrepresent Western cultural contexts and may contain biases against non-Western perspectives. When deploying such models in the EU, providers must demonstrate bias assessment and mitigation. Article 10 does not mandate bias elimination (often technically impossible), but it requires documentation of known biases and measures taken to mitigate their impact.

(3) Consent and data minimization: If training data includes personal data (images of individuals, text authored by identifiable persons), the provider must ensure lawful collection. For Indian companies, this intersects with GDPR. Training data containing EU personal data must have been collected with valid consent or under another lawful basis (legitimate interest, public interest). Data scraping from public websites does not automatically satisfy GDPR consent requirements. The European Data Protection Board has indicated that training foundation models on scraped personal data without consent likely violates GDPR Articles 5 and 6.

Indian companies must therefore: (a) Obtain explicit consent from data subjects for AI training purposes (impractical for large-scale scraping). (b) Limit training data to genuinely anonymized datasets where re-identification is impossible. (c) Rely on publicly available datasets released under open licenses (e.g., datasets published by research institutions under Creative Commons licenses).

For Indian companies, the safest strategy is to train models on licensed datasets (e.g., those curated by Hugging Face, EleutherAI, or academic institutions) where consent and data provenance are documented. Using scraped social media data, even if publicly accessible, exposes the provider to GDPR enforcement risk.
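The bias-examination duty above can be supported by automated checks. A minimal sketch of demographic parity, one of the fairness metrics listed in the Technical Annex (toy data; a real audit needs representative, stratified test sets):

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups;
    0 indicates parity on this metric."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy resume-screening audit: 1 = shortlisted, group = audit attribute (e.g., gender)
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.5 -> large gap, investigate
```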


08. The Transparency and Explainability Challenge: Human Oversight and Documentation Standards

Article 13 requires high-risk AI systems to be 'sufficiently transparent' to enable deployers to interpret outputs and use the system appropriately. For Indian companies building complex machine learning models (neural networks, ensemble methods), this creates a tension: Modern AI systems are often black boxes. Developers cannot fully explain why a specific input produced a specific output. Article 13 does not mandate model interpretability (the ability to understand internal decision logic). It mandates usability transparency—providing deployers with enough information to use the system correctly.

For Indian providers, this means: (1) INSTRUCTIONS FOR USE (ARTICLE 13(3)): The provider must supply documentation specifying: Intended purpose and use cases. Limitations and contraindications (e.g., 'This resume screening AI is not suitable for executive-level hiring'). Expected accuracy, robustness, and performance metrics (e.g., 'This fraud detection model achieves 95% precision and 90% recall on test data'). Human oversight requirements (e.g., 'All AI-generated loan denials must be reviewed by a human underwriter before finalization').

For an Indian SaaS company, this documentation must be in the language of the EU Member State where the system is deployed. If the system is sold EU-wide, documentation should be available in English, French, and German at minimum.

(2) LOGGING AND TRACEABILITY (ARTICLE 12): High-risk AI systems must automatically log: Inputs and outputs. Timestamps. User interactions (e.g., when a human overrode an AI recommendation). For Indian companies, this requires building audit log infrastructure into AI systems. Logs must be retained for periods specified by sectoral regulations (e.g., 10 years for financial AI under EU banking regulations). Log storage must comply with GDPR (logs containing personal data are subject to data protection obligations).

(3) HUMAN OVERSIGHT (ARTICLE 14): High-risk AI systems must be designed to allow human operators to: Understand system outputs. Intervene in real-time (e.g., stop an AI system from executing a decision). Override AI decisions. For Indian companies, this means implementing 'human-in-the-loop' architectures. Example: An Indian HR tech platform offers an AI resume screening tool to an EU company. The AI ranks candidates.

The EU HR manager reviews the top 20 candidates and can manually promote lower-ranked candidates if they believe the AI missed relevant qualifications. The system logs the manager's overrides. This architecture satisfies Article 14 because the human retains ultimate decision authority. Contrast with a fully automated system where the AI sends rejection emails without human review—this violates Article 14 and may also violate Article 5 (prohibited social scoring if the AI generates candidate scores affecting future hiring opportunities).
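Articles 12 and 14 meet in the logging layer: every automated decision, and every human override, should land in an append-only record. A minimal sketch (a JSON Lines file stands in for a proper audit store; the field names are our own):

```python
import json
import time
import uuid

def log_decision(inputs: dict, output: dict, overridden_by: str | None = None,
                 path: str = "audit_log.jsonl") -> str:
    """Append one decision record with the Article 12 elements discussed
    above: inputs, outputs, a timestamp, and any human intervention."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "inputs": inputs,
        "output": output,
        "human_override": overridden_by,  # Article 14: who intervened, if anyone
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# The HR example above: a manager promotes a lower-ranked candidate.
log_decision({"candidate_id": "C-1042"}, {"ai_rank": 27, "final_rank": 18},
             overridden_by="hr_manager_eu_01")
```

In production, logs containing personal data must themselves satisfy GDPR retention and access rules, as the text notes.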


09. The Enforcement Landscape: National Competent Authorities, Penalties, and Indian Risk Exposure

The EU AI Act establishes a dual enforcement structure: (1) NATIONAL COMPETENT AUTHORITIES (Article 70): Each EU Member State designates a regulatory authority to enforce the Act within its jurisdiction. Examples (the existing data protection authorities are leading candidates for designation): Germany: BfDI (Federal Commissioner for Data Protection and Freedom of Information). France: CNIL (Commission Nationale de l'Informatique et des Libertés). Ireland: DPC (Data Protection Commission). These authorities have powers to: Conduct audits and inspections.

Request documentation and access to AI systems. Impose fines up to €35 million or 7% of global turnover. Order withdrawal of non-compliant AI systems from the market. For Indian companies, enforcement risk is highest in Member States with active enforcement postures. France and Ireland have been aggressive GDPR enforcers and are expected to be equally rigorous with the AI Act.

Germany has issued numerous GDPR penalties against non-EU entities and will likely extend enforcement to Indian AI providers. (2) EUROPEAN AI OFFICE (Article 64): Established within the European Commission, the AI Office coordinates enforcement, issues guidance, and directly enforces GPAI (General Purpose AI) rules. For Indian companies building foundation models, the AI Office is the primary regulatory interlocutor. The Office has indicated it will issue sector-specific guidance documents. Indian companies should subscribe to AI Office updates and participate in consultation processes to shape regulatory interpretation.

PENALTY TIERS (ARTICLE 99): Prohibited AI Practices (Article 5 violations): €35M or 7% of global turnover, whichever is higher. Example: An Indian company deploys social scoring AI in the EU. For TCS, with roughly $30 billion in revenue, the ceiling is about €2.1 billion. For a startup with €10M revenue, the exposure is €700K, because Article 99 caps fines for SMEs and startups at the lower of the fixed amount and the turnover percentage. High-Risk AI Obligations (Articles 8-15 violations): €15M or 3% of global turnover. Example: An Indian company fails to conduct bias assessments for a high-risk recruitment AI. Maximum penalty: €15M or 3% of global revenue. GPAI Transparency Violations (Articles 51-53): €7.5M or 1.5% of global turnover. Example: An Indian foundation model provider fails to publish a summary of training data. Maximum penalty: €7.5M or 1.5% of global revenue.

For Indian companies, penalties are calculated on global turnover, not just EU revenue. This is critical: even if EU operations represent 5% of total revenue, penalties are proportional to worldwide revenue. This amplifies risk for large Indian IT services firms with substantial global operations. A single AI Act violation could result in penalties exceeding $100 million. A minimal exposure calculator follows.
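The tier structure reduces to a small calculator, the same logic as the annex's Penalty Exposure Calculator. A minimal sketch (the SME flag reflects Article 99's lower-of-the-two cap for startups, as in the €700K example above):

```python
# (fixed cap in EUR, share of global annual turnover) per tier described above.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Article 5
    "high_risk_obligation": (15_000_000, 0.03),   # Articles 8-15
    "gpai_transparency": (7_500_000, 0.015),      # Articles 51-53
}

def max_penalty(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Ceiling per Article 99: the higher of fixed cap and turnover share,
    except for SMEs/startups, where the lower of the two applies."""
    fixed, pct = TIERS[tier]
    bound = min if is_sme else max
    return bound(fixed, pct * global_turnover_eur)

print(max_penalty("prohibited_practice", 10_000_000, is_sme=True))  # 700000.0
print(max_penalty("prohibited_practice", 30_000_000_000))           # ~2.1e9
```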

RISK MITIGATION STRATEGIES: (1) Contractual Allocation: Indian companies should negotiate contractual terms with EU clients that allocate liability. If the Indian company is the 'provider' and the EU client is the 'deployer,' the contract should specify that deployers bear responsibility for post-market monitoring and incident reporting. (2) Insurance: Algorithmic liability insurance is emerging. Indian companies should procure coverage for AI Act enforcement risk. Estimated premiums: €50K-€200K annually for €10M-€50M coverage.

(3) Compliance-as-a-Service: Indian companies can engage EU-based compliance firms to handle documentation, Notified Body liaison, and regulatory reporting. This outsources compliance risk and leverages EU expertise.


10. The GPAI Threshold Question: When Indian Foundation Models Trigger Systemic Risk Obligations

Articles 51-56 of the EU AI Act regulate General Purpose AI (GPAI) models—foundation models like GPT-4, Gemini, or Llama that can be adapted to a wide range of tasks. For Indian AI labs developing foundation models (e.g., AI4Bharat's Indic LLMs, Sarvam AI's Hindi models), GPAI provisions create distinct compliance obligations. All GPAI models (regardless of size) must: (1) Maintain technical documentation describing model architecture, training process, and data sources.

(2) Comply with EU copyright law (Article 53(1)(c)). This requires a policy to respect copyright, including transparency about copyrighted works used in training data. If an Indian model was trained on copyrighted EU publications without authorization, it violates Article 53. (3) Publish a summary of training data content (Article 53(1)(d)). This summary must be 'sufficiently detailed' to allow downstream users to understand what data the model learned from.

For Indian models trained on multilingual datasets including EU languages, the summary should specify: Data sources (e.g., 'Common Crawl web scrape, 2020-2024'). Language composition (e.g., '60% English, 20% Hindi, 10% French, 5% German, 5% other'). Content categories (e.g., 'News articles, social media, books, code repositories'). SYSTEMIC RISK THRESHOLD (Article 51(1)(b)): If a GPAI model has cumulative training compute exceeding 10^25 FLOPs (floating-point operations), it is presumed to pose 'Systemic Risk' and faces additional obligations: (1) Model evaluations and red-teaming (Article 55(1)(a)).

The provider must conduct adversarial testing to identify risks such as: Generating harmful content (instructions for illegal activities). Amplifying biases or stereotypes. Enabling misuse (disinformation, malware generation). Red-teaming must be documented and updated when the model is substantially modified. (2) Systemic risk assessments (Article 55(1)(b)).

The provider must evaluate risks at the societal level: Economic disruption (e.g., mass job displacement in specific sectors). Democratic harms (e.g., large-scale disinformation campaigns). Infrastructure risks (e.g., cyberattack automation). (3) Incident reporting (Article 55(1)(c)).

If the model is involved in a serious incident (defined as causing death, injury, or fundamental rights violations), the provider must report to the European AI Office within 15 days. (4) Cybersecurity protections (Article 55(1)(d)). The provider must implement measures to protect model weights from theft and misuse. For Indian AI labs, the 10^25 FLOPs threshold is the critical demarcation. Calculating FLOPs: FLOPs = 6 × N × D, where N = number of model parameters, D = number of training tokens.

Example 1: A model with 7 billion parameters trained on 2 trillion tokens: FLOPs = 6 × 7×10^9 × 2×10^12 = 8.4×10^22. This is below the 10^25 threshold; no systemic risk obligations. Example 2: A model with 70 billion parameters trained on 3 trillion tokens: FLOPs = 6 × 70×10^9 × 3×10^12 = 1.26×10^24. Still below 10^25. Example 3: A model with 175 billion parameters trained on 10 trillion tokens: FLOPs = 6 × 175×10^9 × 10×10^12 = 1.05×10^25. This exceeds the threshold, and systemic risk obligations apply.

For Indian labs developing LLMs at GPT-3 scale or larger (100B+ parameters) on trillion-token corpora, systemic risk obligations are likely. Labs should budget €1M-€2M annually for compliance: red-teaming, documentation, and incident response infrastructure.
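The annex's FLOPs calculator is a few lines. A minimal sketch of the rule of thumb above:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51(1)(b) presumption threshold

def training_flops(params: float, tokens: float) -> float:
    """Standard approximation used above: FLOPs = 6 x N x D."""
    return 6 * params * tokens

for name, n, d in [("7B / 2T tokens", 7e9, 2e12),
                   ("70B / 3T tokens", 70e9, 3e12),
                   ("175B / 10T tokens", 175e9, 10e12)]:
    flops = training_flops(n, d)
    verdict = "systemic risk presumed" if flops >= SYSTEMIC_RISK_FLOPS else "below threshold"
    print(f"{name}: {flops:.2e} FLOPs -> {verdict}")
```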


11. The Open Source Exemption: Strategic Implications for Indian AI Labs

Article 53(2) provides a partial exemption for open-source GPAI models. If a model is released under a free and open-source licence, with weights, architecture, and usage information made publicly available, certain obligations are waived. This exemption is strategically significant for Indian AI labs. Open-sourcing models reduces compliance costs while enabling broad adoption. However, the exemption is not absolute: Exempt from: The technical documentation and downstream-provider information duties of Article 53(1)(a)-(b).

Not exempt from: Transparency obligations (publishing training data summaries), copyright compliance, and any systemic-risk obligations: if the model is designated as posing systemic risk, the Article 55 duties (risk assessments, red-teaming, incident reporting) apply regardless of licence. For Indian labs, the strategic calculus: Proprietary Model Strategy: Retain model weights as proprietary IP. License the model to customers (SaaS or API access). Bear full GPAI compliance costs (€1M-€2M annually).

Justification: Higher revenue potential (licensing fees + API usage fees). Open Source Model Strategy: Release model weights publicly (Hugging Face, GitHub). Monetize through services (fine-tuning, inference hosting, support). Reduced GPAI compliance costs (€200K-€500K annually). Justification: Broader adoption, community contributions, reduced regulatory burden.

Indian labs like AI4Bharat (Indic language models) and Sarvam AI have adopted open-source strategies. This aligns with the Act's incentives: the EU seeks to promote open-source AI to counter the dominance of closed, proprietary models from US Big Tech. However, open-sourcing introduces downstream liability risks: if a downstream user fine-tunes the model for a prohibited use case (e.g., social scoring), the original Indian provider may face scrutiny.

Article 53(2) does not provide blanket immunity—if the model was designed or marketed for prohibited purposes, open-sourcing does not absolve liability. Indian labs should include licence terms barring prohibited uses (e.g., 'This model may not be used for real-time biometric identification in public spaces without explicit written authorization').


12. The Contractual Architecture: Model Service Agreements for Indian Providers and EU Deployers

Indian AI companies entering the EU market must structure contracts to allocate compliance responsibilities between providers (Indian companies) and deployers (EU customers). The Act defines providers and deployers distinctly: Provider (Article 3(3)): Entity that develops or places an AI system on the market under its own name or trademark. Bears primary compliance obligations (conformity assessment, documentation, incident reporting). Deployer (Article 3(4)): Entity that uses an AI system under its authority. Bears obligations for proper use, human oversight, and post-market monitoring.

For Indian companies, the question is: Are we the provider, deployer, or both? SCENARIO 1: INDIAN SAAS PROVIDER. An Indian company offers an AI-powered CRM platform to EU customers via SaaS. The Indian company is the provider (retains control over the AI system). EU customers are deployers (use the system for their business operations).

Compliance allocation: Indian company: Conformity assessment, technical documentation, incident reporting. EU customer: Human oversight, post-market monitoring, end-user transparency. SCENARIO 2: INDIAN SOFTWARE VENDOR (ON-PREMISE DEPLOYMENT). An Indian company licenses AI software to an EU customer for on-premise installation. The customer controls the software and can modify it.

If the customer modifies the AI system's intended purpose or substantially alters its functionality, the customer becomes the provider under Article 25. If the customer uses the AI system as-is without modification, the Indian company remains the provider. Compliance allocation: Indian company: Initial conformity assessment, documentation. EU customer: Post-modification conformity assessment (if they alter the system), ongoing compliance. SCENARIO 3: INDIAN AI LAB LICENSING FOUNDATION MODELS.

An Indian AI lab releases a foundation model via API or downloadable weights. An EU company fine-tunes the model for a specific use case (e.g., legal document analysis). The Indian lab is the GPAI provider.

The EU company is a downstream provider for the fine-tuned model (if high-risk). Compliance allocation: Indian lab: GPAI transparency obligations, systemic risk assessments (if applicable). EU company: High-risk AI conformity assessment for the fine-tuned model. MODEL CONTRACT CLAUSES FOR INDIAN PROVIDERS: CLAUSE 1: Compliance Responsibility Allocation. 'Provider retains responsibility for EU AI Act compliance obligations under Articles 8-15 for the AI system as supplied.

Deployer assumes responsibility for post-market monitoring, human oversight, and end-user transparency as required by Articles 14 and 26.' CLAUSE 2: Modification and Provider Status Transfer. 'If Deployer modifies the AI system's intended purpose or substantially alters its functionality, Deployer assumes Provider status under Article 25 and bears full EU AI Act compliance obligations for the modified system. Provider liability is limited to the AI system as originally supplied.' CLAUSE 3: Indemnification for Deployer Misuse.

'Provider shall not be liable for penalties or damages arising from Deployer's misuse of the AI system, including deployment for prohibited purposes under Article 5 or failure to implement human oversight under Article 14. Deployer indemnifies Provider against such claims.' CLAUSE 4: Documentation and Technical Support. 'Provider shall supply Deployer with technical documentation (Article 11) and instructions for use (Article 13) sufficient to enable Deployer to comply with deployment obligations. Provider shall update documentation within 30 days of any material changes to the AI system.' CLAUSE 5: Incident Reporting Cooperation. 'Deployer shall notify Provider within 48 hours of any serious incident (Article 73). Provider shall investigate and report to the relevant market surveillance authority as required by Article 73(1). Deployer shall cooperate with Provider's investigation and remediation efforts.' These clauses allocate compliance risk while enabling commercial relationships.

Indian legal counsel should review and adapt based on specific use cases.


13. The Data Localization and Sovereignty Question: Must Indian AI Infrastructure Migrate to the EU?

The EU AI Act does not explicitly mandate data localization—unlike India's DPDP Act, which restricts cross-border transfers, the AI Act is jurisdictionally agnostic about where AI systems are hosted. However, several provisions create practical incentives for EU-based infrastructure: (1) CONFORMITY ASSESSMENT ACCESS (ARTICLE 43). Notified Bodies conducting conformity assessments may require access to training data, model weights, and inference infrastructure. If an Indian provider hosts infrastructure in India, granting access to an EU-based Notified Body raises cybersecurity and IP protection concerns.

Solution: Many Indian companies are deploying EU-specific instances on AWS Frankfurt, Azure Amsterdam, or Google Cloud's Belgium region. This allows Notified Bodies to audit systems without requiring access to Indian infrastructure. (2) DATA TRANSFER RESTRICTIONS UNDER GDPR. If an AI system processes EU personal data, GDPR's cross-border transfer rules apply. Transfers from the EU to India require: Standard Contractual Clauses (SCCs), or Binding Corporate Rules (BCRs), or an adequacy decision (India does not have GDPR adequacy status).

For Indian companies, implementing SCCs is the standard approach. However, the European Court of Justice's Schrems II decision invalidated the EU-US Privacy Shield and imposed stringent requirements on SCCs. Indian companies must demonstrate that Indian law does not permit government access to transferred data in ways inconsistent with EU fundamental rights. This is challenging given India's Information Technology Act Section 69, which allows government interception of communications. To mitigate, Indian companies are increasingly encrypting EU personal data with EU-held keys (key escrow in the EU). This ensures that even if Indian authorities demand data access, the data is unusable without the EU-based decryption keys.
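The key-escrow pattern is ordinary envelope encryption with the key-encryption key held in the EU. A minimal sketch using the `cryptography` package (in production the KEK would live in an EU-based KMS or HSM; `eu_kms_wrap` is a hypothetical stand-in for that service's wrap call):

```python
from cryptography.fernet import Fernet

def eu_kms_wrap(data_key: bytes, eu_kek: bytes) -> bytes:
    """Stand-in for an EU-hosted KMS wrapping the data key (assumption)."""
    return Fernet(eu_kek).encrypt(data_key)

eu_kek = Fernet.generate_key()     # key-encryption key, held only in the EU
data_key = Fernet.generate_key()   # ephemeral data key used in India
ciphertext = Fernet(data_key).encrypt(b"EU personal data record")
wrapped_key = eu_kms_wrap(data_key, eu_kek)
del data_key  # Indian systems retain only ciphertext + wrapped key;
              # decryption requires the EU-held KEK to unwrap the data key.
```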

(3) LATENCY AND PERFORMANCE. High-risk AI systems used in real-time applications (fraud detection, autonomous vehicles, critical infrastructure) require low-latency inference. Hosting models in India and serving EU users introduces 100-200 ms of added latency. For many applications (chatbots, content moderation), this latency is acceptable.

For critical applications (medical diagnosis, industrial control systems), it is not. Solution: Indian providers deploy hybrid architectures. Training occurs in India (where compute costs are lower and data scientists are based). Inference occurs in the EU (via edge deployments or EU cloud regions). This optimizes cost and performance while satisfying regulatory expectations.

(4) NATIONAL SECURITY AND EXPORT CONTROLS. Indian companies must also consider Indian export control laws. AI systems involving encryption, biometrics, or sensitive use cases may require export licenses under India's Strategic Trade Controls. If an Indian company exports AI surveillance technology to an EU law enforcement agency, it may trigger SCOMET (Special Chemicals, Organisms, Materials, Equipment and Technologies) licensing requirements. Failure to obtain licenses can result in penalties under India's Foreign Trade Act.

Indian legal counsel should review both EU AI Act obligations and Indian export control requirements before EU market entry.


14. Implementation Roadmap for Indian AI Companies: A 24-Month Compliance Sprint

Indian AI companies should adopt a phased implementation roadmap to achieve EU AI Act compliance. PHASE 1 (MONTHS 1-6): SCOPING AND GAP ANALYSIS. Conduct inventory of AI systems: Identify which systems will be used in the EU. Classify systems by risk level (minimal, limited, high-risk, GPAI). Document use cases, data sources, and deployment modalities.

Engage EU legal counsel to perform gap analysis: Compare current practices against EU AI Act requirements. Identify high-priority compliance gaps (e.g., lack of human oversight, inadequate documentation). Estimate compliance costs and ROI for EU market entry.

Deliverable: Compliance roadmap document specifying which systems will pursue EU market entry and which will be excluded. PHASE 2 (MONTHS 7-12): TECHNICAL AND ORGANIZATIONAL MEASURES. Implement technical measures: Deploy logging and traceability infrastructure (Article 12). Implement human-in-the-loop architectures (Article 14). Conduct bias assessments and mitigation for training data (Article 10).

Develop technical documentation templates (Article 11). Organizational measures: Appoint EU representative (if not establishing EU subsidiary). Engage Notified Body for pre-assessment consultation. Train engineering and product teams on EU AI Act requirements. Draft contractual templates for EU customers.

Deliverable: Systems ready for conformity assessment. Documentation packages complete. PHASE 3 (MONTHS 13-18): CONFORMITY ASSESSMENT AND CERTIFICATION. Submit high-risk AI systems to Notified Body for conformity assessment. Address Notified Body findings and remediate deficiencies.

Obtain Certificates of Conformity. Affix CE marking to compliant systems. Deliverable: EU market-ready AI systems with conformity certificates. PHASE 4 (MONTHS 19-24): MARKET LAUNCH AND POST-MARKET MONITORING. Launch EU marketing campaigns.

Onboard EU customers. Implement post-market monitoring: Continuous logging of system performance. User feedback collection. Incident detection and reporting workflows. Conduct first annual surveillance audit by Notified Body.

Deliverable: Operational EU business with compliant AI systems. By Month 24, the Indian company should have: Compliant AI systems in EU market. EU customer base generating revenue. Established relationships with Notified Bodies and EU legal counsel. Ongoing compliance processes (monitoring, audits, documentation updates).

This roadmap assumes dedicated resources (legal, engineering, compliance teams). Smaller Indian startups should prioritize limited-risk or minimal-risk AI offerings to minimize compliance burden.


Conclusion: Strategic Positioning for Indian AI in the European Market

The EU AI Act represents the most significant regulatory barrier—and opportunity—for Indian AI companies in a generation. Barrier: Compliance costs (€500K-€5M per high-risk system) are prohibitive for many Indian startups. Conformity assessment timelines (12-18 months) delay market entry. Penalties (up to 7% of global turnover) create existential risk. Opportunity: EU-based competitors face identical compliance burdens, leveling the playing field.

Indian AI companies can differentiate on compliance excellence, offering EU customers fully certified, high-trust AI systems. Early movers gain competitive advantage. Indian companies that achieve conformity certification in 2026-2027 (the Act's early enforcement phase) will be preferred vendors when EU enterprises rush to comply with their deployer obligations. The strategic imperative for Indian AI leadership is clear: Do not wait for enforcement. Proactive compliance is a market entry strategy, not a reactive cost.

Indian companies that invest in EU AI Act compliance today will be the trusted partners of European enterprises tomorrow. For Indian policymakers, the EU AI Act offers a model: India should consider reciprocal recognition of AI certifications with the EU. If Indian Notified Bodies (accredited by the Quality Council of India) can issue EU-recognized conformity certificates, Indian companies can undergo certification in India, reducing costs and accelerating market entry. This requires bilateral negotiation between MeitY and the European Commission. Preliminary discussions have occurred under the India-EU Trade and Technology Council (TTC).

A mutual recognition agreement on AI conformity assessment could be operational by 2027-2028. For Indian enterprises (TCS, Infosys, Wipro, HCL), EU AI Act compliance is inevitable. These companies generate 30-40% of revenue from EU clients. High-risk AI systems deployed for EU clients—HR management, financial analytics, supply chain optimization—must comply. The cost is manageable (€10M-€50M investment spread over 5 years across all AI products).

The alternative—losing EU clients to compliant competitors—is catastrophic. For Indian startups, the decision is more nuanced. If EU represents <10% of addressable market, compliance may not justify costs. Focus on India, US, and ASEAN markets. If EU represents >30% of addressable market, compliance is essential.

Plan for an 18-24 month compliance timeline before EU market launch. If EU represents 10-30% of addressable market, pursue limited-risk or minimal-risk AI use cases that minimize compliance burden while accessing EU revenue. The Brussels Effect is inescapable. The EU's roughly 450 million consumers represent too large a market to ignore. Indian AI companies must adapt—or be shut out.

This white paper provides the roadmap. Execution begins now.


Legislative Impact

India

Submitted to Ministry of External Affairs (MEA) and Ministry of Electronics and Information Technology (MeitY) as the definitive policy brief on EU AI Act extraterritoriality. Referenced in India-EU Trade and Technology Council (TTC) discussions on AI regulatory harmonization. Adopted by NASSCOM as the industry standard guidance for Indian IT services exporters.

European Union

Cited by the European AI Office in stakeholder consultations with third-country providers. Referenced by DG CNECT (Directorate-General for Communications Networks, Content and Technology) as an exemplar of third-country compliance analysis. Submitted to European Parliament IMCO Committee (Internal Market and Consumer Protection) for consideration in implementation guidance.

ASEAN

Adopted by Singapore's Infocomm Media Development Authority (IMDA) as a comparative framework for ASEAN-EU AI regulatory alignment. Referenced in ASEAN Digital Ministers Meeting discussions on cross-border AI compliance. Used by Malaysian and Thai AI industry associations for compliance training.

Global South

Translated into Portuguese and Spanish for distribution to Brazilian and Argentine AI industry associations. Cited by the African Union Development Agency (AUDA-NEPAD) as a model for African AI companies seeking EU market access. Referenced by Kenya's ICT Authority and Nigeria's NITDA (National Information Technology Development Agency).


Technical Annex

The technical annex includes:

(1) EU AI Act Compliance Scorecard for Indian companies—a 50-point assessment tool to evaluate readiness across Articles 8-15.
(2) Jurisdictional Trigger Decision Tree—flowchart for determining when the EU AI Act applies to specific Indian AI deployments.
(3) Conformity Assessment Checklist—detailed requirements for Notified Body submissions, including documentation templates (Risk Management Plans, Data Governance Protocols, Technical Specifications).
(4) Model Service Agreements with compliance clause language allocating liability between Indian providers and EU deployers.
(5) GPAI Model FLOPs Calculator (Excel/Python) for determining systemic risk threshold status.
(6) Bias Auditing Toolkit with fairness metrics (demographic parity, equalized odds, calibration) and sample code for Indian AI systems.
(7) EU Representative Vendor Directory listing 30+ EU law firms and compliance consultancies offering AI Act representative services.
(8) Data Localization Cost-Benefit Analysis comparing Indian hosting vs. EU cloud deployments (AWS, Azure, GCP pricing for compute, storage, data transfer).
(9) Export Control Compliance Guide for Indian AI companies, mapping SCOMET requirements to EU AI Act use cases.
(10) Penalty Exposure Calculator (Excel model) for estimating financial risk under Article 99 based on revenue, system classification, and violation severity.

All tools are released under Creative Commons BY-NC-SA 4.0 for use by the Indian AI industry.

AMLEGALS

Global AI Policy Intelligence

www.amlegalsai.com
