LEGAL ARCHITECTURE • JANUARY 2026

AI LEGAL FRAMEWORK

Analysis of AI legal frameworks across jurisdictions: statutory instruments, liability allocation models, intellectual property challenges, contractual structures, and governance architectures. Focus on India Ethics Bill 2025.

EXECUTIVE SUMMARY

The global AI legal landscape as of January 2026 is characterized by regulatory fragmentation and jurisdictional competition. Six major paradigms have emerged:

  • EU: Risk-Based Prohibition Framework — The AI Act (entered into force August 2024) establishes categorical prohibitions for "unacceptable risk" systems and graduated compliance obligations for "high-risk" AI.
  • India: Ethics Bill 2025 — India's proposed legislation combines sectoral regulation with a principles-based approach emphasizing algorithmic transparency, data localization, and statutory liability for AI-caused harms.
  • US: Executive Orders + Sectoral Laws — Executive Order 14110 (October 2023) delegated AI governance to existing federal agencies before its rescission in January 2025; governance continues through executive action and sectoral enforcement, and no comprehensive federal AI statute exists.
  • UK: Pro-Innovation Regulatory Principles — The UK rejects prescriptive legislation in favor of "context-specific" guidance issued by sector regulators (FCA, ICO, CMA).
  • China: Algorithmic Recommendations Law — China's 2022 provisions on algorithmic recommendation services impose strict content control obligations, algorithm filing requirements with the Cyberspace Administration of China, and extensive state oversight mechanisms.
  • Singapore: Model AI Governance Framework — Voluntary best practices with mandatory reporting for "high-impact" systems. No statutory penalties for non-compliance.

For multinational AI providers, compliance arbitrage is impossible. The EU AI Act's extraterritorial reach (Article 2) and India's proposed data localization mandates create overlapping obligations that cannot be reconciled through jurisdictional maneuvering.

I. India Ethics Bill 2025: Statutory Architecture

1.1 Legislative Timeline & Status

The Digital India Act (DIA) 2025, colloquially known as the "AI Ethics Bill," was introduced in the Lok Sabha on November 15, 2025. The Bill proposes to replace the Information Technology Act 2000 with a comprehensive framework governing digital platforms, algorithmic systems, and AI deployment.

Key provisions relevant to AI governance:

  • Section 12: Algorithmic Impact Assessment (AIA) — Mandatory for "high-risk" AI systems (defined in Schedule II) deployed by entities processing the personal data of 10 lakh (one million) or more Indian users.
  • Section 18: Data Localization — Training datasets for AI systems serving Indian users must be stored on servers physically located in India. Exemptions for R&D with explicit MEITY approval.
  • Section 23: Strict Liability for Algorithmic Harms — AI deployers are liable for physical, financial, or reputational harms caused by their systems, regardless of fault. Defense available if harm results from user misuse.
  • Section 31: AI Safety Board — Establishment of a statutory regulator with powers to issue compliance directions, impose penalties up to ₹500 crore, and order deletion of model weights.

The Bill is currently in Parliamentary Standing Committee review (Committee on Information Technology, chaired by Dr. Shashi Tharoor). Industry groups including NASSCOM and IAMAI have submitted representations raising concerns about compliance costs and innovation impact.

1.2"High-Risk" AI Definition

Schedule II of the DIA 2025 defines "high-risk" AI systems as those used for:

Category A: Critical Infrastructure

  • Energy grid management
  • Transportation safety systems
  • Water supply controls
  • Telecommunications networks

Category B: Fundamental Rights

  • Employment & promotion decisions
  • Credit scoring & loan approvals
  • Educational admissions
  • Biometric identification systems

Category C: Law Enforcement

  • Predictive policing algorithms
  • Facial recognition for criminal investigation
  • Risk assessment for bail/parole
  • Lie detection systems

Category D: Healthcare

  • Clinical diagnosis assistance
  • Surgery planning/automation
  • Drug discovery platforms
  • Mental health chatbots

1.3 Algorithmic Impact Assessment (AIA)

Section 12 mandates that deployers of high-risk AI systems must conduct and publish an Algorithmic Impact Assessment before deployment and annually thereafter. The AIA must include:

1. System Description & Purpose

Detailed technical documentation of the AI system's architecture, training data sources, model lineage, and intended use cases.

2. Bias & Fairness Testing

Quantitative analysis of performance disparities across protected demographic categories (caste, religion, gender, disability status), consistent with Articles 14 and 15 of the Constitution (equality before law and the prohibition of discrimination). A minimal illustration of such disparity testing appears at the end of this subsection.

3. Harm Mitigation Measures

Risk register documenting potential harms (false positives/negatives, discriminatory outcomes, privacy invasions) and technical/organizational controls implemented to mitigate them.

4. Human Oversight Protocol

Description of human-in-the-loop mechanisms, escalation procedures, and override capabilities. Minimum qualifications for human reviewers must be specified.

5. Data Governance

Confirmation that training data complies with DPDP Act 2023 consent requirements, data localization mandates, and purpose limitation principles.

⚖️ Legal Consequence: Failure to conduct or publish an AIA is a strict liability offense under Section 12(5), attracting penalties of ₹25 lakh to ₹5 crore per system. The AI Safety Board may also issue interim suspension orders pending compliance.
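For illustration of the kind of quantitative disparity testing contemplated in requirement 2 above, the following is a minimal sketch in Python. The metric and data layout are assumptions chosen for the example, not figures or formats taken from the Bill.

    # Hypothetical group-wise disparity check for an AIA bias report.
    # Metric names, data layout, and toy data are illustrative assumptions,
    # not requirements drawn from the DIA 2025.
    from collections import defaultdict

    def selection_rates(records, group_key="group", outcome_key="approved"):
        """Per-group positive-outcome (selection) rates."""
        totals, positives = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r[group_key]] += 1
            positives[r[group_key]] += 1 if r[outcome_key] else 0
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates):
        """Lowest group rate divided by highest group rate (1.0 = parity)."""
        return min(rates.values()) / max(rates.values())

    # Toy decisions from a hypothetical credit-scoring model.
    decisions = [
        {"group": "A", "approved": True},  {"group": "A", "approved": True},
        {"group": "A", "approved": False}, {"group": "B", "approved": True},
        {"group": "B", "approved": False}, {"group": "B", "approved": False},
    ]
    rates = selection_rates(decisions)
    print(rates, disparate_impact_ratio(rates))  # ratio 0.5: flag for review

An AIA bias section would report such ratios for each protected category, alongside the remediation steps taken when a ratio falls below the deployer's chosen threshold.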

II. EU AI Act: Extraterritorial Compliance Burden

2.1 Prohibited AI Practices (Article 5)

The EU AI Act establishes absolute prohibitions for AI systems that pose "unacceptable risks" to fundamental rights. These systems are banned from deployment in the EU market, regardless of safeguards:

🚫 Article 5(1)(a): Subliminal Manipulation

AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort behavior in a manner that causes physical or psychological harm.

Example: Dark pattern AI that exploits cognitive vulnerabilities to manipulate purchasing decisions.

🚫 Article 5(1)(b): Vulnerability Exploitation

Systems exploiting vulnerabilities of age, disability, or socio-economic situation to materially distort behavior causing physical/psychological harm.

Example: Loan apps targeting financially desperate users with predatory terms.

🚫 Article 5(1)(c): Social Scoring

AI-based social scoring systems that evaluate or classify natural persons based on behavior, socio-economic status, or personal characteristics, leading to detrimental or unfavorable treatment in contexts unrelated to the data's original purpose.

Example: China-style "citizen scores" denying access to services.

🚫 Article 5(1)(h): Real-Time Biometric Surveillance

Real-time remote biometric identification in publicly accessible spaces by law enforcement (subject to narrow exceptions for terrorism, missing children, serious crime).

Example: Live facial recognition in train stations without judicial authorization.

⚠️ Penalty: Deploying a prohibited AI system attracts fines of up to €35 million or 7% of global annual turnover, whichever is higher (Article 99). Member States may impose criminal sanctions including imprisonment for willful violations.

2.2 High-Risk AI Compliance Obligations

Article 6 and Annex III define "high-risk" AI systems that require conformity assessments before market placement. These systems must comply with Articles 8-15:

Article 9: Risk Management System

Continuous identification, analysis, and mitigation of known and foreseeable risks. Must be tested throughout the system's lifecycle.

Compliance Cost: €50,000-€200,000 for initial risk assessment + ongoing monitoring.

Article 10: Data Governance

Training, validation, and testing datasets must be relevant, representative, and free from errors/duplicates. Special scrutiny for protected attributes (race, ethnicity, disability).

Compliance Cost: €100,000-€500,000 for bias audits and data remediation.

Article 13: Transparency & Information

High-risk AI systems must be accompanied by instructions for use in "an appropriate digital format or otherwise." Users must be informed of system capabilities, limitations, and accuracy metrics.

Compliance Cost: €20,000-€80,000 for technical documentation and user manuals.

Article 14: Human Oversight

High-risk AI must be designed to enable effective oversight by natural persons. Humans must be able to interpret outputs, intervene, and override decisions.

Compliance Cost: €30,000-€150,000 for UX redesign + training programs.
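To make Article 14's oversight requirement concrete, here is a minimal human-in-the-loop sketch in Python: low-confidence outputs are escalated to a reviewer rather than applied automatically. The threshold, field names, and queue are illustrative design assumptions, not anything prescribed by the Act.

    # Illustrative human-in-the-loop gate for a high-risk decision pipeline.
    # The threshold, queue, and field names are hypothetical design choices.
    from dataclasses import dataclass

    @dataclass
    class ModelDecision:
        subject_id: str
        outcome: str        # e.g. "approve" / "deny"
        confidence: float   # model's own score in [0, 1]
        rationale: str      # human-readable summary shown to the reviewer

    REVIEW_THRESHOLD = 0.85   # below this, a human must confirm or override
    review_queue: list = []   # decisions awaiting human review

    def apply_decision(decision: ModelDecision) -> str:
        """Auto-apply only high-confidence outcomes; escalate the rest."""
        if decision.confidence >= REVIEW_THRESHOLD:
            return f"auto:{decision.outcome}"
        review_queue.append(decision)          # reviewer can later override
        return "pending_human_review"

    print(apply_decision(ModelDecision("c-101", "approve", 0.93, "income stable")))
    print(apply_decision(ModelDecision("c-102", "deny", 0.41, "sparse history")))

The design choice worth noting is that the system fails toward human review: anything the model is unsure about is withheld from automatic effect, which is the substance of the interpret-intervene-override capability Article 14 describes.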

III. Liability Models: Who Pays When AI Fails?

3.1 Product Liability vs. Negligence

Product Liability (Strict)

Under the EU product liability regime (Directive 85/374/EEC, recast in 2024 to expressly cover software) and the proposed AI Liability Directive, AI systems may be treated as "defective products." Plaintiffs need only prove:

  • Defect existed when product was placed on market
  • Damage occurred
  • Causal link between defect and damage

No need to prove negligence or intent. Manufacturer is liable regardless of fault.

Negligence (Fault-Based)

Under Indian Tort Law (read with Consumer Protection Act 2019), plaintiffs must establish:

  • Duty of care owed by AI deployer
  • Breach of duty (failure to implement safeguards)
  • Causation (harm resulted from breach)
  • Damages (quantifiable loss)

Burden of proof on plaintiff. Defendant may escape liability by proving reasonable care.

3.2 Algorithmic Accountability: Case Studies

Case 1: Noom v. FTC (United States, 2024)

The Federal Trade Commission alleged Noom's AI-powered "health coach" app made unsubstantiated weight loss claims. The AI system was trained on biased datasets excluding diverse body types, leading to harmful dietary recommendations.

Outcome: Noom paid $56 million in consumer refunds and was required to implement algorithmic auditing protocols. FTC prohibited future health claims until validated through clinical trials.

Legal Principle: Automated systems do not absolve companies of Section 5 FTC Act duties (deceptive practices).

Case 2: Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) — Loomis v. Wisconsin (2016)

Eric Loomis challenged his sentencing, arguing COMPAS risk assessment algorithm (used by judge) violated due process because its proprietary methodology was undisclosed.

Outcome: The Wisconsin Supreme Court held that use of the algorithmic risk assessment at sentencing was permissible, provided the court disclosed its use, was advised of the tool's limitations, and did not treat the score as determinative. Loomis remains incarcerated.

Legal Principle: Due process requires disclosure and cautious, non-determinative use of algorithmic risk scores in decisions affecting liberty.

Case 3: Ola Electric — Consumer Complaints (India, 2025-26)

Multiple complaints filed against Ola Electric for fires allegedly caused by battery management system (BMS) AI failures. Plaintiffs claimed the AI prioritized range optimization over thermal safety.

Outcome: The National Consumer Disputes Redressal Commission (NCDRC) ordered ₹15 lakh compensation to affected families under the product liability provisions of the Consumer Protection Act 2019 (Section 84, liability of the product manufacturer). Ola was directed to conduct an independent AI safety audit.

Legal Principle: AI deployers cannot disclaim liability through "algorithm made the decision" defense.

IV. AI-Generated Content: The IP Conundrum

4.1 Copyright: Who Owns AI Outputs?

Copyright law globally requires human authorship for protection. AI-generated works challenge this foundational principle:

🇮🇳 Indian Position

Copyright Act 1957 does not explicitly address AI. Section 2(d)(vi) defines "author" for computer-generated works as "the person who causes the work to be created."

Implication: The human who prompts the AI may claim authorship if they supply sufficient creative direction, meeting the skill-and-judgment originality standard articulated in Eastern Book Company v. D.B. Modak.

🇪🇺 EU Position

InfoSoc Directive (2001/29/EC) and recent CJEU jurisprudence (Cofemel, 2019; Brompton Bicycle, 2020) require "author's own intellectual creation" reflecting "free and creative choices."

Implication: Purely AI-generated outputs lacking human creative input are not copyrightable in EU.

🇺🇸 US Position

US Copyright Office (2023 guidance) denies registration for "works produced by machine or mere mechanical process that operates randomly or automatically without creative input or intervention from a human."

Implication: Thaler v. Perlmutter (D.D.C. 2023, affirmed by the D.C. Circuit in 2025) confirmed that there is no copyright in works autonomously generated by AI.

4.2 Training Data: Fair Use or Infringement?

Generative AI models (LLMs, diffusion models) are trained on billions of copyrighted works scraped from the internet. This raises existential questions:

⚖️ Legal Theory 1: Transformative Fair Use

AI companies argue: Training is transformative use—the model learns statistical patterns, not copying content verbatim. Output is novel. Analogous to human reading.

Precedent: Authors Guild v. Google (2nd Circuit 2015) — book scanning for search is fair use. But AI outputs may compete with originals.

⚖️ Legal Theory 2: Derivative Work Infringement

Creators argue: AI outputs are derivative works based on copyrighted training data. No license = infringement. AI companies profit from others' creativity.

Pending Litigation: Andersen v. Stability AI (N.D. Cal.), Getty Images v. Stability AI (UK High Court), The New York Times v. OpenAI (S.D.N.Y.)

🔮 AMLEGALS Forecast: Courts will likely adopt a middle path: training on copyrighted works permitted for non-commercial research but requires licensing for commercial models. EU AI Act Article 53 (transparency for GPAI models) hints at mandatory disclosure of training datasets, enabling copyright holders to negotiate royalties.

V. Strategic Recommendations for AI Deployers

For Indian Companies
  1. Prepare for Ethics Bill: Conduct voluntary AIAs now for high-risk systems. Document data governance practices to demonstrate DPDP compliance.
  2. Localize Data Infrastructure: Migrate training datasets to Indian servers. Negotiate cloud contracts with residency clauses (AWS Mumbai, Azure Pune).
  3. Bias Audits: Engage third-party auditors (e.g., AI Forensics, O'Neil Risk Consulting) to test for caste/religion/gender disparities.
  4. Insurance: Obtain cyber liability policies covering AI-specific harms. Beazley and AIG offer algorithmic risk endorsements.
For Multinationals Serving EU+India
  1. Dual Compliance Framework: Design systems to satisfy both EU conformity assessments and Indian AIAs. Harmonize technical documentation.
  2. Appoint EU Representative: Article 22 of the EU AI Act requires non-EU providers of high-risk systems to designate an authorized representative established in the EU. Non-compliance attracts fines of up to €15 million or 3% of global annual turnover.
  3. Model Governance: Implement "model cards" (Mitchell et al., 2019) documenting training data, performance metrics, and limitations; these support the EU AI Act's technical documentation and transparency obligations (Articles 11 and 13). A minimal sketch follows this list.
  4. Contractual Protections: In B2B AI contracts, allocate liability through indemnification clauses. Define "acceptable use" to limit the deployer's exposure for end-user misuse.
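The sketch below shows one minimal shape a model card record could take, loosely following the fields proposed in Mitchell et al. (2019). The field names and example values are illustrative assumptions, not a schema required by the EU AI Act or the DIA 2025.

    # Minimal model-card record, loosely after Mitchell et al. (2019).
    # Field names and example values are illustrative, not a regulatory schema.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class ModelCard:
        model_name: str
        version: str
        intended_use: str
        out_of_scope_uses: list = field(default_factory=list)
        training_data_summary: str = ""
        evaluation_metrics: dict = field(default_factory=dict)
        known_limitations: list = field(default_factory=list)

    card = ModelCard(
        model_name="credit-risk-scorer",   # hypothetical system
        version="2.3.1",
        intended_use="Pre-screening of consumer loan applications",
        out_of_scope_uses=["employment decisions", "law enforcement"],
        training_data_summary="Anonymised loan outcomes, 2019-2024, India and EU",
        evaluation_metrics={"auc": 0.87, "disparate_impact_ratio": 0.92},
        known_limitations=["Sparse data for applicants under 21"],
    )
    print(json.dumps(asdict(card), indent=2))  # ship with technical documentation

Versioned, machine-readable cards like this can double as the core of the technical documentation both regimes expect, so the same artifact serves the EU conformity file and the Indian AIA.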

Conclusion: Navigating Legal Polycentricity

The global AI legal framework in January 2026 is best described as regulatory polycentricity: multiple competing regimes with overlapping jurisdictional claims. For AI deployers, this makes compliance arbitrage impossible:

  • The EU AI Act's extraterritorial Article 2 scope means any system serving EU users triggers conformity obligations.
  • India's proposed data localization mandates prevent using non-Indian data centers, forcing infrastructure fragmentation.
  • US state-level AI laws (e.g., Colorado AI Act 2024, California AB-2013) create 50-jurisdiction compliance matrices.

The only viable strategy is governance-by-design: embed compliance into the AI development lifecycle from inception. This requires:

  1. Technical: Modular architectures enabling jurisdiction-specific deployment configurations (data residency, bias mitigation controls); see the configuration sketch after this list.
  2. Organizational: Cross-functional AI Ethics Committees with legal, technical, and domain experts reviewing high-risk systems pre-deployment.
  3. Contractual: SaaS agreements clarifying controller/processor roles under GDPR/DPDP, liability caps for algorithmic outputs, and mandatory insurance coverage.
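To illustrate point 1, the sketch below shows one possible shape for a per-jurisdiction deployment configuration. The region codes, flags, and the fail-closed default are assumptions made for the example, not settings derived from any statute.

    # Hypothetical per-jurisdiction deployment configuration.
    # Region codes, flags, and defaults are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class JurisdictionConfig:
        data_residency_region: str        # where training/inference data must live
        requires_impact_assessment: bool  # AIA (India) / conformity assessment (EU)
        bias_audit_interval_days: int
        human_oversight_required: bool

    DEPLOYMENT_PROFILES = {
        "IN": JurisdictionConfig("ap-south-1", True, 365, True),    # DIA 2025 posture
        "EU": JurisdictionConfig("eu-central-1", True, 180, True),  # AI Act posture
        "US": JurisdictionConfig("us-east-1", False, 365, False),   # sectoral rules
    }

    def config_for(country_code: str) -> JurisdictionConfig:
        """Fail closed: unknown markets inherit the strictest profile."""
        return DEPLOYMENT_PROFILES.get(country_code, DEPLOYMENT_PROFILES["EU"])

    print(config_for("IN"))
    print(config_for("BR"))  # falls back to the strict EU profile

Keeping these profiles as configuration rather than code means entering a new jurisdiction is an additional entry plus a legal review, not a re-architecture.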

AMLEGALS AI provides end-to-end legal advisory for multinational AI compliance. Contact us for jurisdiction-specific risk assessments.
