GENERATIVE AI REGULATION
The emergence of large language models and generative AI systems presents novel challenges for legal frameworks designed in an era of deterministic software. This analysis examines the regulatory architecture applicable to foundation model providers and deployers operating in or serving the Indian market.
EXECUTIVE SUMMARY
Generative artificial intelligence has moved from research curiosity to commercial ubiquity with remarkable speed. Foundation models trained on vast corpora of text, images, code, and other data now power applications ranging from conversational assistants to automated content generation to sophisticated analytical tools. The legal questions these systems raise cut across traditional doctrinal boundaries, implicating intellectual property, contract law, tort liability, data protection, and sectoral regulation simultaneously.
India has not yet enacted legislation specifically targeting generative AI. The Ministry of Electronics and Information Technology has issued advisories requiring disclosure and approval mechanisms for AI models deployed to Indian users, but comprehensive statutory treatment remains under consideration as part of the broader Digital India Act framework. In the interim, practitioners must navigate a patchwork of existing laws, regulatory guidance, and rapidly evolving international standards.
For enterprises deploying generative AI in India or serving Indian users from abroad, the central challenge is managing liability in the absence of clear statutory allocation. The outputs of these systems are probabilistic rather than deterministic, raising fundamental questions about who bears responsibility when those outputs cause harm.
Liability Framework: The Chain of Responsibility
1.1 The Intermediary Safe Harbour Question
Section 79 of the Information Technology Act 2000 provides safe harbour protection to intermediaries for third party content, subject to compliance with due diligence requirements under the IT Rules 2021. The applicability of this protection to generative AI providers is deeply contested. Unlike traditional intermediaries that merely host or transmit user content, generative AI systems actively synthesise new content based on model weights derived from training data.
The argument for extending safe harbour protection rests on characterising the AI system as a tool that responds to user prompts, with the user bearing responsibility for how that tool is used. The counter argument is that the provider exercises substantial control over the system's capabilities through training data curation, fine tuning, and safety mechanisms, distinguishing these services from passive intermediation.
The MeitY Advisory of March 2024
The Ministry's advisory to AI platforms required that models capable of generating content that could be perceived as unreliable or deceptive must obtain government approval before deployment. Though subsequently modified, this advisory signalled regulatory intent to treat generative AI providers as bearing greater responsibility than traditional intermediaries.
1.2 Tort Liability for AI Outputs
Where generative AI outputs cause harm, whether through defamatory statements, negligent advice, or privacy violations, the question of tort liability becomes acute. Traditional negligence analysis requires demonstrating that the defendant owed a duty of care, breached that duty through failure to meet the applicable standard, and that the breach caused the plaintiff's injury.
Applying this framework to generative AI is challenging because the relationship between model training, system design, user input, and harmful output is complex. A model provider might argue that it implemented reasonable safety measures and that harmful output resulted from adversarial user behaviour. A deployer might argue that it merely integrated a third party model and had no ability to control its outputs. The injured party may therefore struggle to establish which actor in the chain bears responsibility, as each controls a different layer of the system:
Model Provider
- Training data curation
- Safety mechanism design
- Model capability decisions
- API terms and restrictions

Application Deployer
- Use case selection
- Prompt engineering
- Output filtering
- User interface design

End User
- Prompt formulation
- Output verification
- Downstream usage
- Reliance decisions
Intellectual Property: Uncharted Territory
2.1 Training Data and Copyright
The copyright implications of training generative AI models on vast datasets scraped from the internet represent perhaps the most significant unresolved legal question in this space. Rightsholders argue that training constitutes reproduction and that the outputs represent unauthorised derivative works. Model developers argue that training involves only transient copying for the purpose of extracting unprotectable statistical patterns.
Indian copyright law does not contain an explicit exception for machine learning, unlike recent amendments in jurisdictions such as Japan, the European Union, and Singapore. The fair dealing provisions under Section 52 of the Copyright Act 1957 are narrowly construed and may not provide adequate shelter for commercial scale training activities. The Delhi High Court's ongoing examination of these issues in pending litigation will provide important guidance.
2.2 Ownership of AI Generated Content
The Copyright Act requires that a work be created by an author, defined as a natural person (or in the case of computer generated works under Section 2(d)(vi), the person who causes the work to be created). Whether a user prompting a generative AI system qualifies as the person causing creation, and therefore the author of the output, is far from settled.
The US Copyright Office has taken the position that works generated primarily by AI without sufficient human authorship are not copyrightable. India has not issued equivalent guidance, but the underlying principle that copyright requires human creative input is deeply embedded in Indian jurisprudence. Enterprises relying on AI generated content for commercial purposes should carefully consider whether that content enjoys any intellectual property protection.
The practical implication is significant: AI generated marketing copy, code, designs, and other content may not be protectable, leaving enterprises vulnerable to competitors appropriating their AI generated materials without recourse.
Cross Border Service Delivery
3.1 Extraterritorial Application of Indian Law
International providers of generative AI services accessible in India face extraterritorial compliance obligations under multiple statutes. The IT Act and IT Rules apply to any person who commits a contravention affecting a computer, computer system, or computer network located in India. The Digital Personal Data Protection Act 2023 (DPDPA) applies to processing of personal data in connection with offering goods or services to data principals in India.
The practical effect is that a foundation model provider headquartered abroad, with infrastructure located outside India, serving Indian users through an API or web interface, remains subject to Indian jurisdiction. This creates compliance obligations around content moderation, data protection, and response to government requests that must be integrated into the provider's global operating framework.
3.2 Data Flows in AI Services
Generative AI services typically involve multiple categories of data flows that implicate cross border considerations. User prompts may contain personal data or confidential business information transmitted to servers abroad for processing. Model outputs flow back to users, potentially incorporating information derived from training data with complex provenance. Usage telemetry collected for model improvement may include behavioural data about Indian users.
Each of these flows must be analysed under the DPDPA framework. Where personal data is involved, the data fiduciary must have a valid legal basis for processing, and cross border transfer restrictions apply. Enterprise customers should carefully evaluate the data handling practices of their AI service providers and ensure appropriate contractual protections are in place.
3.3 EU AI Act Implications for Indian Providers
Indian enterprises developing or deploying AI systems for European markets must now contend with the EU AI Act's comprehensive requirements. The Act's extraterritorial reach extends to providers placing AI systems on the EU market and deployers using AI systems within the EU, regardless of where those entities are established.
For generative AI specifically, the Act imposes transparency obligations including disclosure that content was AI generated, technical documentation requirements, and compliance with copyright law for training data. General purpose AI models meeting certain capability thresholds face additional systemic risk assessment requirements. Indian providers targeting European customers must build these compliance capabilities into their product development processes.
Navigating Uncertainty
The legal framework for generative AI in India remains in formation. The Digital India Act, when enacted, is expected to provide greater statutory clarity on liability allocation, intermediary status, and compliance obligations. Until then, enterprises must navigate existing laws whose application to these novel technologies is not always clear.
The prudent approach is to implement robust risk management frameworks that address the foreseeable legal challenges even in the absence of prescriptive regulation. This includes careful attention to training data provenance, transparent communication about system capabilities and limitations, appropriate contractual allocation of liability across the value chain, and ongoing monitoring of regulatory developments across relevant jurisdictions.
Generative AI will continue to transform how businesses operate. Those who engage thoughtfully with the legal complexity will be better positioned to capture the technology's benefits while managing its risks.
AMLEGALS AI Policy Hub • AI Regulation Practice