AI MODELS & GOVERNANCE
The governance of artificial intelligence models represents one of the most consequential regulatory challenges of our time. As foundation models achieve unprecedented capabilities, the legal frameworks governing their development, deployment, and cross-border distribution are rapidly evolving.
EXECUTIVE SUMMARY
The emergence of large language models and multimodal foundation models has fundamentally altered the AI governance landscape. These systems, trained on vast corpora of data at computational scales previously unimaginable, possess capabilities that extend far beyond their original training objectives. The legal profession now confronts questions that would have seemed speculative merely five years ago: How should liability be allocated when an AI system causes harm? What documentation obligations attach to model providers? How do cross-border data flows during training interact with data protection frameworks?
The EU AI Act represents the most comprehensive attempt to regulate AI systems at the model level, introducing the concept of General Purpose AI Models with specific obligations for providers. India, while yet to enact dedicated AI legislation, must navigate an increasingly complex international environment where models trained abroad are deployed domestically and Indian developers create systems for global markets. The Digital Personal Data Protection Act 2023 intersects with model governance in ways that practitioners are only beginning to understand.
The governance challenge is not merely technical but fundamentally legal. Questions of accountability, transparency, and human oversight that have animated centuries of legal thought must now be applied to systems that operate at scales and speeds that challenge traditional regulatory approaches. The practitioner who understands both the technology and the law will be indispensable in the years ahead.
1. The Foundation Model Question: Regulatory Classification
1.1 Defining General Purpose AI Models
The EU AI Act introduces a novel regulatory category: the General Purpose AI Model. Article 3(63) defines these as AI models trained using self-supervision at scale on broad data, capable of competently performing a wide range of distinct tasks regardless of how the model is placed on the market. This definition captures the essence of what makes foundation models distinctive: their general-purpose nature and capacity for adaptation to downstream applications not contemplated at training time.
The regulatory significance of this classification lies in the separation of upstream and downstream obligations. A foundation model provider bears certain responsibilities regardless of how the model is subsequently deployed. A system deployer who integrates the foundation model into a specific application bears additional obligations tied to that particular use. This allocation recognises that the same underlying model might power a low-risk customer service chatbot and a high-risk medical diagnostic system, with the risk profile determined by the deployment context rather than the model itself.
GPAI Provider Obligations
- Technical documentation per Annex XI requirements
- Information sharing with downstream deployers
- EU AI Office registration and compliance
- Copyright law compliance documentation
Systemic Risk Thresholds
- Cumulative training compute above 10^25 FLOPs creates a presumption of systemic risk
- Commission may designate based on a capabilities assessment
- Adversarial testing and red-teaming requirements
- Serious incident reporting within 24 hours
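The 10^25 FLOPs threshold lends itself to a back-of-the-envelope check. The sketch below uses the common "~6 FLOPs per parameter per training token" heuristic for estimating training compute; the heuristic, the figures, and the function names are illustrative assumptions for counsel's orientation, not a calculation method the Act prescribes.

```python
# Illustrative sketch: testing whether an (entirely hypothetical) training run
# would cross the EU AI Act's systemic-risk compute threshold of 10^25 FLOPs.
# The 6 * parameters * tokens approximation is a common scaling heuristic,
# not a methodology mandated by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def crosses_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the estimated cumulative compute meets the presumption threshold."""
    return estimate_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 70-billion-parameter model trained on 15 trillion tokens:
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, systemic risk presumed: {crosses_threshold(70e9, 15e12)}")
# → 6.30e+24 FLOPs, systemic risk presumed: False
```

A run of this scale would sit just below the presumption threshold, illustrating why documentation of cumulative compute matters: small changes in training scale can flip the regulatory classification.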
1.2 The Indian Regulatory Gap
India presently lacks an equivalent statutory classification for AI models. The absence of dedicated AI legislation means that model governance must be constructed from existing legal frameworks: the Information Technology Act 2000 and its associated rules, sector-specific regulations, product liability principles, and increasingly, the DPDPA. This patchwork approach creates both flexibility and uncertainty.
The Ministry of Electronics and Information Technology has issued periodic advisories addressing specific AI applications, most notably the March 2024 advisory requiring government approval for deployment of undertested or unreliable AI models. While subsequently clarified and partially withdrawn, this advisory signalled governmental intent to exercise oversight authority. Practitioners advising Indian AI developers must therefore monitor not only formal legislation but also ministerial guidance that may impose de facto compliance requirements.
Regulatory Developments to Monitor
1. Digital India Act draft provisions on AI governance
2. SEBI guidelines on AI in financial services
3. RBI framework for AI in banking and payments
4. IRDAI regulations on AI in insurance underwriting
5. National AI Mission implementation guidelines
2. Liability Architecture: Who Answers for the Machine
2.1 The Liability Chain: Provider, Deployer, User
When an AI system causes harm, the first question any lawyer must answer is: who bears responsibility? The complexity of modern AI deployments means that multiple parties typically participate in bringing a system to operation. The model provider creates and trains the underlying AI model. The system deployer integrates that model into a specific application with particular parameters and guardrails. The end user operates the system and may prompt it in ways that produce harmful outputs.
Traditional product liability frameworks struggle with this distributed responsibility. A defective automobile has a clear chain of manufacture. An AI system that hallucinates false information or produces discriminatory outputs presents a more complex causal picture. Did the harm arise from the model training? From the deployment configuration? From the user prompt? The answer often involves all three, in varying proportions that are difficult to disentangle after the fact.
Key Liability Allocation Mechanisms
1. Contractual allocation through terms of service, enterprise agreements, and deployment licences
2. Indemnification provisions shifting risk between parties in the deployment chain
3. Insurance requirements mandated by sector regulators or prudent risk management
4. Limitation of liability clauses and their enforceability under applicable law
5. Documentation standards that establish baseline expectations and duty of care
2.2 Intermediary Liability and Section 79
The safe harbour provisions of Section 79 of the Information Technology Act raise particular questions for AI model providers. Traditionally, intermediaries that merely host or transmit third-party content enjoy protection from liability for that content, provided they comply with due diligence requirements and respond appropriately to takedown requests.
The application of intermediary safe harbour to generative AI systems is deeply contested. When a large language model generates content in response to a user prompt, is that content properly characterised as third-party information for which the model provider serves merely as a conduit? Or has the provider, through its training choices and output generation architecture, become a creator of that content such that safe harbour protection is unavailable? Indian courts have not definitively resolved this question, and practitioners should advise clients accordingly.
The prudent approach for model providers is to assume that safe harbour protection may not be available and to implement appropriate content moderation, output filtering, and user agreement structures that allocate risk accordingly. Reliance on Section 79 as a complete defence would be optimistic given the evolving judicial and regulatory landscape.
2.3 The EU Approach: AI Liability Directive
The European Union has proposed a dedicated AI Liability Directive to address the evidentiary challenges that AI systems create for claimants. The directive introduces a rebuttable presumption of causation where a claimant establishes fault and demonstrates that it was reasonably likely that the AI system caused the harm. This shifts the burden to defendants to prove that their AI system did not cause the damage.
For Indian practitioners, the EU approach is relevant both for clients operating in European markets and as a potential template for domestic reform. The evidentiary difficulties that motivated the directive are not unique to Europe. Indian courts applying traditional burden of proof standards may find AI liability cases similarly challenging. Whether legislative intervention is warranted in India remains an open policy question, but practitioners should be prepared to address these issues within the existing procedural framework.
3. DPDPA Intersections: Training Data and Model Rights
3.1 Lawful Basis for Training Data Processing
The training of AI models on personal data engages the full machinery of data protection law. Under the DPDPA, organisations processing personal data of Indian residents must establish a lawful basis for that processing. The Act provides for consent as the primary ground, with additional bases including performance of contract, compliance with legal obligations, and certain legitimate purposes specified in the statute.
The scale at which foundation models are trained creates practical challenges for consent-based approaches. A model trained on billions of text samples scraped from the public internet cannot realistically obtain individual consent from every person whose data may have been included. This has led to intense debate globally about whether existing data protection frameworks are fit for purpose in the AI context. Some jurisdictions have introduced specific exceptions for AI research and development. India has not, meaning that practitioners must work within the existing consent framework or identify alternative lawful bases.
3.2 The Machine Unlearning Challenge
Section 12 of the DPDPA establishes the right of data principals to erasure of their personal data. When personal data has been used to train a machine learning model, what does erasure mean? The data itself may have been deleted after training, but its influence persists in the model weights. The model has learned patterns from that data that continue to affect its outputs.
Machine unlearning is an active area of technical research that attempts to remove the influence of specific training data from deployed models without full retraining. Current techniques remain imperfect and computationally expensive. From a legal perspective, the question is whether compliance with erasure rights requires unlearning at the model level or whether deletion of the source data suffices. The DPDPA does not provide explicit guidance, and practitioners should monitor both technical developments and regulatory interpretation.
Technical Approaches
- Exact unlearning through full model retraining
- Approximate unlearning via fine-tuning techniques
- Data partitioning to enable selective retraining
- Differential privacy during initial training
Legal Considerations
- Definition of erasure under applicable law
- Proportionality of compliance measures
- Documentation of reasonable efforts
- Regulator expectations and enforcement posture
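The data-partitioning approach listed above can be made concrete. The sketch below, loosely modelled on SISA-style sharded training, shows the structural idea: if training data is split into shards with one sub-model per shard, an erasure request forces retraining of only the affected shard rather than the whole model. The `train` callable is a stand-in for a real training routine; the class and its names are illustrative, not a production unlearning system.

```python
# Illustrative SISA-style sketch: partition training data into shards so that
# an erasure request triggers retraining of only the shard containing the
# record in question. "train" is a placeholder for a real training routine;
# the governance-relevant point is the structure, not the model.

from typing import Any, Callable

class ShardedTrainer:
    def __init__(self, records: list, num_shards: int, train: Callable[[list], Any]):
        self.train = train
        # Round-robin partition of the training records into shards.
        self.shards = [records[i::num_shards] for i in range(num_shards)]
        # One independently trained sub-model per shard.
        self.models = [train(shard) for shard in self.shards]

    def erase(self, record) -> None:
        """Honour an erasure request by retraining only the affected shard."""
        for i, shard in enumerate(self.shards):
            if record in shard:
                shard.remove(record)
                self.models[i] = self.train(shard)  # selective retraining

    def predict(self, x):
        """Aggregate sub-model outputs (here: simple majority vote)."""
        votes = [model(x) for model in self.models]
        return max(set(votes), key=votes.count)
```

The compliance trade-off is visible in the structure: retraining cost falls from the whole corpus to one shard, at the price of maintaining and aggregating multiple sub-models. Whether this satisfies "erasure" under the DPDPA remains, as noted above, an open interpretive question.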
4. Cross-Border Model Deployment
4.1 Extraterritorial Application of the EU AI Act
Article 2 of the EU AI Act establishes extraterritorial jurisdiction over providers and deployers placing AI systems on the Union market or putting them into service, regardless of whether those providers or deployers are established within the Union. This means that an Indian AI company whose model is used by customers in Europe must comply with the Act's requirements, including GPAI provider obligations where applicable.
The practical implications for Indian developers are substantial. A company developing a foundation model in Bengaluru that is accessed by users in Berlin triggers EU compliance obligations. These include technical documentation requirements, registration with the EU AI Office, and potentially systemic risk assessments if compute thresholds are exceeded. The appointment of an authorised representative in the Union becomes necessary. Contractual arrangements with European customers must address these regulatory requirements.
4.2 Indian Developers in Global Markets
India's AI ecosystem is increasingly export-oriented. Indian developers serve global enterprise customers and contribute to open-source model development with worldwide reach. This creates a compliance landscape where multiple regulatory frameworks apply simultaneously. A model may need to comply with EU AI Act requirements for European deployment, state-level regulations in the United States, and emerging frameworks in markets like Singapore, Brazil, and the United Kingdom.
For practitioners, this necessitates a compliance architecture that identifies the most stringent requirements across applicable jurisdictions and builds to that standard. The alternative, maintaining separate compliance programmes for each market, quickly becomes unmanageable. The emergence of international standards, including ISO work on AI management systems and the OECD AI Principles, provides useful reference points for harmonised compliance approaches.
The Indian AI developer who builds governance into the model development lifecycle from the outset, rather than treating compliance as a post-deployment exercise, will be better positioned to navigate this fragmented regulatory landscape. Documentation practices, testing protocols, and transparency measures that satisfy the EU AI Act will generally exceed requirements in less mature regulatory environments.
5. Practical Governance: Implementation Framework
5.1 Model Documentation Standards
Effective AI governance begins with comprehensive documentation. The model card concept, pioneered by researchers at Google and now reflected in EU AI Act requirements, provides a template. A model card documents training data provenance, intended uses and out-of-scope applications, performance metrics across demographic groups, known limitations and failure modes, and ethical considerations addressed during development.
For legal purposes, model documentation serves multiple functions. It demonstrates due diligence in development, establishes the standard of care against which liability will be assessed, enables downstream deployers to make informed integration decisions, and provides evidence of compliance with regulatory requirements. Organisations should treat model documentation as a legal record deserving the same attention given to contract documentation or regulatory filings.
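A model card can be kept as a structured, machine-readable record rather than free-form prose, which makes it easier to version, audit, and produce in discovery. The sketch below shows one minimal shape such a record might take; the field names loosely follow the model-card literature and the themes of Annex XI, but they are illustrative assumptions, not an official schema, and every value shown is hypothetical.

```python
# Illustrative model-card record. Field names loosely track the model-card
# literature and the subject matter of EU AI Act Annex XI; they are not the
# Annex's official schema. All values below are hypothetical.

from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    training_data_provenance: str           # sources and licensing of training data
    intended_uses: list[str]
    out_of_scope_uses: list[str]
    performance_by_group: dict[str, float]  # headline metric disaggregated by group
    known_limitations: list[str]
    ethical_considerations: str = ""

card = ModelCard(
    model_name="example-foundation-model",  # hypothetical model
    version="1.0",
    training_data_provenance="Licensed corpora and public-domain text; see data sheet.",
    intended_uses=["document summarisation", "drafting assistance"],
    out_of_scope_uses=["medical diagnosis", "credit scoring"],
    performance_by_group={"group_a": 0.91, "group_b": 0.88},
    known_limitations=["may hallucinate citations"],
)

# Serialise for the audit trail alongside contracts and regulatory filings.
print(json.dumps(asdict(card), indent=2))
```

Treating the card as a dated, versioned artefact in this way supports the evidentiary functions described above: each release of the model carries a contemporaneous record of what was claimed about it.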
5.2 Ongoing Monitoring and Audit
AI governance is not a point-in-time exercise but an ongoing process. Models may experience performance drift as the data distribution they encounter in production diverges from training data. User interactions may reveal failure modes not anticipated during development. The broader environment may change in ways that affect model appropriateness. A system trained on pre-pandemic data may behave unexpectedly when economic patterns shift.
Robust governance frameworks establish monitoring mechanisms to detect such changes and trigger review processes. They define escalation procedures for serious incidents and establish clear accountability for governance decisions. Periodic audits, whether internal or conducted by independent third parties, provide assurance that governance mechanisms are functioning as intended and identify areas for improvement.
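Drift monitoring of the kind described above is routinely automated. The sketch below uses the Population Stability Index (PSI), one common heuristic for detecting when production inputs diverge from the training-time distribution; the bin count and the 0.2 alert threshold are conventional rules of thumb, not regulatory requirements, and the sample data is invented.

```python
# Illustrative drift check using the Population Stability Index (PSI), a
# common heuristic for detecting divergence between a baseline (training)
# distribution and production data. Thresholds below are rules of thumb.

import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """PSI between a baseline sample and a production sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def proportions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # stand-in for training-time scores
production = [0.3 + i / 200 for i in range(100)]  # shifted production scores

score = psi(baseline, production)
if score > 0.2:  # conventional threshold for significant drift
    print(f"PSI {score:.2f}: drift detected, governance review recommended")
```

A check like this slots naturally into the monitoring and escalation machinery described above: the metric triggers the review; the governance framework decides who must act on it and how the decision is documented.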
Governance Framework Components
1. Model inventory and risk classification system
2. Development lifecycle governance with stage gates
3. Testing protocols for bias, security, and performance
4. Deployment approval processes with appropriate sign-off authority
5. Incident response procedures for AI-specific issues
6. Audit trails documenting governance decisions and rationale
Strategic Outlook
The governance of AI models stands at a pivotal moment. The technical capabilities of these systems continue to advance at a pace that challenges regulatory responsiveness. The EU AI Act represents an ambitious attempt to create a comprehensive framework, but its implementation will reveal gaps and generate interpretive questions that take years to resolve. India's approach remains emergent, creating both risk and opportunity for practitioners and their clients.
For organisations developing or deploying AI models, the imperative is clear: governance cannot be an afterthought. The documentation practices, testing protocols, and accountability structures established now will determine compliance posture and liability exposure for years to come. Those who treat governance as integral to model development, rather than a regulatory burden to be minimised, will build more trustworthy systems and face fewer legal challenges.
For practitioners, AI model governance represents both a new domain of expertise and a lens through which traditional legal disciplines must be reconsidered. Contract law, tort liability, intellectual property, and data protection all intersect in the AI governance space. The lawyer who can navigate these intersections while maintaining technical fluency will be indispensable to clients building the AI systems that will define the next generation of technology.
AMLEGALS AI Policy Hub • AI Governance Practice