AMLEGALS Proprietary Framework

India AI Governance &
Compliance Framework

AIGCF™ — India's first practitioner designed, constitutionally grounded AI governance architecture for companies building, deploying, or procuring Artificial Intelligence. Grounded in Indian law. Aligned with global standards. Zero ambiguity.

10 Governance Pillars | 5 Risk Tiers | 7 Lifecycle Stages | ₹250 Cr Max DPDPA Exposure
Part I — Executive Summary

India Stands at an AI Inflection Point

India's AI governance infrastructure remains nascent and fragmented: NITI Aayog principles without enforcement teeth, DPDPA obligations partially applicable to AI, sector specific AI guidance from RBI and SEBI without unified architecture, and a prospective Digital India Act whose AI provisions remain in active consultation. Indian companies deploying AI today operate in a genuine governance vacuum — and that vacuum is filling rapidly with legal, regulatory, and reputational risk.

“AI governance in India is not a technology problem wearing a legal costume. It is a fundamental rights problem wearing a technology costume. Every AI system making a consequential decision about an Indian citizen implicates Articles 14, 19, 21, and 300A of the Constitution. No governance framework that ignores this constitutional dimension is fit for purpose in India.”
— Anandaday Misshra, Founder & Managing Partner, AMLEGALS

The AMLEGALS India AI Governance & Compliance Framework™ (AIGCF™) is India's first comprehensive, practitioner designed architecture for AI governance, built for companies operating under India's multi layered regulatory environment. It integrates constitutional law, statutory obligations, regulatory guidance, international standards, and operational practice into a single, coherent, implementable framework across ten governance pillars, five risk tiers, and seven AI lifecycle stages.

Framework Overview

What is the AIGCF™?

The AMLEGALS India AI Governance & Compliance Framework™ is a ten pillar, lifecycle anchored, risk stratified governance architecture purpose built for Indian companies. It maps every governance obligation to its legal source, operational implementation, and audit evidence requirement.

Component | Description | Applicable To | Legal Basis
Ten Pillars | The foundational governance domains every AI deploying company must operationalise | All companies deploying, procuring, or building AI | NITI Aayog, DPDPA, IT Act, Constitutional Law
Risk Stratification | Five tier AI risk taxonomy calibrated to Indian regulatory and constitutional rights context | All AI systems and use cases | DPDPA SDF, EU AI Act (reference), MeitY guidance
Lifecycle Governance | Seven stage governance obligations from ideation through decommissioning | Internal AI development and procurement | ISO/IEC 42001, NIST AI RMF, RBI MRM Guidelines
Accountability Pyramid | Five tier accountability from Board to Engineering with defined obligations at each level | All organisational stakeholders | DPDPA, Companies Act 2013, SEBI LODR
Contractual Architecture | AI specific contract clauses, liability allocation, and vendor governance templates | All AI vendor and client relationships | Indian Contract Act 1872, IT Act, DPDPA
Part I — Regulatory Landscape

India's AI Regulatory Constellation

India does not yet possess a single, unified AI statute. What exists is a regulatory constellation — multiple authorities exercising AI relevant jurisdiction through existing and emerging legal instruments. Understanding this constellation is the prerequisite to any governance architecture.

MeitY
Ministry of Electronics & IT

Primary AI policy authority. Responsible for the Digital India Act, IT Act amendments, and India's national AI strategy. Operates the IndiaAI Mission and AIRAWAT cloud infrastructure initiative. Issues AI ethics advisories and oversees data governance infrastructure.

IT Act 2000, IT Rules 2021, IndiaAI Mission, Digital India Act (Proposed)
NITI Aayog
National Institution for Transforming India

Authored India's foundational AI governance principles: Responsible AI for All (2021) and the National AI Strategy. Seven core principles — Safety, Equality, Inclusivity, Privacy, Transparency, Accountability, and Positive Values — form the ethical basis of AIGCF.

Responsible AI for All (2021), National AI Strategy
RBI
Reserve Bank of India

Regulates AI/ML in banking, credit scoring, fraud detection, robo advisory, and algorithmic trading. The Model Risk Management framework mandates explainability and human oversight. Digital Lending Guidelines directly govern AI based credit decisions.

Digital Lending Guidelines 2022, Model Risk Management Framework, Outsourcing Guidelines
SEBI
Securities and Exchange Board of India

Regulates AI in capital markets, algorithmic trading, robo advisory services, and investment decision making. Algo trading systems require SEBI approval with mandatory kill switches and audit trails.

Algo Trading Circulars, Investment Adviser Regulations, LODR Regulations
DPDPA / DPBI
Data Protection Board of India

The most consequential AI governance instrument in force. Significant Data Fiduciary classification imposes DPIA, DPO appointment, and algorithmic accountability obligations. The nexus between AI inference and personal data processing creates pervasive DPDPA exposure.

DPDPA 2023 Sections 8, 9, 10, 11, 12, 14, 16
IRDAI
Insurance Regulatory & Development Authority of India

Regulates AI in insurance underwriting, claims processing, risk assessment, and fraud detection. AI underwriting decisions may not use prohibited personal characteristics including caste and religion.

IRDAI (IT Governance, Risk & Compliance) Guidelines
Constitutional Framework

AI & the Indian Constitution — The Rights Nexus

The Indian Constitution provides the supreme legal framework within which all AI governance obligations must be understood. The Supreme Court's unanimous decision in K.S. Puttaswamy v. Union of India (2017) recognised the right to privacy — including informational self determination — as a fundamental right under Article 21. This constitutional foundation is unique to India and creates governance obligations that no other national framework imposes with equivalent legal force.

Article 14
Equality Before Law

AI systems making consequential decisions must not discriminate arbitrarily. Algorithmic decisions must be rational, non arbitrary, and subject to meaningful review.

Article 19
Freedom of Expression

AI content moderation systems must not violate free speech rights. Automated takedowns require proportionality and due process.

Article 21
Right to Privacy

Post Puttaswamy, AI systems processing personal data must satisfy the three part test: legality, necessity, and proportionality.

Article 300A
Right to Property

AI systems affecting property rights (credit, insurance, employment) require due process protections and contestation mechanisms.

Critical Intersection

DPDPA 2023 & AI — The Critical Intersection

The Digital Personal Data Protection Act 2023 is not an AI statute — but it is the most consequential AI governance instrument currently in force in India. Every AI system that processes personal data of Indian citizens generates DPDPA obligations. The intersection is pervasive, unavoidable, and carries penalties up to INR 250 crore.

AI Activity | DPDPA Provision | Obligation Generated | Penalty
AI trained on personal data | Section 4 (Lawful Processing) | Explicit consent or legitimate grounds required for each training dataset; disclosure of AI purpose at collection; purpose limitation on downstream model use | Rs. 50 to 200 Cr
AI inference on personal data | Section 8(8) (Right to Explanation) | Data Principals have the right to obtain an explanation of AI decisions affecting them; system level and decision level explainability required | Rs. 50 to 150 Cr
High volume AI data processing | Section 10 (Significant Data Fiduciary) | If SDF classification applies: mandatory DPO, DPIA for all high risk AI, algorithmic accountability measures, periodic auditing | Rs. 150 to 250 Cr
AI generated profiling and segmentation | Section 9 (Accuracy & Correction) | Data Principals may challenge AI generated profiles; correction mechanisms required; profile accuracy verification obligations | Rs. 50 to 100 Cr
AI in cross border data pipelines | Section 16 (Cross Border Transfers) | Transfers only to Government approved countries/organisations; data localisation obligations for sensitive AI processing categories | Rs. 150 to 200 Cr
GenAI API calls sending Indian user data abroad | Section 16 cross border + Section 4 lawful processing | Each API call is a cross border transfer; data masking before sending; zero retention agreements with providers; India hosted model deployment where sensitivity warrants | Rs. 100 to 200 Cr
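
The data masking obligation in the final row can be operationalised as a pre flight redaction step that strips identifiable fields from a prompt before any cross border API call. The sketch below is a minimal illustration in Python; the `mask_pii` helper and its regex patterns are assumptions for demonstration, not a complete PII taxonomy, and production masking requires a reviewed, sector specific pattern set.

```python
import re

# Illustrative patterns for common Indian personal identifiers. Coverage here
# is deliberately minimal: email, Indian mobile number, Aadhaar-style 12-digit
# number, and PAN. A deployed masker needs a far broader, legally reviewed set.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+91[\s-]?)?[6-9]\d{9}\b"),
    "AADHAAR": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),
    "PAN": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace matched identifiers with typed placeholders before the prompt
    leaves Indian infrastructure for an offshore GenAI API."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt
```

A masked prompt still discloses the category of data removed (e.g. `<PHONE>`), which preserves utility for the model while keeping the identifier itself onshore.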
Part II — Framework Architecture

The Ten Pillars of AIGCF™

The AIGCF™ architecture rests on ten governance pillars. Each addresses a distinct dimension of AI governance — from constitutional compliance through to algorithmic auditing. Together, they constitute a complete AI governance operating system for Indian companies of every size and sector.

Pillar I

AI Constitutional & Legal Compliance

Mapping all AI deployment against fundamental rights obligations under Articles 14, 19, 21, and 300A. Conducting constitutional risk assessments for high impact AI systems. Ensuring proportionality, procedural fairness, and contestability in all AI assisted consequential decisions that affect Indian citizens.

Pillar II

AI Inventory & Risk Classification

Comprehensive registry of all AI systems by type, function, risk level, data inputs, and decision impact. Five tier risk classification (Prohibited to Minimal) aligned with India's regulatory context. Mandatory pre deployment risk assessment and periodic reassessment protocols for all production AI systems.
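
The registry requirement above can be made concrete as one structured record per AI system. The sketch below shows the minimum metadata such an entry might carry; the field names and the reassessment intervals are assumptions for illustration, not prescribed values.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import IntEnum

class RiskTier(IntEnum):
    # Five tier taxonomy from the AIGCF risk matrix (Tier 0 = Prohibited).
    PROHIBITED = 0
    HIGH = 1
    SIGNIFICANT = 2
    LIMITED = 3
    MINIMAL = 4

@dataclass
class AISystemRecord:
    name: str
    function: str
    tier: RiskTier
    data_inputs: list[str]
    decision_impact: str
    last_assessed: date

    # Illustrative reassessment cadence: higher risk tiers reviewed more often.
    REVIEW_DAYS = {RiskTier.HIGH: 90, RiskTier.SIGNIFICANT: 180,
                   RiskTier.LIMITED: 365, RiskTier.MINIMAL: 365}

    def reassessment_due(self, today: date) -> bool:
        if self.tier is RiskTier.PROHIBITED:
            return True  # prohibited systems require immediate action, not review
        interval = timedelta(days=self.REVIEW_DAYS[self.tier])
        return today - self.last_assessed >= interval
```

Tying the reassessment clock to the risk tier turns the pillar's "periodic reassessment protocols" into a queryable property of the registry itself.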

Pillar III

AI Data Governance & Privacy

DPDPA compliant data governance for AI training, inference, and output. Lawful basis analysis for all AI data processing activities. Training data provenance and lineage documentation. Privacy preserving ML techniques including differential privacy and federated learning. Consent management for AI processing.
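
Among the privacy preserving techniques named above, differential privacy has a compact core mechanism: add calibrated Laplace noise to an aggregate before release. The sketch below is a textbook illustration only, not a production DP library; the epsilon value in the usage is an arbitrary assumption.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of Laplace(0, scale): u uniform on (-0.5, 0.5).
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1: one person's data changes the count
    # by at most 1, so Laplace noise with scale 1/epsilon gives
    # epsilon-differential privacy for that single release.
    return true_count + laplace_noise(1.0 / epsilon)
```

Releasing `dp_count(n, epsilon=0.5)` instead of the exact count lets a Data Fiduciary publish aggregate statistics about training data while bounding what any one Data Principal's record reveals.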

Pillar IV

Algorithmic Transparency & Explainability

Explainability obligations calibrated to decision stakes. Mandatory disclosure of AI use in consequential decisions. Layered explanation framework: system level, decision level, and appeal ready explanations. SHAP/LIME methodology for technical model explanation. Explainability by design as a procurement requirement.
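
The layered explanation framework rests on a simple attribution idea: for a linear scoring model, each feature's contribution to a decision is exact and directly reportable, and SHAP and LIME generalise this to non linear models. The sketch below is a linear baseline for illustration only; the function and field names are assumptions.

```python
def linear_contributions(weights: dict, baseline: dict, applicant: dict) -> dict:
    # For a linear scorer score(x) = sum(w_i * x_i), each feature's contribution
    # relative to a baseline applicant is exactly w_i * (x_i - baseline_i).
    return {name: w * (applicant[name] - baseline[name])
            for name, w in weights.items()}

def decision_level_explanation(contributions: dict, top_n: int = 3) -> list:
    # Rank features by absolute impact: the shape of an appeal ready explanation.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]
```

The same two step shape, attribute then rank, is what a SHAP based pipeline produces for non linear models, which is why "explainability by design" can be specified as a procurement requirement independent of model class.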

Pillar V

AI Fairness & Non Discrimination

Pre deployment and ongoing bias audit requirements. Protected characteristics under Indian law: religion, race, caste, sex, place of birth, disability. Fairness metrics calibrated to Indian demographic context. Redress mechanisms for AI driven discrimination. Regular fairness reporting to the Board and sector regulators.
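
One widely used screening metric for the bias audits described above is the disparate impact ratio: the favourable decision rate of a protected group divided by that of the reference group. The 0.8 threshold below is the US "four fifths rule", used here purely as an illustrative screen; Indian law does not codify a numeric threshold, so any cut off is an assumption to be set by counsel.

```python
def selection_rate(outcomes: list) -> float:
    # outcomes: 1 for a favourable decision (e.g. loan approved), 0 otherwise.
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected: list, reference: list) -> float:
    return selection_rate(protected) / selection_rate(reference)

def flags_for_review(protected: list, reference: list,
                     threshold: float = 0.8) -> bool:
    # A ratio under the threshold triggers human review; it does not by
    # itself establish discrimination.
    return disparate_impact_ratio(protected, reference) < threshold
```

Run per protected characteristic recognised under Indian law (religion, caste, sex, and so on), this check gives the fairness reporting to the Board a concrete, repeatable input.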

Pillar VI

Human Oversight & Contestability

Human in the loop architecture for all high risk AI decisions. Override mechanisms, escalation pathways, and accountability for human supervisors. Mandatory contestation rights for affected individuals in credit, employment, insurance, and enforcement contexts. AI decision review workflows with defined SLAs.

Pillar VII

AI Security & Robustness

Adversarial attack resistance covering data poisoning, model inversion, prompt injection, and adversarial examples. AI model security testing protocols including red teaming. Secure model deployment architecture. AI specific incident response plan. CERT-In compliance for AI related cybersecurity incidents.

Pillar VIII

AI Governance & Accountability

Board level AI Governance Policy. AI Ethics Committee or Responsible AI Officer designation. RACI matrix for AI decisions at all organisational levels. AI governance reporting cadence covering Board, regulator, and external disclosure. Companies Act and SEBI LODR integration for listed companies.

Pillar IX

AI Contractual & Supply Chain Governance

AI specific contract clauses for vendor, client, and partner agreements. Liability allocation for AI generated harm including discrimination and hallucinations. Model audit rights and change notification obligations. AI sub processor chain governance. Open source AI licence compliance.

Pillar X

AI Audit, Monitoring & Continuous Improvement

Continuous algorithmic monitoring for drift, bias, and performance degradation. Structured AI audit programme: quarterly internal plus annual independent. Regulatory ready AI documentation including Model Cards, DPIAs, and Algorithmic Impact Assessments. Structured sunset and decommissioning protocols.
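
Drift monitoring in practice reduces to comparing the live input distribution against the training time baseline. A common score is the Population Stability Index (PSI); the thresholds in the docstring (under 0.1 stable, over 0.25 material drift) are industry rules of thumb rather than regulatory values, and the implementation below is a minimal sketch.

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 material drift.
    """
    lo, hi = min(expected), max(expected)
    # Bin edges derived from the baseline distribution.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values: list) -> list:
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        eps = 1e-6  # floor to avoid log(0) on empty bins
        return [max(c / len(values), eps) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

Scheduling this score per input feature on a quarterly cadence gives the internal audit programme an objective trigger for model revalidation.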

Part II — Risk Architecture

India AI Risk Stratification Matrix™

The AIGCF™ five tier risk taxonomy is calibrated to India's specific constitutional, statutory, and regulatory context. Unlike the EU AI Act's four tier approach, the India matrix incorporates a dedicated Significant Risk tier reflecting the DPDPA's Significant Data Fiduciary architecture — a governance escalation mechanism without parallel in any other national framework.

PROHIBITED — Tier 0 — Absolute Ban
Examples

Real time biometric mass surveillance; social scoring systems by state or private entities; subliminal manipulation targeting vulnerable groups; AI exploiting caste, religion, or disability to deny essential services; autonomous lethal weapons systems

Governance Obligations

Absolute prohibition. No deployment under any circumstances. Immediate cessation if currently in operation. Board level accountability. Legal and regulatory reporting obligation.

Regulatory Trigger

Constitutional Violation / Criminal Liability

HIGH RISK — Tier 1 — Mandated Controls
Examples

AI in credit scoring and lending decisions; recruitment and HR screening AI; healthcare clinical decision support; criminal justice risk assessment; educational evaluation AI; critical infrastructure management

Governance Obligations

Mandatory DPIA before deployment. DPO sign off required. Human in the loop mandatory. Bias audit pre launch and annually. Explainability documentation. Board approval. Annual independent audit.

Regulatory Trigger

DPDPA SDF Risk / RBI / SEBI / IRDAI — Rs.150 to 250 Cr

SIGNIFICANT — Tier 2 — Enhanced Controls
Examples

AI driven product pricing at scale; employee monitoring AI; marketing AI using sensitive personal data; fraud detection AI with automated blocking; insurance underwriting AI; content moderation systems

Governance Obligations

DPIA recommended (mandatory if SDF). DPO consultation required. Human review for adverse decisions. Bias assessment every six months. Opt out mechanisms mandatory. Quarterly internal audit.

Regulatory Trigger

DPDPA Section 10 / NITI Aayog Principles — Rs.50 to 150 Cr

LIMITED RISK — Tier 3 — Transparency Controls
Examples

Chatbots and conversational AI; AI generated content with disclosure; recommendation engines without significant individual impact; AI assisted document processing; predictive text and autocomplete

Governance Obligations

Mandatory disclosure that the system is AI or automated. Basic transparency notice. User opt out available. Annual bias review. Standard data protection compliance. DPDPA consent compliance.

Regulatory Trigger

DPDPA Sections 4 to 7 / IT Act

MINIMAL RISK — Tier 4 — Best Practice
Examples

Spam filters; AI powered search; grammar and spell check tools; AI in video games; weather forecasting AI; inventory management systems; maintenance scheduling AI; internal productivity tools

Governance Obligations

Best practice documentation maintained. Standard security controls. Basic data protection compliance where personal data is involved. Voluntary AI quality standards. Periodic performance review.

Regulatory Trigger

Best Practice / Voluntary Standards

Critical Classification Note — Generative AI

Generative AI does not occupy a single risk tier. Its classification depends entirely on deployment context. A generative AI used for internal drafting assistance is Minimal Risk. The same model generating credit assessment narratives, legal advice, or medical diagnoses is High Risk. Indian companies deploying GenAI must conduct use case specific risk classification — not system level classification. Each deployment of the same underlying model may attract a different tier and a different set of mandatory governance controls.
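
The use case specific classification mandated above can be encoded as a first pass screening function over deployment attributes. The decision rules below are distilled from the tier descriptions in this matrix purely for illustration; real classification is a legal judgment under counsel's review, not a lookup.

```python
def classify_genai_use_case(consequential_decision: bool,
                            processes_personal_data: bool,
                            user_facing: bool) -> str:
    # Screening order mirrors the matrix: the highest applicable tier wins.
    if consequential_decision:       # credit, employment, medical, legal outputs
        return "Tier 1 (High Risk)"
    if processes_personal_data:      # pricing, profiling, monitoring at scale
        return "Tier 2 (Significant)"
    if user_facing:                  # chatbots, disclosed AI generated content
        return "Tier 3 (Limited)"
    return "Tier 4 (Minimal)"        # internal drafting and productivity use
```

Note that the same underlying model yields different answers per deployment: internal drafting screens as Tier 4, while the identical model producing credit narratives screens as Tier 1.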

Part II — Lifecycle Governance

AI Lifecycle Governance — Seven Stages

AI governance is not a point in time compliance event. It is a continuous obligation across the entire AI lifecycle — from the initial decision to build or procure an AI system through to its eventual decommissioning. AIGCF™ maps specific governance obligations to each of seven lifecycle stages, creating a comprehensive cradle to grave accountability framework.

01

Ideation & Use Case Definition

AI use case registry entry. Constitutional rights screen. Initial risk classification. Stakeholder impact scoping. DPDPA lawful basis analysis. Regulatory applicability mapping by sector.

02

Data Collection & Preparation

Consent / lawful basis verification for each data source. Data minimisation audit. Training data bias audit and documentation. Data provenance and lineage records. Sub processor DPAs for data vendors.

03

Model Development & Training

Model Card creation and maintenance. Fairness testing across protected characteristic groups. Adversarial robustness testing. Privacy preserving training techniques. Model version control.

04

Testing & Validation

DPIA completion (mandatory for Tier 1 and 2). Independent model validation. Bias audit report signed off. Performance benchmarking. Explainability testing. DPO sign off. Board approval for Tier 1.

05

Deployment & Integration

Production monitoring infrastructure. User disclosure notices published. Contestation mechanism operationalised. Kill switch configured and tested. CERT-In registration if applicable.

06

Operations & Monitoring

Continuous performance and drift monitoring. Quarterly bias review. AI incident logging and response. Annual independent audit. Board reporting on AI governance. Rights request fulfilment.

07

Decommissioning & Sunset

Certified data deletion from model weights. User notification of service discontinuation. Contractual obligations fulfilled. Audit trail preservation. Regulatory notification where applicable.

Part II — Accountability Architecture

The AI Accountability Pyramid™

Responsibility for AI governance cannot reside in a single role or department. AIGCF™ establishes a five tier accountability pyramid that distributes governance obligations across the entire organisational structure, with clearly defined duties at each level and explicit personal liability implications under Indian law.

Board of Directors

Ultimate accountability. Approves AI Governance Policy. Reviews high risk AI deployment decisions. Receives biannual AI governance report. Signs off on SDF classification response strategy. Under Companies Act 2013 and SEBI LODR, directors who authorise AI governance violations may face personal liability.

CEO / C Suite

Operational accountability. CEO owns AI governance culture and tone from the top. CTO/CISO owns AI security architecture. CHRO owns AI in employment decisions. CFO owns AI in financial decision making. Escalation to Board for all Tier 1 AI deployments and material incidents.

Chief AI Officer / DPO / RAO

Governance architecture. Chief AI Officer develops, maintains, and enforces AIGCF implementation. Data Protection Officer provides sign off on all high risk AI DPIAs. Responsible AI Officer chairs the AI Ethics Committee and coordinates regulatory engagement.

Product & Legal Teams

Implementation accountability. Product owners are responsible for risk classification, DPIA completion, and compliance posture of each AI system. Legal counsel advises on regulatory obligations. Privacy teams conduct DPIAs and manage consent architecture.

Engineering & Data Science

Technical accountability. Engineers implement fairness, explainability, and security controls by design. Data scientists document training data, model assumptions, and limitations in Model Cards. All technical staff receive responsible AI development training.

Personal Liability Warning

Under DPDPA 2023, the Data Protection Board of India may investigate individuals within an organisation — not merely the organisation itself. Under the Companies Act 2013, directors who authorise or knowingly permit data protection violations face personal liability. For listed companies, SEBI LODR obligations require disclosure of material regulatory actions. AI governance failure is increasingly a personal risk for company officers, not merely a corporate fine. Directors and officers of AI deploying companies should ensure their D&O insurance specifically covers AI governance liability.

Ready to Implement AIGCF™?

Contact AMLEGALS to discuss how the India AI Governance & Compliance Framework can be tailored and implemented for your organisation.

DISCLAIMER: The AMLEGALS India AI Governance & Compliance Framework™ (AIGCF™) constitutes legal analysis, regulatory mapping, and practitioner guidance for informational and educational purposes only. It does not constitute legal advice for any specific matter or organisation. Organisations should engage qualified legal counsel for advice specific to their regulatory obligations, business circumstances, and AI deployment context.

The AIGCF™ is a proprietary methodology and all framework terms, axioms, indices, and structures are the intellectual property of AMLEGALS. Designed by Anandaday Misshra. © 2025 AMLEGALS. All Rights Reserved. amlegalsai.com