India Parliament
Sovereign AI Architecture · Republic of India

The Indian Code.

Architecting the Global South's most robust statutory barrier between Algorithmic Innovation and Civil Liberty.

Ethics Bill · 2025 Framework
MeitY · AI Advisories
PbD · Privacy-by-Design
IndiaAI · National Mission
Constitutional Pillars

Statutory Domains.

India's AI strategy is partitioned into five critical domains, each governed by specific ex-ante mandates and post-market oversight bodies.

Regulatory Sector 01

Data Privacy & Sovereignty

Active Instruments
DPDP Act 2023 (Section 10)
IndiaAI Mission Data Stack
Sectoral Health Data Policy
Definition of Scope

Encompasses the Data Principal's control over personal data, localized storage mandates, and restrictions on cross-border data and compute transfers intended to prevent digital colonialism.

Regulatory Sector 02

Algorithmic Transparency

Active Instruments
MeitY AI Advisory 2024
IT Rules 2021 (AI Amendments)
E-commerce Dark Pattern Guidelines
Definition of Scope

Mandates for the explicit disclosure of synthetic content, watermarking of deepfakes, and ensuring users are notified whenever they are interacting with an AI system.

Regulatory Sector 03

Liability & Accountability

Active Instruments
AI Ethics Bill 2025 (Chapter 4)
Digital India Act (Proposed)
RBI Algorithmic Credit Guidelines
Definition of Scope

Defines the legal attribution of harm generated by autonomous agents and establishes the vicarious liability of Significant Data Fiduciaries.

Regulatory Sector 04

Systemic Safety & Robustness

Active Instruments
National AI Authority Vetting Protocol
Safe AI Labs (SAIL) Mandates
Critical Infrastructure Protection (NCIIPC)
Definition of Scope

Ex-ante vetting for critical-risk neural systems, mandatory red-teaming for frontier models, and infrastructure-level security protocols.

Regulatory Sector 05

Bias & Demographic Fairness

Active Instruments
Constitutional Fairness Doctrine (Art 14)
SDF Independent Audit Mandate
NITI Aayog Responsible AI Principles
Definition of Scope

Requires the auditing of training sets to ensure linguistic and demographic representation, preventing algorithmic exclusion in public and private sectors.

Vertical Intelligence

Sectoral Benchmarks.

Access jurisdictional deep-dives for industry-specific AI compliance and regulatory oversight.

Primary Statute

The 2025 Bill.

A horizontal mandate establishing the National AI Authority and strict ex-ante vetting for 'Critical-Risk' neural systems.

Section 5

Risk Classification

A 4-tier risk model adapted for Indian vernacular training sets and demographic diversity.

Section 18

National AI Authority

Dedicated statutory body for model registration, safety audits, and cross-border alignment.

Section 32

Sovereign Sandbox

Regulatory relief for domestic startups to ensure innovation is not stifled by compliance.
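The Section 5 risk ladder lends itself to a simple representation. The sketch below is illustrative only: the summary above names only the 'Critical-Risk' tier, so the other three tier names are placeholders, and the vetting rule mirrors the ex-ante mandate described for critical-risk systems.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative 4-tier ladder. Only CRITICAL is named in the Bill's
    summary here; the other tier names are placeholders."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    CRITICAL = 4

def requires_ex_ante_vetting(tier: RiskTier) -> bool:
    # In this sketch, only critical-risk systems face ex-ante vetting
    # by the National AI Authority before deployment.
    return tier is RiskTier.CRITICAL

print(requires_ex_ante_vetting(RiskTier.CRITICAL))
print(requires_ex_ante_vetting(RiskTier.LIMITED))
```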

PbD
Privacy-by-Design Mandate

Algorithmic Sovereignty.

The DPDP Act, 2023 (Section 10) mandates that AI systems be architected with privacy protections built in from the outset. Significant Data Fiduciaries (SDFs) must ensure that training sets do not leak personally identifiable information (PII).

Neural Vetting

Independent audits are required to verify that model weights do not unintentionally reconstruct sensitive personal data points.
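As an illustration of one minimal check such an audit might run, the sketch below screens model outputs for Indian identifier formats. The Aadhaar-style and PAN-style patterns are assumptions chosen for demonstration; real audits rely on far stronger membership-inference and extraction tests, not regex screens.

```python
import re

# Illustrative PII patterns: Aadhaar-style 12-digit numbers and
# PAN-style identifiers (5 letters, 4 digits, 1 letter).
PII_PATTERNS = [
    re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),  # Aadhaar-like
    re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),     # PAN-like
]

def audit_outputs(outputs: list[str]) -> list[str]:
    """Flag model outputs that appear to reconstruct personal identifiers."""
    return [text for text in outputs
            if any(p.search(text) for p in PII_PATTERNS)]

samples = [
    "The capital of India is New Delhi.",
    "My Aadhaar is 1234 5678 9012.",
]
print(audit_outputs(samples))  # flags only the second sample
```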

Consent Managers

Integration with the DEPA framework ensures that every data item in the model's training pipeline is covered by a revocable statutory consent artifact.
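A minimal sketch of a revocable per-item consent record; the field names are assumptions, and DEPA's actual electronic consent artifact is a signed, standardised document with considerably more structure.

```python
from dataclasses import dataclass

@dataclass
class ConsentArtifact:
    """Illustrative revocable consent record for one training-data item."""
    data_id: str       # identifier of the data item
    principal_id: str  # the Data Principal who granted consent
    purpose: str       # stated purpose, e.g. "model-training"
    revoked: bool = False

    def revoke(self) -> None:
        # Revocation must remove the item from future training runs.
        self.revoked = True

def usable_for_training(pipeline: list[ConsentArtifact]) -> list[str]:
    """Only items whose consent is still live may enter training."""
    return [a.data_id for a in pipeline if not a.revoked]

a1 = ConsentArtifact("rec-001", "dp-42", "model-training")
a2 = ConsentArtifact("rec-002", "dp-42", "model-training")
a2.revoke()
print(usable_for_training([a1, a2]))  # only rec-001 remains usable
```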

MeitY Statutory Archive

Guideline Record

REC_AI
Parliament of India / MeitY · 2025-01-15

AI Ethics & Regulation Bill, 2025 (Proposed)

The definitive horizontal statute for AI. Establishes the National AI Authority and a 4-tier risk classification system, mandating ex-ante vetting for critical-risk neural architectures.

Section 12

Mandatory Registration of Frontier Models with the National AI Authority.

Section 24

Algorithmic Impact Assessments for High-Risk Systems.

Section 45

Establishment of the AI Regulatory Sandbox for MSMEs.

REC_ME
Ministry of Electronics & IT · 2024-03-01

MeitY AI Advisory (Labeling & Safety)

Mandates that 'under-testing' or unreliable AI models must be explicitly labeled. Imposes strict provenance markers for synthetic content (Deepfakes) to ensure election integrity.

Para 3(b)

Consent Popup requirement for under-tested AI models.

Para 4

Metadata labeling for synthetic content.
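A minimal sketch of attaching a provenance record to synthetic content. The field names are assumptions; production systems would use a signed provenance standard such as C2PA rather than a bare dictionary.

```python
import hashlib

def label_synthetic(content: bytes, generator: str) -> dict:
    """Attach an illustrative provenance record to synthetic content."""
    return {
        "synthetic": True,                 # explicit synthetic-content flag
        "generator": generator,            # which model produced it
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def is_labeled_synthetic(record: dict) -> bool:
    """Check the flag a downstream platform would inspect."""
    return record.get("synthetic") is True

record = label_synthetic(b"<deepfake video bytes>", "demo-model-v1")
print(is_labeled_synthetic(record))
```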

REC_DP
MeitY / Parliament · 2023-08-11

DPDP Act, 2023: Algorithmic PbD

Goes beyond general privacy obligations to require 'Privacy-by-Design' (PbD) in AI. Section 10 mandates that Significant Data Fiduciaries (SDFs) undergo independent audits of their neural training sets.

Section 8

Data Quality and Accuracy in Training Sets.

Section 10(2)

Independent Data Auditor for SDF Algorithmic Verification.