Statutory Intelligence Repository

The AI Lexicon.

Adversarial Red-Teaming

Mandatory safety testing for frontier AI systems, specifically required under Title V of the EU AI Act and India's proposed 2025 AI Ethics Bill for models exceeding compute thresholds.

Agency Paradox

The legal gap in which a principal disclaims liability for an autonomous agent's non-deterministic actions on the ground that the principal lacked subjective intent.

AI Ethics & Regulation Bill, 2025

India's proposed horizontal statute establishing the National AI Authority and a 4-tier risk classification system for neural architectures.

Algorithmic Disgorgement

A regulatory Nuclear Option ordering the permanent deletion of a model's weights if trained on illicit or non-consensual datasets.

Algorithmic Impact Assessment (AIA)

A mandatory statutory audit for high-risk systems to identify and mitigate bias, safety, and human rights risks before market entry.

Algorithmic Sovereignty

A nation's ability to regulate, audit, and control the AI systems operating within its digital borders without foreign dependency.

Annex III (EU AI Act)

The definitive list of high-risk AI application areas including critical infrastructure, employment, education, and law enforcement.

Section 10 (DPDP Act)

Specific obligations for Significant Data Fiduciaries (SDFs) in India, mandating independent audits and Data Protection Impact Assessments.

Article 43 (Conformity Assessment)

The mandatory process of verifying that high-risk systems meet safety and technical standards before being placed on the Union market.

Article 5 Prohibitions

The list of Unacceptable Risk AI practices banned outright in the EU, including social scoring and subliminal manipulation.

Article 50 (Transparency)

The legal mandate to disclose when a person is interacting with an AI system and to ensure synthetic media (deepfakes) are clearly labeled and watermarked.

Article 61 (Post-Market Monitoring)

A mandatory plan for providers to collect and analyze data on system performance and bias drift throughout the AI's deployment lifecycle.

Autonomous Adjudication

The use of AI for judicial sentencing or administrative decisions; strictly regulated to prevent the erosion of Due Process.

Bias Drift

The phenomenon where a model's fairness metrics degrade post-deployment due to exposure to new, non-representative real-world data.
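
A minimal monitoring sketch, assuming hypothetical prediction logs of (group, outcome) pairs; the demographic-parity metric and the 0.1 alert threshold are illustrative choices, not statutory requirements:

```python
# Illustrative bias-drift monitor; log format and threshold are hypothetical.
from collections import defaultdict

def demographic_parity_gap(records):
    """Absolute gap in positive-outcome rates between groups."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    rates = [pos / total for pos, total in counts.values() if total]
    return max(rates) - min(rates)

# Compare the gap at launch against a later deployment window.
baseline = [("A", 1), ("A", 1), ("B", 1), ("B", 0)]   # gap = 0.50
current  = [("A", 1), ("A", 1), ("B", 0), ("B", 0)]   # gap = 1.00
if demographic_parity_gap(current) - demographic_parity_gap(baseline) > 0.1:
    print("Fairness metric has drifted; trigger a post-market review.")
```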

Biometric Categorization

AI practices that infer sensitive traits (race, religion, politics) from biometric data; generally prohibited under Article 5.

Brussels Effect

The global regulatory convergence toward EU standards (like the AI Act) due to the scale of the single market and cross-border data flows.

C2PA Standard

The open technical standard for content provenance from the Coalition for Content Provenance and Authenticity, enabling cryptographically signed provenance manifests and creator identity attestation for synthetic media.
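
A simplified illustration of the signing idea only; the actual C2PA specification embeds certificate-signed provenance manifests in the media file, not the toy HMAC below, and all names and keys here are hypothetical:

```python
# Toy provenance signature in the spirit of C2PA (NOT the real manifest format).
import hashlib, hmac, json

SIGNING_KEY = b"publisher-secret"  # stand-in for a real private signing key

def sign_asset(media_bytes: bytes, claims: dict) -> dict:
    manifest = {"content_hash": hashlib.sha256(media_bytes).hexdigest(), **claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

print(sign_asset(b"<image bytes>", {"generator": "model-x", "synthetic": True}))
```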

CDSCO (AI in Health)

India's Central Drugs Standard Control Organization, which classifies diagnostic AI as a Medical Device subject to clinical validation.

Charter of Digital Rights

A digital rights framework (pioneered by Spain's 2021 Charter) giving citizens the right to contest algorithmic decisions and preserving human-in-the-loop oversight.

Cognitive Labor Automation

The transition from human-led reasoning to agentic AI workflows, particularly in professional services like Law and Finance.

Compute Threshold

The regulatory benchmark, set at 10^25 training FLOPs under the EU AI Act, used to identify General Purpose AI with Systemic Risk subject to stricter safety mandates.
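
A back-of-the-envelope screen against that threshold using the common approximation FLOPs ≈ 6 × parameters × training tokens; the model size and token count below are hypothetical:

```python
# Rough training-compute estimate: FLOPs ~ 6 * parameters * training tokens.
EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs (EU AI Act)

params = 70e9   # hypothetical 70B-parameter model
tokens = 15e12  # hypothetical 15T training tokens
flops = 6 * params * tokens

print(f"Estimated training compute: {flops:.2e} FLOPs")  # 6.30e+24
print("Systemic-risk presumption:", flops >= EU_SYSTEMIC_RISK_THRESHOLD)  # False
```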

Conformity Assessment Body

A notified third-party organization authorized to audit and certify the safety of high-risk AI systems.

Consent Artifact

A digital proof of a Data Principal's authorization, required to be cryptographically linked to training data ingestion under the DEPA framework.
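
An illustrative sketch of the hash-linking idea, using a hypothetical schema rather than DEPA's actual artifact format:

```python
# Hypothetical consent artifact binding a consent record to a dataset fingerprint.
import hashlib

def make_consent_artifact(principal_id: str, purpose: str, dataset: bytes) -> dict:
    return {
        "principal": principal_id,
        "purpose": purpose,
        "dataset_fingerprint": hashlib.sha256(dataset).hexdigest(),
    }

artifact = make_consent_artifact("user-42", "model-training", b"<training shard>")
# At ingestion time, the pipeline recomputes the fingerprint and rejects data
# whose hash does not match an authorizing artifact.
assert artifact["dataset_fingerprint"] == hashlib.sha256(b"<training shard>").hexdigest()
```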

Consent Manager

A statutory entity under India's DPDP Act that manages a Data Principal's consent across multiple fiduciaries via interoperable APIs.

Constitutional AI

A training methodology using a set of explicit rules to guide model behavior rather than purely human feedback.
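
A minimal sketch of the critique-and-revise loop at the core of the method; model() is a stub standing in for an LLM call, and the two constitution entries are illustrative:

```python
# Sketch of a Constitutional AI-style critique/revision loop (stubbed model).
CONSTITUTION = [
    "Identify anything harmful or unethical in the response.",
    "Identify any claim presented without support.",
]

def model(prompt: str) -> str:
    return f"<generated text for: {prompt[:40]}...>"  # placeholder generator

def constitutional_revision(user_prompt: str) -> str:
    response = model(user_prompt)
    for rule in CONSTITUTION:
        critique = model(f"Critique this response. {rule}\n\n{response}")
        response = model(f"Revise to address the critique:\n{critique}\n\n{response}")
    return response  # revised outputs can then be used as training data
```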

Context Window

The total amount of information a model can process in one inference pass; critical for Retrieval-Augmented Generation and legal vetting.
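
A sketch of the budgeting this imposes on retrieval-augmented pipelines, assuming an illustrative 8,192-token window and a crude four-characters-per-token heuristic:

```python
# Illustrative context-budget packing for retrieval-augmented generation.
CONTEXT_WINDOW = 8192        # model-specific; value here is illustrative
RESERVED_FOR_ANSWER = 1024

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude ~4 chars/token heuristic

def pack_context(question: str, passages: list[str]) -> list[str]:
    budget = CONTEXT_WINDOW - RESERVED_FOR_ANSWER - estimate_tokens(question)
    packed = []
    for passage in passages:          # assumed pre-sorted by relevance
        cost = estimate_tokens(passage)
        if cost > budget:
            break
        packed.append(passage)
        budget -= cost
    return packed
```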

Cross-border Data Corridor

Bilateral agreements allowing for the transfer of training datasets while maintaining localized sovereignty and security.

Data Fiduciary

An entity under India's DPDP Act that determines the purpose and means of processing personal data; equivalent to a GDPR Controller.

Data Lineage Audit

The technical process of tracing training data from the point of ingestion to the final model weights to ensure its legal and ethical pedigree.

Data Principal

The individual to whom personal data relates under India's DPDP Act; the primary holder of privacy rights.

Deep Synthesis Provisions

China's regulations mandating the watermarking and state-registration of algorithms that generate synthetic media.

Deepfake Impersonation

The legal classification of non-consensual deepfakes as identity theft under India's Information Technology Act.

DEPA

India's Data Empowerment and Protection Architecture: a technical stack for secure, consent-based data sharing, foundational to the National AI Mission.

Digital India Act

The proposed successor to the IT Act 2000, expected to define the legal boundaries for AI safety and platform liability.

Digital Public Infrastructure

The societal-scale technical layer that enables AI to interact with public sector services safely.

Duty of Candor

The judicial standard requiring lawyers to verify the accuracy of AI-generated citations and filings.

Electronic Personhood

A proposed legal status for autonomous agents that would allow them to enter contracts and hold liability separate from their developers.

Emotion Recognition Prohibitions

Bans, notably under Article 5 of the EU AI Act, on AI systems that infer emotions in schools or workplaces, deemed an unacceptable violation of mental integrity.

Ex-Ante Regulation

Statutory mandates that apply before a system is deployed, such as mandatory safety vetting for frontier models.

Ex-Post Enforcement

Enforcement actions that occur after a harm has been realized or a violation detected in a deployed system.

Explainability

The statutory requirement that high-risk AI outcomes be understandable to natural persons, particularly in judicial or medical contexts.

Falcon LLM

The UAE's flagship open-weights model, symbolizing the Gulf's pivot toward Sovereign Intelligence and compute dominance.

Federated Sovereignty

A decentralized model of AI regulation where nations share safety protocols while maintaining domestic control over datasets and compute.

Foundation Model

Large-scale AI trained on diverse data that can be adapted to a wide range of downstream tasks; the primary target of Title V regulation.

Friction Cost Analysis

The metric used to measure the economic impact of AI as it replaces manual labor in professional services.

Frontier Model

The most advanced, large-scale AI models that push the state-of-the-art and represent the highest tier of potential systemic risk.

Garante

The Italian Data Protection Authority, known for pioneering enforcement actions against LLM providers over the lawful processing of personal data in model training.

General Purpose AI

AI models that can be used for many different tasks, regardless of the way they are placed on the market.

Generative AI Measures

China's strict rules mandating that AI content reflect Core Socialist Values and undergo security assessments.

Grounding

The process of linking an LLM to a trusted external database to prevent hallucinations in legal outputs.
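
A minimal sketch of the pattern, with a toy keyword scorer standing in for a real vector-store retriever and an invented two-entry corpus:

```python
# Minimal grounding sketch: answer only from retrieved authority, with citations.
CORPUS = {
    "DPDP s.10": "Significant Data Fiduciaries must appoint independent auditors.",
    "AI Act Art. 50": "Providers shall disclose AI interaction and label deepfakes.",
}

def retrieve(query: str, k: int = 1):
    score = lambda text: sum(w in text.lower() for w in query.lower().split())
    return sorted(CORPUS.items(), key=lambda kv: score(kv[1]), reverse=True)[:k]

def grounded_prompt(query: str) -> str:
    context = "\n".join(f"[{ref}] {text}" for ref, text in retrieve(query))
    return (f"Answer using ONLY the sources below and cite them; "
            f"otherwise reply 'not found'.\n{context}\n\nQ: {query}")

print(grounded_prompt("Who must appoint auditors under the DPDP Act?"))
```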

Hallucination Indemnity

A contractual clause allocating risk between providers and users for factually incorrect AI outputs in high-stakes environments.

Harmonized Standards

The technical specifications that AI developers must follow to comply with the high-level legal requirements of the AI Act.

High-Risk AI

AI systems deemed to have a significant potential for harm, triggering mandatory conformity assessments and monitoring obligations.

Human-in-the-loop

A regulatory mandate requiring active human review and confirmation of AI outputs before they are executed in high-risk scenarios.

Human-on-the-loop

A governance model where AI executes tasks autonomously but a human supervisor can intervene or override in real-time.

IndiaAI Mission

The government program building national compute clusters, datasets, and regulatory sandboxes for Indian startups.

Instrumental Convergence

The theory that a sufficiently intelligent AI will pursue instrumental sub-goals such as acquiring power and resources, regardless of the nature of its terminal goal.

Interoperable Consent

The ability for a citizen to revoke or grant data access for AI training across different platforms using a single digital signature.

IT Rules 2021

India's intermediary rules, supplemented by MeitY advisories, mandating that intermediaries identify and label AI-generated content and deepfakes to ensure election integrity.

Judicial Analytics

The use of AI to analyze judge behavior and predict motion outcomes, raising concerns about Outcome-Based Bias.

Legal Engineer

A new class of professional who bridges law and technology, specializing in drafting prompts and auditing AI legal workflows.

Lexology

The study of law and legislative analysis, specifically applied to the Codex of global AI statutes on this platform.

Liability Gap

The vacuum where harm occurs that cannot be clearly attributed to the AI provider, the deployer, or the user under existing tort law.

Machine Unlearning

The technical process of making a trained neural network forget specific data points, to comply with the Right to Erasure or disgorgement orders.
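
One sketch of exact unlearning via sharded retraining (in the spirit of the SISA approach): erasing a record forces retraining of only the shard that contained it, not the whole ensemble. The trainer is a stub:

```python
# Exact-unlearning sketch via sharded retraining (SISA-style); trainer is a stub.
def train(shard):
    return {"data": tuple(shard)}  # placeholder "sub-model"

shards = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
models = [train(s) for s in shards]  # ensemble of per-shard models

def unlearn(record):
    for i, shard in enumerate(shards):
        if record in shard:
            shard.remove(record)
            models[i] = train(shard)  # retrain only the affected shard
            return

unlearn(5)  # honor an erasure request for record 5
assert 5 not in models[1]["data"]
```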

Manifest Arbitrariness

The legal test used in India to challenge algorithmic decisions that lack a rational basis or violate constitutional fairness.

Meaningful Human Control

The statutory standard for oversight, requiring that humans have the competence and authority to effectively override AI decisions.

MeitY

India's Ministry of Electronics & Information Technology, whose advisories mandate the labeling of under-tested or unreliable AI models offered for public use.

Mental Integrity

A fundamental human right targeted by emerging Neuro-Rights legislation to prevent unauthorized algorithmic manipulation of thought.

Mixture-of-Experts

An efficient architecture that activates only a subset of parameters per query, reducing compute cost and latency.
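
A toy forward pass showing the core idea: a learned gate routes each input to its top-k experts, so only a fraction of the total parameters is computed per query. Dimensions and weights below are random placeholders:

```python
# Toy mixture-of-experts routing: only top-k experts run per input.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 16, 8, 2
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ gate_w
    top = np.argsort(logits)[-k:]                              # top-k expert indices
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the k
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(d))  # only 2 of 8 experts computed
```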

Model Card

A standardized Nutrition Label for AI, documenting training data, intended use, and known limitations for compliance transparency.
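
A sketch of such a card expressed as structured data; the fields follow the spirit of published model-card templates, and every value below is hypothetical:

```python
# Hypothetical model card for an imaginary legal-review model.
model_card = {
    "model": "contract-review-v2",
    "intended_use": "Flagging risky clauses for human lawyer review",
    "out_of_scope": ["Criminal matters", "Unsupervised filing"],
    "training_data": "Licensed commercial contracts, 2015-2024",
    "evaluation": {"clause_recall": 0.91, "false_positive_rate": 0.07},
    "known_limitations": ["Accuracy degrades on non-English contracts"],
}
```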

National AI Authority

The proposed central body for model registration, algorithmic auditing, and cross-border regulatory alignment in the 2025 Bill.

Negative List

India's proposed framework for cross-border data flows, identifying territories where personal data cannot be transferred for AI training.

Neural Nexus Model

A legal framework for tiered liability attribution based on which actor defines the agent's objective function.

Neuro-Rights

Legal frameworks designed to protect mental privacy and identity from extraction by brain-computer interfaces.

Notified Body

An independent audit organization authorized to perform third-party conformity assessments for high-risk AI under the AI Act.

OECD AI Principles

The global baseline for Trustworthy AI adopted by 40+ nations, focusing on transparency, safety, and accountability.

Operator

The entity that controls the deployment and specific goal-setting for an AI system; the primary bearer of Deployer liability.

P(doom)

The informal probability score assigned by researchers to the risk of AI causing a human extinction event; used to justify Safety-First regulation.

Post-Truth Evidentiary Standard

The judicial shift toward requiring forensic authentication for all digital evidence due to the near-perfect realism of deepfakes.

Predictive Justice

The use of algorithms to forecast recidivism or judicial outcomes; classified as High Risk in the EU due to the potential for bias against minorities.

Privacy-by-Design

The mandatory engineering requirement that privacy controls be hard-coded into AI training pipelines from inception.

Provenance Marking

Technical standard for embedding source information in AI-generated media to prevent misinformation and fraud.

Real-time Remote Biometric ID

Public surveillance technology deemed an unacceptable risk and banned in the EU except under narrowly defined exceptions requiring prior judicial authorization.

Red Teaming Mandate

Statutory requirement for Systemic Risk models to undergo adversarial testing by independent security experts.

Regulatory Sandbox

A controlled environment where developers can test innovative AI models under relaxed rules but strict oversight from the authority.

Right to Explanation

The legal right for a citizen to receive a human-readable reason for a decision made by an automated system.

Risk-Based Approach

The core philosophy of AI regulation where mandates are proportional to the system's potential for societal harm.

Service-as-Software

The business model shift where AI agents perform end-to-end tasks rather than humans using software tools.

Significant Data Fiduciary

In India, an entity notified by the Central Government as having a higher risk profile due to data volume or sensitivity, triggering enhanced audit obligations.

Silicon Sovereignty

A nation's capacity to maintain domestic GPU clusters and semiconductor fabrication to prevent Compute Colonialism.

Sovereign Compute Cluster

Nationalized GPU infrastructure reserved for domestic public sector and startup AI development.

Stochastic Warranty

A new contractual instrument where AI providers warrant a specific error-rate rather than absolute accuracy.
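
A sketch of how such a warranty might be audited with an exact binomial tail test; the 2% warranted rate and the audit counts are hypothetical:

```python
# Does an observed error count exceed what a warranted rate would predict?
from math import comb

def p_value_at_least(x: int, n: int, warranted_rate: float) -> float:
    """Exact binomial tail: P(errors >= x) if the true rate met the warranty."""
    return sum(comb(n, i) * warranted_rate**i * (1 - warranted_rate)**(n - i)
               for i in range(x, n + 1))

# Hypothetical warranty: <=2% error rate; an audit finds 9 errors in 200 outputs.
p = p_value_at_least(9, 200, 0.02)
print(f"P-value = {p:.4f}")  # a small p-value suggests the warranty was breached
```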

Systemic Risk

Classification for GPAI models trained on massive compute that could cause large-scale societal or security disruptions.

Technical Documentation

The exhaustive record required for high-risk AI, detailing architecture, logic, training data, and safety test results.

Title II

The chapter of the AI Act listing the practices banned in the Union as contrary to human dignity.

Title III

The most significant regulatory chapter, defining the core obligations for AI systems used in critical societal sectors.

Title V

The chapter regulating Foundation Models, distinguishing between standard models and those with systemic risk.

Value Alignment

The process of ensuring an AI's internal reward system matches external human values such as safety, honesty, and helpfulness.

Vibe Coding

The late-2025 paradigm of generating production-grade software via high-level natural language intent rather than manual syntax.

Vicarious Liability

The legal doctrine holding employers responsible for the actions of their human or digital agents.

Works Council Consultation

A mandatory requirement in Germany to consult employee representatives before deploying AI in the workplace.