DATA, AI & DPDPA
The Digital Personal Data Protection Act 2023 establishes India's first comprehensive data protection framework. For enterprises deploying artificial intelligence systems that process personal data, the Act introduces obligations that fundamentally reshape how AI initiatives must be structured and operated.
EXECUTIVE SUMMARY
The relationship between artificial intelligence and data protection law is inherently complex. AI systems derive their capabilities from data, often vast quantities of it. Where that data includes information about identifiable individuals, the processing activities that enable machine learning, inference, and automated decision making fall within the regulatory perimeter of data protection legislation.
The Digital Personal Data Protection Act 2023 arrived at a moment when AI deployment was accelerating across Indian industry. The Act does not mention artificial intelligence explicitly. Its provisions are technology neutral, applying to any processing of digital personal data regardless of the means employed. Yet the practical effect on AI systems is profound. The consent requirements, purpose limitation principles, and data principal rights enshrined in the Act impose constraints that AI developers and deployers cannot ignore.
The central challenge for AI practitioners is reconciling the operational realities of machine learning with the DPDPA's emphasis on transparency, specificity, and individual control. Models trained on personal data without appropriate legal basis face regulatory risk. Systems making consequential decisions about individuals must accommodate rights of access, correction, and explanation.
Establishing Legal Basis for Processing
1.1 The Consent Framework
The DPDPA positions consent as the primary legal basis for processing personal data. Section 6 requires that consent be free, specific, informed, unconditional, and unambiguous, given through a clear affirmative action. For AI systems, meeting these requirements demands careful attention to how consent is solicited and documented.
The specificity requirement is particularly challenging. Consent must be sought for each specified purpose, and processing beyond those purposes requires fresh consent. AI systems that contemplate using personal data for training, inference, personalisation, and improvement may need to articulate each of these purposes distinctly and obtain corresponding consent. Blanket authorisations for undefined future uses are unlikely to satisfy the Act's requirements.
Consent Architecture for AI Systems
1. Clearly enumerate each processing purpose, including training, inference, and model improvement
2. Explain how personal data contributes to AI system functionality in accessible language
3. Provide granular consent options allowing data principals to opt into some uses but not others
4. Implement technical mechanisms to respect consent boundaries throughout the data lifecycle
5. Document consent with a sufficient audit trail for regulatory accountability
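The architecture above can be illustrated with a minimal sketch of a per-principal consent record that supports granular, purpose-level opt-in and an append-only audit trail. The purpose names, class, and method names here are illustrative assumptions, not a prescribed schema; a production system would define its own purpose taxonomy and persist records durably.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative purpose taxonomy; a real deployment defines its own.
PURPOSES = {"training", "inference", "personalisation", "model_improvement"}

@dataclass
class ConsentRecord:
    """Per-principal consent state with an append-only audit trail."""
    principal_id: str
    granted: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def _log(self, action: str, purpose: str) -> None:
        # Timestamped entries support regulatory accountability (point 5).
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "purpose": purpose,
        })

    def grant(self, purpose: str) -> None:
        if purpose not in PURPOSES:
            raise ValueError(f"unknown purpose: {purpose}")
        self.granted.add(purpose)
        self._log("grant", purpose)

    def withdraw(self, purpose: str) -> None:
        self.granted.discard(purpose)
        self._log("withdraw", purpose)

    def permits(self, purpose: str) -> bool:
        """Gate each processing step on current consent state (point 4)."""
        return purpose in self.granted

# Example: a principal opts into inference but not training (point 3).
record = ConsentRecord("dp-001")
record.grant("inference")
```

Checking `permits()` at every pipeline stage, rather than once at collection, is what keeps consent boundaries enforced throughout the data lifecycle.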
1.2 Legitimate Uses Without Consent
Section 7 of the DPDPA enumerates certain legitimate uses for which personal data may be processed without consent. These include processing for a specified purpose for which the data principal has voluntarily provided her data, compliance with legal obligations, response to medical emergencies, employment related purposes, and processing by the State for subsidies, benefits, services, and certain public interest functions.
For AI applications, the voluntary provision ground may supply a basis for processing where the data principal has provided personal data in the course of engaging a service and the AI system is integral to delivering it. A recommendation engine powering an e-commerce platform might process purchase history under this ground if recommendations are a core feature of the service. However, using the same data to train a general purpose AI model that the enterprise then commercialises separately would likely exceed the purpose for which the data was provided.
Likely Applicable Grounds
- Voluntarily provided data for AI service features
- Legal compliance obligations
- Employment related processing
- Public interest processing by the State
Typically Requires Consent
- Model training on user data
- Profiling for marketing purposes
- Behavioural analytics and prediction
- Third party data sharing for AI
Data Principal Rights in AI Contexts
2.1 Right of Access and Information About Processing
Section 11 of the DPDPA grants data principals the right to obtain from the data fiduciary a summary of personal data being processed and the processing activities undertaken. For AI systems, responding to such requests requires maintaining comprehensive records of how personal data flows through machine learning pipelines and contributes to system outputs.
This is technically challenging. Neural network models encode learned patterns in millions or billions of parameters. A specific individual's data cannot typically be isolated within trained model weights. Explaining to a data principal how their personal data was used to train a model, and what inferences the model draws about them, requires systems designed from the outset to support such explainability. Retrofitting transparency into opaque AI systems is rarely feasible.
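One way to make Section 11 responses tractable is to answer them from processing records rather than from model internals. The sketch below assumes a hypothetical `ProcessingRegister` that logs which datasets and pipeline stages touched a principal's data, so a plain-language summary can be generated on request; the class and field names are illustrative, not drawn from the Act or any library.

```python
from collections import defaultdict

class ProcessingRegister:
    """Logs which pipeline stages touched a principal's data, so an
    access request can be answered from records rather than from
    opaque model weights."""

    def __init__(self):
        # principal_id -> list of processing activity entries
        self._activities = defaultdict(list)

    def record(self, principal_id: str, dataset: str,
               activity: str, model_version: str) -> None:
        self._activities[principal_id].append({
            "dataset": dataset,
            "activity": activity,        # e.g. "training", "inference"
            "model_version": model_version,
        })

    def summary(self, principal_id: str) -> list:
        """Plain-language processing summary for an access request."""
        return [
            f"{a['dataset']} used for {a['activity']} (model {a['model_version']})"
            for a in self._activities[principal_id]
        ]

# Example: two processing events recorded for one principal.
reg = ProcessingRegister()
reg.record("dp-001", "purchase_history", "training", "rec-v3")
reg.record("dp-001", "clickstream", "inference", "rec-v3")
```

The design point is that transparency must be captured at processing time; no comparable summary can be reconstructed afterwards from trained weights alone.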
2.2 Correction and Erasure in Machine Learning
Data principals have the right to correction of inaccurate or misleading personal data and to erasure of data that is no longer necessary. These rights present acute difficulties for machine learning systems. Correcting data in a training dataset does not automatically update model weights that were influenced by the original data. Erasing data from storage does not remove its effect on trained models.
The emerging field of machine unlearning seeks to develop techniques for removing the influence of specific data points from trained models without full retraining. These techniques remain experimental and computationally expensive. Enterprises deploying AI systems must consider whether their model architectures can accommodate erasure requests and build compliance processes accordingly.
A pragmatic approach may involve documenting that erasure from source data has been effected and that the data's influence on model weights will be eliminated when the model is next retrained. The adequacy of such an approach under DPDPA enforcement remains to be tested.
Cross Border Data Transfers for AI
3.1 The Transfer Framework
Section 16 of the DPDPA permits transfers of personal data outside India except to countries that the Central Government may restrict by notification. This permissive approach facilitates international AI collaboration and allows Indian enterprises to use cloud based AI services hosted abroad. However, the government retains the power to restrict transfers to specific jurisdictions, introducing a geopolitical dimension to AI infrastructure decisions.
For AI applications, cross border transfers occur in multiple contexts. Training data may be transmitted to foreign data centres where model development occurs. Inference requests containing personal data may be sent to AI services operated from abroad. Model weights trained on Indian personal data may be deployed on infrastructure located in other jurisdictions. Each of these transfers must be analysed under the DPDPA framework.
3.2 Sectoral Restrictions
Beyond the DPDPA, certain sectors impose additional restrictions on cross border data flows that affect AI deployments. The Reserve Bank of India's data localisation circular requires that payment system data be stored only in India, with processing abroad permitted subject to conditions. The IRDAI has imposed similar requirements for insurance data. The SEBI framework for market infrastructure institutions contains data residency provisions.
Enterprises deploying AI in these regulated sectors must navigate both DPDPA requirements and sectoral rules. A bank wishing to use a foreign AI service for fraud detection must ensure that payment data is not transferred abroad in a manner that violates RBI requirements, even if the transfer would otherwise be permissible under the DPDPA.
Sector Specific Data Localisation Requirements
- Banking: RBI requires payment data storage in India; foreign processing permitted with domestic mirror
- Insurance: IRDAI mandates data residency for policyholder information
- Telecom: DoT requires subscriber data retention within India
- Healthcare: ABDM framework contemplates health data sovereignty requirements
3.3 Contractual Frameworks for AI Services
Where Indian enterprises engage foreign AI service providers, the contractual framework must address DPDPA compliance comprehensively. Data processing agreements should specify the nature of processing activities, security measures, sub-processor engagement conditions, breach notification obligations, and data subject rights facilitation mechanisms.
The contract should also address what happens when the data principal exercises their right to erasure. If training data containing personal data has been transmitted to the AI service provider and used to train models, the erasure obligation extends to that provider. Contracts should specify the technical and procedural steps the provider will take to effect erasure and the timeline for completion.
Significant Data Fiduciary Obligations
Section 10 of the DPDPA empowers the Central Government to notify certain data fiduciaries as Significant Data Fiduciaries based on criteria including volume and sensitivity of personal data processed, risk to data principal rights, potential impact on sovereignty and integrity of India, and risk to electoral democracy. Large scale AI providers processing personal data of millions of Indians are likely candidates for such notification.
Significant Data Fiduciaries face enhanced obligations including appointment of a Data Protection Officer based in India, appointment of an independent data auditor, and conduct of Data Protection Impact Assessments for processing likely to result in harm. For AI systems, the Impact Assessment requirement is particularly significant as it will require systematic evaluation of AI related risks before deployment.
Standard Fiduciary Obligations
- Process only for specified purposes
- Implement reasonable security safeguards
- Notify breaches to Board and data principals
- Ensure data accuracy and completeness
- Erase data when purpose is fulfilled
Additional SDF Obligations
- Appoint Data Protection Officer in India
- Engage independent data auditor
- Conduct Data Protection Impact Assessments
- Periodic audit and compliance verification
- Enhanced reporting to Data Protection Board
Building Compliant AI
The DPDPA creates a framework within which AI development and deployment must operate. While the Act's provisions are technology neutral, their application to machine learning systems raises novel questions that will be resolved through regulatory guidance, enforcement action, and ultimately judicial interpretation.
Enterprises that build data protection considerations into AI systems from the design stage will be better positioned than those that treat compliance as an afterthought. Privacy by design principles, meaningful consent mechanisms, transparent processing practices, and robust data subject rights infrastructure should be foundational elements of any AI initiative processing personal data.
The stakes are considerable. The DPDPA authorises penalties up to Rs 250 crore for violations. Beyond financial exposure, regulatory action can damage reputation and disrupt business operations. For enterprises committed to responsible AI development, DPDPA compliance is not merely a legal obligation but an opportunity to build trust with users and differentiate in the market.
AMLEGALS AI Policy Hub • Data Protection Practice