Consent has become the cornerstone of data protection, and the Digital Personal Data Protection Act, 2023, places it at the centre of India's personal data governance framework. For AI systems that process personal data, this consent requirement creates both legal obligations and architectural imperatives. The systems we build must not merely obtain consent but manage it throughout the data lifecycle, respond to consent withdrawal, and demonstrate compliance to regulators. This is consent architecture: the intersection of legal requirement and technical implementation.
The DPDPA Consent Standard
The Act requires consent that is free, specific, informed, unconditional, and unambiguous. Each element imposes design constraints. "Free" consent cannot be coerced or bundled with unrelated services in ways that eliminate meaningful choice. "Specific" consent must identify particular processing purposes rather than authorising open-ended data use. "Informed" consent requires disclosure of processing activities in comprehensible terms. "Unconditional" consent cannot be contingent on acceptance of unrelated terms. "Unambiguous" consent requires affirmative action rather than silence or pre-checked boxes.
For AI systems, achieving this standard presents challenges. AI processing purposes may be difficult to specify in advance. The complexity of AI systems may defy lay comprehension, complicating informed consent. The iterative nature of AI development may require consent for uses that were not contemplated when data was initially collected. These challenges do not excuse non-compliance; they require creative solutions.
Granular Consent Design
The specificity requirement points toward granular consent designs that separate different processing purposes. Rather than obtaining consent for "AI processing" generally, organisations might obtain separate consents for model training, inference generation, and analytics. This granularity respects data principal autonomy and creates defensible consent records. It also creates user experience challenges: too many consent requests may overwhelm users and reduce engagement.
Effective consent design balances legal rigour with user experience. Layered consent approaches provide summary information with detailed disclosures available on request. Just-in-time consent defers each permission request to the moment it becomes relevant rather than front-loading every request at sign-up. Consent dashboards enable ongoing management of permissions rather than treating consent as a one-time transaction. These approaches require investment but produce more meaningful consent than traditional wall-of-text disclosures.
Consent Manager Integration
The DPDPA introduces consent managers as registered entities that help data principals manage consent across multiple data fiduciaries. This infrastructure has significant implications for AI systems. When a data principal withdraws consent through a consent manager, that withdrawal must propagate to all downstream processing, including AI training pipelines that may have incorporated the individual's data.
Technical integration with consent managers requires APIs that can receive consent status updates, internal systems that can act on those updates, and data architectures that can identify which processing activities depend on which consents. These requirements are easier to address during system design than through retrofitting. Organisations planning AI systems should anticipate consent manager integration from the outset.
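One way to structure that integration is a fan-out layer between the consent manager's update feed and the internal systems that depend on each consent. The sketch below is a minimal, hypothetical design: the event payload shape is assumed (the real schema will be fixed by the applicable technical standards), and `ConsentSync` and its methods are illustrative names. Downstream systems register interest in a purpose; a withdrawal event is pushed to each of them.

```python
class ConsentSync:
    """Receives consent-status updates from a consent manager and fans
    them out to the processing activities that depend on that consent.

    Assumed event shape (hypothetical):
        {"principal_id": "...", "purpose": "...", "status": "withdrawn"}
    """
    def __init__(self):
        # purpose -> callbacks registered by downstream systems
        # (e.g. the training pipeline's exclusion queue)
        self._subscribers: dict[str, list] = {}
        self._status: dict[tuple[str, str], str] = {}

    def subscribe(self, purpose: str, callback):
        self._subscribers.setdefault(purpose, []).append(callback)

    def on_update(self, event: dict):
        key = (event["principal_id"], event["purpose"])
        self._status[key] = event["status"]
        if event["status"] == "withdrawn":
            # Propagate to every system that relies on this consent,
            # e.g. to enqueue the principal for training-set exclusion.
            for cb in self._subscribers.get(event["purpose"], []):
                cb(event["principal_id"])
```

The design point is the subscription registry: it is the machine-readable record of which processing activities depend on which consents, which is exactly the mapping retrofitting struggles to reconstruct.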
Withdrawal and Machine Unlearning
The right to withdraw consent creates particular challenges for AI systems. When consent is withdrawn, continued processing becomes unlawful. For AI models trained on the withdrawn data, this may require model modification or retraining to remove the data's influence. The emerging field of machine unlearning addresses this challenge, but techniques remain immature and computationally expensive.
Organisations must develop policies for handling consent withdrawal in AI contexts. At minimum, withdrawn data should be excluded from future training. Depending on risk assessment and technical capability, organisations may also attempt to mitigate the influence of withdrawn data on existing models. Documentation of these efforts becomes important evidence if consent withdrawal handling is later questioned.
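The minimum obligation, excluding withdrawn data from future training and knowing which existing models are affected, can be sketched in a few lines. The function names and record shapes below are hypothetical; the point is that exclusion should produce a log for documentation, not silently drop rows.

```python
def filter_withdrawn(records, withdrawn_ids):
    """Drop training records from data principals who have withdrawn
    consent. Returns the filtered set plus an exclusion log that can be
    retained as evidence of compliance."""
    kept = [r for r in records if r["principal_id"] not in withdrawn_ids]
    exclusion_log = sorted({r["principal_id"] for r in records} & withdrawn_ids)
    return kept, exclusion_log

def models_needing_review(model_manifests, withdrawn_ids):
    """Identify already-trained models whose recorded training
    population includes a withdrawn principal, so they can be queued
    for risk assessment, retraining, or an unlearning attempt."""
    return [m["model_id"] for m in model_manifests
            if withdrawn_ids & set(m["principal_ids"])]
```

Note that `models_needing_review` only works if each model ships with a manifest of the principals whose data it was trained on; without that record, the organisation cannot even identify which models a withdrawal touches.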
Consent Records and Audit Trails
Demonstrating consent requires records. These records must establish what consent was obtained, when, for what purposes, and based on what disclosures. They must track consent modifications and withdrawals. They must link consent records to data processing activities so that any processing can be traced to its legal basis. This record-keeping is not merely good practice; it is essential for responding to regulatory inquiries and data principal requests.
AI systems should generate audit trails that connect processing activities to consent records. When a model is trained, the training process should log which data subjects contributed data and what consents authorised inclusion. When inference is generated using personal data, the inference process should verify that necessary consents remain in effect. These audit capabilities enable compliance demonstration and support response to individual rights requests.
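One design choice for such audit trails, sketched below under assumptions of my own (the `AuditLog` class and its hash-chaining scheme are illustrative, not mandated by the Act), is an append-only log in which each entry is hashed together with its predecessor's hash. That makes after-the-fact tampering detectable, which strengthens the log's evidential value when a regulator asks what consents authorised a training run.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log; each entry's hash covers the previous entry's
    hash, so any later modification breaks the chain."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev_hash = self.GENESIS

    def append(self, event: dict) -> dict:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,            # e.g. training run -> consent refs
            "prev": self._prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A training run would then append one entry recording the model identifier, the contributing principals, and the consent record references that authorised their inclusion; an inference path can append a similar entry after verifying the relevant consents are still active.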
Beyond Consent
While consent dominates discussion, the DPDPA also recognises legitimate uses that do not require consent. Processing for compliance with legal obligations, performance of state functions, medical emergencies, and employment purposes may proceed without individual consent. AI systems operating under these bases face different architectural requirements: rather than managing consent, they must document the basis that applies and ensure processing remains within that basis's scope.
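The architectural shift for non-consent bases can be sketched as a basis registry: instead of checking a consent record, the system checks that a documented basis exists before processing proceeds. The basis names below paraphrase the categories mentioned above, and the registry functions are hypothetical illustrations, not a statutory taxonomy.

```python
# Illustrative labels for the non-consent bases discussed above.
LEGITIMATE_BASES = {
    "legal_obligation", "state_function", "medical_emergency", "employment",
}

def record_basis(registry: dict, activity_id: str, basis: str, scope_note: str):
    """Register the non-consent basis an activity relies on, with a
    note describing the scope the processing must stay within."""
    if basis not in LEGITIMATE_BASES:
        raise ValueError(f"unrecognised basis: {basis}")
    registry[activity_id] = {"basis": basis, "scope": scope_note}

def basis_for(registry: dict, activity_id: str) -> dict:
    """Fail closed: processing with no recorded basis is blocked,
    not assumed lawful."""
    entry = registry.get(activity_id)
    if entry is None:
        raise PermissionError(f"no legal basis recorded for {activity_id}")
    return entry
```

The fail-closed lookup mirrors the consent case: whether the legal basis is consent or a legitimate use, processing should be gated on a retrievable record of that basis.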
The practitioner advising on AI consent architecture must understand both the legal requirements and the technical possibilities. Solutions that are legally compliant but technically impractical fail in implementation. Solutions that are technically elegant but legally deficient fail in compliance. The sweet spot lies at the intersection: consent architectures that satisfy legal requirements through technically feasible implementations. This synthesis defines the new discipline of privacy engineering for AI.