AI Impact Analysis
01 Contract Law

Algorithmic Accountability
in Indian Contracts

When an algorithm makes a decision that breaches a contract, who bears the liability? The answer is reshaping commercial jurisprudence.

The Indian Contract Act, 1872, was drafted in an era when the parties to a contract were human beings exercising human judgment. A century and a half later, we confront a fundamentally different reality: algorithms now make decisions that bind parties, reject applications, terminate relationships, and allocate resources. The question of algorithmic accountability in contracts is not a futuristic concern; it is the lived reality of commercial practice in 2026.

The Attribution Problem

Consider a lending platform that deploys a machine learning model to approve or reject loan applications. The model, trained on historical data, wrongly rejects a creditworthy applicant. Under Section 73 of the Indian Contract Act, damages flow from breach. But where does the breach originate? The algorithm acted according to its training. The platform deployed technology it reasonably believed was fit for purpose. The data scientists who built the model followed industry-standard practices. Yet somewhere in this chain, a contractual obligation to assess applications fairly has been violated.

The courts have not yet definitively resolved this attribution problem, but the trajectory is clear. Liability will attach to the deploying entity, not the algorithm itself. This principle follows from basic agency theory: an agent's actions bind the principal, regardless of whether the agent is human or artificial. The algorithmic decision-maker is a tool, and tools do not bear legal responsibility. The hand that wields the tool does.

Foreseeability and Algorithmic Opacity

The doctrine of foreseeability, crystallised in Hadley v. Baxendale and adopted into Indian jurisprudence, requires that damages flow from breaches that were reasonably foreseeable at the time of contracting. Algorithmic systems complicate this analysis in two ways. First, the deploying party may genuinely not foresee certain failure modes of their AI system. Neural networks, by design, discover patterns that humans cannot anticipate. Second, even when failure modes are theoretically foreseeable, the opacity of many AI systems makes specific predictions impossible.

This presents a drafting challenge for commercial lawyers. Standard limitation of liability clauses must now account for algorithmic failures. Indemnification provisions must specify whether AI-generated decisions fall within their scope. Force majeure clauses must address whether an unforeseen algorithmic output constitutes an event beyond the parties' control. The practitioner who ignores these questions does so at the client's peril.

The Standard of Care

When a contract requires a party to exercise "reasonable care" or "professional diligence," what does this mean when the care is exercised by an algorithm? The standard cannot be perfection; no AI system achieves zero error rates. But nor can it be mere deployment; simply having an AI system does not discharge one's obligations. The emerging standard appears to be one of reasonable algorithmic governance: documented development processes, appropriate testing regimes, ongoing monitoring, and responsive remediation when failures occur.

This standard has practical implications. Organisations deploying AI in contractual contexts should maintain records of model development, validation testing, and performance monitoring. They should establish escalation pathways for algorithmic failures. They should conduct periodic audits of algorithmic outputs against contractual requirements. These practices are not merely best practices; they are increasingly the baseline for demonstrating reasonable care.
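The record-keeping and monitoring practices described above can be sketched in code. The following is a minimal, illustrative structure only; the names (`AuditLog`, `DecisionRecord`, `max_error_rate`) and the idea of a single contractually agreed error ceiling are assumptions for the sake of the sketch, not drawn from any statute, standard, or real platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One logged algorithmic decision, retained for audit purposes."""
    application_id: str
    model_version: str
    approved: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    """Append-only record of decisions, checked against a hypothetical
    contractually agreed error-rate ceiling."""

    def __init__(self, max_error_rate: float):
        self.max_error_rate = max_error_rate  # assumed to be fixed in the contract
        self.records: list[DecisionRecord] = []
        self.errors: list[str] = []  # IDs of decisions later found wrongful

    def log(self, record: DecisionRecord) -> None:
        self.records.append(record)

    def flag_error(self, application_id: str) -> None:
        """Record that a logged decision was subsequently found wrongful."""
        self.errors.append(application_id)

    def within_contractual_bounds(self) -> bool:
        """True while the observed error rate stays at or below the ceiling."""
        if not self.records:
            return True
        return len(self.errors) / len(self.records) <= self.max_error_rate
```

A periodic audit would then amount to calling `within_contractual_bounds()` at agreed intervals and escalating when it returns `False`; the point of the sketch is that "reasonable care" becomes demonstrable only when each decision, its model version, and its later review are recorded.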

Remedies and Enforcement

The remedial framework for algorithmic breach remains unsettled. Specific performance, long a discretionary remedy in Indian law until the Specific Relief (Amendment) Act, 2018 made it the general rule rather than the exception, becomes more complex when the breach stems from an automated system. Can a court order a party to retrain its algorithm? To deploy a different model? To revert to human decision-making? These questions have no clear answers, but they are being asked with increasing frequency in commercial disputes.

Damages calculation also presents novel challenges. When an algorithmic system affects thousands of transactions simultaneously, how does one calculate the quantum of harm? Class action mechanisms, still nascent in India, may become increasingly relevant for AI-related contractual disputes. The Consumer Protection Act, 2019, with its provisions for class actions and product liability, offers one pathway, though its application to B2B contexts remains limited.

Strategic Implications

For the practitioner advising clients on AI deployment, the message is clear: algorithmic accountability begins at the contracting stage. Contracts deploying AI should explicitly address liability allocation for algorithmic failures. They should specify testing and validation requirements. They should establish audit rights and remediation procedures. They should define what constitutes acceptable algorithmic performance and what triggers breach.
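A drafting approach of the kind described above, defining acceptable performance and what triggers breach, can be made concrete as a machine-checkable schedule. The schedule below is purely hypothetical: the metric names, thresholds, and the `breached_obligations` helper are invented for illustration, not taken from any model contract.

```python
# Hypothetical contractual performance schedule: each metric carries an
# agreed threshold and the direction in which the observed value must hold.
PERFORMANCE_SCHEDULE = {
    "approval_error_rate":      {"threshold": 0.05, "must_be": "below"},
    "decision_latency_seconds": {"threshold": 2.0,  "must_be": "below"},
    "monthly_audit_coverage":   {"threshold": 0.95, "must_be": "above"},
}

def breached_obligations(observed: dict[str, float]) -> list[str]:
    """Return the metrics whose observed values fall outside the agreed bounds."""
    breaches = []
    for metric, spec in PERFORMANCE_SCHEDULE.items():
        value = observed.get(metric)
        if value is None:
            continue  # metric not measured in this reporting period
        if spec["must_be"] == "below" and value > spec["threshold"]:
            breaches.append(metric)
        elif spec["must_be"] == "above" and value < spec["threshold"]:
            breaches.append(metric)
    return breaches
```

Expressed this way, the contract's breach trigger stops being a matter of after-the-fact argument: each reporting period yields an observed-metrics dictionary, and any non-empty return value identifies exactly which obligations were missed.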

The Indian Contract Act remains remarkably flexible, capable of absorbing technological change as it has for generations. But flexibility requires conscious adaptation. The courts will develop doctrine as disputes arise. The prudent lawyer anticipates these developments and structures relationships accordingly. In the age of algorithmic decision-making, contractual foresight is not merely advisable; it is essential.