The rules of evidence, refined over centuries of common law development and codified in the Indian Evidence Act, 1872, rest on assumptions about the nature of testimony and proof that artificial intelligence fundamentally challenges. When a machine learning model outputs a prediction, is this evidence? When an AI system generates a document, can this document be authenticated? When an algorithm identifies a pattern, can this pattern be presented to a court? These questions are no longer hypothetical; they arise in litigation today.
Electronic Records Under Section 65B
Section 65B of the Indian Evidence Act, introduced by the Information Technology Act, 2000, provides the gateway for electronic evidence. The section permits computer output to be admitted as evidence of the contents of the original electronic record, provided the conditions in Section 65B(2) are satisfied: the computer must have been in regular use, the information must have been fed in the ordinary course of activities, and the computer must have been operating properly. In addition, a certificate under Section 65B(4) attesting to these facts must accompany the evidence.
The Supreme Court's decision in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020) clarified that Section 65B certification is mandatory, not merely directory. This requirement applies to AI-generated outputs as much as to traditional electronic records. The party seeking to introduce AI evidence must provide certification addressing the statutory requirements. Failure to comply results in inadmissibility regardless of the evidence's probative value.
The Authentication Challenge
Traditional document authentication relies on signatures, seals, and chain of custody. Electronic document authentication relies on hash values, digital signatures, and audit trails. AI-generated content presents a different challenge: the output may have no human author to authenticate it, may exist in multiple versions as models iterate, and may not have a clear "original" in any meaningful sense.
Consider a large language model that generates a summary of a contract. The summary is not a copy of the original contract; it is a new document created by algorithmic processing. Authenticating this summary requires demonstrating not only the integrity of the input document but also the reliability of the processing that created the summary. This is a different and more complex authentication task than traditional evidence law anticipates.
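The gap between authenticating a copy and authenticating an algorithmic derivative can be made concrete. The sketch below, using Python's standard hashlib library, shows why a cryptographic hash verifies the integrity of a tendered copy against the original but offers no purchase on an AI-generated summary, which is new text with no digest to match (the document contents here are invented for illustration):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a document's bytes."""
    return hashlib.sha256(data).hexdigest()

# A digest recorded when the original contract was collected.
original = b"This Agreement is made on 1 April 2024 between the parties..."
recorded_digest = sha256_digest(original)

# A tendered copy is re-hashed and compared: byte-identical copies
# produce identical digests, so integrity is established.
tendered_copy = b"This Agreement is made on 1 April 2024 between the parties..."
assert sha256_digest(tendered_copy) == recorded_digest

# An AI-generated summary is a new document, not a copy. Its digest
# cannot be matched against the source, so hashing authenticates the
# *input* but says nothing about the reliability of the *processing*.
ai_summary = b"Agreement dated 1 April 2024; the parties undertake to..."
assert sha256_digest(ai_summary) != recorded_digest
```

This is why authenticating the summary requires evidence about the model itself, not merely a chain of custody for the source document.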
Expert Opinion and AI
Section 45 of the Evidence Act makes relevant the opinions of persons specially skilled in foreign law, science, or art, or in questions of the identity of handwriting or finger impressions. AI outputs increasingly resemble expert opinions: diagnostic algorithms provide medical assessments, forensic algorithms provide identification conclusions, predictive algorithms provide risk assessments. The question is whether AI-generated analysis can qualify as "expert opinion" under the statute.
The better view is that AI outputs are not themselves expert opinions but may be the basis for human expert opinions. A medical expert might testify that an AI diagnostic tool, which operates on certain principles and has demonstrated certain performance characteristics, generated a particular output when applied to particular inputs. The expert's opinion is that this output is reliable. This framing preserves the human accountability that evidence law requires while incorporating AI capabilities.
The Reliability Inquiry
American evidence law, particularly the Daubert standard, requires courts to assess the reliability of expert methodology before admitting expert testimony. Indian law has no direct equivalent, but courts retain discretion to exclude unreliable evidence. For AI evidence, reliability inquiries might address: Has the model been validated for the use case at hand? What is the model's error rate? Has the model been tested on populations similar to those involved in the litigation? Is the model's reasoning interpretable?
Parties seeking to introduce AI evidence should anticipate reliability challenges. Documentation of model development, validation testing, and performance monitoring becomes evidence of reliability. Expert testimony explaining how the model works, where it has been validated, and what its limitations are supports admissibility arguments. The absence of such documentation and testimony creates vulnerability to exclusion motions.
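The "error rate" a reliability inquiry demands is not a vague assurance but a number computed from documented validation testing. A minimal sketch, using an invented validation log for a hypothetical forensic matching tool, shows the kind of reproducible figures such documentation would yield:

```python
# Hypothetical validation log: each record pairs the model's output
# with ground truth established independently of the model.
validation_log = [
    {"predicted": "match",    "actual": "match"},
    {"predicted": "match",    "actual": "no_match"},  # false positive
    {"predicted": "no_match", "actual": "no_match"},
    {"predicted": "no_match", "actual": "match"},     # false negative
    {"predicted": "match",    "actual": "match"},
]

def error_rates(log):
    """Compute overall, false-positive, and false-negative error rates."""
    errors = sum(1 for r in log if r["predicted"] != r["actual"])
    fp = sum(1 for r in log
             if r["predicted"] == "match" and r["actual"] == "no_match")
    fn = sum(1 for r in log
             if r["predicted"] == "no_match" and r["actual"] == "match")
    negatives = sum(1 for r in log if r["actual"] == "no_match")
    positives = sum(1 for r in log if r["actual"] == "match")
    return {
        "overall_error_rate": errors / len(log),
        "false_positive_rate": fp / negatives if negatives else None,
        "false_negative_rate": fn / positives if positives else None,
    }

rates = error_rates(validation_log)
```

Separating false-positive from false-negative rates matters forensically: a tool that rarely flags an innocent sample but often misses a true match raises very different concerns from one with the opposite profile, and cross-examination will probe exactly this distinction.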
Hearsay and AI
Hearsay rules exclude out-of-court statements offered to prove the truth of the matter asserted, subject to numerous exceptions. AI outputs are not "statements" in the traditional sense because they are not assertions by a human declarant. However, to the extent AI outputs communicate information derived from human inputs, hearsay concerns may arise. A chatbot that reports what a customer said presents hearsay issues even if the chatbot's own processing does not.
The characterisation of AI outputs for hearsay purposes remains unsettled. Some jurisdictions treat machine-generated records as non-hearsay because machines cannot have the testimonial intent that animates hearsay concerns. Others apply hearsay rules to machine outputs on the theory that the humans who programmed and trained the system are the true declarants. Indian courts have not definitively addressed this question, leaving litigants to argue by analogy and first principles.
Practical Implications for Litigators
For the litigation practitioner, AI evidence presents both opportunities and risks. On the opportunity side, AI can process vast document collections, identify relevant materials, and surface patterns that human review would miss. AI-assisted e-discovery is now standard practice in complex litigation. AI-generated analysis can support expert opinions and strengthen case narratives.
On the risk side, AI evidence faces admissibility challenges that require careful preparation. Section 65B certification must be obtained. Authentication must be established. Reliability must be demonstrated. Expert witnesses must be prepared to explain AI systems to judges unfamiliar with the technology. Opposing counsel will increasingly develop expertise in challenging AI evidence. The litigator who understands both the power and the limitations of AI evidence will serve clients better than one who treats AI as either magic or menace.