The transformation of due diligence from a labour-intensive human exercise to an AI-augmented process raises fundamental questions about the legal validity of findings, the standard of care expected from advisors, and the allocation of liability when algorithmic analysis fails to identify material risks. For the M&A practitioner in India, these are not theoretical questions but operational realities confronting every significant transaction.
The Nature of Due Diligence Obligations
Due diligence is not a statutory requirement under Indian law; it is a prudential practice that serves multiple purposes. It informs pricing negotiations, shapes representations and warranties, identifies deal-breakers, and provides a defence against subsequent allegations of negligence or fraud. The SEBI (Substantial Acquisition of Shares and Takeovers) Regulations, 2011, require acquirers to make informed decisions, implicitly mandating some form of investigative process. The question is whether AI can satisfy these implicit requirements.
The traditional due diligence report carries legal weight precisely because human professionals stand behind it. When a law firm issues a due diligence report, it accepts professional responsibility for the accuracy of its findings within the scope of review. When an AI system generates analysis, who accepts this responsibility? The answer shapes everything from engagement letter drafting to professional liability insurance coverage.
AI Capabilities and Limitations
Modern AI systems excel at certain due diligence tasks and fail at others. Document review, contract analysis, and data extraction benefit enormously from machine learning capabilities. AI can process thousands of contracts in hours, identifying change of control provisions, assignment restrictions, and unusual terms that human reviewers might miss in documents they never reach. Natural language processing enables semantic search across document repositories, surfacing relevant materials that keyword searches would overlook.
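To make the clause-flagging step concrete, the following is a minimal sketch of how a review tool might surface change of control and assignment-restriction language. The patterns, function names, and snippet logic are illustrative assumptions; production systems typically rely on trained classifiers or embedding-based semantic search rather than plain regular expressions.

```python
import re

# Illustrative patterns only -- real tools use trained models, not regex.
CLAUSE_PATTERNS = {
    "change_of_control": re.compile(
        r"change\s+(?:of|in)\s+control", re.IGNORECASE),
    "assignment_restriction": re.compile(
        r"shall\s+not\s+(?:be\s+)?assign", re.IGNORECASE),
}

def flag_clauses(contract_text: str) -> list[dict]:
    """Return one finding per flagged clause type, with a context snippet."""
    findings = []
    for label, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(contract_text)
        if match:
            start = max(0, match.start() - 40)
            findings.append({
                "clause_type": label,
                "snippet": contract_text[start:match.end() + 40].strip(),
            })
    return findings

sample = ("The Licensee shall not assign this Agreement without prior "
          "written consent. A change of control of the Licensee shall "
          "be deemed an assignment.")
for finding in flag_clauses(sample):
    print(finding["clause_type"])
```

Even this toy example illustrates why machine review scales: the same pass runs identically over the first contract and the ten-thousandth, which is precisely where human attention degrades.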
However, AI systems struggle with contextual judgment, business implications, and the synthesis of disparate findings into coherent risk narratives. They cannot interview management. They cannot assess the credibility of representations. They cannot weigh the materiality of findings against strategic objectives. These limitations define the appropriate scope of AI deployment: as a force multiplier for human judgment, not a replacement for it.
SEBI and Disclosure Standards
SEBI's regulatory framework imposes disclosure obligations that interact with due diligence in complex ways. The SEBI (Listing Obligations and Disclosure Requirements) Regulations, 2015, require disclosure of material facts and events. The Takeover Regulations mandate specific disclosures by acquirers. These obligations presuppose a diligence process capable of identifying material facts. If an AI system fails to identify a material issue that human diligence would have caught, has the disclosure obligation been breached?
The answer likely depends on whether the AI deployment met the applicable standard of care. If the technology was state-of-the-art, properly configured, and appropriately supervised, its failure may not constitute a breach. If the deployment was negligent, corner-cutting, or unsupervised, liability follows. SEBI has not issued specific guidance on AI in due diligence, but the general principles of market integrity and investor protection provide interpretive frameworks.
Professional Liability Considerations
For law firms and investment banks deploying AI in due diligence, professional liability exposure requires careful management. Engagement letters should specify the role of AI in the diligence process, delineate the scope of human review, and allocate risk for algorithmic failures. Professional indemnity insurers are beginning to ask detailed questions about AI deployment, and policies may exclude or limit coverage for AI-generated work product.
The prudent approach combines AI efficiency with human oversight at critical junctures. AI can generate initial document summaries, but human reviewers should validate findings on material contracts. AI can flag anomalies in financial data, but human analysts should interpret their significance. This hybrid model preserves the benefits of automation while maintaining the professional accountability that gives due diligence its legal weight.
Evidentiary Considerations
When disputes arise post-acquisition and the quality of due diligence is questioned, AI-generated work product becomes evidence. The provisions on electronic records under the Indian Evidence Act, 1872 (Section 65B, now carried forward in the Bharatiya Sakshya Adhiniyam, 2023) apply, requiring authentication and an established chain of custody. More fundamentally, courts will assess whether the due diligence process was reasonable given the circumstances. Documentation of AI systems used, configurations applied, and human oversight exercised becomes crucial evidence of reasonable care.
Organisations should maintain records of AI system performance metrics, validation testing results, and known limitations at the time of deployment. These records serve dual purposes: they inform ongoing system improvement and provide evidentiary support if the adequacy of diligence is later challenged. The absence of such records creates adverse inference risks that sophisticated counterparties will exploit.
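As a sketch of what such record-keeping might look like, the structure below captures one AI diligence run as a structured, serialisable record. Every field name here is an illustrative assumption, not a prescribed standard; the point is that configuration, validation metrics, known limitations, and human involvement are captured contemporaneously rather than reconstructed in litigation.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative audit record of one AI diligence run.
# Field names are assumptions, not a regulatory or industry standard.
@dataclass
class DiligenceRunRecord:
    system_name: str
    model_version: str
    configuration: dict        # thresholds and settings applied
    validation_metrics: dict   # e.g. recall on a labelled clause set
    known_limitations: list    # limitations known at time of deployment
    human_reviewers: list      # who validated the AI output
    documents_reviewed: int

    def to_json(self) -> str:
        """Serialise the record for retention in the deal file."""
        return json.dumps(asdict(self), indent=2, sort_keys=True)

record = DiligenceRunRecord(
    system_name="contract-review-tool",          # hypothetical tool
    model_version="2024-06",
    configuration={"confidence_threshold": 0.8},
    validation_metrics={"clause_recall": 0.94},
    known_limitations=["vernacular-language documents not supported"],
    human_reviewers=["reviewer-a"],
    documents_reviewed=1250,
)
print(record.to_json())
```

A record of this shape serves both purposes the text identifies: the metrics feed system improvement, and the serialised artefact is the kind of contemporaneous documentation a court can weigh when reasonableness is challenged.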
Best Practices for AI-Augmented Diligence
The integration of AI into due diligence processes should follow structured protocols. Scope definition should explicitly address which tasks will be AI-assisted and which require pure human review. Quality assurance checkpoints should validate AI outputs at defined intervals. Escalation procedures should ensure that anomalous findings receive human attention regardless of AI confidence scores. Final reports should disclose the role of AI in the process, enabling recipients to assess reliance appropriately.
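The escalation rule described above can be sketched as a simple routing function: anomalous findings reach a human queue regardless of the model's confidence score, and low-confidence findings escalate as well. The threshold value and field names are illustrative assumptions.

```python
# Sketch of the escalation rule: anomalies always escalate to humans,
# irrespective of AI confidence. Threshold and fields are assumptions.
CONFIDENCE_THRESHOLD = 0.85

def route_finding(finding: dict) -> str:
    """Return the review queue a diligence finding should be sent to."""
    if finding.get("anomalous"):
        return "human_review"      # escalate regardless of confidence
    if finding.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        return "human_review"      # low confidence also escalates
    return "ai_validated"          # spot-checked at QA checkpoints

# A high-confidence but anomalous finding still goes to a human.
print(route_finding({"anomalous": True, "confidence": 0.99}))
```

The design choice worth noting is that the anomaly check precedes the confidence check: a confidence score measures the model's certainty, not the finding's materiality, so it cannot be allowed to suppress escalation of unusual results.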
For the transaction lawyer, AI is transforming due diligence from an exercise limited by human bandwidth to one limited primarily by data availability and analytical imagination. This transformation brings efficiency gains but also new risks. The lawyer who masters AI-augmented diligence while maintaining appropriate professional standards will serve clients better than either the technophobe who refuses to adapt or the enthusiast who delegates judgment to machines.