The Bench vs. The Bot.
Global judicial bodies are rapidly codifying the permissible boundaries of Generative AI in courtrooms, from mandatory disclosure orders in the US to the UK's cautious acceptance of AI-assisted summarization.
India: Supreme Court & High Courts
Supreme Court of India (SUVAS & SCI-Interact)
The Supreme Court actively deploys AI for translation (English to vernacular languages) via SUVAS (Supreme Court Vidhik Anuvaad Software), while SCI-Interact supports paperless processing of case files. However, usage is strictly confined to administrative efficiency and translation; the SC has flagged the risk of "hallucinations" in legal research.
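SUVAS itself is not publicly available, but the underlying technique is standard neural machine translation. As a rough illustration only, here is a minimal sketch using an open English-to-Hindi model (Helsinki-NLP/opus-mt-en-hi via Hugging Face transformers); the model choice and function names are assumptions, not SUVAS's actual pipeline.

```python
# Minimal sketch of neural machine translation for judgment text, standing in
# for SUVAS, whose internals are not public. Model choice is an assumption.
from transformers import pipeline

# Open English-to-Hindi model; SUVAS covers several vernacular languages.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")

def translate_judgment(paragraphs: list[str]) -> list[str]:
    """Translate judgment paragraphs in order, one entry per paragraph."""
    outputs = translator(paragraphs, max_length=512)
    return [out["translation_text"] for out in outputs]

excerpt = ["The appeal is allowed and the order of the High Court is set aside."]
for hindi in translate_judgment(excerpt):
    # Mirror the eSCR practice: the English text remains authoritative.
    print(hindi, "\n[AI translation; the English version is authoritative]")
```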
Punjab & Haryana High Court
In a landmark (and controversial) move, Justice Anoop Chitkara used ChatGPT to assess the "worldwide view on bail in cases of assault with cruelty." The Caveat: the Judge explicitly stated the AI output was "only intended to present a broader picture" and did not form the basis of the legal reasoning.
Key Directives
- 1. Verification: Counsel are personally liable for the accuracy of any cited precedents. The excuse "ChatGPT made it up" is not a valid defense against contempt.
- 2. Translation Disclaimer: AI-translated judgments on the eSCR portal carry a disclaimer that the English version remains authoritative.
- 3. Deepfake Warning: CJI DY Chandrachud has repeatedly warned against deepfakes in judicial proceedings, advocating for digital signatures on all court orders (a minimal verification sketch follows this list).
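Digital signatures of this kind are a standard public-key mechanism. The sketch below, using Python's cryptography library, shows how a recipient could check a circulated order against the court's published public key; the key-distribution details and RSA-PSS parameters are illustrative assumptions, not any court's actual workflow.

```python
# Minimal sketch: verify that a circulated court order matches the signature
# issued by the court. Key distribution and parameters are assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def order_is_authentic(order_bytes: bytes, signature: bytes,
                       court_public_key_pem: bytes) -> bool:
    """True only if the order was signed by the court's private key."""
    public_key = serialization.load_pem_public_key(court_public_key_pem)
    try:
        public_key.verify(
            signature,
            order_bytes,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        # A tampered or fabricated "order" fails verification.
        return False
```

Any alteration to the order's bytes, including a deepfaked order that was never signed, causes verification to fail; that is precisely the safeguard being advocated.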
United Kingdom: Judicial Office Guidance
In December 2023, the UK Judicial Office issued the first formal "Artificial Intelligence (AI) Guidance for Judicial Office Holders."
Judges may use GenAI to summarize large bodies of text or for administrative tasks (drafting emails), provided they verify the output.
The guidance cautions judges against relying on AI for legal research or analysis, explicitly warning that AI can fabricate cases that sound plausible.
Courts must be vigilant for fake citations submitted by litigants-in-person who may have unknowingly relied on AI chatbots.
United States: The "Mata" Effect
Mata v. Avianca (S.D.N.Y.)
The watershed moment for legal AI. Lawyers submitted a brief citing non-existent cases (e.g., Varghese v. China Southern Airlines) generated by ChatGPT. Judge P. Kevin Castel imposed sanctions, establishing that Rule 11's certification duties apply with full force to AI-generated content.
Standing Orders & Certifications
After Mata, individual federal judges, most prominently Judge Brantley Starr (N.D. Tex.), issued standing orders requiring counsel to certify either that no Generative AI was used in drafting a filing or that any AI-generated content was verified by a human.
Canada
Federal Court Practice Direction
Requires parties to make a "Declaration" if AI was used to generate content in a document filed with the court. This aligns with the principle of transparency in judicial proceedings.
Focus: Transparency
Singapore
Supreme Court & SAL Guidelines
The Singapore Academy of Law (SAL) and the Courts emphasize guidance on "Generative AI for the Legal Profession." They explicitly prohibit the input of sealed or confidential court documents into public LLMs (a minimal input-guard sketch follows below).
Focus: Data Security
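Operationally, such a prohibition can be enforced as a pre-submission filter. A minimal sketch, assuming a simple keyword screen; the marker list and the submit_to_public_llm helper are hypothetical, not from the SAL guidelines.

```python
# Hypothetical pre-submission guard: block text bearing confidentiality
# markers before it can reach a public LLM. The marker list is illustrative.
import re

RESTRICTED_MARKERS = re.compile(
    r"\b(sealed|in camera|confidential|without prejudice)\b", re.IGNORECASE
)

def guard_submission(text: str) -> str:
    """Return the text unchanged, or raise if it appears restricted."""
    hit = RESTRICTED_MARKERS.search(text)
    if hit:
        raise PermissionError(
            f"Blocked: document contains restricted marker {hit.group(0)!r}"
        )
    return text

# Usage (submit_to_public_llm is a placeholder for any external model call):
# submit_to_public_llm(guard_submission(document_text))
```

A real deployment would pair such keyword screening with document classification and access-control metadata, since sealed filings are not always labeled in their body text.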
Global Consensus Matrix
| Jurisdiction | Use of AI for Research | Disclosure Required? | Primary Concern |
|---|---|---|---|
| India | Cautioned (Human Verification Mandatory) | No specific rule, but liability is strict | Deepfakes & Accuracy |
| United Kingdom | Discouraged for Judges / Cautioned for Counsel | Not explicitly, but counsel remain liable | Fabricated Cases (Hallucination) |
| United States | Allowed with Verification | Yes (in specific courts, e.g., N.D. Tex.) | Rule 11 (Certification) & Confidentiality |
| Canada | Allowed with Declaration | Yes (Federal Court) | Transparency of Process |
| Singapore | Allowed with Caution | Not specified | Data Security (Confidentiality) |