Why does India urgently need practical guidance on legal drafting for AI?

By Anand Kumar, Senior Journalist
Anand Kumar is a Senior Journalist at Global India Broadcast News, covering national affairs, education, and digital media. He focuses on fact-based reporting and in-depth analysis of current events.

Generative artificial intelligence (AI) has become a popular aid for lawyers, and the efficiency gains are real. But the technology has a well-documented weakness: it hallucinates. It produces case names, citations, and quotations that appear genuine but are not. This is happening with increasing frequency in Indian courts. Recently, a bench led by Chief Justice of India Surya Kant pointed out the problem directly, noting that fabricated citations, attributed to judgments as though they were genuine, impose an additional burden of verification on judges who are already pressed for time. Similar concerns have been expressed in other judicial forums.

Thoughtful guidelines, arrived at now and in good faith, will allow India to leverage the benefits of AI. (File photo)

One incident deserves special attention. The Bengaluru bench of the Income Tax Appellate Tribunal issued an order in Buckeye Trust v. PCIT citing four rulings. Three were invented; the fourth exists but is irrelevant. The order was recalled within a week. But the deeper concern is not the mistake itself, but who made it. When a judicial officer uses AI to draft a ruling or, worse, to decide a dispute, it strikes at the trust that litigants place in the adjudication process itself.

Hallucination is the problem

Generative AI does not “know” the law. It is designed to produce plausible, useful-sounding text, and sometimes that text is fiction. When hallucinations enter court records, whether through a filing or an order, they make a mockery of precedent-based adjudication by replacing binding authority with invented authority. A court can reject a flawed filing, but an order that has been published, relied upon, and perhaps appealed while carrying invented authority cannot easily be undone. None of this means that AI should be banned. But its use in legal proceedings cannot rest on good faith alone.

What other jurisdictions have done

Many jurisdictions have responded to AI-assisted legal drafting with concrete guidelines built on familiar principles: educating users about the risks of AI on the one hand, and requiring filers to verify their authorities and holding them accountable for their filings on the other.

Singapore’s “Guide on the Use of Generative AI Tools by Court Users,” which applies across the court system, does not prohibit the use of AI but requires that material be independently verified for accuracy and appropriateness, with consequences for non-compliance ranging from costs orders to disciplinary action.

In the United States, courts such as the Eastern District of Texas have adopted disclosure and certification frameworks that require lawyers to certify they have personally verified the authorities cited, recognising that generative AI can produce false legal authority and insisting on verification and accountability.

On the judicial side, the Federal Court of Canada has published principles committing not to use AI in the making of judgments and orders without prior public consultation, reflecting the higher threshold of legitimacy that applies when AI touches adjudication itself. The UK has gone further, issuing guidance specifically for judicial office holders that links any use of AI to the obligation to protect the integrity of the administration of justice, and categorically states that AI is not recommended for legal research or legal analysis.

In contrast, India has seen only sporadic, case-by-case responses. AI is being used widely, often without a proper understanding of its limitations, and without system-level safeguards to prevent hallucinated authorities from entering the record.

Rulebook required

The practice guidance India needs doesn’t have to be complicated. Four principles drawn from international experience will be sufficient.

First, reaffirm non-delegable responsibility. AI does not displace professional duties. The filing lawyer remains responsible and bound by the duty of candour in what is presented to the court. “The AI did it” must not become a defence.

Second, mandate verification through a short certification by counsel that every citation and quotation has been personally verified against reliable sources. This strikes a practical balance between the convenience of AI and professional responsibility.

Third, set a clear enforcement ladder. Inadvertent errors may warrant correction and costs; recklessness should attract exemplary costs and, where appropriate, disciplinary referral. India should standardise consequences rather than rely on ad hoc judicial rebukes.

Fourth, and most urgent in light of recent developments, address judicial and quasi-judicial use of AI. The Canadian and British approaches reflect a fundamental point: AI cannot decide the matter. If any AI tool is used in drafting, full responsibility rests with the judicial officer. The reasoning must be independent, traceable to the record, and defensible without reference to any tool.

Bottom line

The legal system works on trust. Lawyers are trusted to present accurate material before courts, and judges are trusted to decide what the record shows. Generative AI puts both forms of trust at risk. The Supreme Court sounded an alarm. This warning must now turn into action. Clear, short and consistently applied practice guidelines will give the Bar certainty, protect judicial time, and reassure litigants that the administration of justice is not quietly being rewritten by software that can invent a case as easily as it formats a paragraph. Thoughtful guidelines, arrived at now and in good faith, will allow India to harness the benefits of AI without importing its most serious flaws into the judiciary.
