India’s top court has flagged a judgment that relied on AI-generated, non-existent case law as misconduct, raising fresh alarms about the use of chatbots and legal AI amid a massive court backlog and risks of bias.
India’s Supreme Court has called a lower-court ruling that cited four non-existent precedents—later found to be generated by an AI tool—more than a simple error, labeling it “misconduct” and issuing notices to the attorney general, the solicitor general and the Bar Council of India.
The problem surfaced on appeal in a land dispute from Andhra Pradesh. The fabricated citations were believable enough to shape a judgment before they were exposed, underscoring how easily large language models can invent authoritative-sounding legal material.
The episode is part of a wider trend: judges and lawyers around the world are experimenting with AI to cope with huge caseloads, but the technology can both hallucinate facts and reproduce entrenched biases. In India, one judge publicly acknowledged consulting ChatGPT during a 2023 bail hearing, while lawyers elsewhere have been sanctioned for submitting briefs that cited cases a chatbot had invented.
Pressure to adopt AI is intense: roughly 55 million cases are pending across India’s courts, and decades-long backlogs push officials to seek speed. Experts warn that speed cannot come at the cost of accuracy or fairness, since legal datasets reflect social inequalities and can train models to reproduce them—potentially affecting bail, sentencing and other liberty-related decisions.
At the same time, Indian courts and developers are testing more limited, assistive tools such as InLegalLLaMA and the Supreme Court’s SUPACE system, designed to find precedents and summarize law without making decisions. Developers and jurists stress these tools require rigorous verification and must not replace human judgment.
Source: World | Deutsche Welle