UK Barrister Found Using AI to Prepare Legal Case, Cited Fictitious Precedents

TL;DR: An immigration barrister has been found to have used ChatGPT-like software to prepare for a tribunal hearing, citing cases that were “entirely fictitious” or “wholly irrelevant”, and then attempted to hide the fact from the judge.

Immigration barrister Chowdhury Rahman was found to have used generative AI to prepare his legal research for an immigration tribunal hearing, with Upper Tribunal Judge Mark Blundell ruling that Rahman not only failed to verify the accuracy of the AI-generated content but also attempted to conceal its use.

Context and Background

The matter emerged during an asylum case involving two Honduran sisters who claimed they were being targeted by a criminal gang. Rahman represented the sisters in the upper tribunal, where he submitted grounds of appeal citing 12 legal authorities.

Judge Blundell noticed significant problems when reviewing the paperwork: some of the cited authorities did not exist, whilst others did not support the legal propositions for which they were cited. During the hearing, Rahman appeared to know nothing about any of the authorities he had supposedly cited and seemingly had not intended to rely on them in his submissions.

Critical Context: Judge Blundell observed that one of the fictitious cases cited by Rahman “has recently been wrongly deployed by ChatGPT in support of similar arguments”, directly connecting the barrister’s submission to known AI hallucinations.

The judge ruled that Rahman’s explanation—that the inaccuracies resulted from his “drafting style”—was implausible, stating “the problems which I have detailed above are not matters of drafting style”.

Looking Forward

Judge Blundell said he is considering reporting Rahman to the Bar Standards Board, a notable step in professional accountability for AI misuse in legal practice. The ruling, made in September and published on 16 October, makes clear that legal professionals are expected to verify AI-generated content and must not attempt to disguise its use.

The case highlights mounting concerns about generative AI in professional settings where accuracy and verification are paramount, particularly in legal contexts, where fictitious case citations waste tribunal time and risk undermining access to justice.
