Judge orders costs after fake AI case citations

TL;DR:

  • Birmingham City University has obtained a wasted costs order against a claimant law firm that filed AI-generated authorities.
  • His Honour Judge Charman found the firm’s conduct improper, unreasonable, and negligent under the Ayinde guidance.
  • The ruling reinforces expectations that legal teams verify generative AI outputs before they reach the court record.

Birmingham City University has won a wasted costs order after a claimant firm admitted submitting fictitious case citations generated by artificial intelligence. His Honour Judge Charman ruled that the firm’s conduct in county court proceedings was improper and negligent following a July 2025 application that relied on two non-existent authorities. The decision underscores how UK judges are enforcing the Divisional Court’s Ayinde guidance on verifying AI outputs.

Context and Background

Barrister Alexander Bradford of St Philips Chambers, who represented the university, reported that the unnamed law firm cited the fabricated cases in an application lodged on 10 July 2025. The defendant’s solicitors, JG Poole & Co, queried the citations the next day after failing to locate them, prompting the claimant team to withdraw and re-file the application without the bogus material. Judge Charman subsequently struck out the claim and application on 30 July with indemnity costs, deferring the question of wasted costs.

Witness statements later confirmed that an administrative staff member used the built-in AI research tool of a legal software suite to draft the application, signed a statement of truth in the supervising solicitor’s name, and filed the document without authorisation. The court found the explanation inadequate, concluding that the firm had failed to carry out basic verification checks before submitting authorities generated by a large language model.

Regulatory Note: The Divisional Court’s June Ayinde guidance requires lawyers to certify that AI-generated material has been verified against primary sources before it is placed before the court.

This is the latest in a series of UK disciplinary flashpoints involving automated drafting tools, following last month’s referral of a barrister to the Bar Standards Board for similar reliance on ChatGPT. Legal insurers and professional bodies have warned that unchecked AI citations risk breaching duties of candour and exposing practitioners to sanctions.

Looking Forward

The wasted costs order will likely prompt firms to revisit governance around AI-enabled research features, from access controls to supervisor sign-off. Risk teams are expected to tighten audit trails so that any AI-generated content is double-checked against authoritative databases before filings are made.

Regulators including the Solicitors Regulation Authority have already signalled that professional negligence frameworks apply equally to technology-enabled errors. Firms serving higher education clients and other regulated sectors may now accelerate investment in training that balances AI adoption with robust human oversight.
