High court tells UK lawyers to stop misuse of AI after fake case-law citations
Ruling follows two cases blighted by actual or suspected use of artificial intelligence in legal work
The high court has told senior lawyers to take urgent action to prevent the misuse of artificial intelligence after dozens of fake case-law citations were put before the courts that were either completely fictitious or contained made-up passages.
Lawyers are increasingly using AI systems to help them build legal arguments, but two cases this year were blighted by made-up case-law citations that were either definitely generated by AI or suspected to have been.
In an £89m damages case against the Qatar National Bank, the claimants made 45 case-law citations, 18 of which turned out to be fictitious, with quotes in many of the others also bogus. The claimant admitted using publicly available AI tools, and his solicitor accepted that he had cited the sham authorities.
When Haringey Law Centre challenged the London borough of Haringey over its alleged failure to provide its client with temporary accommodation, its lawyer cited phantom case law five times. Suspicions were raised when the solicitor defending the council had to repeatedly query why they could not find any trace of the supposed authorities.
It resulted in a legal action for wasted legal costs, and a court found the law centre and its lawyer, a pupil barrister, were negligent. The barrister denied using AI in that case but said she may have inadvertently done so while using Google or Safari in preparation for a separate case where she also cited phantom authorities. In that case she said she may have taken account of AI summaries without realising what they were.
In a regulatory ruling responding to the cases on Friday, Dame Victoria Sharp, the president of the King’s bench division, said there were “serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused”, and that lawyers misusing AI could face sanctions ranging from public admonishment to contempt of court proceedings and referral to the police.
She called on the Bar Council and the Law Society to consider steps to curb the problem “as a matter of urgency” and told heads of barristers’ chambers and managing partners of solicitors to ensure all lawyers know their professional and ethical duties if using AI.
“Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect,” she wrote. “The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.”
Ian Jeffery, the chief executive of the Law Society of England and Wales, said the ruling “lays bare the dangers of using AI in legal work”.
“Artificial intelligence tools are increasingly used to support legal service delivery,” he added. “However, the real risk of incorrect outputs produced by generative AI requires lawyers to check, review and ensure the accuracy of their work.”
The cases are not the first to have been blighted by AI-created hallucinations. In a UK tax tribunal in 2023, an appellant who claimed to have been helped by “a friend in a solicitor’s office” provided nine bogus historical tribunal decisions as supposed precedents. She admitted it was “possible” she had used ChatGPT, but said it surely made no difference as there must be other cases that made her point.
The appellants in a €5.8m (£4.9m) Danish case this year narrowly avoided contempt proceedings when they relied on a made-up ruling that the judge spotted. And a 2023 case in the US district court for the southern district of New York descended into chaos when a lawyer was challenged to produce the seven apparently fictitious cases they had cited. The lawyer simply asked ChatGPT to summarise the cases it had already made up; the judge called the result “gibberish” and fined the two lawyers and their firm $5,000.