When AI Hallucinations Impact Legal Decisions

When a California man faced pretrial detention without bail on illegal gun possession charges, his defense team argued that the charges simply did not warrant keeping him behind bars while awaiting trial. Prosecutors offered a long list of reasons to hold him, but their pages and pages of explanation were riddled with errors.
AI to Blame
It turns out that the prosecutor’s office had been using AI to beef up its briefs, which contained grave misinterpretations of law as well as quotations that never appeared in the cited texts, a clear indication that AI was the culprit. Defense attorneys took the case to the California Supreme Court, hoping the justices would find a pattern of fictional legal interpretations and case citations. That review led to some interesting conclusions.
Problems
Twenty-two technology researchers and legal scholars stood with the defense attorneys in court, cautioning that the unbridled use of artificial intelligence in the legal field could result in wrongful convictions. They noted that legal documents were full of errors traceable to Gemini and ChatGPT, tools commonly used to prepare anything from emails and essays to legal briefs. When the use of AI goes unchecked, the pitfalls can be catastrophic, since these tools have been shown to invent answers to legal questions that are complete fiction.
One Arizona State University law professor acknowledged that AI-related errors in court papers tend to indicate negligence rather than intentional deception. Nevertheless, because AI is programmed to be sycophantic, it often stretches the truth to produce an answer that supports a particular argument. Nearly 600 cases of such hallucinated content have been detected worldwide, over 60 percent of them in U.S. courts, which raises some compelling questions:
- With 75 percent of lawyers planning to use AI in their work, how will it impact legal outcomes?
- With studies indicating that up to 82 percent of legal queries on chatbots result in hallucinations, prompting a 2023 warning from Supreme Court Chief Justice Roberts that lawyers should beware, can court documents created with AI be trusted?
- When even AI tools that claim to reduce such errors still produce them anywhere from 17 to 34 percent of the time, should there be restrictions on the use of AI in legal work?
Protecting Your Rights
The experienced Las Vegas criminal defense attorneys at Lobo Law always fight for the best outcomes for our clients. To discuss your case, schedule a confidential consultation in our Las Vegas office today.
Source:
nytimes.com/2025/11/25/us/prosecutor-artificial-intelligence-errors-lawyers-california.html?smid=nytcore-ios-share