The Fate of the Lawyer Who Cited Fake Cases from ChatGPT
It says a lot about how society (and the law) will treat AI's mistakes
When attorney Thad Guyer filed a legal brief in an employment law case, it seemed like the kind of thing a lawyer would do thousands of times over their career.
Only this time was different. Guyer was about to land himself in hot water and risk his entire career.
Guyer had used ChatGPT to help him and his client prepare the brief. Not understanding how these models work, he had given the brief a cursory read-through, but hadn’t thoroughly checked it.
The brief cited a variety of legal cases. It turned out that many of them were misquoted or totally irrelevant—they were blatant hallucinations dreamed up by the LLM.
The response was immediate and serious. Guyer faced sanctions from the court, as well as the possibility of being disbarred and losing his ability to practice law.
Now, we’ve finally learned about Guyer’s fate. A federal judge, Thomas Cullen, reviewed the case and decided whether sanctions should proceed.
In a nutshell, the judge decided not to sanction Guyer.
He cited several reasons, primarily that Guyer was a good person and an experienced attorney who had made an honest mistake, fully owned up to it, and tried to correct it.
But the most interesting aspect of this decision is the broader context the judge pointed to in making his ruling.
In the judge’s words, these kinds of errors are “a quirk” and simply “one of the downsides of generative AI.”
He also called generative AI “the new normal” and acknowledged that attorneys were using it all the time.
“This court has neither the authority nor the inclination to curb that practice, even if it wanted to,” he said. “If I did that, I would justifiably be perceived by some as overstepping and unwisely decreeing that litigants can’t use this groundbreaking technology in a court of law.”
This decision hints at how society may treat AI-based errors, especially as AI is used in more mission-critical applications.
Boring prose or overhyped language in a marketing email is one thing. But as LLMs are increasingly used for critical tasks like legal work, medical care, and more, AI hallucinations will become more consequential—sometimes even life-threatening.
Judge Cullen’s opinion in this case suggests a measured approach to balancing these risks with the potential upsides of generative AI.
Yes, he appears to acknowledge, LLMs make dumb mistakes. But so, of course, do humans.
Cullen could have easily thrown the book at Guyer, depriving him of his right to practice law over such a massive mistake.
But he didn’t. Harshly punishing Guyer would likely have created a chilling effect, where attorneys avoid using generative AI because the risks of making a mistake—and the costs associated with those mistakes—are too high.
Instead, Cullen seems to be encouraging the continued use of generative AI, acknowledging that lawyers must do a better job checking their work but also allowing for the fact that mistakes will happen.
This is still just one judge’s opinion. But it suggests a world where we collectively accept both the risks and benefits of LLMs.
Yes, LLMs like ChatGPT might make idiotic—and potentially consequential—mistakes. But they also offer huge opportunities to streamline some of the most challenging, tedious, and expensive aspects of fields like law and medicine.
In this specific instance, Guyer was representing a client who was a whistleblower in a major employment case.
Many such whistleblowers come forward, but very few end up with experienced attorneys to represent their interests.
If more lawyers like Guyer use tools like ChatGPT, that could expand the pool of legal help available to people who might not otherwise be able to afford it.
That’s clearly a big benefit to society.
Likewise, if LLMs can be tuned to provide accurate medical diagnoses without the need for a human specialist, that could open up much better medical care to millions of people—especially those in the developing world.
Guyer isn’t quite out of the woods yet—he still faces a review from his local bar association.
But the decision in this case points to a positive future for generative AI. Yes, we humans must ensure that we’re in the loop and take responsibility for the outputs of AI systems.
But that shouldn’t prevent us from using them.
Even if they’re prone to errors, these systems can massively improve the way we work—and open up new, potentially life-altering (or life-saving) opportunities for people who wouldn’t otherwise have them.