Managing AI Hallucinations
- Colin Levy
In this guide, I explain how AI systems used in legal practice can generate “AI hallucinations”: confident but fabricated or unsupported legal content that looks real, most often fake case citations, misdescribed holdings, and invented statutes. I stress that these hallucinations are not rare glitches but a structural result of next-token prediction, gaps in training data, and architectural limits, and that even specialized legal research tools still show measurable hallucination rates on straightforward research tasks.
I show how AI hallucinations have already led to sanctions in cases like Mata v. Avianca, where lawyers relied on unverified AI-generated citations, and I connect this risk to existing duties under Federal Rule of Civil Procedure 11, ABA Model Rules 1.1 (competence) and 3.3 (candor toward the tribunal), and ABA Formal Opinion 512, which together require AI literacy, verification, and supervision rather than blind trust in AI tools. I offer a concrete verification workflow (check every citation in a trusted database, read the actual opinions, validate statutes against current official text, and never ask an AI to verify its own outputs) and introduce a red–yellow–green risk framework, sketched below, so firms can scale their review of AI outputs to the stakes of each use case.
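To make the triage idea concrete, here is a minimal Python sketch of how a firm might encode a red–yellow–green framework in an internal review tool. Everything here is illustrative: the RiskTier values, the example use cases, and the default-to-red rule are assumptions I am making for the sketch, not policy from the guide itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical tiers mirroring the red-yellow-green framework."""
    GREEN = "internal brainstorming; light review"
    YELLOW = "client-facing drafts; full citation check"
    RED = "court filings; line-by-line human verification"

# Illustrative mapping of use cases to tiers; a real firm would
# define these in its own AI governance policy, not in code alone.
USE_CASE_TIERS = {
    "summarize_meeting_notes": RiskTier.GREEN,
    "draft_client_memo": RiskTier.YELLOW,
    "draft_court_brief": RiskTier.RED,
}

def required_review(use_case: str) -> RiskTier:
    """Look up the review tier; unknown use cases default to RED,
    on the principle that unverified AI output is only a draft."""
    return USE_CASE_TIERS.get(use_case, RiskTier.RED)

if __name__ == "__main__":
    for case in ("summarize_meeting_notes", "draft_court_brief", "novel_task"):
        tier = required_review(case)
        print(f"{case}: {tier.name} - {tier.value}")
```

The design choice worth noting is the default: any use case not explicitly classified falls into the strictest tier, so new or unanticipated uses of AI trigger full human verification rather than slipping through unreviewed.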
At the organizational level, I urge firms to build real AI governance: an internal AI committee, approved tool lists, incident logging, training on AI hallucinations, and clear client disclosures about how AI is used and how its outputs are checked. I close by noting that AI hallucinations will not disappear, so legal teams must treat AI output as a draft that always requires human verification, backed by appropriate insurance coverage and carefully negotiated AI vendor contracts.
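As one way to picture the incident-logging and approved-tool pieces working together, here is a small sketch in the same vein. The HallucinationIncident fields, the APPROVED_TOOLS set, and the log_incident helper are all hypothetical names I am inventing for illustration, not a schema the guide prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HallucinationIncident:
    """Hypothetical incident-log entry for a hallucinated output."""
    tool: str                   # which AI tool produced the output
    matter_id: str              # the matter it was used on
    description: str            # e.g. "cited nonexistent case"
    caught_before_filing: bool  # was it caught during verification?
    logged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Illustrative approved-tool list, maintained by the AI committee.
APPROVED_TOOLS = {"VendorResearchAI", "InternalDraftAssist"}

def log_incident(log: list, incident: HallucinationIncident) -> None:
    """Append to the firm's incident log and flag unapproved tools."""
    if incident.tool not in APPROVED_TOOLS:
        print(f"WARNING: {incident.tool} is not on the approved list")
    log.append(incident)

if __name__ == "__main__":
    log: list[HallucinationIncident] = []
    log_incident(log, HallucinationIncident(
        tool="VendorResearchAI",
        matter_id="2024-0117",
        description="cited nonexistent case",
        caught_before_filing=True,
    ))
    print(f"{len(log)} incident(s) logged")
```

Even a log this simple gives a firm something its training and disclosure obligations can build on: a record of which tools hallucinate, how often, and whether verification caught the error in time.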


