You run a brief through a plagiarism checker. It comes back clean. So you file it. Then opposing counsel points out that two of your cited cases don't exist. Not misquoted. Not paraphrased. Just entirely made up by the AI tool you used for research.
This is the hallucinated citation problem, and it's happening more than most lawyers realize. AI hallucinations occur when large language models produce content that appears authoritative and well-reasoned but is factually incorrect or entirely fabricated. In legal work, that means invented case names, non-existent judicial opinions, and made-up precedents that look completely real on the page.
The dangerous part isn't just that these citations are false. It's that the tools most lawyers reach for to check their work, standard plagiarism detectors, are completely blind to them. Here's why.
The Limits of Traditional Plagiarism Checkers
Plagiarism checkers do one thing well: they find copied text. They break your document into segments, scan those segments against a database of existing content, and flag matches. The higher the match percentage, the more likely something was lifted from another source.
That's a useful function for catching copied essays or recycled briefs. But it's built on a core assumption: that the problem content already exists somewhere. The tool needs something to match against.
Fabricated citations break that assumption entirely. A hallucinated case name has never appeared in Westlaw, LexisNexis, or any court database. There's no source to match it to. So the plagiarism checker sees original text, finds no matches, and reports everything as clean. From its perspective, the document is fine.
As one expert put it, these tools aren't tracking accuracy at all. They're just flagging matching text. If the text doesn't match anything, it passes, regardless of whether it's true.
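To see why a fabricated citation sails through, consider a minimal sketch of the matching logic a plagiarism checker relies on. The corpus, the n-gram size, and the function names here are all invented for illustration; real products use far larger databases and more sophisticated matching, but the blind spot is the same.

```python
# Minimal sketch of plagiarism-checker logic: score a document only by
# its overlap with known text. Corpus and n-gram size are invented here.

def ngrams(text: str, n: int = 5) -> set:
    """Break text into overlapping word n-grams (the 'segments')."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def match_percentage(document: str, corpus: list, n: int = 5) -> float:
    """Percent of the document's n-grams found in any known source."""
    doc_grams = ngrams(document, n)
    if not doc_grams:
        return 0.0
    known = set().union(*(ngrams(source, n) for source in corpus))
    return 100 * len(doc_grams & known) / len(doc_grams)

corpus = ["The implied covenant of good faith and fair dealing applies to every contract."]

# One of the fictional cases from the Mata v. Avianca filing: it never
# appeared in any source, so there is nothing to match against.
fabricated = "See Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)."
print(match_percentage(fabricated, corpus))  # 0.0 -- reported as 'clean'
```

A hallucinated citation scores zero overlap by definition, which is exactly the score a checker treats as proof of originality.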
How Hallucinated Citations Arise
To understand why hallucinated citations are so common, you need to understand how AI language models actually work. They don't look things up. They predict text. Based on patterns in their training data, they generate the words most likely to follow the words before them.
When you ask a generic AI tool for case law supporting a legal argument, it doesn't search a verified database. It produces text that looks like a legal citation because it has seen thousands of legal citations in its training data. The result can be a perfectly formatted, entirely fictional case.
Generic AI models are trained on vast amounts of internet content, not on authoritative legal sources. They rarely have access to the actual case law, statutes, and regulations that lawyers need. So when pressed for specific legal authority, they fill in the gaps with plausible-sounding fabrications.
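A deliberately tiny stand-in makes the point concrete. The bigram "model" below, built from three invented citations, has no database behind it; it only knows which token tends to follow which. Everything here, from the training strings to the function names, is a toy assumption, but it mirrors the failure mode: the output is citation-shaped because the patterns are, not because the case exists.

```python
# A toy stand-in for a language model: a bigram chain that emits a token
# that tends to follow the current one. No lookup happens anywhere.
# The 'training' citations below are invented for illustration.
import random
from collections import defaultdict

training_text = (
    "Smith v. Jones , 120 U.S. 310 ( 1887 ) . "
    "Green v. Brown , 245 U.S. 870 ( 1917 ) . "
    "Miller v. Davis , 310 U.S. 120 ( 1940 ) ."
)

follows = defaultdict(list)  # token -> tokens seen after it
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Emit whatever usually follows `start`: plausibility, not truth."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("Smith"))
# One possible output: "Smith v. Brown , 245 U.S. 310 ( 1887 ) ."
# Perfectly citation-shaped -- and not a real case.
```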
Speed pressure makes this worse. AI tools promise fast research results, and lawyers under deadline pressure may not slow down to verify every citation before it ends up in a filing. The Mata v. Avianca case in 2023 is the most cited example: a lawyer used ChatGPT for research, the AI generated six completely fabricated case citations, and the lawyer filed them. The court sanctioned the attorney and imposed a $5,000 fine.
That case made headlines. But it wasn't a one-off. More than 300 cases of AI-driven legal hallucinations have been documented since mid-2023, with at least 200 recorded in 2025 alone.
Why Standard Tools Fail to Detect Hallucinated Citations
The core problem is that hallucinated case law is original content. The case names sound plausible. The citation format looks correct. But when you search for them in Westlaw, LexisNexis, or any authoritative database, they don't exist.
Because AI generates these citations by predicting text patterns rather than retrieving real information, the false citations follow the same stylistic and structural conventions as real ones. They're indistinguishable from legitimate citations to any tool that checks only form, consistency, or similarity to existing text.
Plagiarism checkers have no access to specialized legal databases. They can't query Westlaw to confirm a case exists. They can't check whether a cited holding accurately reflects what a court actually said. They operate entirely at the surface level, looking for copied strings of text, and hallucinated citations aren't copied from anywhere.
Even sophisticated citation formatting tools have limits here. Our own CiteCheck feature in BriefCatch Next is an automated citation correction engine. It fixes capitalization, punctuation, spacing, court and reporter abbreviations, and pinpoint page references. It's designed to be precise and rule-based, like a grammar checker for citations. But it corrects formatting and style, not authenticity. A hallucinated citation with perfect formatting will pass through a formatting check without issue.
That's not a flaw in the tool. It's a clear boundary. And it's why we're explicit with lawyers: verify the substance of each citation. Check that cases exist, that they support your argument, and that they haven't been overruled. No automated system replaces that step.
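The gap is easy to demonstrate. The sketch below contrasts a shape check with an existence check; the regex, the one-entry "database," and the function names are all hypothetical, and this is not how any real product (ours included) works internally. The point is only that form and existence are independent questions.

```python
# Hypothetical contrast: validating a citation's *shape* vs. verifying
# its *existence*. The regex and the mini 'database' are toy assumptions.
import re

# Accepts strings shaped like "Party v. Party, 123 X.4th 456 (Ct. 1999)".
CITATION_SHAPE = re.compile(
    r"^[\w.' ]+ v\. [\w.' ]+, \d+ [A-Z][\w. ]*\d+ \((?:[\w. ]+ )?\d{4}\)$"
)

KNOWN_CASES = {  # stand-in for a Westlaw/LexisNexis lookup
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

def format_ok(citation: str) -> bool:
    return bool(CITATION_SHAPE.match(citation))

def case_exists(citation: str) -> bool:
    return citation in KNOWN_CASES

fake = "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)"
print(format_ok(fake))     # True  -- the formatting is flawless
print(case_exists(fake))   # False -- the case was never decided
```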
The numbers reinforce how serious this is. Stanford research found that Westlaw's AI-Assisted Research produced incorrect information more than 34% of the time. Even specialized legal AI tools show hallucination rates between 17% and 34%. That's not a rare edge case. That's a systemic risk that demands a systemic response.
Finding Smarter Solutions for Reliable Citations
The answer isn't to stop using AI. It's to stop treating AI output as finished work product. Here's what actually helps:
Verify every citation in an authoritative database. Before any brief, memo, or filing leaves your desk, every cited case should be confirmed in Westlaw or LexisNexis. Check that it exists, that it says what you claim it says, and that it hasn't been overruled. This is non-negotiable.
Use dedicated citation-verification technology. Tools designed specifically to flag hallucinated citations go further than plagiarism checkers. They check whether citations actually exist in legal databases, not just whether the text matches something online. Look for platforms that provide audit trails showing systematic verification of every citation in a document; a minimal sketch of that workflow follows this list.
Treat AI output like a draft from a junior associate. That means thorough review, not a quick skim. AI can help you identify relevant areas of law and draft initial arguments, but every factual assertion and every cited authority needs human verification before it goes anywhere official.
Choose legal-specific AI platforms over generic ones. Generic AI tools create serious risks because they're trained on general internet content, not authoritative legal sources. Specialized legal AI platforms are built on case law, statutes, and regulations, and they're designed with confidentiality requirements in mind.
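Putting the first two recommendations together, the verification workflow might look like the sketch below. `lookup_in_legal_database` is a placeholder, not a real API; in practice that step is a human running the cite in Westlaw or LexisNexis, or a dedicated verification tool. The structure to notice is that every citation is checked and every check is logged.

```python
# Sketch of the verification workflow: check every citation against an
# authoritative source and keep an audit trail. The lookup function is
# a placeholder assumption, not a real Westlaw/LexisNexis API.
from dataclasses import dataclass

@dataclass
class AuditEntry:
    citation: str
    verified: bool
    reviewer: str

def lookup_in_legal_database(citation: str) -> bool:
    """Placeholder: stands in for a query to an authoritative database."""
    known = {"Brown v. Board of Education, 347 U.S. 483 (1954)"}
    return citation in known

def verify_filing(citations: list, reviewer: str) -> list:
    """Check every citation, log every check, and block unverified filings."""
    trail = [AuditEntry(c, lookup_in_legal_database(c), reviewer) for c in citations]
    failures = [entry.citation for entry in trail if not entry.verified]
    if failures:
        raise ValueError(f"Do not file -- unverified citations: {failures}")
    return trail

citations = [
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)",
]
verify_filing(citations, reviewer="A. Associate")
# Raises: Do not file -- the fabricated Varghese citation fails the lookup.
```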
At BriefCatch, every rule and recommendation reflects techniques drawn from thousands of elite legal documents and judicial opinions, not scraped internet data. And our AI features are completely optional and defaulted off, because whether to enable those features is always your call. We also don't store your data, so your work product stays yours.
Advocating for a New Standard of Accuracy
The legal profession has always demanded accuracy. AI doesn't change that standard. It just creates new ways to fall short of it.
Hallucinated citations are now a known, systemic risk. Courts are sanctioning lawyers who file AI-generated fake citations. Reputations are taking hits. And the tools most lawyers assume will catch these errors, plagiarism checkers, simply aren't built for the job.
The new standard is straightforward but demanding. Assume hallucinations are possible in any AI-generated content. Verify every citation independently, regardless of how polished it looks. And understand that tools working at the level of style or formatting are not substitutes for substantive research.
Even if hallucination rates drop significantly, that doesn't reduce the verification requirement. As we've noted before, if error rates drop to 1%, that still means 100% of AI-generated answers need verification, because you can't know in advance which answers fall within that 1%.
Professional judgment can't be delegated to software. Lawyers remain accountable for all work product, regardless of how it was generated. That accountability is what makes verification not just a best practice but an ethical obligation under ABA Model Rule 1.1.
If you want a platform that helps you write more clearly and precisely while keeping you in control of every editorial decision, try BriefCatch free or book a demo to see how it fits into your workflow. Better writing and reliable citations aren't competing goals. They're both part of doing the job right.