AI fails silently. That's the problem.
Why good lawyers keep citing cases that don't exist.
A lawyer cites a made-up case in a brief. Gets sanctioned. The story hits the news. Everyone shakes their head and asks, “How could they be so careless?”
Wrong question.
The better question: why does this keep happening to lawyers who’ve been careful their whole careers?
The old deal with technology
For decades, tech taught lawyers a simple rule: if something’s wrong, you’ll know.
The software would crash
The file wouldn’t open
The error message would pop up
Spellcheck would flag a typo with a red squiggle
Something would freeze, stall, or clearly refuse to work
Lawyers learned to trust those signals. If the tech didn’t blow up, it probably did the job. If it did blow up, they’d feel the friction right away — and often stop using it out of frustration.
That pattern held for 30+ years. Word processing, email, e-discovery, document management, cloud storage, e-filing. Broken tech looked broken.
AI doesn’t play by that rule
AI flips the script. When it’s wrong, it doesn’t warn you.
The output is clean
The formatting is perfect
There are no typos
The reasoning sounds measured and sensible
The citations look exactly like real citations
Nothing in the response says, “Hey, I made this up.” Because nothing in the system knows it did.
It’s a mirage. And it’s going to keep tripping up lawyers — especially the ones who aren’t using AI often enough to spot the seams.
The perfect storm
Here’s the setup that catches even careful lawyers:
They’re under time pressure (when aren’t they?)
They don’t use AI often enough to have calibrated instincts yet
They ask a legal question and get a polished, confident answer
The answer passes every pattern-recognition check they’ve built up over a career
They don’t have time to verify every citation
So they go with it
They’ve heard the warnings about hallucinations. Those warnings just don’t feel relevant in the moment, because the output looks too good to be wrong.
What to do
You can’t un-condition decades of experience with old tech. But you can build a new habit:
Treat every case citation from AI as unverified until you check it in a real database
Do the check before the draft leaves your hands — not after
Don’t rely on AI’s confidence level. It has no idea what it doesn’t know.
If you’re rushed, that’s exactly when to slow down on citations. AI is at its most seductive when you have the least time to vet it.
The lawyers who’ll get burned aren’t the reckless ones. They’re the ones whose instincts from the old tech world are still running the show.
Adjust your instincts. The rules changed.
;-)
Ernie
P.S. In the Inner Circle, we practice spotting AI’s failure modes together — so you’re not relying on old instincts that don’t apply anymore.
→ https://innercircle.ernietheattorney.net/


