Getting the best AI results (it’s not what you think)
It’s not about the prompts. It’s about something much simpler.
There’s a pattern I keep noticing in my AI Lab sessions.
The lawyers getting the best results from AI aren’t the ones with the fanciest prompts or the most expensive tools.
They’re the ones who treat AI like a new associate.
What that means in practice
When you hand work to a new associate, you don’t say “write me a brief.” You say:
Here’s the issue
Here’s the relevant background
Here’s the standard I expect
Here’s what I don’t want
You give context. You set expectations. You review the output and give feedback.
That’s exactly how AI works best.
Where most lawyers go wrong
They give AI a vague instruction, get a mediocre result, and conclude the tool isn’t useful.
That’s like handing a first-year associate a one-sentence email and then being disappointed with what comes back.
The problem isn’t the tool. It’s the briefing.
Three things to try this week
1. Pick one task you do regularly — a client email, a discovery request, a memo outline
2. Before you type a prompt, write down the context you’d give a human doing the same task
3. Paste that context into ChatGPT or Claude and see what comes back
You’ll probably be surprised. Not because the AI is magic, but because most people have never given it enough to work with.
The skill isn’t “prompt engineering.” It’s clear communication — something lawyers are already supposed to be good at.
;-)
Ernie
P.S. In my AI Lab, we do exercises like this every week — and lawyers share their actual prompts and results so everyone learns faster. Check it out here.