AI Is Getting More Expensive Than Your Team
Anthropic spent $7 billion on compute last year. Your next AI bill is going to be bigger too.
“Dad, you’re still telling Claude what to do?”
Howard Getson’s oldest son, half-laughing. The kind of teasing that lands because it’s accurate.
Howard runs Capitalogix and writes one of the sharper weekly newsletters in my inbox. Last week he posted this piece on the new AI leaderboard and what it actually costs to keep up. The argument is worth walking through.
Howard had Claude open. Three prompts deep into walking it through a task, step by step. The way you’d brief a new associate who needed to be told which file to open and what to do with it.
His son just types one line. Claude figures the rest out.
The kid’s right. He’s not the only one.
Howard sees this everywhere now. The people moving fastest with these tools aren’t writing better prompts. They’re writing fewer. They’ve stopped managing the model and started trusting it to manage itself.
That shift is happening in the same season the entire AI leaderboard is rearranging itself. Again.
Here’s what Howard covers
Why he runs six AI tools at the same time — and why that strategy still works in April 2026.
How Claude went from “interesting alternative” to a model other models defer to.
What the latest Mensa Norway benchmarks actually show about the top of the field.
Why the real moat at the frontier isn’t talent anymore. It’s compute.
What that means for your bill — and why AI is starting to cost more than the people it was supposed to replace.
He pays for all of them. On purpose.
ChatGPT. Claude. Perplexity. Microsoft Copilot. Limited subs to Gemini and Grok. Plus Grammarly, Granola, and Wispr Flow on the side.
His wife thinks this is insane.
She has a point. Most people would pick one and commit. That’s the rational play if you want to keep your monthly bill down.
But Howard has watched enough technology cycles to know something. The first tool that does something cool is rarely the one that ends up winning. Sometimes it’s the second. Sometimes it’s the fourth. Sometimes it’s the quiet second-place finisher for two years that ships one update and flips the whole category.
In a contested space — and large language models are the most contested category in the history of software — the early leader is a snapshot, not a verdict.
So he subscribes to all of them. Treats them as a panel.
Here’s what that looks like in practice. He starts something in ChatGPT. ChatGPT has been his default for a while — projects start there, end there, get refined there. Then he takes the output and hands it to Perplexity. “Here’s what I built. What would you change?”
Perplexity comes back with a different angle. He takes that angle back to ChatGPT. “Perplexity recommended this. What do you think?”
That’s the loop. It’s not efficient. It’s better than efficient.
Cross-pollination across models is how you land on the answer none of them would have given you alone.
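Howard runs this loop by hand, but the pattern itself is simple enough to sketch. Everything below is illustrative: `ask_model` is a hypothetical stand-in for whatever chat API you would actually call (an OpenAI or Anthropic SDK request, say), not a real client.

```python
# A minimal sketch of the cross-pollination loop, under assumptions.
# ask_model() is a hypothetical placeholder for a real chat API call;
# swap in your own SDK client to make this do actual work.

def ask_model(model: str, prompt: str) -> str:
    # Placeholder: in practice, this would send the prompt to the
    # named model's API and return its text response.
    return f"[{model}'s take on: {prompt[:40]}]"

def cross_pollinate(draft: str, rounds: int = 2) -> str:
    """Bounce a draft between two models, feeding each one's
    critique back to the other, and return the final revision."""
    current = draft
    for _ in range(rounds):
        critique = ask_model(
            "perplexity",
            f"Here's what I built. What would you change?\n\n{current}",
        )
        current = ask_model(
            "chatgpt",
            f"Another model recommended this. Revise accordingly.\n\n{critique}",
        )
    return current

final = cross_pollinate("First draft of the quarterly memo.")
```

The point of the structure, not the stubs: each round hands one model's output to the other as context, which is exactly the "panel" behavior Howard describes.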
Claude is taking more of his cycles
Howard wrote this post because something shifted. Claude is winning more of his work. Not all of it. More.
The answers are sharper. The interface stays out of the way. The integrations are getting deep enough that he can use Claude to actually do work, not just talk about work.
ChatGPT just shipped a new interim release to slow the bleed. That’s not Howard reading tea leaves — that’s the standard move when momentum starts heading the other way.
There’s a tell that’s harder to fake.
When Howard shows Claude something he built in ChatGPT, Claude critiques it. Hard. Expected. Models are trained to prefer their own outputs — every model rates itself higher than the alternatives.
What’s not expected: when he shows ChatGPT something he built in Claude, ChatGPT is impressed.
That’s the inversion: a model rating a competitor’s output above what it produces itself. That doesn’t happen by accident.
The gap at the top is closing fast. And one reason it’s closing is that everyone doing serious work with these tools is running outputs from one model through another. Users are the cross-pollination. The models are learning from each other through us.
The April 2026 leaderboard
Visual Capitalist published the latest cut of the Mensa Norway benchmark, tracked by TrackingAI.
Top of the list as of April 2026:
Grok-4.20 Expert Mode: 145
OpenAI GPT 5.4 Pro (Vision): 145
A tie. Separated by zero points.
A year ago, the top score was 135.
Then look at the spread under the leaders. The top tier is bunched. A handful of models within a few points of the front. The frontier isn’t a single peak anymore — it’s a plateau, and everyone with serious capital is standing on it.
Worth noting: ChatGPT 5.5 dropped this week. It’s not even on the chart yet.
That’s the cadence now. The “smartest model” list is obsolete the week it publishes.
Using a frontier model isn’t a differentiator. It’s the cost of admission. The interesting question isn’t who’s winning the leaderboard. It’s who can afford to keep showing up to it.
The real bottleneck moved
Early AI development was a talent fight. Whoever could hire the best researchers, the best engineers, the best applied scientists — that’s who shipped the next jump.
That’s not the constraint anymore.
Visual Capitalist’s breakdown of AI company costs shows where the money actually goes now. Compute is eating everything.
Anthropic spent almost $7 billion on compute in 2025.
Read that again. Not on people. On chips, power, and data centers.
Talent still matters. Of course it does. But you can hire the best ML researcher on the planet and it doesn’t matter if you can’t afford to train the next model. The bottleneck moved from human capital to physical infrastructure, and the gap between “we can play” and “we can’t” is now denominated in billions, not millions.
This is why the leaderboard is bunched at the top. Only a handful of companies have the capital to even compete. They’re all training on roughly the same scale of compute, with roughly the same data, hiring from roughly the same pool. Of course the scores cluster.
The interesting question for the next two years isn’t who’s smartest. It’s who can afford to keep paying the compute bill.
And then your bill shows up
Here’s where it gets uncomfortable for the rest of us.
When you first start using AI seriously, it feels like leverage. Twenty bucks a month. Maybe forty. You ship more work, faster. The math is laughable in your favor.
Then usage creeps up.
You stop manually copying prompts. You wire up agents. You let Claude run multi-step tasks. You build automations that run all night. The tool stops being a thing you reach for and starts being a teammate that’s always on.
Every token costs.
The meter is always running.
The shift is subtle. AI moves from “free leverage” to “always-on teammate that bills you for every task — and bills more when you ask it to show its work.” That’s not a knock on the technology. That’s what industrial-scale AI usage looks like once you’re in deep.
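The creep is easy to put numbers on. The figures below are illustrative assumptions for the sake of arithmetic, not any provider’s actual rates.

```python
# Illustrative cost creep: a flat subscription vs. metered API usage.
# PRICE_PER_MILLION_TOKENS and the token volumes are assumptions,
# not any provider's real pricing.

SUBSCRIPTION_PER_MONTH = 20.00      # flat chat-app plan (assumed)
PRICE_PER_MILLION_TOKENS = 15.00    # assumed blended API rate

def monthly_api_cost(tokens_per_day: float, days: int = 30) -> float:
    """Metered cost for a month of usage at a steady daily token volume."""
    return tokens_per_day * days / 1_000_000 * PRICE_PER_MILLION_TOKENS

casual = monthly_api_cost(50_000)      # a few chats a day
agents = monthly_api_cost(5_000_000)   # agents and overnight automations

print(f"casual: ${casual:.2f}/mo")     # 22.50 -- roughly the subscription
print(f"agents: ${agents:.2f}/mo")     # 2250.00 -- a payroll line, not a tool
```

Same tool, same rate: the only variable that moved is volume, and it moved by two orders of magnitude once the tool became an always-on teammate.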
Axios reported last week that for some categories of work, AI now costs more than the human workers it was supposed to replace.
That headline is going to land for a lot of operators this quarter.
It’s tempting to think of AI as a pure efficiency gain — a thing that just improves margins. In practice, it’s both sides of the equation. It produces more output. It also adds cost. The labs building these tools have known this for years. The companies using them are starting to feel it now.
The real question
Howard is fully committed to AI. He’s running six tools. He’s pushing further into the deep end every month.
And the deeper you go, the more important it gets to stop and ask whether the activity is actually pointed at something.
Running prompts isn’t progress.
Subscribing to every tool isn’t a strategy.
Doing more, faster, in the wrong direction is just expensive momentum.
The leaderboard will keep churning. The compute bills will keep climbing. ChatGPT 5.6 will ship next month and 5.7 the month after.
The question isn’t whether you’re keeping up.
The question is whether you know what you’re keeping up for.
That’s Howard’s piece. Read the original here. And while you’re there, subscribe to his newsletter — it’s excellent.


