Right now a $20/month plan gets you around $200 worth of API tokens. A $200 Claude plan reportedly nets you closer to $2,000. Free Gemini trials have image generation so cheap it basically rounds to nothing. This is the blitz-scale phase: burn money, grab users, sort out margin later. Everybody in the industry knows it. Almost nobody building on top of it is asking the obvious question.
If tokens cost ten times more, is your project still worth releasing?
The 10x test
Before you ship whatever slopsmithing session just wrapped up, run it through the 10x test: if your maintenance token bill were multiplied by ten, would you still care? Would the numbers still work?
For a lot of things, yes. I've been building website frontends with AI help, and that work passes easily. Tasks that used to take days now take hours, and a 10x price increase still leaves you well ahead.
But I've also been messing around with vibe-coded games. Those are not passing the 10x test. Some of them aren't passing the 1x test. They cost roughly $200 each to make, and I'm not sure any of them were worth it. Cheap tokens made it easy to rationalize. That's the trap.
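The test itself is just arithmetic. Here's a minimal sketch; the function name and the dollar figures are hypothetical stand-ins for the frontend and game examples above, not numbers from any real billing API:

```python
def passes_10x_test(value_delivered: float, token_cost: float,
                    multiplier: float = 10.0) -> bool:
    """Return True if the project still clears its token bill
    after the assumed price multiplier is applied."""
    return value_delivered > token_cost * multiplier

# Hypothetical frontend project: ~$3,000 of labor saved, ~$50 in tokens.
print(passes_10x_test(3000, 50))   # still worth it at $500

# Hypothetical vibe-coded game: ~$20 of value, ~$200 in tokens.
print(passes_10x_test(20, 200))    # fails; it didn't pass at 1x either
```

The point isn't the function, it's the habit: plug in your real numbers before the subsidy does it for you.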
Smart startups are cranking, and they should be
None of this means stop building. The cheap-code sausage machine is running and you should be feeding it raw material. The arbitrage is real. You can ship things today that would have needed a full team two years ago.
The smart play is to use the window to establish yourself, not to build a business that only works because the tokens are subsidized. Keep cranking. Just keep one eye on what the unit economics look like when the gravy train reprices itself. It will.
Scaling and the people arguing about it
There's a bigger argument under all of this. The labs have started quietly admitting that scaling is hitting diminishing returns. Gary Marcus has been saying for a long time that hallucinations aren't a bug to be engineered away: they're structural. I think he's been more right than his critics wanted to admit. Throwing more tokens at a problem can paper over a lot of things, but it doesn't fix the underlying architecture.
Ed Zitron has been making the economic case loudly: the numbers don't add up, the people saying otherwise are the ones selling you the compute. I'm not fully in his camp. I've seen enough real value to believe these tools earn their keep in the right contexts. But I'm not in the "this scales forever at the same price" camp either. I'm still not sure, and that uncertainty is itself a reason to be careful about what you build dependencies on.
The case for efficient, smaller models gets stronger every month. A model that costs a tenth as much and does the job reliably is more interesting than one that does everything brilliantly at a price that requires VC oxygen to survive.
The wheel we keep reinventing
Here's something worth sitting with. When tokens get genuinely expensive, doesn't it make a lot more sense to build on top of stacks that already solved the hard parts? WordPress themes and plugins exist because someone already did authentication, routing, content management, all of it. You're not spending tokens re-solving those problems. You're spending them on the actual thing you're trying to build.
Right now we're using cheap tokens to approximate solutions to problems that already have good answers. The code that comes out is less battle-tested, often less secure, and in a lot of cases just a blurrier version of something that already existed. An AI scalpel working inside a secure, established system is a much stronger story than an AI that built the system from scratch and probably got the security model mostly right.
tl;dr: run the 10x test before you ship. The things that fail it probably belong back in the "not worth it" jar, where most ideas live anyway.