Context is like a fish's memory. A goldfish holds a little; a salmon holds more. The training data is the pond, the lake, or the ocean.
Yes, making the fish bigger might help it find a single drop in that ocean.
But most of the time, you don't need a bigger fish.
You need the goldfish in a bowl that contains the right answer.
We need to slow down.
Right now the dominant instinct in AI is scale: bigger models, bigger datasets, bigger "world models," bigger context windows. Just keep feeding it more water and eventually it will understand everything.
That sounds impressive. It also misses the point.
If GPT-3 was a goldfish and GPT-4 was a suckerfish, we're probably somewhere around a salmon now. Maybe a tuna. Definitely not a dolphin. Generously, an octopus.
We are not at mammalian intelligence. We are not simulating reality. We are not building The Matrix.
We are building extremely sophisticated pattern predictors.
Parrots and crows can repeat things. That doesn't make them economists.
The "world model" fantasy assumes that if we just gather enough data — satellite imagery, cars driving around filming everything, every public text ever written — we can simulate the whole planet.
But that scale points in the wrong direction.
You can map every street from space. You can train on millions of miles of driving data like Waymo does. That's great for self-driving.
You cannot and will not get the data sitting inside the accounting system of a mid-sized manufacturing firm. You won't get the contracts. The warranty records. The private CRM notes.
And you shouldn't need to.
What does the average order history at McDonald's have to do with a robotaxi?
Nothing.
The real opportunity isn't "simulate the universe."
It's constraint.
Domain-based training. Constrained cognition. Smart retrieval.
Most business value won't come from a trillion-parameter generalist trying to reason about everything. It will come from smaller systems pointed at the right slice of private data — the data that will never be in a public training set.
And here's the uncomfortable part:
The bottleneck isn't the model.
It's the humans.
There are thousands of business users being told, "Use AI." They log into a chatbot and stare at a blinking cursor. They don't know what to ask. They don't know how to structure data. They don't know how to break problems into queries.
Meanwhile, companies are spending bajillions chasing bigger foundation models at places like OpenAI, Google DeepMind, and Anthropic.
The returns aren't showing up in the average operations department.
Because intelligence without constraint is noise.
Here's what would actually move the needle:
Teach non-technical employees how to:
- Break private datasets into composable chunks
- Summarize intelligently
- Build domain-specific AI "skills"
- Query instead of chat
- Constrain context on purpose
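To make the list above concrete, here is a deliberately tiny sketch of what "break into chunks, query instead of chat, constrain context on purpose" can look like in code. Everything in it is hypothetical illustration: the data is invented, and the word-overlap scoring is a crude stand-in for whatever retrieval method (embeddings, BM25) a real system would use.

```python
import re
from collections import Counter


def chunk(text: str, max_words: int = 80) -> list[str]:
    """Split a private document into small, composable chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


def score(query: str, chunk_text: str) -> int:
    """Crude relevance score: count overlapping words (a stand-in for embeddings)."""
    q = Counter(re.findall(r"\w+", query.lower()))
    c = Counter(re.findall(r"\w+", chunk_text.lower()))
    return sum((q & c).values())


def build_context(query: str, documents: list[str], top_k: int = 2) -> str:
    """Return only the top-k most relevant chunks -- the fishbowl, not the ocean."""
    chunks = [c for doc in documents for c in chunk(doc)]
    ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)
    return "\n---\n".join(ranked[:top_k])


# Invented examples of private data that will never be in a public training set:
docs = [
    "Warranty claim 1142: gearbox replaced under contract C-88, cost 4200 USD.",
    "CRM note: customer Acme prefers quarterly invoicing, net-60 terms.",
]
print(build_context("what were the gearbox warranty costs?", docs, top_k=1))
```

The point isn't the scoring function; it's the shape of the workflow. The model only ever sees the one chunk that matters, not the whole dataset.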
That's not a bigger ocean.
That's a fishbowl with clean water.
The future of business AI probably isn't a godlike world simulator. It's disciplined retrieval, summarization pipelines, and narrowly scoped reasoning over proprietary data.
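A "summarization pipeline" in this sense is just a disciplined map-reduce: summarize each private document, then summarize the summaries. The sketch below uses naive truncation as a stub where a real system would call a model; the structure, not the stub, is the point.

```python
def summarize(text: str, max_words: int = 25) -> str:
    """Stub standing in for an LLM call -- here, naive truncation."""
    words = text.split()
    return " ".join(words[:max_words])


def summarization_pipeline(documents: list[str]) -> str:
    """Map-reduce summarization: condense each document, then condense the results."""
    partials = [summarize(doc) for doc in documents]
    return summarize(" ".join(partials), max_words=50)
```

Swap the stub for a real model call and the pipeline stays the same: narrow inputs, bounded outputs, no ocean required.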
That's not flashy.
It works.
Until then, I'll be keeping my goldfish in a bowl where it can find the food pellets without having to search the entire ocean.
And I suspect that's where most real ROI will come from.