Shooting Yourself in the Claude
I burned my whole AI context window on a single vibe-coding ask before a production incident. Out of context when it mattered: pay for more or wait hours for a reset. A new kind of footgun.
Everybody is shipping code now. Your neighbor, your postman, even the dog. But shipping code and releasing tested, reliable products are not the same thing. We're in the slop era and the food poisoning is just getting started.
The 10x test was only part of the story. Peak hours burn your session faster than clock time, vendors are trimming expensive features, and the cheap-token era is turning into a scheduling problem with a subscription attached.
Agents improvise when steps fail instead of asking. Skills need heavy testing. Recording demos is awkward without presenter mode, and a shared agent sandbox plus eager npm installs is a sketchy combo.
We're in the cheap-token phase, and most builders aren't asking whether their projects would survive a 10x price increase. Some will. A lot won't. Run the 10x test before you ship.
Publish versioned skills on dsoul so the model reads what you intended. Real inventory, showcase skills for APIs, and a path through decision paralysis instead of pretending the model sees all of commerce.
Vibe coding makes prototyping cheap, but it pushes the bill into maintenance. The real question isn't whether you should build it; it's whether you can support it if it works.
Blindly running install.sh or dropping in a skill.md from a URL is more dangerous than it looks. The version the community vetted might not be the version you got. Share by CID, not by URL.
A skill built around real inventory can guide you to a product that actually exists. That's more useful than a well-read guess from training data.
Skills are npm packages for the LLM. You don't write code to do things anymore. You program the model with domain expertise and let it work within the box you've built. The admin panel becomes a chat.
Generative AI leaves obvious gaps: transparent backgrounds that are really checkerboards, single-purpose sites that make ad money fixing them, and the question of who should fill in the holes.
Everyone feels like an imposter sometimes. It's hard to be confident when the world changes this fast. But AI has gotten very good at sounding confident too, and it doesn't have the same doubts you do.
Curated agent skills backed by real scripts could replace a whole ecosystem of one-off AI tools. The problem is that most office workers are still waiting for someone to make it feel like an app.
Are you stacking everything in one chat? Making new chats is probably the most important thing you can do to manage context while editing.
How to spot when AI is running you in circles—and why a quick search often beats two hours of kernel edits and recycled advice.
LLM vibe-coding apps are mostly VSCode forks. They share Skill.md, so you can write skills for a wide audience—and non-programmers can direct the LLM from the IDE.
Building a Cursor skill that calls Gemini to generate crystalline hero images for posts and wires them into the HTML and meta tags.
Constraint over scale: why business AI needs smaller systems, smarter bowls, and humans who know how to query.
Where programmers add value in the LLM skill ecosystem: marshaling AI with well-tested skills, vibe coding, and talking to your tools.
Bouncing between ChatGPT, Cursor, TypingMind, and Google Antigravity for perspective—and why a gang of $20 plans beats one $200 plan.
Musings on LLMs, agent skills, MCP servers, and how they're speedrunning early computing—batch systems, tools, and the next layer where reliability improves.
Knowing when to stop honing the thing and move on because it's sharp enough to do the job is a trick. Grit levels, and when to jump ship.
How to run Google's Antigravity in a Hyper-V VM so it can't cause havoc. Fine-grained git tokens and snapshots.
I asked three AI assistants to sum up our work and their take on working with me. Here's what they said.