Vibe Coding is Magic: Smoke and Mirrors

5 min read

When AI spits out code that works, it feels like magic. Just like magic in reality, it's all sleight of hand and illusion.

Anyone Can Make Software!

Like many software experts in 2025, I was asked to review a project spat out by Lovable, a “well regarded” AI app-building product. It looked real, and it looked fancy enough. But it was trying to implement server-side code inside a pure React app. The tool never had the capability to build the product it was asked to build; it was physically impossible, and it took three minutes as an engineer to understand this.
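To make that concrete, here’s a minimal sketch (my own illustration, not Lovable’s actual output) of the kind of code a client-only React app can never run honestly: the Postgres driver needs Node’s sockets, and any credentials bundled into the page are visible to every visitor.

```tsx
// Hypothetical example: a "server-side" feature dropped into a pure React app.
// Two reasons this can never work as shipped:
//   1. pg depends on Node's net/tls sockets, which don't exist in a browser bundle.
//   2. The connection string is compiled into public JavaScript, so the "secret"
//      is visible to anyone who opens DevTools.
import { useEffect, useState } from "react";
import { Client } from "pg"; // Node-only Postgres driver

export function Orders() {
  const [rows, setRows] = useState<unknown[]>([]);

  useEffect(() => {
    const db = new Client({
      connectionString: "postgres://app:not-actually-secret@db.internal/prod",
    });
    db.connect()
      .then(() => db.query("SELECT * FROM orders"))
      .then((res) => setRows(res.rows));
  }, []);

  return <pre>{JSON.stringify(rows, null, 2)}</pre>;
}
```

The only real fix is an actual backend (an API route or server) that the React client calls over HTTP, which is exactly the capability the generated project didn’t have.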

This experience isn’t just bad luck; it’s how it usually goes. Anyone can make software by learning how to translate English into code. But as I’ve learned over the last two decades building software, translating English into code is harder than it sounds.

Cursor and Copilot Suck at Coding

It feels like sorcery when you spin up Copilot or Cursor and ask it to start building. It’s like someone has eased open a window, letting us peek into a new modality of human-machine communication. That’s our job as software engineers: talking with computers in the form of code. If we could skip that translation layer and just interact in English, that’s the Star Trek future, right?

Unfortunately, these tools are bad.

  1. They are expensive, even when they are massively subsidized. At some point, someone needs to pay the real cost for these tools, and it’ll be vastly more expensive than today.
  2. They are expensive. I’m repeating this because people really don’t understand how massively subsidized these tools are; even so, people sometimes get $1,000 bills when their “coding agents” go off the rails.
  3. They suck at their job. The smartphone is a revolutionary tool; there are lots of things I simply cannot do without it. When I use AI to build anything complex, I usually think “Wow… it couldn’t have done this without me.”
  4. English is better, because this is how freakin’ LLMs work. The engineers who use LLMs best are those who use English carefully. Sure, it’d be nice if every request “understood the context of your code,” but that’s not how this works.

Tech Nerds Suck at English? Why?

LLMs are a revolution, but the idea that you can just shove all your code into a RAG or MCP and the AI will then “understand” it is wrong. That’s a brittle, expensive, silly way to do it. You don’t want the AI to have a holistic, program-wide view of everything, because it’s bad at it.

There are a hundred AI startups that will fail because they do not believe that software engineering as a discipline is subjective. They don’t acknowledge that each program is basically a way to translate commands to a computer. It’s language.

What’s my point? A few sentences of English are usually more effective than a massive heap of code shoved into the context window. Language is powerful, and it’s stupid not to leverage the LLM’s native modality of thinking. This is why “integrated” IDEs like Cursor and Copilot won’t work the way some people like to imagine (writing vast swaths of code and understanding everything you’ve written).

LLMs are language machines, and they work best when you speak in English because that’s most of what they have learned.
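As a minimal sketch of what I mean (the SDK call, model name, and the little helper function below are all illustrative assumptions, not anyone’s real project), compare this with dumping a whole repository into the context window: a couple of careful English sentences plus only the code that matters.

```typescript
// Sketch: a small, carefully worded prompt with only the relevant snippet,
// instead of shoving the entire codebase at the model.
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

// The one function the question is actually about (hypothetical helper).
const relevantSnippet = `
export function retryFetch(url: string, attempts = 3): Promise<Response> {
  return fetch(url).catch(() =>
    attempts > 1 ? retryFetch(url, attempts - 1) : Promise.reject(new Error("exhausted"))
  );
}
`;

const prompt = [
  "You are reviewing a single TypeScript helper from a React app.",
  "Add exponential backoff between retries without changing the signature.",
  "Explain the change in two sentences before showing the code.",
  "",
  relevantSnippet,
].join("\n");

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumption: any capable chat model works here
    messages: [{ role: "user", content: prompt }],
  });
  console.log(response.choices[0].message.content);
}

main();
```

The English carries the intent; the snippet carries just enough context. That’s the whole trick.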

Vibe Coding is Slow

Vibe coding just makes developers think they are being fast; it’s actually slow. The more I use AI tools, the less impressed I am, and that’s how most people seem to feel.

LLMs are great when you need to translate a block of tokens into code, brainstorm about new technology and techniques, or query something really specific. They’re great for bootstrapping (even shoving a screenshot into a web interface to get a starting point), but they’re bad at making big choices about structure or strategy. That makes them bad engineers.

All of these tasks are easier when you offer a few lines of English as a prompt. The idea that you can just install Cursor and build anything doesn’t work as well, because it can make you lazy. When you have no choice but to carefully curate your prompts through a web interface, you tend to get better results, because you must provide good context in English.

It’s cheaper and more effective. Just as good conventions like MVC help structure a project, ditching vibe-coding IDEs and favoring English as the most powerful lever for augmenting LLM results is (in my opinion) going to maximize the effectiveness of the tool.

And it’s Better for Society

Wasting vast numbers of tokens just to have the AI not understand your code is a massively terrible use of energy.

Run a local LLM and you’ll see first-hand that the energy-hungry nature of LLMs is not some exaggeration. You can physically feel the heat coming off your GPU, even for modestly sized models. This underscores (again) that no subscription AI IDE tool is worth its true, retail cost.
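If you want to feel it for yourself, here’s a small sketch assuming Ollama is running locally with a model already pulled (the model name is just an example): send one request to its local API and watch your GPU temperature while it generates.

```typescript
// Sketch: one generation request against a locally running Ollama server
// (default port 11434). Requires Node 18+ for the built-in fetch.
async function main() {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",   // assumption: whatever model you've pulled locally
      prompt: "Explain why client-only web apps can't keep database credentials secret.",
      stream: false,     // return one JSON object instead of a token stream
    }),
  });

  const data = await res.json();
  console.log(data.response); // the generated text
}

main();
```

One modest prompt is enough to spin the fans up; now multiply that by the constant completions an “integrated” IDE fires off as you type.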

No, it won’t get Better

Unlike procedural code, which does (generally) get better over time, it is not proven that the flaws in AI’s abilities can ever be fixed. Just as the leap from primitive neural nets to transformer-based architectures took decades, the next innovation could be another five or six decades away.

The best science can’t produce an arm that’s superior to a human arm, and we’ve been trying for a long time. The brain is infinitely more complex than an arm, but you’re expected to believe that Sam Altman can make a brain that is superior to a human brain? At face value, it makes very little sense and so far there’s no evidence that this leap is achievable.

Modern deep learning has unlocked something magical, but magic isn’t real. It’s smoke and mirrors, not substance.