Vibe Coding Is a Trap: Why Spotify's 'Zero Code' Engineers Are Building a Time Bomb

Spotify’s CEO Gustav Söderström made a boast this week that should terrify every technical leader in Silicon Valley. During the company’s Q4 earnings call, he told investors: “When I speak to my most senior engineers — the best developers we have — they actually say that they haven’t written a single line of code since December. They actually only generate code and supervise it.”

He framed this as a victory. A sign of progress. Evidence that Spotify is “hell-bent on leading the change” in AI-driven software development.

He’s wrong. What he’s describing isn’t evolution—it’s expertise erosion disguised as efficiency. And the bill is coming due.

The Vibe Coding Delusion

Let’s be clear about what’s happening here. “Vibe coding”—the practice of generating code through AI prompts and then “supervising” the output—has become the industry’s latest productivity theater. It looks like work. It produces artifacts that resemble software. But it’s fundamentally different from the craft of engineering.

The distinction matters because it determines whether your team is building capability or accumulating what MIT researchers now call cognitive debt: the cost to your own thinking of relying on AI systems without understanding what they're doing.

Simon Willison put it bluntly last week: “I’ve been experimenting with prompting entire new features into existence without reviewing their implementations and, while it works surprisingly well, I’ve found myself getting lost in my own projects.”

That’s not a bug. It’s the feature. And it’s catastrophic.

The Assembly Line Problem

Here’s what Söderström didn’t mention in his earnings call: some engineers are calling this new reality “AI fatigue.” Not because they dislike AI, but because reviewing AI-generated code at scale is becoming unsustainable.

Software engineer Siddhant Khare published a viral essay this week describing the experience: “Every time it feels like you are a judge at an assembly line and that assembly line is never-ending, you just keep stamping those PRs.”

Think about what this means. Your senior engineers—the people who should be architecting systems, mentoring juniors, and making strategic technical decisions—are now reduced to rubber-stamping AI output. They’re not coding. They’re not even really reviewing. They’re performing a bureaucratic function.

This isn’t productivity. It’s the industrialization of engineering judgment.

The Cognitive Debt Reckoning

Cognitive debt isn’t abstract. It compounds. And when it comes due, it does so catastrophically.

Consider what happens when your entire senior team has spent six months only “supervising” AI-generated code:

  1. They lose the ability to reason about systems from first principles. When you haven’t written code in months, you forget how the pieces actually fit together. You start trusting the AI’s mental model instead of building your own.

  2. Debugging becomes impossible. AI is great at generating happy-path code. It’s terrible at understanding why something broke in production at 3 AM on a Sunday. When the AI’s output fails—and it will—your engineers won’t have the foundational knowledge to fix it.

  3. Architecture decays. Good architecture emerges from understanding constraints. When you’re prompting features into existence, you’re not learning the constraints—you’re hoping the AI guessed them correctly.

  4. Knowledge transfer collapses. How do you mentor a junior engineer when you haven’t written code in six months? How do you conduct a meaningful code review when you’re not confident you could have written the code yourself?

The Spotify CEO said: “The things you build now may be useless in a month.” He’s right. But not for the reason he thinks. They’ll be useless because nobody on his team will understand how they work.

The Historical Parallel

This isn’t the first time an industry has confused automation with progress. In the 1980s and 1990s, financial firms began replacing human traders with algorithmic systems. The efficiency gains were real. The risk modeling was sophisticated. Everyone felt smarter.

Then came the 2008 financial crisis.

The problem wasn’t the algorithms. It was that nobody understood what the algorithms were actually doing anymore. The cognitive debt had compounded for so long that when the system started failing, the humans in the room couldn’t intervene. They’d lost the expertise to diagnose the problem, let alone fix it.

Software engineering is heading down the same path. The difference is that software fails more often than financial markets—and the feedback loop is tighter.

The Contrarian Take: Write Less Code, Not More

Here’s where I diverge from the typical AI-skeptic position. I’m not arguing that engineers should reject AI tools. I’m arguing that they should use them differently.

The goal shouldn’t be to write zero lines of code. The goal should be to write fewer lines of better code—and to understand every line you ship.

Benjamin Breen, a historian and writer, described his own struggle with this in a Substack essay this week. He’s been using Claude Code to build historical simulation tools, and he’s honest about the addiction: “It has felt more productive to produce digital humanities projects and historical educational games like Apothecary Simulator than to put words on a page. As I write that now, I feel some sense of shame, as if I am confessing to having become addicted to junk food or gambling.”

He continues: “I miss the obsessive flow you get from deep immersion in writing a book. Such work has none of the dopamine spiking, slot machine-like addictiveness of Claude Code—the rapid progress of typing two sentences into a terminal window, watching Opus 4.6 build a new feature over the course of ten minutes, and then seeing it come to life on a screen.”

That dopamine hit is the trap. It feels like productivity. It isn’t.

What Technical Leaders Should Do

If you’re a CTO, VP of Engineering, or technical founder, here’s your action list:

1. Mandate “hands-on” time. Require your senior engineers to write code regularly—not to prove they can, but to maintain their expertise. A good rule: at least 30% of their time should involve writing or significantly modifying code, not just reviewing AI output.

2. Audit for cognitive debt. Ask your engineers: “Could you rewrite this feature from scratch without AI?” If the answer is no, you have a problem. The feature might work, but your team doesn’t own it.

3. Redefine productivity metrics. Stop measuring velocity in terms of features shipped. Start measuring it in terms of system understanding, incident resolution time, and architectural coherence. AI can help you ship faster. It can’t help you build better.

4. Invest in foundational skills. The engineers who will thrive in the AI era aren’t the ones who prompt best. They’re the ones who understand systems deeply enough to know when the AI is wrong. That requires traditional engineering skills—algorithms, data structures, system design—not AI-wrangling.

5. Create “no-AI” zones. Certain parts of your codebase should be off-limits to AI generation: core infrastructure, security-critical code, performance-sensitive paths. These are the areas where understanding matters most.
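As a concrete illustration of that last point, here is a minimal sketch of one way a no-AI zone could be enforced in CI. Everything in it is an assumption for the sake of illustration: the protected path prefixes, the team convention of marking AI-assisted commits with an "AI-Generated: true" git trailer, and the script itself. It is not an existing tool, just one possible shape of the check.

```python
#!/usr/bin/env python3
"""Illustrative CI guard for "no-AI zones" (a sketch, not an existing tool).

Assumes a team convention of marking AI-assisted commits with an
"AI-Generated: true" git trailer; the protected paths are placeholders.
"""
import subprocess
import sys

# Hypothetical no-AI zones -- adjust to your own repository layout.
PROTECTED_PREFIXES = (
    "core/",      # core infrastructure
    "auth/",      # security-critical code
    "hotpaths/",  # performance-sensitive paths
)

BASE_REF = "origin/main"


def git_lines(*args: str) -> list[str]:
    """Run a git command and return its non-empty output lines."""
    out = subprocess.run(
        ["git", *args], check=True, capture_output=True, text=True
    )
    return [line for line in out.stdout.splitlines() if line.strip()]


def main() -> int:
    # Files this branch touches, relative to the merge base with BASE_REF.
    changed = git_lines("diff", "--name-only", f"{BASE_REF}...HEAD")
    protected = [f for f in changed if f.startswith(PROTECTED_PREFIXES)]

    # Does any commit on the branch carry the assumed AI trailer?
    trailers = git_lines(
        "log", f"{BASE_REF}..HEAD",
        "--format=%(trailers:key=AI-Generated,valueonly)",
    )
    ai_flagged = any(v.strip().lower() == "true" for v in trailers)

    if protected and ai_flagged:
        print("AI-flagged change touches protected paths:")
        for path in protected:
            print(f"  {path}")
        print("These zones require hand-written, fully reviewed changes.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

However you wire something like this into your pre-merge checks, the design point is the same: a no-AI zone only holds if something other than goodwill enforces it.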

The Infrastructure Crunch Nobody’s Talking About

While we’re on the topic of AI-driven software development, let’s address the elephant in the room: the infrastructure required to run these AI coding assistants is exploding.

Western Digital’s CEO announced this week that the company has “pretty much sold out for calendar year 2026.” Enterprise customers—especially AI data centers—have gobbled up all available capacity. Consumer sales now account for just 5% of revenue. Hard drive prices have surged 46% since September.

This isn’t a supply chain hiccup. It’s a structural shift. The AI boom requires physical infrastructure, and that infrastructure is finite. Every line of AI-generated code requires compute to generate, storage to persist, and energy to run.

Spotify’s engineers might not be writing code anymore, but they are generating enormous amounts of it. That code has to live somewhere. And the somewhere is running out.

The Meta Lobbying Machine

While technical leaders debate the merits of vibe coding, Meta is spending $65 million to ensure the regulatory environment stays favorable for AI expansion. The company announced this week that it’s funding two new super PACs: “Forge the Future Project” (Republican-focused) and “Making Our Tomorrow” (Democrat-focused).

The goal? To back politicians friendly to AI and push back against legislation that could limit AI business growth.

This matters because the vibe coding trend isn’t organic. It’s being accelerated by venture capital, enabled by infrastructure spending, and protected by political lobbying. The narrative that “AI makes engineers more productive” serves specific economic interests.

Follow the money. Meta isn’t spending $65 million to protect your ability to write better software. They’re spending it to protect their ability to sell AI APIs.

The DoD Warning Shot

Here’s a signal that should give every AI-dependent company pause: the U.S. Department of Defense is considering designating Anthropic as a “supply chain risk.”

If this designation happens, anyone who wants to do business with the U.S. military must cut ties with Anthropic. The two sides have been negotiating for months over how the military can use Anthropic’s AI tools.

Think about what this means. A company that builds AI coding assistants—tools that are now integral to software development workflows—is being evaluated as a national security risk.

Your engineering team’s productivity is now dependent on a vendor that might be cut off for national security reasons. That’s not a technical debt problem. That’s a strategic vulnerability.

The Real Question

The question isn’t whether AI should be used in software development. It should. The question is: what kind of engineering organization do you want to build?

Do you want a team of prompt-supervisors who can generate features quickly but can’t debug them when they break?

Or do you want a team of engineers who use AI as a tool—but maintain deep, foundational expertise in the systems they build?

Spotify has chosen the first path. They’ve made their bet. They’re all in on vibe coding.

I’m betting against them.

The Call

Here’s your challenge, technical leader:

For the next two weeks, require your senior engineers to write code without AI assistance. Not all the time. Just 20% of their week. Watch what happens.

You’ll see frustration. You’ll see slower output. You’ll also see something else: engineers re-engaging with the craft of building software. They’ll remember how to think about problems from first principles. They’ll rediscover the satisfaction of solving a hard problem with their own mind.

And when they go back to using AI tools, they’ll use them differently. Not as a replacement for thinking, but as an amplifier for expertise they actually possess.

That’s the difference between cognitive debt and cognitive leverage. One compounds against you. The other compounds in your favor.

Choose wisely. Your engineering organization’s future depends on it.

