The vibe coder’s career path is doomed

[Illustration: a cheerful programmer sits on a red stool, typing on a slot machine–shaped computer labeled “CLAUDE” with a big red lever, surrounded by speech bubbles that all say “You’re absolutely right.”]

Let me get one thing out of the way immediately: LLMs are helpful. This isn't about whether LLMs can write code. They can. It's about why vibe coding might be the worst career investment you'll make.

I started noticing the shift when developer conversations changed completely. Now it's all about getting Claude to write code for you. Or the holy grail: getting AI to do everything without your intervention.

Until recently, I'd been mostly ignoring the hype. I'd read headlines, occasionally ask Claude or ChatGPT to help me debug something, but not much else. Time to learn vibe coding!

“What are you vibe coding?”

A Telegram bot, plus some dashboards with real-time updates. A greenfield project: a standard REST API with a React frontend. Nothing too complex, but not trivial either.

I set up the full AI coding workflow: Claude, Playwright and Postgres MCPs, multiple agents working on different branches and comprehensive documentation files. Then I started vibe coding.

Claude was updating schemas, writing endpoints, clicking buttons in Chrome, checking Postgres data and opening pull requests. It worked. My first reaction:

"Holy shit! This is crazy!"

There's clearly going to be a gold rush. I don’t have to spend time writing code anymore. I need more agents, more automation. The factory must grow! I now have an army of junior devs available 24/7.

I was easily adding 2-3 features every day. The barrier between thinking and implementing just disappeared. It was so satisfying.

As the project got more complex, things changed. Claude kept repeating the same mistakes, getting stuck in loops. The context switching became brutal.
I went from 4-5 parallel branches down to 2, sometimes just one. I couldn't simply ask for features anymore. I had to stop to think things through.

In the end, I'm still limited by mental energy. Context switching between multiple AI-generated branches only works for small tasks. For complex problems, I still had to think through the solution myself. Claude was just doing the typing for me.

I spent more time testing and writing instruction files for Claude than I've ever spent on any project of this size. I've worked with junior devs straight out of bootcamp and none required this level of hand-holding.

Anyway, I shipped it to my 3 test users and everything caught fire. Messages wouldn't sync, users got assigned to the wrong accounts. I found myself begging Claude to fix bug after bug. How did I get myself into this? This sucked. It was just chaos.

The last time this happened to me was when I worked with an offshore team. Nobody really cared about the code quality and everyone was solely focused on shipping fast. I had too many PRs to review, opened by 5 different people who didn’t really know or care about what they were doing. I only had a surface-level understanding of what was going on, turning into some kind of orchestrator who… wait. This sounds familiar.

Is this the future of software engineering? Was I missing something? Why would anyone want to invest in this?

“Early adopters will have an advantage”

Vibe coding skills aren't particularly hard to acquire. I went from knowing nothing to being competent in a few weeks. Even if it became the industry standard, anyone could get up to speed pretty quickly.
LLMs aren't a new abstraction layer; they're just a different interface paradigm. You're trading syntax for natural language and determinism for uncertainty.

Meanwhile, whatever I learned about vibe coding is already obsolete. I checked Hacker News this morning. Companies are shipping products that automate away the exact workflows I just mastered. There's no first-mover advantage when the entire playing field gets bulldozed.

There's no lasting competitive advantage. No deep technical skills to master.

The vibe coding barrier to entry is collapsing so fast that “early adopters” are just beta testers. You're subsidising R&D for tools that will commoditise your skills.

“It’s all about knowing how to prompt”

My prompting approach? I switch to plan mode and describe what I want. Then I keep replying with "If anything is ambiguous or unclear, please ask for clarification" until I'm satisfied. That's it. It works.

Compare that to learning something like Rust, which I've been struggling with for months now. It's not just syntax; it's completely new concepts like ownership and the borrow checker. That's something you can't just pick up.

Prompting is not a sophisticated skill requiring extensive training.

People spend thousands of hours mastering how to write code. They learn how to design data schemas that can adapt to new requirements and how to structure systems where bugs are easy to hunt down and fix. Prompting doesn't come close.

“I don’t care, it makes me 10x faster”

Faster at what? Prototyping? Boilerplate? That's very short-lived. The vast majority of software engineers work on production systems, not greenfield projects.

What LLMs are really good at is writing code very fast. Imagine you have two novelists. One types 50 wpm, the other 200 wpm. Does the fast typist finish 4x sooner? No. Because they both spend most of their time on the plot, the characters and creating a coherent story.

Have you ever worked on a project where nothing moves forward? Everything's just slow. The app is slow. Adding features is slow. Fixing bugs takes forever. Did you think "this is because devs can't write code fast enough"? Or was it wrong architecture, wrong culture, broken communication, unclear requirements, poor technology choices?

At the very least, the assumption that AI makes development dramatically faster deserves scrutiny.
The testing burden alone destroys many of the gains: you need significantly more tests to make sure nothing breaks.
It shifts the effort of building software from writing code to building safeguards and switching contexts.

It's a different way to build software, which comes with its own trade-offs.

“It makes my job easier”

Vibe coding trades clarity for velocity.

You ship fast but lose your mental map. It's a delicate balance. During my experiment, I watched myself developing a resistance to changing code manually. It was easier to tell the LLM "It doesn't work" and paste a stack trace. I found myself asking for tiny changes like "Now make it blue".

Why? Because I lost track of where things were and what they did. I couldn't even remember which file that button lived in. Yes, of course I reviewed the PRs. Do you know how hard it is to properly review code? To build a mental model of what's going on? Now there are a dozen PRs in your queue.
Are you really reviewing all of them or just clicking approve and hoping for the best?

At some point, I hit a wall. Claude couldn't fix a bug no matter how many times I begged, so I was forced to jump in. And fuck me, this was hard work. Thinking is hard work and I'd been avoiding it for a while. Like trying to run a marathon after months on the couch. It took me so long to get up to speed that I lost all my productivity gains.

As I'm writing this, I just received another bug report. I have zero idea why it happens or where to start.

So much for making my job easier.

“So you don’t use LLMs then?”

After reading this, you might think I'm a die-hard AI hater. I'm not.

AI helped me write this. English is my second language and my writing skills aren't that great. I used it to clean up grammar, improve sentence flow and make my ideas clearer. But I'm not a professional writer and I'm not claiming to be one. This is just a blog post, not an essay or a book.

I also use AI for coding. Shocking, I know.

But that's nothing like vibe coding. I don't mind using Claude Code on a VERY short leash, with a specific purpose, and I understand it costs me more than just tokens.
I don't set it loose like a Roomba and walk away, hoping I won't find it stuck eating a shoelace when I return.

I’m not even against vibe coding itself. Sometimes you just have to cut corners and get shit done. Perfect can wait because you need the feature yesterday. Tech debt is a tool and it’s totally reasonable to use it. I've watched too many products slowly die while developers polished code that no one ever used. But full-time vibe coding? Cut me some slack.

The idea of autonomous AI development is just a fantasy. You can’t just replace expertise with tools. The most valuable developers are the ones with a strong mental map of where things are and what they do.

Using LLMs isn't the same as writing code. It doesn't create the same value and certainly doesn't produce better results. It’s technical debt.

“Soon, everyone will be a developer”

I’ve seen some amazing businesses built on top of Excel and no-code. Of course, you can build an app with Claude. It doesn’t make you a software engineer. Yes, I am gatekeeping. It’s for your own good because...

Unlike people assembling tools to create products, the vibe coder creates a huge mess. I've lost count of the horror stories from people who've inherited AI-generated codebases. No one's thinking about anything. After all, why bother when the AI can just do everything anyway?

The real difference is what professional developers actually do: architecture, creating and debugging complex systems, security, maintenance. They don't get six-figure salaries because they can quickly spin up an MVP.

Creating something special still takes domain knowledge, acquired through time and effort with or without AI.

“AI Won't Take Your Job, Someone Using AI Will”

This is yet another empty claim pushing people to rush into AI. The implication is that if you don't learn how to use AI today, you'll be irrelevant.

I don't believe this is true, but if you truly believe AI will soon be good enough to handle complex development work, why are you investing in learning to use it? What happens to your salary when the skills required drop significantly? If AI writes better code than you, why would anyone hire you specifically?

Either AI is years away from writing production-quality code and there's no urgency or it will soon make coding so trivial that it becomes minimum-wage work. There's no lucrative middle ground where 'AI whispering' is a high-value skill.

If the future is “AI-augmented” development, even with gradual adoption, you're not coding anymore. You're babysitting. Your day consists of reviewing AI-generated PRs you barely understand and working on a codebase you can't mentally model.

That's not engineering. It's middle management cosplaying as QA: reviewing tickets they can't solve, submitted by workers who can't think.

“It’s only going to get better”

For LLMs to keep improving, we need one of three things: more data, more power or a breakthrough.

Data is getting harder to find: regulatory constraints, ethical considerations and public scrutiny move much slower than technology, but they're catching up. AI labs are also likely to run out of high-quality text data between 2026 and 2032, and synthetic data (using LLMs to generate more data) causes model collapse and bias amplification.

Power isn't unlimited either. Data centres are concentrated in specific regions, and we don't have the electrical grid infrastructure to deliver power to them at the scale they need. Other energy-intensive technologies, like electric vehicles, are also competing for grid capacity. And if we take a broader view, with climate goals to meet, diverting more power to GPUs may not be the most pressing political priority.

Breakthroughs are rare. Modern LLMs are based on Google's papers "Attention Is All You Need" (2017) and BERT (2018), published almost a decade ago. Since then, improvements have come from scaling, not new architectures. Each new release is less impressive than the last because transformers are hitting fundamental limitations that incremental improvements can't solve.
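
To make "fundamental limitations" a bit more concrete, here is one commonly cited example (my own illustration; the paper on the computational complexity of self-attention listed under Sources covers it in depth). The scaled dot-product attention at the heart of the transformer compares every token with every other token:

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
\]

For a sequence of n tokens, the QK^{\top} product is an n × n matrix, so time and memory grow roughly as O(n^2 · d_k): doubling the context length quadruples the attention cost. That quadratic blow-up is one reason longer context windows get disproportionately expensive.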

We can be hopeful for breakthroughs but they're unpredictable by nature. More likely, we'll see smaller models become more capable rather than dramatically more powerful ones.

“You’re just a sceptic”

The AI industry is built on subsidised resources while burning VC money with no clear path to profit. Data centres get discounted land, tax breaks and infrastructure upgrades paid for by the public.

They socialised the costs, privatised the profits and they're still nowhere near profitability.

They claim to make everyone 10x, even 100x, more productive, yet they have no path to profit. Why are they all failing to capture that value?

When Google launched, it had better algorithms. Yahoo and AltaVista, despite their vastly superior resources, couldn't keep up. After Apple released the iPhone, it was such a great product that BlackBerry and Nokia just slowly died.

Today, every billionaire has their own pet AI. None is significantly better than the others. Each release slightly one-ups the rest on arbitrary benchmarks, quickly followed by similar open-source models. This can't keep going forever.

“What if you’re wrong?”

If AI soon becomes good enough to build software on its own, software engineering as we know it is dead. I have no interest in becoming a glorified project manager, orchestrating AI agents all day long. And if that happens, I'll be competing with anyone who can type a prompt. I'm not betting my career on being slightly better at prompting than millions of others.

Since I see no clear path to this happening any time soon, my bet is that we're much further away from that scenario than AI companies want us to think, and that they keep making extraordinary claims to raise more funding.

If I'm right, I won't have wasted my time learning temporary skills instead of building real expertise.

Sources

Will we run out of data? Limits of LLM scaling based on human-generated data
What drives progress in AI? Trends in Data (MIT)
Best Practices and Lessons Learned on Synthetic Data
The rising costs of training frontier AI models
On The Computational Complexity of Self-Attention
Lawsuit Developments in 2024: A Year in Review
AI's energy impact is still small—but how we handle it is huge
We did the math on AI's energy footprint. Here's the story you haven't heard.
AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking (MDPI)
Your Brain on ChatGPT (MIT)
The impact of ChatGPT on student performance in higher education