The Evolution of Programming: Why AI Won't Replace Developers
25 Feb 2026

Lately, doom-mongering is everywhere you look: “In six months, 90% of code will be written by AI!” or “The software engineering profession is dead!” If you take these claims at face value, it sounds pretty unsettling. But let’s take a breath and separate the hype from reality.
Is it true that AI is going to type 90% (or even 100%) of the code? Oh, absolutely. But does that mean it’s going to write it — design the architecture, figure out the business logic, solve the real problems? That’s complete nonsense.
Sure, we’re not going to be furiously mashing keyboards to hammer out endless boilerplate anymore. But make no mistake — this isn’t the end of programming. It’s just the next step in an evolution we’ve already been through. Several times, actually.

From Assembly to Prompts: History Repeats Itself
Think back to the shift from assembly to high-level languages. Developers were literally writing machine instructions by hand — every address, every register. Then abstractions started rolling in. For a while it was this awkward dance between assembly and early high-level languages, with plenty of people loudly insisting that “real programmers” would always need the bare metal. Spoiler: they were wrong. That need just… faded away. We got interpreted languages, massive frameworks, IDEs with LSPs, code completion and smart refactoring tools. The tools got better, the abstractions got higher, and the amount of software we build grew dramatically.
“Software Is Eating the World”
— Marc Andreessen
Did we stop programming? Nope. Did the number of developers drop? Actually, pretty much the exact opposite happened. The higher the abstraction, the cheaper and faster it is to build software — and when software gets cheaper, businesses just demand more of it. More code, more developers to build it, more engineers to deploy and maintain it.
We’re at that exact same crossroads right now. Except instead of jumping from assembly to C (or Python), we’re leaping from rigid syntax and endless boilerplate to plain natural language. And as of early 2026, that boilerplate can be written by a model you can run locally. In other words, we’ve reached the stage where even very small models — ones that are not very capable relative to the biggest frontier models — can produce most of the boring boilerplate.

Death of Syntax, Long Live the Logic!
Up until now, the biggest roadblock between the software idea in someone’s head and an MVP of the actual product was the syntax, along with the technical implementation and deployment details. You had to memorize exactly where the semicolons went, precisely how to declare every class, and which flavor of brackets to use. You had to be a technician, one who had to be very precise with the implementation details, not just an architect of the solution.
What’s happening right now, with LLMs and the breakneck pace at which models and their capabilities are developing, is that we’re bulldozing that roadblock. You don’t need to memorize syntax anymore. You get to just dictate your thoughts (though a good understanding of the fundamentals still helps enormously). Sure, right now we still have to jump in and manually massage the generated code every so often. But give it a year, two, or three, and that awkward back-and-forth will be mostly a thing of the past. We’ll literally just be speaking our architecture and logic into existence.
The developer of the future isn’t the person typing 120 words per minute. It’s the thinker, the engineer, the architect — the person who can formulate their ideas with crystal clarity and give precise instructions to the machines. We’re about to have a lot more code, and it’s going to get much more complex.
Many studies have shown that the majority of a software developer’s time — by some estimates roughly 90% — is spent reading code, not writing new code. With LLMs taking care of the syntax and boilerplate, developers will be freed up to focus on the higher-level abstractions, logic and design.
In line with the Jevons Paradox, the more efficient and cheaper something becomes, the more we end up using it. So as it becomes easier and faster to generate code, we’re going to see an explosion in the amount of software being created. And with that explosion comes a need for more developers to manage and maintain all that new code.
So as the amount and complexity of software grows, the need for skilled “developers” (we might end up calling them “software architects” or “logic engineers”) to manage and maintain that code will only increase. The demand is not going to shrink; it’s going to skyrocket. But what kind of developers? The ones who can think critically, design robust architectures, and solve complex problems. The ones who can communicate their ideas clearly and work effectively with AI tools.
Fundamentals Still Matter (More Than Ever)
If anything, I think this swings the pendulum back toward fundamentals. Sure, you can ask an AI to suggest an algorithm or explain a design pattern. But you need to understand those concepts yourself — well enough to plan ahead, hold the whole mental model of a system in your head, reason through trade-offs, and make the call on which approach actually fits. That part doesn’t get automated. If you don’t know what good looks like, you won’t know when the AI is handing you something bad.
The age of syntax is over. The age of logic and architecture is just beginning. And honestly — I can’t wait to see where it takes us.

