AI Coding Is Based on a Faulty Premise
I love using AI to help me code. But companies who view current AI as a replacement for expensive human programmers are forgetting the painful lessons that we learned in the 1980s and 90s.
I am glad to be living in a time when AI is becoming mainstream. When I write code, I often use AI to complete code and to look up how libraries and tools can be used.
At the same time, I am increasingly distressed by the race to replace human developers, particularly the more junior ones, with AI assistants.
There is good reason to believe that this is a regressive trend, even in the short-to-medium term. That reason is linked to the history of software development, the software crisis of the 1990s, and the movement towards a more agile style of development.
Make Code, Not War
In the late 1960s, when software development was young and people felt it was a little out of control, NATO ran two conferences to address what was then viewed as a brewing crisis. There was much discussion about principles, methodologies, design, and so on, but the biggest deliverable was a phrase: Software Engineering.
During the 1970s, academics (and some larger companies) tried to refine what this phrase meant.
There was clearly a large chasm between what a customer wanted and the delivery of software to satisfy them. So software engineering came to mean a process of narrowing the gap by dividing it into lots of phases, where the conceptual gap between successive phases was smaller (and therefore more tractable) than a single large gap.
The result was variations of what has come to be called a waterfall approach. In one of the more ironic twists of software methodologies, the idea of the waterfall approach comes from a 1970 paper by Winston Royce titled Managing the Development of Large Software Systems. The first diagram in the paper looks like this.
It shows the way many people felt that software should be developed: a set of steps, where the output of each is a refinement of its input. Each step would produce an increasing amount of specification, until finally we reached coding, testing, and delivery.
This diagram defined a generation of software development practices. Unfortunately, its advocates didn’t bother to read the paper past this nice simple picture. Royce points out that each step is likely to find errors in the preceding step, and so the diagram should look like this:
Then he points out that the reality is likely to be more anarchic. Errors found towards the end of the process may well have been caused by problems many steps back.
His solution to this was regrettable: increase the level of detail of the specifications passed between steps. By a lot. Fortunately, because no one bothered to read the whole paper, we were spared that additional process insult.
It Didn’t Work
Fast forward to 1995, when the Standish Group published their Chaos Report, a summary of the state of the software development industry. It reported that almost a third of software projects ended up being cancelled. Only one in six was completed on time and on budget.
In retrospect, the reason was pretty damn obvious:
People Don’t Know What They Want
The root of the problem is that the people at each phase of this approach cannot express accurately what they want. A small inconsistency, a minor oversight, at the beginning of the process will snowball as it is elaborated by the following steps. It’s like a party game of telephone, but without the cake. And without any way of checking as you go along, the discrepancies pile up at the end, leading to a massive amount of rework (and project cancellations).
In that case, it would be reasonable to ask why any software project succeeded. The answer is also fairly obvious:
People Don’t Do What They Should Do
Teams in the 1990s knew intuitively that this kind of snowballing cascade of small errors was unsustainable, so they subverted the project development structures imposed on them. They chose to ignore things that were patently incorrect. They talked to each other, and adjusted what they did in order to produce bits of code that would actually work together. And they got better and better at padding estimates.
Along Comes The Manifesto
The Manifesto for Agile Software Development, created in 2001, explicitly acknowledged that specifications at all levels were suspect, and that the only true measure of a project is the value it produces. It insisted that people should be in the loop, applying their intuition and skill to take small steps, and then using feedback to assess how successful they'd been. Each step was like a mini-waterfall, tacitly running from requirements to tests in a matter of minutes, often publishing code to the end users many times a day.
(I’m resisting the temptation here to rant about the current state of “agile.” That’ll be another article.)
What Has This to Do With AI?
Used properly, nothing.
But that’s not what I’m seeing. Companies are jumping on AI as a way of removing those messy (and expensive) humans from the process of developing software.
Right now (early 2025), we're starting to see a move from AI as coding assistant to AI as program writer. Nonprogrammers are working with Claude and ChatGPT to create simple programs. Companies are experimenting more and more with content produced largely by AI.
And the problem with that is the same as the problem with the poorly-applied waterfall approach: people don’t know what they want. They’ll ask AI for a solution to their perceived need, and then run what they are given, often without understanding what it does. This applies equally to end users and (amazingly) to developers.
So the AI produces code based on at best a partial and at worst an inaccurate description, and then that code becomes part of a larger system, integrating with other chunks of imaginary code.
That’s OK, people say. We’ll have the AI write tests, too.
Have you gone mad??
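To see why that doesn't help, here's a deliberately contrived sketch (the requirement, the function, and the test are all invented for this illustration, not taken from any real project). Suppose the stakeholder actually wants a 10% discount on orders of $100 or more, but the prompt they type says "over $100". The generated code and the generated test both come from that same wording:

```python
# Invented example: the stakeholder really wants a discount on orders
# of $100 or more, but their prompt says "over $100".

def discounted_total(amount):
    """Apply a 10% discount to orders over $100 (as the prompt said)."""
    if amount > 100:          # ">" faithfully encodes the prompt,
        return amount * 0.9   # not the real business rule (">=")
    return amount

# A test derived from the same wording encodes the same misunderstanding,
# so it passes.
def test_discounted_total():
    assert discounted_total(150) == 135   # over $100: discounted
    assert discounted_total(100) == 100   # exactly $100: no discount (wrong!)

test_discounted_total()  # green, yet the customer's need is unmet
```

The test is green, but all it verifies is that the code matches the prompt, not that the prompt matched the need. The human conversation that would have surfaced the "$100 or more" detail never happened.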
Shape The Tool To Your Hand
Don’t get me wrong. AI is a world-changing tool.
But good software developers have already changed the world beyond recognition. And they've done that by taking uncertain, inaccurate ideas and using their experience, intuition, and communication skills to hone them into something that changes people's lives.
Remove these people from the equation, and we'll be back in the 1990s, in a world full of poor software and unmet needs.
Or am I just being a Luddite? Let’s discuss below.
Have fun
Dave
I think AI will still replace jobs, simply because you now need fewer software engineers to produce the same results.
You are being a Luddite, but that is a compliment. Understanding and discussing the human skill and value present, as well as what technology is good at and not good at, is all the Luddites were doing.