15 Comments
Alex Jukes:

I actually find TDD to be an excellent practice when it comes to working with LLMs. Defining the outcome and keeping the context window small tends to produce the best outcomes, and also means I can be delighted by the code that gets produced to achieve that outcome (which usually happens with TDD even without the help of AI)

Pragdave:

I feel that's reasonable. But I also worry about TDD in general being too narrowly focused on solving the immediate next problem. There's a really great example of this in Ron Jeffries's posts about writing a Sudoku solver using TDD, where he spends a number of iterations coming up with a board representation and then stalls, because he had nowhere to go next. When you know what the solution is, when it's a simple matter of coding, then I think TDD has a lot of merit, both with human and AI coders. But when the outcome is less clear, I suspect AI and TDD will just help you go down the rabbit hole faster.

Alex Jukes:

Yes, I agree with that. An example that springs to mind was when I needed to write a solution to the Knight's Travail problem for a tech interview. I TDD'd getting the board set up, wrote a test saying the knight moves to X square in Y moves, and then… spent hours if not days reading up on heuristics, and then wrote an A* algorithm implementation (I'm not a CS grad, so I had to reason and google from scratch) to get it to pass. TDD eased the anxiety of getting started somewhat, but I realised pretty quickly that I just needed to figure out the solution and build it; tests weren't going to help. I got the job, although the feedback was that Dijkstra's would have been sufficient, which I then read up about 😄
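For readers curious about that feedback: not the commenter's interview solution, but a minimal sketch (function name and board representation are illustrative) of the simpler approach. Because every knight move costs the same, the move graph is unweighted, so plain breadth-first search finds the same shortest path Dijkstra's algorithm would, without A*'s heuristic machinery.

```python
from collections import deque

def knight_moves(start, goal, size=8):
    """Minimum number of knight moves from start to goal on a size x size board.

    BFS on the unweighted move graph: the first time we dequeue the goal,
    its distance is minimal, which is exactly what Dijkstra's algorithm
    would give here (all edge weights are 1).
    """
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (x, y), dist = frontier.popleft()
        if (x, y) == goal:
            return dist
        for dx, dy in deltas:
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # unreachable (can't happen on a normal board)
```

A* with a good heuristic explores fewer squares, but on an 8x8 board the difference is negligible, which is presumably the point of the interviewers' feedback.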

Asher:

I think AI will still replace jobs, simply because you now need fewer software engineers to produce the same results.

Pragdave:

That also depends on whether the current number of developers is limited by the demand, or the demand is limited by the number of developers. I suspect, given the ridiculously high salaries some devs get, it's the latter. If so, after a period of panic, I suspect the number of devs will increase, but that there'll be a new category of lower-pay developer jobs for what are effectively "prompt and context engineers."

Alex Jukes:

I would agree if today's results were all we were trying to produce in the future. But we won't be. I actually think there will be greater demand for engineers than ever, because there will be more software created than ever, increasingly by people who aren't engineers and don't know how to maintain the optionality of that software. Just as Excel led to an expansion in the number of accountants, not a diminution, so I see it going with AI and software tools.

Kin Lane:

You are being a Luddite, but that is a compliment. Understanding and discussing the human skill and value present, as well as what technology is and is not good at, is all the Luddites were doing.

Tom Gosling:

I think this misconception is a product of too many non-technical business people listening to the loudest and brashest voices, most of whom are relatively new to software and don't know what they don't know. LLMs require training data to learn from; if no data is being produced because no humans write code (or produce creative content), then there's no evolution of the models and presumably they become stale. Also, it's the human condition leaking into work that makes art, music, literature, software, products, etc. enjoyable to consume and appreciate, so it's hard to imagine a world where generic AI-produced dross will compete with a skilled human in anything but the most repetitive and monotonous tasks.

Chris:

I'm glad to hear someone besides myself explicitly decry the idea of using AI to write tests specifically. If anything, that's the part where human intention is most critical to spell out in detail, and should the implementation be written in part by AI (not my preference, but whatever), having written the tests myself would give me some level of confidence in its output.

Pragdave:

Thanks!

But I also think that having correct output is only half the battle. Much of the AI code I see is pretty gnarly, and I'd hate to have to maintain it....

Chris:

Yes, more broadly I think it's useful to think about LLM code output as akin to copy/pasting some solution from Stack Overflow. Maybe it works, maybe it doesn't, but it's irresponsible in either case to commit it without fully understanding it and likely giving it a refactoring pass or two.

Personally, I don't reach for LLMs much at all. The one time it helped me with a coding task, it wasn't that it gave me working code (to the contrary, the code it provided could never work), but that it indirectly led me to the right general area of the Erlang documentation that I actually needed. I also like that I can practically use Vim with my eyes closed, and a bunch of Copilot or Cursor output, eh...writing code is much more fun than reading someone else's code IMO! I'd rather spend more time doing the former and less doing the latter.

Pragdave:

I really like Copilot in Vim for creating fairly rote structures, mappings, and so on. And when I'm using a language I haven't touched for a few years, it really helps me find the libraries I need. But I don't think I've ever put any code it has generated live...

Konrad:

Agreed! LLMs can be used by senior devs as a support tool that speeds up an engineer's work, but the output should still be verified by an experienced programmer and, if the solution is poor, redirected onto the right path. Now I am wondering: if companies stop hiring juniors and mids, how can they expect to have those competent, experienced devs in the future to supervise what AI will be generating... :)

Pragdave:

Yeah; the apparent purge of the more junior devs is clearly unsustainable.

My feeling is that we're already at or just past peak "AI will replace developers." I'm waiting for the first "we had to shut down the company because AI-generated code stopped working and no one knows why..."

I love using AI as a tool, and love that it is getting better seemingly every day. But even if it becomes a perfect code generation machine, true development requires conversation and intuitive insight.

Ash:

I'm waiting for the first "we had to shut down the company because mid-level devs no longer exist, and the seniors are in too high demand"
