12 Comments

I actually find TDD to be an excellent practice when it comes to working with LLMs. Defining the outcome up front and keeping the context window small tends to produce the best results, and it also means I can be delighted by the code that gets produced to achieve that outcome (which usually happens with TDD even without the help of AI).
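As a minimal sketch of that flow (the `average` function and its handling of missing values are my own hypothetical example, not from the comment): the human writes the failing test first to pin down the outcome, and the implementation is whatever gets written, possibly by an LLM, to make it pass:

```python
# Test written by the human first: it defines the intended outcome
# before any implementation exists (hypothetical example).
def test_average_ignores_missing_values():
    assert average([1.0, None, 3.0]) == 2.0
    assert average([]) is None  # empty input: no average to report

# Implementation, e.g. LLM-generated against the test above, then reviewed.
def average(values):
    present = [v for v in values if v is not None]
    if not present:
        return None
    return sum(present) / len(present)

test_average_ignores_missing_values()
print("test passed")
```

Because the test fixes the contract, the implementation can be regenerated or refactored freely as long as the test keeps passing.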

I think AI will still replace jobs, simply because you now need fewer software engineers to produce the same results.

I would agree if today's results were all we'd be trying to produce in the future. But they won't be. I actually think there will be greater demand for engineers than ever, because there will be more software created than ever, increasingly by people who aren't engineers and don't know how to maintain the optionality of that software. Just as Excel led to an expansion in the number of accountants, not a diminution, so do I see it going with AI and software tools.

You are being a Luddite, but that's a compliment. Understanding and discussing the human skill and value present, as well as what technology is and isn't good at, is all the Luddites were doing.

I think this misconception is a product of too many non-technical business people listening to the loudest and brashest voices, most of whom are relatively new to software and don't know what they don't know. LLMs require training data to learn from; if no data is being produced because no humans write code (or produce creative content), then the models stop evolving and presumably go stale. It's also the human condition leaking into the work that makes art, music, literature, software, products, etc. enjoyable to consume and appreciate, so it's hard to imagine a world where generic AI-produced dross competes with a skilled human in anything but the most repetitive and monotonous tasks.

I'm glad to hear someone besides myself explicitly decry the idea of using AI to write tests specifically. If anything, that's the part where human intention is most critical to spell out in detail, and should the implementation be written in part by AI (not my preference, but whatever), having written the tests myself would give me some level of confidence in its output.

Thanks!

But I also think that having correct output is only half the battle. Much of the AI code I see is pretty gnarly, and I'd hate to have to maintain it...

Yes, more broadly I think it's useful to think about LLM code output as akin to copy/pasting some solution from Stack Overflow. Maybe it works, maybe it doesn't, but it's irresponsible in either case to commit it without fully understanding it and likely giving it a refactoring pass or two.

Personally, I don't reach for LLMs much at all. The one time one helped me with a coding task, it wasn't that it gave me working code (to the contrary, the code it provided could never work), but that it indirectly led me to the right general area of the Erlang documentation that I actually needed. I also like that I can practically use Vim with my eyes closed, and a bunch of Copilot or Cursor output... eh. Writing code is much more fun than reading someone else's code, IMO! I'd rather spend more time doing the former and less doing the latter.

I really like Copilot in Vim for creating fairly rote structures, mappings, and so on. And when I'm using a language I haven't touched for a few years, it really helps me find the libraries I need. But I don't think I've ever put any code it has generated live...

Agreed! LLMs can be used by senior devs as a support tool that speeds up an engineer's work, but the output still needs to be verified by an experienced programmer and, if the solution is poor, redirected down the right path. Now I'm wondering: if companies stop hiring juniors and mids, how can they expect to have competent, experienced devs in the future to supervise what AI generates... :)

Yeah; the apparent purge on the more junior devs is clearly unsustainable.

My feeling is that we're already at or just past peak "AI will replace developers." I'm waiting for the first "we had to shut down the company because AI-generated code stopped working and no one knows why..."

I love using AI as a tool, and love that it is getting better seemingly every day. But even if it becomes a perfect code generation machine, true development requires conversation and intuitive insight.

I'm waiting for the first "we had to shut down the company because mid-level devs no longer exist, and the seniors are in too high demand"
