I've been programming for 20 years and GPT-4 (the one from early 2023) does it better than me.
I'm the guy other programmers I know ask for advice.
I think your metaphor might be a little uncharitable :)
For straightforward stuff, they can handle it.
For stuff that isn't straightforward, they've been trained to pattern-match some nontrivial subset of all human writing. So chances are they'll say, "oh, in this situation you need an X!", because the long tail is, mostly, where they grew up.
--
To really drive the point home... it's easy to laugh at the AI clocks.[0] But I invite you, dear reader, to give it a try! Try making one of those clocks! Measure how long it takes you, how many bugs you write. And how well you'd do it if you only had one shot, and/or weren't allowed to look at the output! (Nor Google anything, for that matter...)
I have tried it, and it was a humbling experience.
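For a sense of where the bugs hide, here's roughly what the minimal version looks like. This is just a sketch in TypeScript, assuming a square <canvas id="clock" width="200" height="200"> element on the page (the id and sizes are made up for illustration). Even something this small has several traps: canvas angles start at 3 o'clock, the minute and hour hands have to creep rather than jump, and 24-hour time has to wrap to 12.

    // A sketch, not a reference implementation: a bare-bones analog clock
    // drawn on an assumed square <canvas id="clock"> element.
    const canvas = document.getElementById("clock") as HTMLCanvasElement;
    const ctx = canvas.getContext("2d")!;
    const r = canvas.width / 2; // center and radius; assumes a square canvas

    function drawHand(angle: number, length: number, width: number): void {
      // Canvas angle 0 points at 3 o'clock and grows clockwise; subtract
      // a quarter turn so angle 0 points at 12. Classic first bug.
      const a = angle - Math.PI / 2;
      ctx.beginPath();
      ctx.lineWidth = width;
      ctx.lineCap = "round";
      ctx.moveTo(r, r);
      ctx.lineTo(r + length * Math.cos(a), r + length * Math.sin(a));
      ctx.stroke();
    }

    function render(): void {
      const now = new Date();
      const s = now.getSeconds();
      const m = now.getMinutes() + s / 60;      // minute hand creeps with seconds
      const h = (now.getHours() % 12) + m / 60; // 24h wraps to 12; hour hand creeps

      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.beginPath();
      ctx.arc(r, r, r - 2, 0, 2 * Math.PI); // the face
      ctx.stroke();

      drawHand((2 * Math.PI * h) / 12, r * 0.5, 4);
      drawHand((2 * Math.PI * m) / 60, r * 0.75, 3);
      drawHand((2 * Math.PI * s) / 60, r * 0.9, 1);
    }

    setInterval(render, 1000);
    render();

Now imagine getting all of that right in one shot, with no preview of the output and no Google.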
Now tell the AI to distill a bunch of user goals into a living system which has to evolve over time, integrate with other systems, etc etc. And deliver and support that system.
I use Claude Code every day and it is a slam dunk for situations like the one above, fiddly UIs and the like. Seriously, some of the best money I spend. But it is not good at more abstract stuff. Still a massive time saver for me, and it effectively does a lot of work that would otherwise have been farmed out to junior engineers.
Maybe this will change in a few years and I'll have to become a potato farmer. I'm not going to get into predictions. But to act like it can do what an engineer with 20 years of experience can do means either the AI brain worm got you, or it says something about your abilities.
right, but this is akin to complaining that the table saw doesn't also do x/y/z. I don't know why we only complain about AI and how it does NOT do everything well yet.
Maybe it's expectations set by all the AI companies, idk, but this kind of mentality seems to be reserved for AI products and nothing else.
I'm OK pondering the right use for the tool for as long as it'll take for the dust to settle. And I'm OK too trying some of it myself. What I resent is the pervasive request/pressure to use it everywhere right now, or 'be left out'.
My biggest gripe with the hype, since there's so much talk of craftsmanship here, is this: most programmers I've met hate doing code reviews, and a good proportion prefer rewriting other people's code to reading and understanding it. Now suddenly everyone is supposed to be a prompter and astute reviewer of a flood of code they didn't write, and now that you have the tool you should be faster, faster, faster, or there's a problem with you.
well that's the issue. The table saw is a tool, we can very clearly agree it's good at cutting a giant plank of wood but horrible at screwing a bolt in. A carpenter can do both, but not a table saw. We never try to say the table saw IS the carpenter.
All this hype, and especially the AGI talk, wants to treat the AI as an engineer itself. Even an avowedly senior engineer above is saying that it's better than them. So I think it's valid to ask "well, can it do [thing a senior engineer does on the daily]?" if we're suggesting that it can replace an engineer.
I'm not complaining about it, I said in my post that it's a huge time saver. It's here to stay, and that's pretty clear to see. It has mostly automated away the need for junior engineers, which just 5 years ago would have been a very unexpected outcome, but it's kind of the reality now.
All that being said:
There's a segment of the software eng population that has their heads in the sand about it and the argument basically boils down to "AI bad". Those people are in trouble because they are also the people who insist on a whole committee meeting and trail of design documents to change the color of a button on a website that sells shoes. Most of their actual hard skills are pretty easy to outsource to an AI.
There's also a techbro segment of the population, who are selling snake oil about AGI being imminent: fire your whole team and hire me to outsource your entire product to an army of AI agents. Their thoughts basically boil down to "I'm a grifter, and I smell money". Never mind the fact that the outcome of such a program would be a smoldering tire fire; they'll be onto the next grift by then.
As with literally everything, there are loud, crazy people on either side and the truth is in the middle somewhere.
Junior engineers will be fine; OpenAI is actually choosing to hire juniors now because they just learned all their theory and structure, and are way more willing to push the LLMs to see what they can do.
Bad code is bad code. There’s been bad code since day one; the question is how fast are you willing to fail, learn, fail again, learn more, and keep going.
LLMs make failing fast nearly effortless, and THAT is power that I think young people really take to.
AI doesn't program better than me yet. It can do some things better than me, and I use it for those, but it has no taste and is way too willing to write a ton of code. What is great about it compared to an actual junior is that if I find out it did something stupid, it will redo the work super fast, and without getting sad.
Too willing to write a ton of code - this is absolutely one of the things that drives me nuts. I ask it to write me a stub implementation and it goes and makes up all the details of how it works, 99% of which is totally wrong. I tell it to rename a file and add a single header line, and it does that - but throws away everything after line 400. Just unreliable and headache-inducing.
Some tools are table saws, and some tools are subcontracting work out to lowest cost bidders to do a crap job. Which of the two is AI?