HN loves this "le old school" coder "fighting the good fight" speak, but it seems sillier and sillier the better LLMs get. Maybe in the GPT-4 era this made sense, but Gemini 3 and Opus 4.5 are substantively different, and anyone who can extrapolate a few years out sees the writing on the wall.
A year ago, no reasonable person would use AI for anything but small-scoped autocomplete. Now it can author entire projects without oversight. Inevitably every failure case for LLMs gets corrected; everything people said LLMs "could never do," they start doing within six months of that prognostication.
Great comment. I'll add that despite being a bit less powerful, the Composer 1 model in Cursor is also extremely fast, to the point where things that would take Claude 10+ minutes of tool calls now take 30 seconds. That's the difference between deciding to write it yourself or throwing a few sentences into Cursor and having it done right away.

A year ago I'd never ask AI to do tasks without being very specific about which files and methodologies I wanted it to use, but codebase search has improved a ton and it can gather this info on its own, often better than I can (if I haven't worked on a particular feature or domain in a few months and need to re-familiarize myself with how it's structured).

The bar for what AI can do today is a LOT higher than the average AI skeptic here thinks. As someone who has been using this since the GPT-4 era, I find a prompt about once a week that I figured LLMs would choke on and screw up, and they actually nail it. Whatever free model is running in GitHub Copilot is not going to do as well, which is probably where a lot of the frustration comes from if that's all someone has experienced.
Yeah, the thing about having principles is that if the principle depends on a qualitative assessment, then the principle has to be flexible as the quality you're assessing changes. If AI were still at 2023 levels and improving only gradually every few years, like versions of Windows, then I'd understand the general sentiment on here. But the rate of improvement in AI models is alarmingly fast, and assumptions about what AI "is good for" have a six-month expiration date at best.
Most "low hanging fruits" have been taken. The thing with AI is that it gets worse in proportion to how new of a domain it is working in (not that this is any different than humans). However the scale of apps made that utilize AI have exploded in usefulness. What is funny is that some of the ones making a big dent are horrible uses of AI and overpromise its utility (like cal.ai)