I don't like the kind of programming that an LLM can easily accomplish.
For instance, I recently had to replace a hard-coded parameter with something specifiable on the command line, in an unfamiliar behemoth of a Java project. The hard-coded value was literally 20 function calls deep in a heavily dependency-injected stack, and the argument parser was of course bespoke.
Claude Code oneshotted this in about 30 seconds. It took me all of 5 minutes to read through its implementation and verify that it correctly called the custom argument parser and percolated its value down all 20 layers of the stack. The hour of my time I got back by not having to trace through all those layers myself was spent on the sort of programming I love, the kind that LLMs are bad at: things like novel algorithm development, low-level optimizations, designing elegant and maintainable code architecture, etc.
wait you were unfamiliar with a behemoth Java project to the point of dreading making the change yourself, and yet only spent 5 minutes reviewing "someone else's" PR?
Yup. Replacing a single hard-coded parameter with a command-line argument is hardly an earth-shattering change. It's trivial to verify that the argument properly gets passed down the stack (and that passing it has no side effects), but figuring out that stack in the first place would have taken a much longer time. Think of it like an NP-complete problem: hard to solve, but easy to check that a solution is correct.
For more complex modifications, I would have taken the time to better internalize the code architecture myself. But for a no-brainer case like this, an LLM oneshot is perfect.
Sorry if I haven't been clear: it's one variable, used exactly once at the very bottom of the call stack. The change only required adding a corresponding extra argument or class member to each of the functions/classes upstream. In fact, there were other variables in the caller of the bottom function that are already passed down from the command line, a pattern the LLM likely picked up on (and exactly what clued me in that the LLM would make this change easily, a hunch that proved correct).
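For anyone who hasn't had to make this kind of change, here's a minimal sketch of the pattern. The class names, flag name, and two-layer stack are all made up for illustration (the real project had a bespoke parser and ~20 layers); the point is that every intermediate layer just grows one extra constructor argument and forwards it, so the only code worth scrutinizing is at the top and bottom.

```java
public class ThreadedParamSketch {

    // Bottom of the stack: this is where the value is actually used.
    static class Worker {
        private final int batchSize; // was: private static final int BATCH_SIZE = 20;

        Worker(int batchSize) {
            this.batchSize = batchSize;
        }

        void run() {
            System.out.println("running with batchSize=" + batchSize);
        }
    }

    // One of the intermediate layers: it only accepts the value and passes it on.
    static class Service {
        private final Worker worker;

        Service(int batchSize) {
            this.worker = new Worker(batchSize);
        }

        void run() {
            worker.run();
        }
    }

    // Top of the stack: stand-in for the bespoke argument parser.
    public static void main(String[] args) {
        int batchSize = 20; // the old hard-coded default
        for (String arg : args) {
            if (arg.startsWith("--batch-size=")) {
                batchSize = Integer.parseInt(arg.substring("--batch-size=".length()));
            }
        }
        new Service(batchSize).run();
    }
}
```

Verifying a diff like this mostly means checking that each layer forwards the value unchanged and that nothing else started depending on it, which is why the review took minutes rather than the hour the exploration would have.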
You raise a good point: an important skill in using LLMs effectively for coding is being able both to recognize ahead of time that cases like this are indeed simple, and to recognize after the fact that the code is more complex than you initially realized, so that you can't easily internalize the (side) effects of what the LLM wrote and a closer look is warranted.