
The explanation is one sentence prior.

> I started using a Windows machine fairly recently for work.


I think it's worth framing things back to what we're reacting to. The top poster said:

> I really really want this to be true. I want to be relevant. I don’t know what to do if all those predictions are true and there is no need (or very little need) for programmers anymore.

The rest of the post is basically them declaring their own obsolescence in the programming field. To which someone reacted by saying that this sounds like shilling. And indeed it does to many professional developers, including those who supplement their craft with LLMs. Declaring that you feel inadequate because of LLMs only reveals something about you. Defending that position is a tell that puts anyone sharing the perspective in the same boat: you didn't know what you were doing in the first place. It's like when someone who couldn't solve the "invert a binary tree" problem gets offended because they believe they were tricked into an impossible task. No, you may be a smart person who understands enough of the rudiments of programming to hack together some interesting scripts, but that's actually a pretty easy problem, and failing to solve it does signal that you lack some fundamentals.
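
For the record, here's a minimal sketch of that problem in Python. The Node class and the recursive invert are my own illustration, not taken from any particular interview:

    class Node:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def invert(node):
        # Swap left and right subtrees, recursing all the way down.
        if node is None:
            return None
        node.left, node.right = invert(node.right), invert(node.left)
        return node

Swap the children recursively; that's the whole trick.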

> Considering those views are shared by a number of high profile, skilled engineers, this is obviously no basis for doubting someone's expertise.

I've read Antirez, Simon Willison, Bryan Cantrill, and Armin Ronacher on how they work or want to work with AI. From none of them did I get the attitude that they're no longer needed as part of the process.


Indeed, discussions on LLMs for coding sound like what you would expect if you asked a room full of people to snatch up a 20 kg dumbbell once and then tell you if it's heavy.

> I think the real risk is that dumping out loads of boilerplate becomes so cheap and reliable that people who can actually fluently design coherent abstractions are no longer as needed.

Cough front-end cough web cough development. Admittedly, original patterns can still be invented, but many (most?) of us don't need that level of creativity in our projects.



> I think pretending it to be a machine, on the same level as a coffee maker does help setting the right boundaries.

Why would you say pretending? I would say remembering.


I won't go into too much detail on the topic, as it's loaded with triggering elements. Let's just say that if you were to study how different cultures apprehend and conceptualize life and death (whether philosophically or religiously), I'm fairly sure you'd come out the other end questioning a lot of your original assumptions (which I only presume you hold, based on your comment). Our collective outlook can have a significant and far-reaching influence on individual decisions.


You're both right, but talking past each other. You're right that shared dependencies create a problem, but it can be the problem without semantically redefining the services themselves as a distributed monolith. Imagine someone came to you with a similar problem and you concluded "distributed monolith", which may lead them to believe that their services should be merged into a single monolith. What if they then told you that this would be tough, because these were truly separate apps: one ran on Django/Postgres, another on Flask/SQLite, and another on FastAPI/Mongo, but they all shared the same OS-wide Python install and relied on some of the same underlying libs that are frequently updated. The more accurate finger points to bad dependency management, and you'd tell them about virtualenv or Docker.


> I guess the word contemporary has been misused to the point of just meaning current or modern and I shouldn't nitpick it!

According to at least a few references, it clearly carries both meanings. I couldn't find a single dictionary that excludes one or seems to favor one over the other.


As they said, it depends on the task, so I wouldn't generalize, but based on the examples they gave, it tracks. Even when you already know what needs to be done, some undertakings involve a lot of yak shaving. I think transitioning to new tools that do the same thing as the old ones but with a different DSL (or to newer versions of existing tools) qualifies.

Imagine that you've built an app with libraries A, B, and C and conceptually understand everything involved. But now you're required to move everything to X, Y, and Z. There won't be anything fundamentally new or revolutionary to learn, but you'll have to sit and read those docs, potentially for hours (cost of task switching and all). Getting the AI to execute the changes lets you skip much of the tedium. And even though you still won't really know much about the new libs, you'll get the gist of most of the produced code. You can read the docs piecemeal to review the code at sensitive boundaries. And for the rest, you'll paint inside the frames as you normally would when joining a new project.
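
To make that concrete with a hedged sketch (the endpoint is hypothetical, and Flask to FastAPI is just one plausible "A to X" pair), the delegated change often amounts to re-expressing the same behavior in a different DSL:

    # Before: Flask (hypothetical endpoint, not from any real project)
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/users/<int:user_id>")
    def get_user(user_id):
        return jsonify({"id": user_id})

    # After: FastAPI, same behavior, different DSL
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/users/{user_id}")
    def get_user(user_id: int):
        return {"id": user_id}

Nothing conceptually new, but multiplied across an entire code base it's exactly the kind of tedium described above.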

Even as a skeptic of the general AI productivity narrative, I can see how that could squeeze a week's worth of "ever postponed" tasks inside a day.


> but you'll have to sit and read those docs, potentially for hours (cost of task switching and all).

That is one of the assumptions that pro-AI people always bring up. You don't read the new docs to learn the domain. As you've said, you've already learned it. You read them for the gotchas, because most (good) libraries will provide examples that you can just copy-paste and be done with it. But we all know that things can vary between implementations.

> Even as a skeptic of the general AI productivity narrative, I can see how that could squeeze a week's worth of "ever postponed" tasks inside a day.

You could squeeze a week into a day the normal way too. Just YOLO it by copy-pasting from GitHub, StackOverflow, and the rest of the internet.


The debugger is fine, but it's not the key that unlocks some secret skill level, as you make it out to be. https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger/


I didn't say it's some arcane skill, just that it's a useful one. I would also agree that _reading the code_ to find a bug is the most useful debugging tool. Debuggers are second. Print debugging third.

And that lines up with which of the appeals to authority there are good, and which are bad (edited to be less toxic).


Even though I'm using the second person, I don't actually care about convincing you in particular. You sound pretty set in your ways, and that's perfectly fine. But there are other readers on HN who are already pretty efficient at log debugging, or are developing the required analytical skills, and for them I wanted to debunk the unsubstantiated and possibly misleading claims in your comments about some superiority of using a debugger.

The logger vs debugger debate is decades old, with no argument suggesting that the latter is a clear winner; quite the contrary. An earlier comment explained the log debugging process: carefully thinking about the code and choosing good spots to log the data structure under analysis. The link I posted confirms it as a valid methodology. Overall, code analysis is the general debugging skill you want to sharpen. If you have it and decide to work with a debugger, it will look like log debugging, which is why many skilled programmers may choose to revert to just logging after a while. Usage of a debugger then tends to be reserved for situations where the code itself is escaping you (e.g. bad code, intricate code, foreign code, etc.).
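
For illustration, a minimal sketch of what those well-chosen spots can look like, using the standard logging module (the reconcile function and its data are hypothetical):

    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger(__name__)

    def reconcile(orders, payments):
        # Log the structures under analysis at the suspect boundary,
        # rather than stepping through every line in a debugger.
        log.debug("orders by id: %r", {o["id"]: o["total"] for o in orders})
        log.debug("paid order ids: %r", sorted(p["order_id"] for p in payments))
        # ... actual reconciliation logic continues here ...

The thinking happens before you add the line: you pick the boundary where the data structure should be inspected, then read what comes out.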

If you're working on your own software and feel that you often need a debugger, maybe your analytical skills are atrophying and you should work on thinking more carefully about the code.

