
I come from a developing country where the only OS people know is Windows. Macs used to be too expensive, and Linux didn’t have any of the applications people would use (read: pirate) for work.

Typically, college students and teachers would get $500 dingy laptops from Asus, Acer, and Dell. A decade ago, those machines were fine. My mom used one for 7 years, right until they retired Windows 7.

Then the machines started becoming absolutely useless with Windows 8, 10, and now 11. 8GB machines are barely usable now, with constant Windows updates and all the background telemetry services maxing out the disk all the time.

Sure, people can turn off some of these rogue processes. But my point is - an OS should just disappear from the user’s view and let them work.

I don’t live in my home country and haven’t visited in a long time, but I’ve heard that people are really opting for second-hand MacBook Airs. Now with the MacBook Neo, more people will go that route.

Students are opting for cheap Windows machines and flashing them with Ubuntu to make them usable.


The world could use one less "how I slop" article at this point.

This reminds me of the early Medium days when everyone would write articles on how to make HTTP endpoints or how to use Pandas.

There’s not much skill involved in wrangling agents, and you can still do it without losing your expertise in the stuff you actually like to work with.

For me, I work with these tools all the time, and reading these articles hasn’t added anything to my repertoire so far. It gives me the feeling of "bikeshedding about tools instead of actually building something useful with them."

We are collectively addicted to making software that no one wants to use. Even I don’t consistently use half the junk I built with these tools.

Another thing is that everyone yapping about how great AI is isn’t actually showing the tools’ capabilities in building greenfield stuff. In reality, we have to do a lot more brownfield work that’s super boring, and AI isn’t as effective there.


I have always enjoyed the feeling of aporia during coding. Learning to embrace the confusion and the eventual frustration is part of the job. So I don’t mind running in a loop alongside an agent.

But I absolutely loathe reviewing these generated PRs - more so when I know the submitter themselves has barely looked at the code. Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day. Reviewing this junk has become exhausting.

I don’t want to read your code if you haven’t bothered to read it yourself. My stance is: reviewing this junk is far more exhausting than writing it. Coding is actually the fun part.


> Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day.

That's a big red flag if I ever saw one. Corporate should be empowering the engineering team to use AI tooling to improve their own process organically. Is this true or an exaggeration? If it's true, I'd start looking for a more balanced position at a more disciplined org.


True at DoorDash, Amazon, and Salesforce - speaking from experience.

Mandates are becoming normal. Most devs don’t seem to want to, but they want to keep their jobs.

Definitely a sign that workers aren't being exploited.

10k LoC per day? Wow, my condolences to you.

On a different note: something I just discovered is that if you google "my condolences", the AI summary will thank you for the kindness before defining its meaning. Fun.


>Reviewing this junk has become exhausting.

Nitpick it to death. Ask the submitter questions about how everything works. Even if it looks good, flip a coin and reject it anyway. Drag that review time out. You don't want unlucky PRs going through, after all.

Corporate is not going to wake up and do the sensible thing on its own.


Ha ha I wish. Then both corporate and your coworkers hate you.

Also, there is no point in asking questions when you know that they just yoloed it and won't be able to answer anything.

We have collectively lost our common sense and reasonable people are doing unreasonable things because there's an immense amount of pressure from the top.


It's their share price. Vibe code gets vibe reviews. #shipit

I always wonder where HNers worked or work. We do ERP and troubleshooting on legacy systems for medium to large corps; PRs by humans were always pretty random and barely looked at, even though a human wrote them (copy/pasted from SO and changed somewhat); if you ask what it does, they cannot tell you. This is not an exception, this is the norm as far as I can see outside HN. People who talk a lot, don't understand anything, and write code that is almost alien.

LLMs, for us, are a huge step up. There is a 40-level nested if with a loop to prevent it from failing on a missing case in a critical Shell (the company) ERP system. An LLM would not do that. It is a nightmare, but keeping things like that running makes us a lot of money.

I currently work at one of the biggest tech companies. I’ve been doing this for over 20 years, and I’ve worked at scrappy startups, unicorns, and medium size companies.

I’ve certainly seen my share of what I call slot-driven development, where a developer just throws things at the wall until something mostly works. And plenty of cut-and-paste development.

But it’s far from the majority. It’s usually the same few developers at a company doing it, while the people who know what they’re doing furiously work to keep things from falling apart.

If the majority of devs were doing this, nothing would work. My worry is that AI lets the bad devs produce this kind of work on a massive scale that overwhelms the good devs’ ability to fight back or to even comprehend the system.


I also work at a huge company, and this observation is true. The way AI is being rammed down our throats is burning out the best engineers. OTOH, the mediocre simian army “empowered” by AI is pushing slop like there’s no tomorrow. The expectation from leadership, who tried Claude for a single evening, is that you should be able to deliver everything yesterday.

The resilience of the system has taken a massive hit, and we were told that it doesn’t matter. Managers, designers, and product folks are being asked to make PRs. When things cause Sev0 or Sev1 incidents, engineers are being held responsible. It’s a huge clown show.


> The expectation from leadership, who tried Claude for a single evening, is that you should be able to deliver everything yesterday.

"Look, if the AI fairy worked like that our company would be me and the investors."

I should make t-shirts. They'll be worth a fortune in ironic street cred once the AI fairy works like that.


Tech companies, sure. How about massive non-software companies? I don't know where it is not the norm, and I have been in very many of them as a supplier for the past 30 years. Tech companies are a bit different, as they usually have leadership that prioritizes these things.

Non-tech companies too. You can’t build large-scale software with everyone merging PRs like that. My guess is that if you’re a supplier, you are getting a pretty severe sampling bias.

I would hope that most people who are technically competent enough to be on HN are technically competent enough to quit orgs with coding standards that bad. Or, they're masochists who have taken on the challenge of working to fix them.

Half the posts here are talking about how they 100xd their output with the latest agentic loop harness, so I'm not sure why you would get that impression.

Neither of those. The pay is great and if all leadership cares about is making the whole company "AI Native" and pushing bullshit diffs, I'll play ball.

Claude has a built-in /simplify command

I think you just need to add a /complexify one with the same pattern, ask the AI to make everything as complex and long-winded as possible, LOC over clarity
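Claude Code lets you define custom slash commands as markdown prompt files, so a tongue-in-cheek /complexify could be sketched like this (the file contents and wording here are made up; only the .claude/commands mechanism and the $ARGUMENTS placeholder are existing Claude Code conventions):

```markdown
<!-- .claude/commands/complexify.md -->
Take the code in $ARGUMENTS and rewrite it to be as complex and
long-winded as possible. Prefer LOC over clarity: expand every
expression into intermediate variables, add unnecessary layers of
indirection, and write verbose comments that merely restate the
code next to them.
```

Dropping that file into a repo would make /complexify available in that project's Claude Code sessions.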


I do “TDD” LLM coding and only review the tests. That way if the tests pass I ship it. It hasn’t bitten me in the ass yet.

The one thing I don't quite get is how running a loop alongside an agent is any different from reviewing those PRs.

If you run a loop alongside the agent and make PRs that are tractable, then there isn’t much difference. But to me, it seems like we have collectively lost our minds and think it’s okay to make a 10k LOC PR and ask someone else to review it.

In my experience LLMs also suffer massively from "not invented here" syndrome. I've seen them copy whole interfaces just to implement a feature that was already implemented in a dependency.

All with verbose comments that are just a basic translation of the code next to them.


10k, really? Are you supposed to understand all that code? This is crazy and a one-way street to burnout.

Yep, and now we are encouraged to use AI to review the code as well. But if shit hits the fan, you are held responsible.

Use AI to review.

Shhh...you're only supposed to unilaterally praise it to get along with your clueless leadership.

Soon, we'll start seeing Claude certs getting listed on LinkedIn alongside Coursera courses.

People with titles like

Giga Chad, MBA, CSS, CKAD, XXX, PQRS

are gonna love this.

In no time, HR will start slapping “10 years of certified Claude Code experience required” on job listings.


It’s crazy how you could easily lie about having 10 years of experience, because your results are not that much different from someone who has only used Claude Code for a week.

I think the older AI users are even held back because they might be doing things that are not necessary any more, like explaining basic things ("please don't bring in random dependencies; prefer the ones that are already there"), or the classic "think really hard and make a plan", or using a prestigious register of the language in an attempt to make it think harder.

Nowadays I just paste a test, build, or linter error message into the chat, and the clanker immediately knows what to do, where it originated, and looks into causes. Oftentimes I come back to the chat and see a working explanation together with a fix.

Before, I had to actually explain why I wanted it to change some implementation in some direction, otherwise it would refuse: "no, I won't do that because abc." Nowadays I can just give the raw instruction "please move this into its own function", etc., and it follows.

So yeah, a lot of these skills become outdated very quickly. The technology is changing so fast that one needs to constantly revisit whether what one had to do a couple of months earlier is still required, and whether the limits of the technology are still precisely there or further out.


I’ve no idea what you’re using, but “I simply paste” isn’t giving me good results at all.

An hour ago Gemini decided it needed to scan my entire home folder to find the test file I asked it to look into. Sonnet will definitely try to install new dependencies, even though I’m doing SDD and have a clear AGENTS.md.

I’m always baffled at people’s magic results with LLMs. I’m impressed by the new tools, but lots of comments here would suggest my Gemini/Sonnet/Opus are much worse than yours.


> I think the older AI users are even held back because they might be doing things that are not neccessary any more

Being the same age as Linus Torvalds, I'd say that it can be the opposite.

We are so used to "leaky abstractions", that we have just accepted this as another imperfect new tech stack.

Unlike less experienced developers, we know that you have to learn a bit about the underlying layers to use the high level abstraction layer effectively.

What is going on under the hood? What was the sequence of events which caused my inputs to give these outputs / error messages?

Once you learn enough of how the underlying layers work, you'll get far fewer errors because you'll subconsciously avoid them. Meanwhile, people with an "I only work at the high level" mindset keep trying to feed the high-level layer different inputs more or less at random.

For LLMs, it's certainly a challenge.

The basic low level LLM architecture is very simple. You can write a naive LLM core inference engine in a few hundred lines of code.

But that is like writing a logic gate simulator and feeding it a huge CPU gate list + many GBs of kernel+rootfs disk images. It doesn't tell you how the thing actually behaves.

So you move up the layers. Often you can't get hard data on how they really work. Instead you rely on empirical and anecdotal data.

But you still form a mental image of what the rough layers are, and what you can expect in their behavior given different inputs.

For LLMs, a critical piece is the context window. It has to be understood and managed to get good results. Make sure it's fed with the right amount of the right data, and you get much better results.

> Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do

That's exactly the right thing to do given the right circumstances.

But if you're doing a big refactoring across a huge code base, you won't get the same good results. You'll need to understand the context window and how your tools/framework feeds it with data for your subagents.


I think GP meant 'longer time users of AI', not 'older aged users of AI'.

Their point being that it's not really an advantage to have learnt the tricks and ways to deal with it a year, two years ago when it's so much better now, and that's not necessary or there's different tricks.


Yeah I meant it in the context of the comment I was replying to, to be precise in the context of the comment that one was replying to, i.e. "10 years of certified Claude Code experience required".

The technology is moving so fast that the tricks you learned a year ago might not be relevant any more.


Thanks, I agree 100% with that.

I still see people doing the "you are a world class distributed systems engineer" thing. Never fails to make me chuckle.

I hope it’s at least a little tricky, since Claude was released only 3 years ago. That said, I would not be surprised to see companies asking for 10 years experience, despite that inconvenient truth.

I’ve seen it play out multiple times; it highlights precisely why a candidate should never withhold their application based on a listing’s preferred years of experience with anything. They simply haven’t put much thought into those numbers.

If you work on 10 projects in parallel for a year using Claude code… you have the equivalent of 10 years of experience in 1 year.

No, you would have ten projects finished. You would have less than a year of actual programming experience.

That's not how it works...

It actually is. A year of experience is not equal at different companies.

You could spend years writing very little code and have “years of experience” in a language, and you can also output intense volumes of work and still be within a year.

Of those two people, the one who spent less real time but produced more work, can have the equivalent experience of the person who spent years.

The key is to figure out how much work a person using Claude Code would have been expected to produce in 10 years, then find a way to do that much in a single year. Boom, you just solved the years of experience problem.


You've never seen project managers basically propose the equivalent of getting a baby delivered in 1 month instead of 9 months by adding more people to the project?

But yeah, if the recruiters start asking for "10 years experience with Claude Code", then I guess a tongue-in-cheek answer would be "sure, I did 10 projects in parallel in one year".


Mythical Man Month -> Mythical Agent Swarm

Duh, just use Claude to 10x your productivity and get 10 years experience with Claude in one year.

If you can add more people to finish a project faster, I can add more projects to get experience faster.

You’re very confused, I think.

Adding more people to a project doesn’t improve throughput past a certain point. Communication and coordination overhead (between humans) is the limiting factor. This has been well known in the industry for decades.

Additionally, I’d much rather hire someone who worked on a handful of projects but actually _wrote_ a lot of the code, maintained the project for a couple of years after shipping it, and has stories about what worked and didn’t, and why. Especially a candidate who worked on a “legacy” project. That type of candidate will be much more knowledgeable and able to more effectively steer an AI agent in the best direction, taking various trade-offs into account. It’s all too easy to just ship something and move on in our industry.

Brownie points if they made key architecture decisions and if they worked on a large scale system.

Claude building something for you isn’t “learning” in my opinion. That’s like saying I can study for a math exam by watching a movie about someone solving math problems. Experience doesn’t work like that. You can definitely learn with AI but it’s a slow process, much like learning the old fashioned way.

Maybe “experience” means different things to us…


I actually prefer removing people

The obvious solution is for Anthropic et al. to certify the skills of each user:

> “Good at explaining requirements, needs handholding to understand complex algorithms, picky with the wording of comments, slightly higher than average number of tokens per feature.”

I’m not saying this would be good at all, but the data (/insights) and the opportunity are clearly there.


You’re right, and I think this is the future.

For any proctored standardized test a person takes, AI should be able to quickly summarize that person’s abilities. This way, instead of people writing their own BS resumes, a trusted test provider can evaluate an individual deeply, solving the problem of having to waste time on coding interviews, etc. It will speed up hiring.


At work we’ve had like 10 hours of “AI training”. Like training us to use AI. I obviously learned nothing

_Open to Claude_ ;)

I use the enterprise plan for work and often burn ~$150 worth of tokens per day. I have noticed it exhibiting similar behaviors here.

When you say nearly unlimited tokens, do you mean the $100 or $200 subscription?


$200, and over December it was doubled. I tried my best, in between family time and friends, to burn a hole in it. Never got near doing so.

I have the enterprise plan and get to use it for both work and some personal stuff.

I mainly use it for side projects and doing research for writing stuff on my blog.

I use Opus 4.6 with Claude Code’s 1M context and consistently burn $150-200 worth of tokens per day. I wonder how you manage to do anything with a $10/mo plan.


AI just lowered the cost of replication. Now you can replicate good or bad stuff but that doesn't automatically make AI the enabler of either.

I think a lighter version of literate programming, coupled with languages that have a small API surface but are heavy on convention, is going to thrive in this age of agentic programming.

A lighter API footprint probably also means a higher amount of boilerplate code, but these models love cranking out boilerplate.

I’ve been doing a lot more Go instead of dynamic languages like Python or TypeScript these days. Mostly because if agents are writing the program, they might as well write it in a language that’s fast enough. Fast compilation means agents can quickly iterate on a design, execute it, and loop back.

The Go ecosystem is heavy on style guides, design patterns, and canonical ways of doing things. Mostly because the language doesn’t prevent obvious footguns like nil pointer errors, subtle race conditions in concurrent code, or context cancellation issues. So people rely heavily on patterns, and agents are quite good at picking those up.

My version of literate programming is ensuring that each package has enough top-level docs and that all public APIs have good docstrings. I also point agents to the Google Go style guide [1] each time before they work on my codebase. This yields surprisingly good results most of the time.

[1] https://google.github.io/styleguide/go/


> The Go ecosystem is heavy on style guides, design patterns, and canonical ways of doing things.

Go was designed based on Rob Pike's contempt for his coworkers (https://news.ycombinator.com/item?id=16143918), so it seems suitable for LLMs.


Then it results in an absurd amount of duplication. I regularly encounter error strings like:

error:something happened:error:something happened


Yes, and that is desired.

Error: failed processing order: account history load failure: getUser error: context deadline exceeded


Your example shows an ideal case without repetition. If every layer just wraps the error without inspecting it, then there will be duplication in the error string.

I have never seen that. I have shipped multiple dozens of services at half a dozen companies. Big code bases. Large teams. Large volumes of calls and data. Complicated distributed systems.

I am unable to imagine a case where an error string repeated itself. In a loop, an error could repeat, but those show as a numerical count value or as separate logs.


This feels like manually written stacktraces

I’d find "Error: failed processing order: context deadline exceeded" just as useful and more concise.

Typically there is only one possible code path if you can identify both ends.


Not in my experience. Usually your call chain has forks. Usually the DoThing function will internally do 3 things and any one of those three things failed and you need a different error message to disambiguate. And four methods call DoThing. The 12 error paths need 12 uniquely rendered error messages. Some people say "that is just stack traces," and they are close. It is a concise stack trace with the exact context that focuses on your code under control.

If you have both the start of the call chain and the end of the call chain mapped, you will get a different error response almost every time, and it is usually more than enough. So say your chain is:

Do1:...Do10, which then DoX,DoY,DoZ and one of those last 3 failed.

Do you really need Do1 to Do10 to be annotated to know that DoZ failed when called from Do1? I find:

Do1:DoZ failed for reason bar

Just as useful and a lot shorter than: Do1: failed:Do2:failed...Do9 failed:Do10:failed:DoZ failed for reason bar

It is effectively a stack trace stored in strings; why not just embed a proper stack trace in all your errors if that is what you want?

Your concern with having a stack trace of calls seems hypothetical to me, but perhaps we just work on different kinds of software. I think, though, you should allow that for some people, annotating each error just isn't that useful, even if it is useful for you.


After a decade of writing go, I always wrap with the function name and no other content. For instance:

do c: edit b: create a: something happened

For functions called doC, editB, createA.

It’s like a stack trace and super easy to find the codepath something took.


I have a single wrap function that does this for all errors. The top level handler only prints the first two, but can print all if needed.

I have never had difficulty quickly finding the error given only the top two stack sites.

Any complaint about Go boilerplate is flawed. The purpose and value is not in reducing the code written; it is to make code easier to read, and Go achieves this goal better than any other language.

This value is compounding with coding agents.
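One way such a single wrap helper can be written (this is a guess at the approach, not the parent commenter's actual code: runtime.CallersFrames picks up the calling function's name automatically):

```go
package main

import (
	"errors"
	"fmt"
	"runtime"
	"strings"
)

// wrap prefixes an error with the name of the function that
// called wrap. The helper name and the trimming are assumptions,
// not a standard API.
func wrap(err error) error {
	if err == nil {
		return nil
	}
	pc := make([]uintptr, 1)
	// Skip runtime.Callers itself and wrap to land on the caller.
	n := runtime.Callers(2, pc)
	frames := runtime.CallersFrames(pc[:n])
	frame, _ := frames.Next()
	name := frame.Function // e.g. "main.createA"
	if i := strings.LastIndex(name, "."); i >= 0 {
		name = name[i+1:] // keep just "createA"
	}
	return fmt.Errorf("%s: %w", name, err)
}

func createA() error { return wrap(errors.New("something happened")) }
func editB() error   { return wrap(createA()) }
func doC() error     { return wrap(editB()) }

func main() {
	fmt.Println(doC()) // → doC: editB: createA: something happened
}
```

A top-level handler that only prints the first two segments would show "doC: editB: …" here, while the full chain stays available for the rare cases where it's needed.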

