cryptonector's comments

`rm -rf /` can be made to do nothing destructive and still be standards-compliant because in the standard `rm -rf /` is UB.
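In practice, GNU rm already ships the non-destructive behavior as its default (`--preserve-root`): it special-cases an operand that resolves to the root directory and refuses it. A minimal sketch of that check, in Python (the deletion itself is deliberately elided):

```python
import os

def rm_rf(path: str) -> None:
    """Toy recursive remove that refuses the root directory,
    mirroring GNU rm's default --preserve-root behavior."""
    if os.path.realpath(path) == "/":
        raise PermissionError("refusing to remove '/'")
    # Actual deletion elided; a real implementation would walk
    # the tree and unlink entries here.

try:
    rm_rf("/")
except PermissionError as e:
    print(e)  # refusing to remove '/'
```

The point is that "do nothing destructive" is a perfectly valid implementation of the special case.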

Oh no! It's pouring PRs!

Come on. Maintainers can:

  - insist on disclosure of LLM origin
  - review what they want, when they can
  - reject what they can't review
  - use LLMs (yes, I know) to triage PRs and pick
    which ones need the most human attention and
    which ones can be ignored/rejected or reviewed
    mainly by LLMs

There are a lot of options.
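The triage option in the last bullet could be sketched like this. Everything here is a hypothetical stand-in (the bucket names, the cheap heuristics, the `score_pr` function); a real version would call an LLM where the stub makes its decision:

```python
from dataclasses import dataclass

@dataclass
class PR:
    title: str
    diff_lines: int
    discloses_llm: bool

def score_pr(pr: PR) -> str:
    """Hypothetical triage: route PRs into review buckets.
    A real version would consult an LLM here; this stub uses
    cheap signals only."""
    if not pr.discloses_llm and pr.diff_lines > 1000:
        return "reject"          # huge and undisclosed: not worth human time
    if pr.diff_lines <= 50:
        return "llm-review"      # small enough for machine-assisted review
    return "human-review"        # everything else queues for a maintainer

queue = [PR("fix typo", 2, True), PR("rewrite core", 4000, False)]
print([score_pr(pr) for pr in queue])  # ['llm-review', 'reject']
```

The exact policy matters much less than having one that is explicit and cheap to apply before any human attention is spent.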

And it's not just open source. Guess what's happening in the land of proprietary software? YUP!! The exact same thing. We're all becoming review-bound in our work. I want to get to huge MR XYZ but first I have to review several other people's much larger MRs -- now what?

Well, we need to develop a methodology for working with LLMs. "Every change must be reviewed by a human" is not enough. I've seen incidents caused by ostensibly-reviewed but not actually understood code, so we must instead go with "every change must be understood by humans". That can sometimes mean a plain review (when the reviewer is a SME and also an expert in the affected codebase(s)), and it can mean code inspection (much more tedious and exacting). But it might also involve posting transcripts of the LLM conversations used to develop and, separately, to review the changes, with SMEs doing lighter reviews when feasible, because we're going to have to scale our review time. We might need a much more detailed methodology still, including writing and reviewing initial prompts, `CLAUDE.md` files, etc., so as to make it more likely that the LLM writes good code and that LLM reviews are sensible and catch the sorts of mistakes we expect humans to catch.
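One concrete place to start is the `CLAUDE.md` file itself. Purely as an illustration (the file name is the real Claude Code convention, but the rules below are invented for this sketch), a reviewed prompt file might look like:

```markdown
# CLAUDE.md -- project conventions for LLM-generated changes

- Every PR description must state which model produced the change
  and link the full conversation transcript.
- Run the full test suite before proposing a diff and include the
  test output in the PR.
- Never touch files under vendor/ or generated/.
- Keep diffs under ~300 lines; split larger changes.
```

The point is that this file is itself code-reviewed, so the methodology is versioned and improved alongside the code.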


> Maintainers can...insist on disclosure of LLM origin

On the internet, nobody knows you're a dog [1]. Maintainers can insist on anything. That doesn't mean it will be followed.

The only realistic solution you propose is using LLMs to review the PRs. But at that point, why even have the OSS? If LLMs are writing and reviewing the code for the project, just point anyone who would have used that code to an LLM.

[1] https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...


Claiming maintainers can do these things (which still take effort and time away from their OSS project's goals) misses the point when the rate of slop submissions is ever increasing and malicious slop submitters refuse to follow project rules.

The Curl project refuses AI-generated code and had to close its bug bounty program due to the flood of AI submissions:

"DEATH BY A THOUSAND SLOPS

I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.

This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.

The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions) as we have averaged in about two security report submissions per week. In early July, about 5% of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid-rate has decreased significantly compared to previous years."

https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...


In my experience LLMs mimic human thought, so they don't "copy" but they do write from "experience" -- and they know more than any single developer can.

So I'm getting tired of the argument that LLMs are "plagiarism machines" -- yes, they can be coaxed into repeating training material verbatim, but no, they don't do that unless you try.

Opus 4.6's C compiler? I've not looked at it, but I would bet it does not resemble GCC -- maybe some corners, but overall it must be new, and if the prompting was specific enough as to architecture and design then it might not resemble GCC or any other C compiler much at all.

Not only do LLMs mimic human thinking, they also mimic human faults. One obvious way is that there are mistakes in the LLMs' training materials, so they will evince some imperfections, and even contradictions (since there are contradictions in those materials). Another is that their context windows are limited, just like ours. I liken their hallucinations to crappy code written by a tired human at 3AM after a 20-hour day.

If they are so human-like, we really cannot ascribe their output to plagiarism except when prompted so as to plagiarize.


LLMs just predict the next token. They mimic humans because they were trained on terabytes of human-created data (with no credit given to the authors of that data). They don't mimic human thinking; if they did, you could train them on their own output, but doing that produces model collapse.
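Model collapse is easy to see in a toy setting. The sketch below is nothing like a real transformer -- it's a bigram counter with greedy decoding -- but it shows the mechanism: each generation retrains on the previous generation's output, and the vocabulary immediately shrinks from the 8 distinct tokens of the human-written corpus to a 4-token loop that never recovers.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count next-token frequencies; this table is the whole 'model'."""
    model = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        model[a][b] += 1
    return model

def generate(model, start, n):
    """Greedy decoding: always emit the most frequent successor."""
    out = [start]
    for _ in range(n):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])
    return out

corpus = "the cat sat on the mat and the dog sat on the rug".split()
tokens = corpus  # generation -1: real human-written data
for generation in range(3):
    model = train_bigram(tokens)       # retrain on the previous output
    tokens = generate(model, "the", 60)
    print(generation, "distinct tokens:", len(set(tokens)))
```

Greedy decoding exaggerates the effect, but sampling only slows it down: anything rare in one generation's output is rarer still in the next model's training data.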

Yes, this. Vim needs no plugins for writing prose.

Terrible mistakes. People keep repeating these mistakes. Makes me think of Larry McVoy.

Clearly GP did look and you're just missing the sarcasm.

I don't think it's clear sarcasm in the sense you are making. I think GP was pointing out that doing what OP does yourself (HA git) comes with a lot of costs.

My point is that Pierre CC has really exorbitant pricing that would cost even more.


If it's media attention they want, you'd be right, but in fact you have the opposite pressure because no one wants to be judged a quack by their colleagues. You know what Max Planck said about how science advances, right? One obituary at a time.

There is also s2n-tls. I'm working on a GSS-based interface to TLS, much like SChannel is an SSPI-based interface to TLS.

Not Apples. But Amigas, omg those were smooth.

But what does Opus 4.6 say about this?
