Hacker News | jonas21's comments

Upvote the conversations that you find to be more interesting. If enough people do the same, they too will make it to the top.

Parent implies there might be some "boosting" involved, in which case "upvote the conversations that you find to be more interesting" won't change anything...

Not saying this is the case, but it's what the comment implies, so "just upvote your faves" doesn't really address it.


If you say: "Generate a strong password", then Claude will do what's reported in the article.

If you say: "Generate a strong password using Python", then Claude will write code using the `secrets` module, execute it, and report the result, and you'll actually get a strong password.

To get good results out of an LLM, it's helpful to spend a few minutes understanding how they (currently) work. This is a good example because it's so simple.
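For reference, this is roughly the kind of snippet Claude tends to produce for the second prompt (the function name and length here are illustrative, not what Claude literally writes): it draws each character from the OS's cryptographically secure random source via the `secrets` module, rather than sampling tokens.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a password from a CSPRNG, not from model token sampling."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The key difference from the first prompt is that the randomness comes from `secrets` (backed by `os.urandom`), so the result isn't recoverable from the model's sampling behavior.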


I think that "Generate a strong password" is a pretty clear and unambiguous instruction. Generating a password that can be easily recovered is a clear failure to follow that instruction.

Given that Claude already has the ability to write and execute code, it's not obvious to me why it should, in principle, need an explicit nudge. Surely it could just fulfil the first request exactly like it fulfils the second.


It's not actually thinking, though. There's no way for it to "know" it will be wrong because it wasn't trained on content covering that.

Maybe in the future companies making the models will train them specifically on when to require a source of true randomness and they might start writing code for it.


> It's not actually thinking, though.

That may well be, I genuinely don't know. However, consider the following thought experiment:

Ask a random stranger on the street[*] to "generate a random password" and observe their behaviour. Are they whipping out their Python interpreter or just giving you a string of characters?

Now ask yourself whether this random stranger is capable of thought.

I think it's pretty clear that the former is a poor test for the latter.

[*] someplace other than Silicon Valley :)


It's 2026, on Hacker News of all places, and people still think LLMs "know" stuff. We're doomed...

> a human brain operates on 12 to 25 watts

Yeah, but a human brain without the human attached to it is pretty useless. In the US, it averages out to around 2 kW per person for residential energy usage, or 9 kW if you include transportation and other primary energy usage too.
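Just to make the comparison concrete, here's the back-of-the-envelope arithmetic using the figures quoted in this thread (the 20 W brain figure is the rough midpoint of the 12-25 W range mentioned upthread):

```python
brain_w = 20          # ~midpoint of the 12-25 W range for a human brain
residential_w = 2_000 # ~2 kW per person, US residential average (comment's figure)
primary_w = 9_000     # ~9 kW per person including transportation and other primary energy

print(residential_w / brain_w)  # 100.0 -- residential use is ~100x the brain alone
print(primary_w / brain_w)      # 450.0 -- total primary energy is ~450x
```

So the "25 watts" framing understates the real cost of keeping a brain running by a couple of orders of magnitude.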


Fair.

Maybe The Matrix (1999), with its human battery farms, was on to something. :)


I suspected it wasn't just battery farms, but more like what you see in less mass market scifi where the humans are used for more than just batteries... they'd also be some storage and processing for the system (and no longer humans).

However at that point I don't see the value of retaining the human form. It's for a story obviously, but a not-human computational device can still be made out of carbon processing units rather than silicon or semiconductors generally.


Here's Claude's translation of the PDF in the repo:

https://claude.ai/public/artifacts/0c40c3f8-16de-4947-93c1-3...

I couldn't verify it, and a human translation would be preferable -- but it's probably good enough to get an idea of the story if you want to read some right now.


I can verify it looks fine.

Not absolutely rigorous, e.g.

> "You have far more knowledge of the stars and the planets than any other living man"

Why "far more knowledge"? I don't see any emphasis like that in the original.

I'd have a few nits but they're of similarly small magnitude.


> “I became certain the planet was in the sky, in the very place where the ancient Egyptian scientist had marked it”

Ah yes, planets famously remain static in the sky…


Why? Very few nonprofits contain that language in their mission statements. It's certainly not required to be there.

Perhaps not, but if it was there before and then got suddenly removed, that ought to at least raise the suspicion that the organization's nature has changed and it should be re-evaluated.

There are dozens of spam and security settings that admins can change in the Google Workspace console, presumably because different businesses have different requirements. So in practice, there aren't just two sets of rules in Gmail -- there are probably thousands or millions (however many combinations of settings are actually in use).
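The "thousands or millions" claim is just combinatorics: with n independent on/off settings there are 2^n possible configurations. A sketch (the count of 20 toggles is hypothetical, just to show the scale):

```python
def num_configs(n_boolean_settings: int) -> int:
    """Distinct configurations reachable from n independent on/off settings."""
    return 2 ** n_boolean_settings

print(num_configs(20))  # 1048576
```

So even 20 boolean admin toggles already yield over a million distinct rule sets, before counting settings with more than two values.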

What do AI or GLP-1s have to do with a casino economy?

AI, being a boom with a lot of companies trying to make something out of nothing.

GLP, not sure.


I don't really agree, but you could argue that GLP is targeting people who want a magic solution just like casinos.

GLP-1s target humans who need a pharma intervention to make the reward centers in their brains more defensive against the system they are forced to exist in.

We don’t need ads for it, we should hand it out over the counter to anyone who wants or needs it, but I digress.


Which evil system do they live in? Veggies are the cheapest stuff I can find in supermarkets anywhere. I agree their reward systems are fried, but that's a result of decades of over-eating the worst junk mankind ever produced, while this whole 'evil system' screams at them from all sides about how stupid and suicidal this is, how sugar is the same as cocaine, and so on.

It's all a mental problem (and here in Switzerland this is the general consensus among doctors, and I have one for a wife), and an attempt to solve it anywhere else down the decision line apart from the head is just (temporarily, in the case of GLP) fixing the consequences.


Not every gas station, convenience store, and pharmacy is stocked with aisle after aisle of cocaine. I don't know that I would call it 'evil', but I agree it is a system most people are forced to exist in.

If we're appealing to authority: my mother, my father, and my sister are all highly accomplished doctors, and they believe GLP-1s will become part of a standard drug package for older adults, like statins, because it's far more achievable than education, which we don't have and which wouldn't work in the food system that exists in the US.


> It's all a mental problem

Yes, so let's just solve it. Okay, no more mental problem.

What's that? I didn't actually say a solution? Yeah, that's because I don't have one, and neither do you.

We can't just make people good people. It doesn't work, it's never worked, and it will never work. If you think otherwise, you are wrong. If you still think otherwise, you should think less because obviously it's not doing you any good.

We can sit here alllll day and tell people not to inject heroin or smoke cigarettes. But guess what? So long as the human brain is how it is, and we have those things available, people WILL continue to do them.

So while you have fake solutions you've made up in your head and can't even articulate, we have real solutions. GLP-1s. They work, as in they actually work. They actually help solve the problem.

So on one hand, you have an imaginary solution. On the other hand, you have a real solution. Hmm, which one should we gravitate towards? What a tough call!


> It's all a mental problem (and here in Switzerland

Ah, that might explain why you'd think that healthy food is easily available and affordable everywhere. I haven't seen what stores are like in Switzerland for myself, but it sure sounds like a massive improvement and may be part of why the obesity rate there is around 10% instead of over 40% like it is here.


But GLP-1's are a magic solution...

Yeah, it's actually revolutionary if you think about it: easy weight loss versus previous approaches, and it seems to help with smoking and alcohol as well.

The side effects are minor compared to the wins.


The last times so many tech ads made their way to the Superbowl was during the crypto craze and the dot-com bubble. These are symptoms of an overly speculative economy.

> If I could destroy these things - as the Luddites tried - I would do so, but that's obviously impossible.

Certainly, you must realize how much worse life would be for all of us had the Luddites succeeded.


Or perhaps they would have advanced the cause of labor and prevented some of the exploitation from the ownership class. Depends on which side of the story you want to tell. The slur Luddite is a form of historical propaganda.

Putting it in today's terms, if the goal of AI is to significantly reduce the labor force so that shareholders can make more money and tech CEOs can become trillionaires, it's understandable why some developers would want to stop it. The idea that the wealth will just trickle down to all the laid-off workers is economically dubious.


Reaganomics has never worked

> Reaganomics has never worked

Depends how you look at it.

Trickle down economics has never worked in the way it was advertised to the masses, but it worked fantastically well for the people who pushed (and continue to push) for it.


Sure, because it all trickles into their pockets.

> it worked fantastically well for the people who pushed (and continue to push) for it.

That would be "trickle up economics", though.


The problem today is that there is no "sink" for money to go to when it flows upwards. We have resorted to raising interest rates to curb inflation, but that doesn't fix the problem; it just gives them an alternative income source (bonds/fixed income).

I'm not a hard socialist or anything, but the economics don't make sense. If there's cheap credit and the money supply perpetually expands without a sink, of course the people with the most capital will just compound their wealth.

So much of the "economy" orbits around the capital markets and the number going up. It's getting detached from reality. Or maybe I'm just missing something.


Yeah it's called wealth transfer and the vast majority is on the wrong end.

All of this crap had already been figured out by Silvio Gesell and his free economics ("Freiwirtschaft") movement before World War Two. In fact, it had been figured out before Lenin came to power. Communists could have just stolen his ideas, pretended they were their own, and been done with capitalism.

Instead, we live in this absurd timeline where our communist "saviors" preferred losing against capitalists over achieving their stated goals. This tells us that in communism, the means are the goal and the proclaimed end goal is just an excuse to perform the means.

If communists genuinely wanted to help their people and they accept that they might not know how to get there, they would at least run hundreds to thousands of economic experiments, something that wouldn't be possible under capitalism, to find the methods that work. Instead, the self proclaimed saviors are inherently anti-reformist and against incremental change to improve society. They demand that all economic activity be under the control of the state and thereby destroy all possibility of performing economic experiments.


If the human race is wiped out by global warming I'm not so sure I would agree with this statement. Technology rarely fails to have downsides that are only discovered in hindsight IMO.

If you knew a little bit about history then you would know that the "Anti-Luddite" position is literally "shoot the unemployed if they strike".

Equating Luddites with backwards thinking is a way to cover up government violence. You're literally trying to misrepresent the Luddite position by implying that they had some sort of global plot to force the world to be worse, and that they were rightfully stopped by the government. In reality, they had personal grievances about how they were treated, and they took revenge against the owners of capital by vandalizing that capital.

You're trying to twist this into Luddites hating capital and machinery itself, which is factually wrong.


Sure, but would it have been better or worse for the Luddites?

For those who came after, sure. For those at the time, I'm sure they would disagree.

It's correct in the same way as saying ads in the New York Times don't change the articles. Seems fair.

I think a better comparison is saying that search ads don't change search results (but it does change the results page).

The point is that the language and nuance ends up being lost on a large portion of the audience.


Funny, because the impact of introducing ads on the editorial line of any publication that does it is very real. I'd expect the same from ChatGPT.

Just like some YouTube content with built-in ads for AI tools while the video is gushing over AI tools.

But they do?

So that you can be using the current frontier model for the next 8 months instead of twiddling your thumbs waiting for the next one to come out?

I think you (and others) might be misunderstanding his statement a bit. He's not saying that using an old model is harmful in the sense that it outputs bad code -- he's saying it's harmful because some of the lessons you learn will be out of date and not apply to the latest models.

So yes, if you use current frontier models, you'll need to recalibrate and unlearn a few things when the next generation comes out. But in the meantime, you will have gotten 8 months (or however long it takes) of value out of the current generation.


You also don't have to throw away everything you've learnt in those 8 months; there are some things that you'll subtly pick up that you can carry over into the next generation as well.

Also a lot of what you learn is how to work around limitations of today's models and agent frameworks. That will all change, and I imagine things like skills and subagents will just be an internal detail that you don't need to know about.
