anonym29's comments

Blue states: paternalism over your property, liberty for your body

Red states: paternalism over your body, liberty for your property


Except during COVID, when there was a weird reversal.

The funny thing is the more this comment is downvoted, the more accurate you know it is.

I don't even know if that was much of a "reversal".

Blue states were paternalistic over both your property (business and social gathering shutdowns) and your body (masking, social distancing enforcement), while red states (particularly Texas, Florida) were very laissez-faire for both.

What's perplexing about this is that research has generally correlated higher amygdala activity (fear/worry) with political conservatism, and lower amygdala activity with political progressivism, but in this case, the effect seemed almost inverted.


In that case, I imagine the response of mask mandates wasn't out of fear but was due to the obvious benefit in controlling a disaster. I suspect the anti-masking movement is also a fear response: people are afraid of change, especially extremely visible change.

When progressives become the status quo, they turn into conservatives. There's a meme going around about a 1950s Soviet communist vs. a 2020s American communist, and their diametrically opposed views on things like LGBTQ and immigration.

Fear of infectious diseases is inversely correlated with testosterone levels, and so is liberalism.

I'm not the parent commenter, but is there any example of an event that actually matters?

Elections, wars, the economy, healthcare systems, laws, court cases?

The example you provided does look like insider-trading-type stuff, just like how professional players in 1v1 sports will sometimes intentionally lose after receiving a bribe to do so, but neither your example nor sports seems to have any meaningful impact on anything or anyone who isn't gambling, no?


I don't know. I imagine if someone was going to adjust the timing or targets of strikes in a war in response to a prediction market bet, or the decisions of a high-profile court case etc, they wouldn't say so in public the way the CEO of Coinbase did on that earnings call. It's pretty rare that someone would actually claim they're taking an action specifically with the purpose of altering the outcome of the prediction market bet, rather than giving some other reason. So even if it were happening, the only evidence I might expect to find would be suspicious prediction-market trades around low-probability events.

In situations like this, where you wouldn't expect to see evidence of something even if it were happening, you're basically left to make a judgement based on prior probability. So here that would come down to: is the financial incentive provided by the prediction market high enough relative to the decisionmaker's perceived risk and penalty of being caught? IMO, the answer is currently 'no' for most high-profile cases, but in a future where more money is riding on the bets, or where decisionmakers are insulated from consequences, that could swing to 'yes'.
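To make that concrete, here's a toy expected-value sketch of that judgement in Python (every number below is made up for illustration, not drawn from any real case):

    # Toy expected-value sketch of a decisionmaker's incentive to manipulate
    # an outcome they can bet on. Every number here is hypothetical.
    payoff_if_uncaught = 5_000_000    # profit from the prediction-market bet
    p_caught = 0.30                   # perceived probability of being caught
    penalty_if_caught = 20_000_000    # fines, legal costs, lost career value

    ev = (1 - p_caught) * payoff_if_uncaught - p_caught * penalty_if_caught
    print(f"Expected value of manipulating: ${ev:,.0f}")  # prints $-2,500,000
    # Negative here, so the answer is 'no' -- but scale up the bet size or
    # insulate the decisionmaker (lower p_caught / penalty) and it flips.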


The decision-makers don't need to change their decision whatsoever to corruptly profit from it, though; they can just place bets on the timeframe or outcome of whatever decision they originally intended to make. Why risk operational disruptions and a greater chance of getting caught when you can still profit exorbitantly from insider trading on your knowledge without incurring those unnecessary risks?

Privacy advocates, UNITE!

Just leave your name and email on this contact form on github, so privacy can be solved once and for all!

(/s, but an interesting paradox for pro-privacy initiatives soliciting identifiable public support)


babas03 put it best IMO - https://news.ycombinator.com/item?id=47394432

I'd also second bluefirebrand's point that "it's your job to know what to ask the AI to build" - https://news.ycombinator.com/item?id=47394349

Those are great answers to the question you did ask, but I'd also like to answer a question you didn't ask: can AI improve your learning, rather than diminish it? The answer is absolutely a resounding yes. You have a world-class expert you can ask to explain a difficult concept in a million different ways with a million different diagrams; you have a tool that will draft a syllabus for you; you have a partner you can converse with to probe the depth of your understanding of a topic you think you know, one that can help you find the edges of your own knowledge, tell you what lies beyond those edges, tell you which books to check out at your library to study those advanced topics, and so much more.

AI might feel like it makes learning irrelevant, but I'd argue it actually makes learning more engaging, more effective, more impactful, more detailed, more personalized, and more in-depth than anyone's ever had access to in human history.


The people proposing these kinds of infringements on civil liberties need to start being criminally tried for treason. Not just in this case, or this country, or this hemisphere.

It's hilarious that they think it needs to be codified into law. As if the right to do math weren't intrinsic, and could even theoretically be revoked by the government, lol.

I think it betrays cynicism about the tendency of single-objective-optimizing market actors to rent-seek and cartelize. I don't think it's a stretch at all. On the surface it would be equally preposterous to suggest that breathing could theoretically be revoked by the government, and it truly is preposterous, yet we do have laws on the books that apply depending on whether the air you breathe has "illegal substances" in it. Then again, explicit revocation is a high bar when you can throttle the free use of computational resources through regulatory capture: the AI incumbents could argue, for example, that AI is so dangerous it must be kept out of the hands of the unwashed masses. Another excellent strategy (with a rather high bar to entry) would be to distort the markets themselves by ensuring that your prospective renters can't afford basic compute.

Wrongful imprisonment isn't something that started with AI. This is why everyone should be against the death penalty: the state cannot be trusted not to make mistakes in determining guilt.

If 10% of drivers lacked car insurance, would your solution be to remove the legal requirement to possess a valid insurance policy to operate a motor vehicle because it discriminates against the poor?

The poor have a right to vote, while they don't have a right to operate a motor vehicle. We can debate over how disenfranchising it is to be unable to drive in the US (very), but the law makes a pretty clear distinction between these two activities.

No. Because operating a motor vehicle is a very dangerous activity.

This is a very poor analogy that you have here.


Nobody ever voted someone dangerous into power? Was voting for Hitler a harmless act?

And voter ID will fix this?

Does mandatory auto insurance prevent accidents from happening? If not, should we get rid of it?

No, but it does mitigate the damage... where are we in this analogy anymore? :laughing:

You just answered your own question. Voter ID won't solve all the problems with illegitimate votes being cast, but it will mitigate the damage.

Except that it will cause far more damage than it mitigates.

Does disenfranchising felons cause damage? What is the nature of the damage? Can it be quantified in any way?

As a Strix Halo owner, I've been eagerly awaiting Nemotron 3 Super since it was announced for H1'26 when Nemotron 3 Nano dropped. It's humbling to watch the industry move so fast that Qwen 3.5 122B A10B ends up being competitive with this on benchmarks, though that isn't a dig at Nemotron 3 Super so much as it is a testament to Qwen 3.5's phenomenal achievements.

Still, the NVFP4 benchmark numbers also look fantastic, which is enticing since I'm considering supplementing my Strix Halo rig with a GB10 rig as well, not to mention the YaRN-less 1M native context window and that gorgeous hybrid mamba architecture, which scales exceptionally well into the deep context lengths a 1M window unlocks.
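To put rough numbers on why the hybrid mamba part matters at that length, here's a back-of-envelope KV-cache estimate (the layer counts and dims below are my own illustrative guesses, not the published config):

    # Back-of-envelope KV-cache size at long context. The architecture
    # numbers are illustrative guesses, not Nemotron's actual config.
    def kv_cache_gib(attn_layers, kv_heads, head_dim, context, bytes_per=2):
        # 2x for K and V; bf16 = 2 bytes per element
        return 2 * attn_layers * kv_heads * head_dim * context * bytes_per / 2**30

    ctx = 1_000_000
    full_attn = kv_cache_gib(attn_layers=48, kv_heads=8, head_dim=128, context=ctx)
    hybrid    = kv_cache_gib(attn_layers=6,  kv_heads=8, head_dim=128, context=ctx)
    # Mamba/SSM layers carry a small fixed-size state instead of a per-token
    # cache, so only the few attention layers pay the linear-in-context cost.
    print(f"full attention: {full_attn:.0f} GiB vs hybrid: {hybrid:.0f} GiB")

That order-of-magnitude gap (roughly 183 GiB vs 23 GiB under these guesses) is basically what makes a 1M window even thinkable on 128GB-class boxes like Strix Halo or a GB10.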

It's fascinating how far Nvidia has been able to push models trained entirely on synthetic data, though it makes me curious what the hallucination rate turns out to be: this is exactly what I thought we weren't supposed to be doing to avoid model collapse.


