It's way too far into the Trump administration for people to still be responding to Trump's authoritarian moves by finding Biden-administration actions that sound vaguely similar if you don't think too hard, and then pretending nothing new is going on here. (Even if it weren't, "that's nothing" would be a pretty weird inference to draw from a comparison to something that clearly upsets you, and an article is a "piece", not a "peace".)
(It's one thing to ask people to be fair in responding to your actual comment and not a strawman. It's another to ask us to pretend we were born yesterday. We do in fact have external sources of information about Lonsdale's political allegiances.)
This is interesting partly because Alex Karp (at least used to) occasionally claim to be a socialist when it was inconvenient or uncool to be defined as a standard-issue right-winger. I never thought that meant much myself - any more than it's meaningful for Lonsdale to define himself as against "evil authoritarian forces" here while advocating the murder of his political opponents - but I know people who took him seriously for some reason.
It's good to have these guys out in the open as Pinochet types, though. Silver lining of the Trump era.
I have nonspecific positive associations with Dan Wang's name, so I rolled my eyes a bit but kept going when "If the Bay Area once had an impish side, it has gone the way of most hardware tinkerers and hippie communes" was followed up by "People aren’t reminiscing over some lost golden age..."
But I stopped at this:
> “AI will be either the best or the worst thing ever.” It’s a Pascal’s Wager
That's not what Pascal's wager is! Apocalyptic religion dates back more than two thousand years and Blaise Pascal lived in the 17th century! When Rosa Luxemburg said to expect "socialism or barbarism", she was not doing a Pascal's Wager! Pascal's Wager doesn't just involve infinite stakes, but also infinitesimal probabilities!
The phrase has become a thought-terminating cliche for the sort of person who wants to dismiss any claim that stakes around AI are very high, but has too many intellectual aspirations to just stop with "nothing ever happens." It's no wonder that the author finds it "hard to know what to make of" AI 2027 and says that "why they put that year in their title remains beyond me."
It's one thing to notice the commonalities between some AI doom discourse and apocalyptic religion. It's another to make this into such a thoughtless reflex that you also completely muddle your understanding of the Christian apologetics you're referencing. There's a sort of determined refusal to even grasp the arguments that an AI doomer might make, even while writing an extended meditation on AI, of which I've grown increasingly intolerant. It's 2026. Let's advance the discourse.
I'm not sure I understand your complaint. Is it that he misuses the term Pascal's Wager? Or more generally that he doesn't extend enough credibility to the ideas in AI 2027?
More the former. Re the latter, it's not so much that I'm annoyed he doesn't agree with the AI 2027 people; it's that he spends a few paragraphs talking about them without appearing to have bothered to even understand them.
If you can't define intelligence in a way that distinguishes AIs from people (and doesn't just bake that conclusion baldly into the definition), consider whether your insistence that only one is REAL is a conclusion from reasoning or something else.
About a third of Zen and the Art of Motorcycle Maintenance is about exactly this disagreement except about the ability to come to a definition of a specific usage of the word "quality".
Let's put it this way: language (written or spoken), art, music, whatever... a primary purpose of these things is to serve as a sort of serialization protocol for communicating thought states between minds. When I say I struggle to come to a definition, I mean I think these tools are inadequate to do it.
I have two assertions:
1) A definition in English isn't possible
2) Concepts can exist even when a particular language cannot express them
Charitably, I'm guessing it's supposed to be an allusion to the chart with cost per word? Which is measuring an input cost, not an output value, so the criticism still doesn't quite make sense, but it's the best I can do...
Did they turn out to be right? Maybe, I'm not familiar with the research here, but no evidence for that has actually been posted. This study being untrustworthy doesn't make it prove its opposite instead.
Scale AI wrote a paper a year ago comparing various models' performance on benchmarks to their performance on similar but held-out questions. Generally the closed-source models performed better, and Mistral came out looking pretty bad: https://arxiv.org/pdf/2405.00332
In many circumstances - including when that person is married to a US citizen, or when they'll likely be killed on return to their country of birth - it is indeed crazy and cruel.
(In more ordinary circumstances it's merely arbitrary and unjust.)
Yes, this is by no means only a U.S. problem. Some countries are worse, even where birth rate trends seem like they should make it more obviously self-destructive. A tendency towards xenophobia seems to be an unfortunate human universal, although one we can sometimes overcome.