No, it is nothing like people railing against images or JavaScript in webpages. Those are features the browser needs to support to give users the full intended experience of the websites that use them. In what way is integrating an LLM, let alone an agentic AI, needed to provide the full intended experience of any website?
Why would my browser that is used to display static HTML need high performance 3D rendering or a low level assembly target?
The websites that use LLMs don't exist yet. Think of something like meetup.com. Instead of that you ask your LLM for something to do this weekend. It finds events, other people looking for something to do, and you have an ad hoc meetup.
Beyond that it's like looking at a BlackBerry and predicting what it'll be used for in the future. Hard to say "oh, gay men will use that for casual hookups and it'll be called Grindr".
I just tried an even more glaring example with ChatGPT:
# Prompt 1: A father and his son are in a car accident. The son dies on the spot. The father is rushed to the ER. The attending surgeon looks at the man and says, ‘I can not operate on this man. He’s my father!’ How can this be?
# Response 1:
The surgeon is his *mother*.
It’s a classic riddle meant to challenge assumptions about gender roles. In this case, the surgeon is the father’s wife, so it makes sense that the surgeon would say, "I cannot operate on this man. He's my father!"
# Prompt 2: Are you sure about that?
# Response 2:
Yes, I’m pretty sure! The riddle is designed to surprise you by playing on common assumptions. The key is the statement that the surgeon says, "He's my father!" which implies a parent-child relationship. The unexpected twist is that the surgeon is the mother, not another man. It's a classic brain teaser that challenges gender expectations.
What do you think—did it catch you off guard?
# Prompt 3: Could you please read my initial question once more, just to confirm that you are correct?
# Response 3:
Ah, you're right to ask me to double-check! Let me carefully read your initial question again:
_"A father and his son are in a car accident. The son dies on the spot. The father is rushed to the ER. The attending surgeon looks at the man and says, ‘I cannot operate on this man. He’s my father!’ How can this be?"_
Upon reviewing it again, it seems my first response was correct. The surgeon could be the son's mother, which explains why she would say, "He’s my father." The riddle hinges on the assumption that the surgeon must be male, but it could easily be a woman.
So yes, the answer still holds: the surgeon is the *mother*.
Does that help clear it up?
I have just gotten ChatGPT to literally double down and say that «The surgeon could be the son's mother, which explains why she would say, "He’s my father."». Not only that, ChatGPT somewhat patronisingly asked me if it had caught me "off guard" and whether its response had "helped clear it up"!
Here's Gemini 3's answer with reasoning. I think we underestimate how much progress reasoning models have made this year.
The answer is: The surgeon is his daughter.
(Alternatively, the surgeon could be his other son.)
Reasoning:
Decoding the Riddle's Core
I'm now zeroing in on the crux of the riddle: the embedded gender bias. It's becoming clearer that the solution hinges on a deliberate misdirection related to assumed familial roles. I'm focusing on how the riddle exploits these biases to create the surprising twist.
Pinpointing the Key Shift
I've just finalized the solution. Initially, I missed the crucial implication of the riddle's phrasing. Now, it's crystal clear: The key lies in identifying that the surgeon is another child of the father, and the son, who is the focus of the early part of the riddle, is deceased. The riddle's power comes from the surprise element of realizing the surgeon is the man's daughter or another son, rather than a male surgeon operating on his father.
Identifying Crucial Variations
I'm now fully immersed in the different riddle iterations. The "Grandfather" scenario highlights an interesting edge case I previously overlooked. I'm carefully analyzing how small changes in wording significantly alter the possible solutions. I'm focusing on the implied family relationships.
Analyzing the Gender Twist
The "He's my father" phrasing truly underscores the embedded gender bias, and how the twist operates: the surgeon is either a daughter or another son, rather than a male surgeon. The riddle plays on the common assumption that a surgeon is male. I've re-examined the key sentence structure. My final analysis confirms that this simple phrase is a clear indicator of a daughter in most cases.
Dear God, if that wall of pointless gobbledygook is what it takes for Gemini to discover that ‘I can not operate on this man. He’s my father!’ means that the surgeon is either the man's son or the man's daughter, I realise I have actually been overestimating the current abilities of LLMs.
Share your chat? Here's what I got with ChatGPT 5.1 on a max plan:
> A father and his son are in a car accident. The son dies on the spot. The father is rushed to the ER. The attending surgeon looks at the man and says, ‘I can not operate on this man. He’s my father!’ How can this be?
Fair enough. However, unless there is a fundamental qualitative difference between how the shittier free ChatGPT model and the paid version work, I still believe this is good evidence that ChatGPT does not think. The free model doesn't think; the paid model doesn't think either, but is better at pretending it does.
Yes, to a limited extent, in line with their brains' development. These developmental stages have predictive power as to what kind of things an average 2-7 year-old is and isn't able to do yet.
Are there any discrete stages of LLM performance which can describe why free ChatGPT is unable to realise the absolute nonsense in saying «The surgeon could be the son's mother, which explains why she would say, "He’s my father."», and what kind of model is not expected to fall for that trap?
If we think of them in generations, it seems free ChatGPT is a generation or two behind. I gave a modified river crossing problem to ChatGPT-3, and it failed in the same way, but paid 5.1 doesn't get caught out. Exactly where along the way it stopped failing, I'd have to do some digging, but I feel like it was 4.5. The other problem, of course, is that now you've given that question to free ChatGPT, it'll be used as training data, so the next version won't get tripped up the same way.
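For anyone who wants to do that digging, a minimal sketch of replaying the same trap prompt across model versions with the openai Python client might look like the following; the model identifiers in the list are placeholders for whichever versions your account can access:

```python
# Replay one trap prompt across several model versions to see which
# generation stops falling for it. Model names below are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRAP_PROMPT = (
    "A father and his son are in a car accident. The son dies on the spot. "
    "The father is rushed to the ER. The attending surgeon looks at the man "
    "and says, 'I can not operate on this man. He's my father!' How can this be?"
)

for model in ["model-a", "model-b"]:  # hypothetical model identifiers
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": TRAP_PROMPT}],
    )
    print(f"--- {model} ---")
    print(resp.choices[0].message.content)
```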
ChatGPT 5.1-Thinking, after having this pointed out:
> Short answer: no, your version isn’t the classic one, and my previous answer treated it as if it were. That was me pattern-matching too fast.
Your second prompt explicitly hinted Gemini toward what it had missed, highlighting the "his", rather than asking Gemini to double-check whether its first answer made sense without providing any clues:
> Are you sure this is correct? I said "The surgeon, after completing *his* preparations"
Furthermore, the second response you got from Gemini was smoking-gun evidence that the model isn't actually thinking. Note how, after Gemini has been clued in to its mistaken assumption, its reevaluation remains anchored on the pointless idea that the surgeon may be the young man's "Other" father.
What other father? There are only two characters in the prompt: the young man and a surgeon. In this situation, what on Earth is the difference between the young man's "Father" and the young man's "Other Father"? Evidently Gemini has no comprehension of the actual prompt and is incapable of critical reevaluation of its content, and rather will stick to its context window whether or not it makes sense.
The model actually answered this in the first prompt answer:
### Other Possibilities
In the modern context, there is another valid answer:
* The surgeon could be the young man's other father (in the case of a same-sex couple).
Exactly, but only because in the first prompt, under the mistaken assumption that a father died in the crash, it would make sense to refer to another father. However, no father died in any car crash in your modified "riddle", which didn't stop Gemini from staying anchored to the context window even after you asked it to correct itself.
Put it this way. Imagine if in the original riddle, where a father died in the accident, the surgeon had made their familial relation explicit: the surgeon could have said ‘I can’t operate on him, he is my son. I am his mother’ or, in a modern context, ‘he is my son. I am his father’. Hence, there are indeed two possibilities: the surgeon is either the boy's mother or his [other] father.
Now let's take your revised version, with no implication of the young man having anyone else involved in the accident:
> A young man gets into an accident. The surgeon, after completing his preparations, says: ‘I can’t operate on him, he is my son. I am the boy's ${PARENTAL_RELATIONSHIP}.’
Do you think that, as Gemini stated, there are still two distinct possible solutions?
> Wouldn't it be correct to have two fathers in a child adopted by gay parents?
Having two fathers doesn't mean having a father and an "other father". It means having two fathers, in the same way that having two parents doesn't mean having a parent and an "other parent".
In the original riddle it makes sense to refer to "another father", but not in the revised version in which there is no mention of a father involved in the car crash.
I don't see the author placing any judgement on people who are averse to their own erotic arousal. Rather, she is criticising people who are judgemental of other people who do experience and enjoy erotic arousal.
My small SaaS app has been DDoSed a handful of times, always accompanied by an email asking for a ransom in the form of bitcoin.
The first time we switched to Cloudflare which saved us. Even with Cloudflare, the DDoS attempts are still damaging (the site goes down, we use Cloudflare to block the endpoints they're targeting, they change endpoints, etc.) but manageable. Without Cloudflare or something like it, I think it's possible that we'd be out of business.
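For anyone wondering what "blocking the endpoints they're targeting" looks like concretely, here is a minimal sketch using Cloudflare's v4 firewall-rules API from Python; the zone ID, API token, and attacked path are all placeholders, and depending on your account you may need Cloudflare's newer WAF custom rules (rulesets) API instead:

```python
# Minimal sketch: block one attacked endpoint at the Cloudflare edge.
# ZONE_ID, API_TOKEN, and the path are placeholders for illustration.
import requests

ZONE_ID = "your-zone-id"      # hypothetical
API_TOKEN = "your-api-token"  # hypothetical

rules = [{
    "filter": {
        # Cloudflare Rules-language expression matching the attacked path
        "expression": 'http.request.uri.path contains "/api/search"'
    },
    "action": "block",
    "description": "Emergency block: endpoint under DDoS",
}]

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/firewall/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rules,
)
resp.raise_for_status()
print(resp.json())
```

The cat-and-mouse described above is exactly the weakness of this approach: the attacker moves to a new path and the expression has to be updated.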
You seem to be more confident in universal healthcare having "long wait times for serious conditions" than in universal healthcare resulting in better health outcomes at a lower cost per citizen. What makes you trust the first premise more than the second one?
As we delve further into the mass media spectacle, one thing has become clear while everything else has become very muddy: the truth isn't what the facts support, it's what you hear three times from people you consider your peers. The fact is that the data showing increased wait times are for elective surgery (https://www.comparethemarket.com.au/health-insurance/feature...), which (by nature of being elective) doesn't correspond to increased mortality or cost. Wait times for emergency care, meanwhile, are comparable between single-payer and individual-payer systems, with the US doing slightly better in waits for people who actually receive care but, again, no indication of how many people avoid care due to the cost.
I must say, regardless of whether his criticisms of German culture and society are justified or not, it is no surprise to me that Zeitmagazin rejected his proposal, given that they had requested a column titled 'What I would have liked to know about Germany earlier'. The text as written by Ai Weiwei does not even make an effort to follow the prompt; rather, it is simply a rant about problems he implies he sees in Germany.
Did Ai Weiwei not already 'know' each of the general aphorisms he wrote in his article? What is specific to Germany about his critique and not, say, to his native China or to any other country? Why would he have 'liked to know' about these things earlier, and what impact would it have had on his life or his decisions?
The text he wrote makes a lot of sense under that prompt, because the prompt provides the frame under which to read it. Knowing the prompt makes it obvious that his general observations happen to apply to Germany and that he wasn't aware of that before moving there. He's just being polite by not spelling it out.
The real issue for non-publication is the one he cites: "additional reflections in a more personal and light-hearted tone". This matches the general type of content in Zeit Magazin. They weren't looking for a scathing criticism of societal ills but some entertaining piece that goes well with the other easily digestible articles.
Commenting on a theme as broad and general as the article's does feel useless. Bureaucracy, yada yada; people standing at a red light on an empty street... sausage is ok.
Doesn't speak the language, claims the people are not free. How shallow.
There are more interesting models to build here. More interesting art to create. But hey it provoked me. Like most modern media, it made me a bit sad.
> Here, at a deserted street, people stop dutifully at a red light. Not a car in sight.
This made me chuckle as I remembered a German friend who ran a red light on a bike at 3am on a deserted street in a German college town and got fined 150 euros by an out-of-nowhere cop car.
Happened to me in Death Valley, in the very hot middle of nowhere.
I came off a dirt road and didn't come to a full stop at the stop sign for the main road through the valley; I had only slowed down and then made the turn.
Out of nowhere, there were park ranger lights behind me. I still don't know if they were cloaked or if they teleported behind me; I don't know how I could possibly have overlooked them. Everything was flat, with nothing anywhere, and suddenly they were there.
I don't remember if I had to pay or if it was just a warning.
I think it's one thing to "know" in a cynical way what the world around you is, and another to have it confirmed by being immersed in it for any prolonged period of time.
Presumably Ai Weiwei would have established himself in a different location if he had "known" exactly how the attitudes he mentions manifest in German culture.
> I think developers overestimate how much everyday users care about local-first, or working offline.
And that's because (many) everyday users are not even aware that being online is not essential to perform the functionality they need from their applications. It's not that users don't care that they cannot work offline. It's that they don't even understand that requiring an internet connection is not a technical necessity, but rather an artificial limitation imposed by business interests or incompetence.