No no. The thing is, the Monty Hall guy is responding to YOUR choice. When he opens a losing door, he's acting on what he knows of your choice: HE knows what YOU chose and which door wins, so in revealing the remaining losing choice he's also implicitly revealing the winning one. Call it a coin flip, except he always has to call tails.
Therefore your choice is either the Cadillac or a goat. He cannot open the Cadillac door and has to show a goat, so the remaining option you DIDN'T highlight is that much more likely to be the Cadillac: it could've been either, but he doesn't get to pick randomly, he had to show which one was NOT the winning one.
Hence the result. And since it started out as one pick out of three, he responds to you, and then you respond to the added information by switching; that's where the 66% odds come from: two moves, each responding to the other.
Your explanation isn't wrong, but it's never quite resonated with me because it feels more like a magic trick than something that follows from intuition. Like watching a magician perform a trick, it doesn't quite convey the "why" as much as the "what", and even though I know there's no actual magic, I'm still left having to figure out what happened on my own.
The idea that finally made it click for me is that Monty has to choose one of the doors to open, and because he knows which door has which thing behind it, he'll never pick the door with the winning prize. That means the fact that he didn't pick the other door is potentially meaningful: unless I picked the right door on my first try, the prize is guaranteed to be behind the one he didn't open, because he never opens the winning door himself. His choice communicates meaningful information to me precisely because it's not random, and that part, while seemingly obvious, gets left implicit in almost every attempt to explain this that I've seen.
Another intuitive way to explain it would be to imagine that the step of opening one door is removed, and instead you're given the option of either sticking with your original door or swapping to ALL of the other doors, winning if the prize is behind any of them. It's much more obvious that swapping is the better strategy, and if you then add back the step where he opens all of the doors that are neither your pick nor the winner, it shouldn't change the odds, since you're effectively still picking all of the other doors. This also clarifies why the 100-door case makes switching an even better strategy than the 3-door case: you're picking 99 doors and betting the prize is behind one of them. But the way people usually describe that formulation still doesn't explicitly address why the sleight of hand of opening 98 of the doors is a red herring; they state it as if it's self-evident, which misses the whole point of why this is unintuitive in the first place, in favor of an explanation that clarifies little and only makes sense if you already understand it.
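If the argument still feels slippery, it's easy to check empirically. Here's a toy simulation (my own sketch, not from any textbook) exploiting exactly the point above: when the host opens every losing door except one, switching wins whenever your first pick was wrong.

```python
import random

def monty_hall(n_doors=3, switch=True, trials=100_000):
    """Simulate the Monty Hall game with n_doors.

    The host knows where the prize is and opens every door except the
    player's pick and one other. That other door is the prize door
    whenever the player's first pick was wrong, so switching wins
    exactly when the first pick missed."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        if switch:
            wins += (pick != prize)   # the one unopened door must be the prize
        else:
            wins += (pick == prize)
    return wins / trials
```

Running it gives roughly 2/3 for switching with 3 doors, and roughly 99/100 with 100 doors, matching the "swap to all the other doors" framing.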
It would be pretty weird if they were so broken they were incapable of saying anything right, even at times when they were trying to be ingratiating. You'd have to be astonishingly insane, more even than these people are, to be totally unable to identify something that would be good press.
I'm not saying they can't reach that point, but this ain't it. They are just getting details wildly wrong and being generally obtuse, but this is an attempt at not seeming completely insane and should be graded on that curve. You can't expect every little detail to be insane, that's asking a lot.
Wait, what? I lift weights and chicken breast is a fundamental part of my diet but I'm eating 1/3 to 1/2 a single chicken breast a day, and an egg for breakfast. That CAN'T be right.
I get that I include some rice, peanuts etc. in there, but even if I quit EVERYTHING else there's no way 4 to 5 chicken breasts a day is accurate.
No, when you bring in genetic algorithms (something LLM AI can be adjacent to, given the scale of information it deals in) you can go beyond human intelligence. I work with GA coding tools pretty regularly. Instead of prompting, it becomes all about devising ingenious fitness functions, without having to care whether they're contradictory.
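For anyone unfamiliar, the core GA loop itself is tiny; the craft is entirely in the fitness function. A toy sketch (pure illustration, my own names, not from any particular GA tool), evolving bit strings toward whatever the fitness function rewards:

```python
import random

def evolve(fitness, length=20, pop_size=50, generations=100, mut_rate=0.05):
    """Minimal genetic algorithm over bit strings: keep the fitter half,
    breed children by single-point crossover, mutate bits at random.
    The top half survives unmutated, so the best score never regresses."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)          # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "OneMax" toy problem: the fitness function just counts 1-bits,
# so the population converges toward the all-ones string.
best = evolve(sum)
```

Nothing in the loop cares what `fitness` measures, which is exactly why the interesting work shifts to designing it.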
If superhuman intelligence is solved it'll be in the form of building a more healthy society (or, if you like, a society that can outcompete other societies). We've already seen this sort of thing by accident and we're currently seeing extensive efforts to attack and undermine societies through exploiting human intelligence.
To a genetic algorithm techie, that is actually one way to spur the algorithm toward making better societies, not worse ones: challenge it harder. I guess we'll see if that translates to life out here in the wild, because the challenge is real.
The troubling thing here is: what is a "better" society? As you said, it's just the one that outcompetes the other societies on the globe. We'd like to believe such a thing is an egalitarian, "healthy" liberal society, but it's just as likely to be some form of enslaved, boot-stomping-on-a-face society. Some think people won't accept this, but given human history I'm pretty sure they will. I think these sorts of societies are more of a local minimum, but they only need to grant enough of a short-term boost to unseat the other major powers. Once competition is out of the way, they'll probably survive as a bloated mess for quite some time. The price of entry is so high they won't have to worry about being unseated by competition unless they really screw the pooch. I think this is the troubling conclusion a lot of people, including those in power, are reaching.
It's worth thinking about, but why hasn't this already happened? Or maybe it already has, and if so, what about AI specifically is it that will make it suddenly much worse?
We've had plenty of examples of all those things, over and over, throughout history. Nothing's really new. Societies that get into faceboot territory run afoul of what's already known (there's apparently a CIA handbook to this effect that's being largely ignored in modern America): assert hard rather than soft power and you generate determined and desperate resistance more than you undermine it. That's being demonstrated in countless places right now.
I'm arguing that the egalitarian 'lift my lamp beside the golden door' society is a cheat code for producing the variety and ferment that makes everybody frustrated and unhappy but producing with wild abandon. As a society, this tactic dominates the hell out of would-be ethnostates and dictatorships, which seem to also be a natural tendency of humans. They are interested in not being challenged, and in those like them not being challenged. Comfortable for those fortunate individuals, hopelessly suboptimal for the society they're in.
The rallying cry of 'NO New York Cities! Only sundown towns where if you don't look right you are killed and nobody ever knows about it!' might please some people (who have never been anywhere near those evil cities) but it just goes to show that many people have unhealthy wishes that are bad for them and the societies they're in.
> If superhuman intelligence is solved it'll be in the form of building a more healthy society (or, if you like, a society that can outcompete other societies).
Maybe so, but the point I'm trying to make is that this need look nothing like sci-fi ASI fantasies, or rather, it won't look and feel like that before we get the humanoid AI robots that the GP mentioned.
You can have humans or human institutions using more or less specialized tools that together enable the system to act much more intelligently.
There doesn't need to be a single system that individually behaves like a god - that's a misconception that comes from believing that intelligence is something like a computational soul, where if you just have more of it you'll eventually end up with a demigod.
Trump is Russia's guy. There is no way I'd be screaming for revenge over a horrifying complicated nightmare becoming even more toxic, even more complicated, and even more nightmarish. If anyone comes and gets Trump it ain't Russia: he is already theirs, and acting in such a way as to further all their aims and all their narratives.
He did mean 'last', maybe don't steelman these arguments so dutifully?
Speak for yourself, friend. I don't believe you and think you're making a tragic mistake, but you're also my competition in a sense, so… you have fun with that.
Yup. It's certainly an art project or something. It's like setting a bunch of Markov Chaneys loose on each other to see how insane they go.
…kind of IS setting a bunch of Markov Chaneys loose on each other, and that's pretty much it. We've just never had Chaneys this complicated before. People are watching the sparks, eating popcorn, rooting for MechaHitler.