From the video, it sounds like he engineered his own cells. Using a virus that is known for transferring genetic material into other organisms, he added a gene for producing lactase, and then ate it. I suppose that would affect both his gut bacteria and his own cells. But it lasted for ~1.5 years, which probably indicates that it truly was his cells. Also he seems to know what he's talking about, and he claims it was his own cells.
The text is crossed out using Unicode combining strikethrough characters. This allows it to display without any specific formatting support, but it does require that the font support those characters. The font you're using doesn't support them, so it displays boxes instead.
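For the curious, here's a minimal sketch of how that kind of strikethrough is typically produced (I'm assuming U+0336, COMBINING LONG STROKE OVERLAY; the original text may have used a different combining character):

```python
# Minimal sketch: "cross out" text by appending U+0336 (COMBINING LONG STROKE OVERLAY)
# after each character. Whether it renders as a strikethrough or as boxes depends
# entirely on the font.
def strike(text: str) -> str:
    return "".join(ch + "\u0336" for ch in text)

print(strike("crossed out"))
```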
Can you name some of the contradictory possibilities you have in mind?
Also, do you actually think the core idea is wrong, or is this more of a complaint about how it was presented? Say we do an experiment where we train an alpha-zero-style RL agent in an environment where it can take actions that replace it with an agent that pursues a different goal. Do you actually expect to find that the original agent won't learn not to let this happen, and even pay some costs to prevent it?
A contradictory possibility is that agents which have different ultimate objectives can have different and disjoint sets of goals which are instrumental towards their objectives.
I do think the core idea of instrumental convergence is wrong. In the hypothetical scenario you describe, the behavior of the agent, whether it learns to replace itself or not, will depend on its goal, its knowledge of and ability to reason about the problem, and the learning algorithm it employs. These are just some of the variables you'd need to fill in to get the answer to your question. Instrumental convergence theoreticians suggest one can just gloss over these details and assume any hypothetical AI will behave in certain ways in various narratively described situations, but we can't. The behavior of an AI will be contingent on multiple details of the situation, and those details can mean that no goals instrumental to one agent are instrumental to another.
Why is that? My guess would be that you could always adjoin an i to the field with p^n elements and get the field with p^(2n) elements, as long as you had p = 4k + 3. But that's admittedly based on approximately zero thinking.
EDIT: Looking things up indicates that if n is even, there's already a square root of -1 in the field, so we can't add another. So now I believe the 1/4 of the time thing you mentioned, and can't see how that's wrong.
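For what it's worth, a quick sanity check of that (a Python sketch of my own, relying on the standard fact that for odd q, -1 is a square in the field with q elements iff q ≡ 1 (mod 4)):

```python
# For odd prime powers q, the multiplicative group of the field with q elements is
# cyclic of order q - 1, so -1 is a square exactly when q ≡ 1 (mod 4).
# Brute-force that criterion for small prime fields, then note that p^n ≡ 1 (mod 4)
# whenever n is even, i.e. the field already contains a square root of -1.

def minus_one_is_square_mod_p(p: int) -> bool:
    return any((x * x) % p == p - 1 for x in range(1, p))

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]:
    assert minus_one_is_square_mod_p(p) == (p % 4 == 1)

for p in [3, 7, 11, 19, 23]:      # primes of the form 4k + 3
    for n in [2, 4, 6]:           # even exponents
        assert (p ** n) % 4 == 1  # so no new i can be adjoined
print("checks out for these small cases")
```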
Spitballing here, but I suspect it's a density thing. If you are considering all prime powers up to some bound N, then the density of proper prime powers (edit: those of the form p^n with n > 1) among them approaches 0 as N tends to infinity. So rather than things being 1/4 like our intuition says, it should unintuitively be 1/2. I haven't given this much thought, but I suspect this based on checking some examples in Sage.
Oh, so just a probability density thing where we sample q and check if it's p^n (retrying if not) rather than sampling p and n separately and computing q = p^n? I guess that's probably what they were going for, yeah.
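To make the density point concrete, here's a rough sketch (mine, not the Sage session mentioned above) that counts, among all prime powers q ≤ N, how many are proper powers and how many satisfy q ≡ 3 (mod 4), i.e. lack a square root of -1:

```python
# Among prime powers q <= N, proper powers (exponent > 1) are vanishingly rare,
# so q = 3 (mod 4) -- no square root of -1 -- happens about half the time,
# not 1/4 of the time as the "pick p and n separately" intuition suggests.
N = 10**6

sieve = [True] * (N + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
primes = {i for i, is_prime in enumerate(sieve) if is_prime}

prime_powers = set(primes)
for p in primes:
    q = p * p
    while q <= N:
        prime_powers.add(q)
        q *= p

proper = sum(1 for q in prime_powers if q not in primes)
no_sqrt_minus_one = sum(1 for q in prime_powers if q % 4 == 3)

print(f"prime powers <= {N}: {len(prime_powers)}")
print(f"fraction with exponent > 1:  {proper / len(prime_powers):.4f}")
print(f"fraction with q = 3 (mod 4): {no_sqrt_minus_one / len(prime_powers):.4f}")
```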
You should basically assume they are pulled from thin air. (Or more precisely, from the brain and world model of the people making the prediction.)
The point of giving such estimates is mostly an exercise in getting better at understanding the world, and a way to keep yourself honest by making predictions in advance. If someone else consistently gives higher probabilities to events that ended up happening than you did, then that's an indication that there's space for you to improve your prediction ability. (The quantitative way to compare these things is to see who has lower log loss [1].)
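As a concrete (made-up) example of the log loss comparison:

```python
import math

# Two forecasters' probabilities for the same events; 1 = the event happened.
# Lower average log loss means the forecaster consistently put more probability
# on what actually occurred.
outcomes = [1, 0, 1, 1, 0]
alice    = [0.8, 0.3, 0.9, 0.6, 0.1]
bob      = [0.6, 0.5, 0.7, 0.5, 0.4]

def avg_log_loss(probs, outcomes):
    return -sum(math.log(p if y else 1 - p) for p, y in zip(probs, outcomes)) / len(outcomes)

print("Alice:", round(avg_log_loss(alice, outcomes), 3))
print("Bob:  ", round(avg_log_loss(bob, outcomes), 3))
```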
> If someone else consistently gives higher probabilities to events that ended up happening than you did, then that's an indication that there's space for you to improve your prediction ability.
Your inference seems ripe for scams.
For example-- if I find out that a critical mass of participants aren't measuring how many participants are expected to outrank them by random chance, I can organize a simplistic service to charge losers for access to the ostensible "mentors."
I think this happened with the stock market-- you can predict how many mutual fund managers would beat the market by random chance over a given period. Then you find roughly that same (small) number of mutual fund managers who did beat the market and switched to a more lucrative career of giving speeches about how to beat the market. :)
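The arithmetic behind the mutual-fund version, as a toy sketch (numbers invented): with enough coin-flipping managers, a few will beat the market many years in a row by luck alone.

```python
import random

# Toy model: each "manager" beats the market each year with probability 0.5,
# independent of any skill. With 1024 of them, expect ~1 to go ten-for-ten by chance.
random.seed(0)
managers, years = 1024, 10
lucky = sum(
    all(random.random() < 0.5 for _ in range(years))
    for _ in range(managers)
)
print(f"{lucky} of {managers} managers beat the market all {years} years "
      f"(expected ~{managers / 2 ** years:.0f} by pure chance)")
```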
Is there some database where you can see predictions of different people and the results? Or are we supposed to rely on them keeping track and keeping themselves honest? Because that is not something humans do generally, and I have no reason to trust any of these 'rationalists'.
This sounds like a circular argument. You started explaining why them giving percentage predictions should make them more trustworthy, but when looking into the details, I seem to come back to 'just trust them'.
People's bets are publicly viewable. The website is very popular with these "rationality-ists" you refer to.
I wasn't in fact arguing that giving a prediction should make people more trustworthy; can you explain how you got that from my comment? I said that the main benefit to making such predictions is as practice for the predictor themselves. If there's a benefit for readers, it is just that they could come along and say "eh, I think the chance is higher than that". Then they also get practice and can compare how they did when the outcome is known.
Would you also get triggered if you saw people make a bet at, say, $24 : $87 odds? Would you shout: "No! That's too precise, you should bet $20 : $90!"? For that matter, should all prices in the stock market be multiples of $1 (since, after all, fluctuations of greater than $1 are very common)?
If the variance (uncertainty) in a number is large, the correct thing to do is to just also report the variance, not to round the mean to a whole number.
Also, in log odds, the difference between 5% and 10% is about the same as the difference between 40% and 60%. So using an intermediate value like 8% is less crazy than you'd think.
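Checking that claim directly (quick sketch):

```python
import math

# On the log-odds (logit) scale, the 5% -> 10% gap and the 40% -> 60% gap are
# roughly the same size, even though they look very different as percentages.
logit = lambda p: math.log(p / (1 - p))
print(round(logit(0.10) - logit(0.05), 2))  # ~0.75
print(round(logit(0.60) - logit(0.40), 2))  # ~0.81
```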
People writing comments in their own little forum where they happen not to use sig-figs to communicate uncertainty is probably not a sinister attempt to convince "everyone" that their predictions are somehow scientific. For one thing, I doubt most people are dumb enough to be convinced by that, even if it were the goal. For another, the expected audience for these comments was not "everyone", it was specifically people who are likely to interpret those probabilities in a Bayesian way (i.e. as subjective probabilities).
> Would you also get triggered if you saw people make a bet at, say, $24 : $87 odds? Would you shout: "No! That's too precise, you should bet $20 : $90!"? For that matter, should all prices in the stock market be multiples of $1 (since, after all, fluctuations of greater than $1 are very common)?
> the correct thing to do is to just also report the variance
And do we also pull this one out of thin air?
Using precise numbers to convey extremely imprecise and ungrounded opinions is imho wrong and, to me, unsettling. I'm pulling this purely out of my ass, and maybe I am making too much out of it, but I feel this is in part what is causing the many cases of very weird, and borderline antisocial/dangerous, behaviours of some people associated with the rationalist movement. When you try to precisely quantify what cannot be, and start trusting those numbers too much, you can easily be led to trust your conclusions way too much. I am 56% confident this is a real effect.
I mean, sure, people can use this to fool themselves. I think usually the cause of someone fooling themselves is "the will to be fooled", and not so much the fact that they used precise numbers in their internal monologue as opposed to verbal buckets like "pretty likely", "very unlikely". But if you estimate a 56% chance that it sometimes actually makes a difference, then who am I to argue? Sounds super accurate to me. :)
In all seriousness, I do agree it's a bit harmful for people to use this kind of reasoning but only ever practice it on things like AGI that will not be resolved for years and years (and maybe we'll all be dead when it does get resolved). Like ideally you'd be doing hand-wavy reasoning with precise probabilities about whether you should bring an umbrella on a trip, or whether to apply for that job, etc. Then you get to practice with actual feedback and learn how not to make dumb mistakes while reasoning in that style.
> And do we also pull this one out of thin air?
That's what we do when training ML models sometimes. We'll have the model make a Gaussian distribution by supplying both a mean and a variance. (Pulled out of thin air, so to speak.) It has to give its best guess of the mean, and if the variance it reports is too small, it gets penalized accordingly. Having the model somehow supply an entire probability distribution is even more flexible (and even less communicable by mere rounding). Of course, as mentioned by commenter danlitt, this isn't relevant to binary outcomes anyways, since the whole distribution is described by a single number.
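For anyone curious what that penalty looks like, a minimal sketch in plain Python/NumPy (not any particular framework's loss function):

```python
import numpy as np

# Gaussian negative log-likelihood: the model reports a mean and a variance and is
# penalized both for missing the target and for claiming too little uncertainty.
def gaussian_nll(mean, var, target):
    return 0.5 * (np.log(2 * np.pi * var) + (target - mean) ** 2 / var)

target = 3.0
print(gaussian_nll(mean=2.5, var=1.0,  target=target))  # honest uncertainty: small loss
print(gaussian_nll(mean=2.5, var=0.01, target=target))  # overconfident: large loss
```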
> and not so much the fact that they used precise numbers in their internal monologue as opposed to verbal buckets like "pretty likely", "very unlikely"
I am obviously only talking from my personal anecdotal experience, but I've been on a bunch of coffee chats in the last few months with people in the AI safety field in SF, many of them LessWrongers. In a lot of those discussions, random percentages were thrown out in succession to estimate the final probability of some event, and even though I have worked in ML for 10+ years (so I would guess I'm more aware than the average person of what a Bayesian probability is), I often find myself swayed by whatever number comes out at the end and have to consciously take a step back and stop myself from instinctively trusting that random number more than I should. I don't think I would need to pull myself back if we were using words instead of precise numbers.
It could just be a personal mental weakness with numbers on my part that is not general, but looking at my interlocutors' emotional reactions to their own numerical predictions, I do feel quite strongly that this is a general human trait.
> It could just be a personal mental weakness with numbers on my part that is not general, but looking at my interlocutors' emotional reactions to their own numerical predictions, I do feel quite strongly that this is a general human trait.
Your feeling is correct; anchoring is a thing, and good LessWrongers (I hope to be in that category) know this and keep track of where their prior and not just posterior probabilities come from: https://en.wikipedia.org/wiki/Anchoring_effect
They probably don't in practice, but they should. That "should" is what puts the "less" into "less wrong".
> If the variance (uncertainty) in a number is large, the correct thing to do is to just also report the variance
I really wonder what you mean by this. If I put my finger in the air and estimate the emergence of AGI as 13%, how do I get at the variance of that estimate? At face value, it is a number, not a random variable, and does not have a variance. If you instead view it as a "random sample" from the population of possible estimates I might have made, it does not seem well defined at all.
I meant in a general sense that it's better when reporting measurements/estimates of real numbers to report the uncertainty of the estimate alongside the estimate, instead of using some kind of janky rounding procedure to try and communicate that information.
You're absolutely right that if you have a binary random variable like "IMO gold by 2026", then the only thing you can report about its distribution is the probability of each outcome. This only makes it even more unreasonable to try and communicate some kind of "uncertainty" with sig-figs, as the person I was replying to suggested doing!
(To be fair, in many cases you could introduce a latent variable that takes on continuous values and is closely linked to the outcome of the binary variable. Eg: "Chance of solving a random IMO problem for the very best model in 2025". Then that distribution would have both a mean and a variance (and skew, etc), and it could map to a "distribution over probabilities".)
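A sketch of what I mean, with invented numbers: put a distribution on the latent per-problem solve chance and push it through to the binary outcome.

```python
import random

# Invented example: uncertainty about the latent "chance the best 2025 model solves
# a random IMO problem" (modelled here as Beta(2, 3)) gives a whole distribution over
# probabilities, but the binary outcome "solves at least 5 of 6 problems" still
# collapses to a single number.
random.seed(0)
samples = 100_000
gold = 0
for _ in range(samples):
    p = random.betavariate(2, 3)                        # latent solve probability
    solved = sum(random.random() < p for _ in range(6)) # six problems
    gold += solved >= 5
print(f"implied P(solve at least 5 of 6) = {gold / samples:.3f}")
```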
Yeah, it's kinda contradictory. It's part critique and part self-reflection on an ideology that I have mixed feelings towards. In the same vein as the Programming Language Checklist for language designers.
"Below" in this context means "directly below and connected by a line" (the lines are the edges of the cube). So you can have a blue vertex that is vertically below a white vertex, so long as they are not connected by an edge. The first time this can happen is for a 3 dimensional cube. You can have blue at the top, then 2 blue and 1 white below that, and then 1 blue (under and between the 2 blues in the layer above) and 2 white in the layer below that, and then white for the bottom vertex. This configuration can be rotated 3 ways and this takes us from 17 to 20.
The post assumes a 2% annual rate of growth in energy consumption. So, due to the nature of exponential functions, most of the energy use would be concentrated towards the end of the 1000 years, as the energy consumption approaches 400 million times present day energy usage. The first two centuries of use would not have a noticeable impact.
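The arithmetic, as a quick sketch:

```python
# 2% annual growth for 1000 years: consumption ends up ~400 million times today's,
# and the cumulative total is utterly dominated by the final decades.
r, years = 1.02, 1000
print(r ** years)                        # ~4.0e8, i.e. ~400 million x present usage

total = sum(r ** t for t in range(years))
print(sum(r ** t for t in range(200)) / total)        # first 200 years: ~1e-7 of the total
print(sum(r ** t for t in range(900, 1000)) / total)  # last 100 years: ~86% of the total
```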
So I'm not a moral relativist, like, at all. But in this case, it seems like we westerners have constructed one particular set of norms for encouraging innovation, where we decide that it's possible for ideas to be owned. It's not like there's anything intrinsically wrong with copy-pasting code, it's just that we have a legal framework where we've traded away the right to freely copy-paste code so that we can grant a temporary monopoly to its author. We do this in the hope that more useful code will be written than otherwise. But if the people of China decide that that's not a trade-off they want to make, then I don't think we westerners get to say that they've committed a moral wrong in making that decision. It's just that they have a different way of doing things.
Like I said, I'm not a moral relativist at all. Murder is still wrong in China, imprisoning people not convicted of any crime is still wrong in China, lying is still wrong in China. But I just don't see how copyright infringement is universally an immoral act.
OK, no, it's not intrinsically or uniquely western. Any person who has an idea has no obligation to share it; that's universal. This revolves around the expectation set at the point of sharing the idea, under which they say, effectively, "Here's something I came up with. I'm sharing it with you in exchange for your agreement that you will attribute it to me and not take the idea and publish it as your own." The ownership and control under closed source could (somehow) be argued in the way you're suggesting, but for open source it's a matter of blatant disrespect for, and refusal to adhere to, the requirements set by the initial sharers of their ideas.
Naw... it's just the hypocrisy. If any other company/country were doing it, they should be called out too.
If you actually stop doing the wrong thing and decades later complain about others still doing it (the Western world can complain about slavery), that's moving up and on. But complaining about TikTok maybe being banned, when FB/Google/Twitter and TikTok itself are banned in your own country?
Same with copyright... it's not like China's government doesn't issue and enforce copyrights; it's just that there's not really rule of law, since enforcement is so haphazard. There's little plan beyond individuals asking "what can I get right now".
Getting downvoted by the group-think majority. Just know I tend to agree with you. Our country does not get to dictate how the world operates no matter what our beliefs are. And the idea of intellectual property is just a belief, and a bad one at that.
Operating under a different set of rules doesn't change the fact that they violate ours. They can be simultaneously right under their own standards and wrong under ours. We can and should judge them under our moral standards.
EDIT: Above is false. Went back and checked and I had mis-remembered the video.