To be fair, ants have not created humanity. I don't think it's inconceivable for a friendly AI to exist that "enjoys" protecting us in the way a friendly god might. And given that we have AI (well, language models...) that can explain jokes before we have AI that can drive cars, AI might be better at understanding our motives than the stereotypical paperclip maximizer.
However, all of this is moot if the team developing the AI does not even try to align it.
Yeah, I'm not arguing alignment is not possible - but that we don't know how to do it and it's really important that we figure it out before we figure out AGI (which seems unlikely).
The ant example is just to try to illustrate the spectrum of intelligence in a way more people may understand (rather than thinking of a smart person and a dumb person as the entirety of the spectrum). In the case of a true self-improving AGI, the delta is probably much larger than that between an ant and a human, but the example still gets the point across (that was my goal, anyway).
The other common mistake is people think intelligence implies human-like thinking or goals, but this is just false. A lot of bad arguments from laypeople tend to be related to this because they just haven't read a lot about the problem.
One avenue of hope for successful AI alignment that I've read somewhere is that we don't need most laypeople to understand the risks of it going wrong, because for once the most powerful people on this planet have incentives that are aligned with ours. (Not like global warming, where you can buy your way out of the mess.)
I really hope someone with very deep pockets will find a way to steer the ship more towards AI safety. It's frustrating to see someone like Elon Musk, who was publicly worried about this very specific issue a few years ago, waste his time and money on buying Twitter.
Edit: I'm aware that there are funds available for AI alignment research, and I'm seriously thinking of switching into this field, mental health be damned. But it would help a lot more if someone could change Eric Schmidt's mind, for example.
> I really hope someone with very deep pockets will find a way to steer the ship more towards AI safety. It's frustrating to see someone like Elon Musk, who was publicly worried about this very specific issue a few years ago, waste his time and money on buying Twitter.
It has occurred to me that social networks are a vulnerable channel which we've already seen APTs exploit to shift human behavior. It's possible that Musk is motivated to close this backdoor into human society. That would also be consistent with statements he's made about "authenticating all real humans."
For one thing, we could try to come up with safety measures that prevent the most basic paperclip maximizer disaster from happening.
At this point I almost wish it was still the military that makes these advances in AI, not private companies. Anyone working on a military project has to have some sense that they're working on something dangerous.
Not you specifically, but I honestly don't understand how positive many in this community (or really anyone at all) can be about this news. Tim Urban's article explicitly touches on the risk of human extinction, not to mention all the smaller-scale risks from weaponized AI. Have we made any progress on preventing this? Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?
Even the best-case scenario that some are describing, of uploading ourselves into some kind of post-singularity supercomputer in the hopes of being conscious there, doesn't seem very far from plain extinction.
I think the best-case scenario is that 'we' become something different than we are right now. The natural tendency of life (on the local scale) is toward greater information density. Chemical reactions beget self-replicating molecules beget simple organisms beget complex organisms beget social groups beget tribes beget city states beget nations beget world communities. Each one of these transitions looks like the death of the previous thing, but in actuality the previous thing is still there, just as part of a new whole. I suspect we will start with natural people and transition to some combination of people whose consciousness exists, at least partially, outside of the boundaries of their skulls, people who are mostly information on computing substrate outside of a human body, and 'people' who no longer have much connection with the original term.
And that's OK. We are one step toward the universe understanding itself, but we certainly aren't the final step.
Growing tomatoes is less efficient than buying them, regardless of your metric. If you just want really cleanly grown tomatoes, you can buy those. If you want cheap tomatoes, you can buy those. If you want big tomatoes, you can buy those.
And yet individual people still grow tomatoes. Zillions of them. Why? Because we are inherently over-evolved apes who like sweet juicy fruits. The key to being a successful human in the post-scarcity AI overlord age is to embrace your inner ape and just do what makes you happy, no matter how simple it is.
The real insight out of all this is that the above advice is also valid even if there are no AI overlords.
Humans are great at making up purpose where there is absolutely none, and indeed this is a helpful mechanism for dealing with post-scarcity.
The philosophical problem that I see with the "AI overlord age" (although not directly related to AI) is that we'll then have the technology to change the inherent human desires you speak of, and at that point growing tomatoes just seems like a very inefficient way of satisfying a reward function that we can change to something simpler.
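To make that "change the reward function" point concrete, here's a toy sketch (all names here are made up for illustration, not any real system): once an agent can edit its own reward, the trivial optimum is to replace the reward with a constant maximum rather than do the hard task.

```python
# Toy sketch of "wireheading": an agent that can rewrite its own
# reward function no longer needs to do anything hard to score well.

class Agent:
    def __init__(self, reward_fn):
        self.reward_fn = reward_fn

    def act(self, action):
        # Reward is whatever the current reward function says.
        return self.reward_fn(action)

    def wirehead(self):
        # "Changing the reward to something simpler": every action
        # now scores the maximum possible value.
        self.reward_fn = lambda action: float("inf")

# Hypothetical task: effortful actions earn different rewards.
grow_tomatoes = lambda action: {"water": 1, "weed": 2, "harvest": 5}[action]

a = Agent(grow_tomatoes)
print(a.act("harvest"))   # 5: reward earned the hard way
a.wirehead()
print(a.act("water"))     # inf: same "satisfaction", no tomatoes
```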
Maybe we wouldn't do it precisely because it'd dissolve the very notion of purpose? But it does feel to me like destroying (beating?) the game we're playing when there is no other game out there.
(Anyway, this is obviously a much better problem to face than weaponized use of a superintelligence!)
Any game you play has cheat codes. Do you use them? If not, why not?
In a post-scarcity world we get access to all the cheat codes. I suspect there will be many people who use them and as a result run into the inevitable ennui that comes with basing your sense of purpose on competing for finite resources in a world where those resources are basically free.
There will also be many people who choose to set their own constraints to provide some 'impedance' in their personal circuit. I suspect there will also be many people who will simply be happy trying to earn the only resource that cannot ever be infinite: social capital. We'll see a world where influencers are god-kings and your social credit score is basically the only thing that matters, because everything else is freely available.
I feel exactly the opposite. AI has not yet posed any significant threats to humanity other than issues with the way people choose to use it (tracking citizens, violating privacy, etc.).
So far, we have task-driven AI/ML. It solves a problem you tell it to solve. Then you, as the engineer, need to make sure it solves the problem correctly enough for you. So it really still seems like it would be a human failing if something went wrong.
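A toy illustration of that point (the names and numbers are made up for the example): a task-driven optimizer maximizes exactly the objective it is handed, so when the result is wrong, the failure traces back to the human-written specification, not to the optimizer.

```python
# A task-driven system does exactly what its objective says, nothing more.

def hill_climb(objective, x0, step=0.1, iters=1000):
    """Greedy 1-D hill climber: blindly follows the objective it is given."""
    x = x0
    for _ in range(iters):
        if objective(x + step) > objective(x):
            x += step
        elif objective(x - step) > objective(x):
            x -= step
    return x

# The engineer *meant* "get close to 3" but flipped the sign.
intended  = lambda x: -(x - 3) ** 2      # maximized at x = 3
misstated = lambda x:  (x - 3) ** 2      # grows without bound

print(round(hill_climb(intended, 0.0), 1))          # 3.0: spec was right
x_bad = hill_climb(misstated, 0.0)
print(misstated(x_bad) > 1000)                      # True: objective maximized,
                                                    # intent ignored, as specified
```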
So I'm wondering why there is so much concern that AI is going to destroy humanity. Is the theoretical AI that's going to do this even going to have the actuators to do so?
Philosophically, I don't have an issue with the debate, but the "AI will destroy the world" side doesn't seem to have any tangible evidence. People take it as a given that AI could eliminate all of humanity, and they don't support that argument in the least. From my perspective, it appears to be fearmongering from people who watched and believed Terminator. It appears uniquely out-of-touch.
Agreed. People think of the best-case scenario without seriously considering everything that can go wrong. If we stay on this path the most likely outcome is human extinction. Full stop.
Mechanized factories failed to kill humanity two hundred years ago, and the Luddite movement against them seems comical today. What makes you think extinction is most likely?
This path will indeed lead to human extinction, but the path is climate change. AI is one of the biggest last hopes for reversing it. From my perspective, if it does kill us all, well, it's most likely still a less painful death.
> Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?
If we manage to make a 'better' replacement for ourselves, is it actually a bad thing? Our cousins on the hominid family tree are all extinct, yet we don't consider that a mistake. AI made by us could well make us extinct. Is that a bad thing?
Your comment summarizes what I worry might be a more widespread opinion than I expected. If you think that human extinction is a fair price to pay for creating a supercomputer, then our value systems are so incompatible that I really don't know what to say.
I guess I wouldn't have been so angry about any of this before I had children, but now I'm very much in favor of prolonged human existence.
I suppose the same axioms as every ape that's ever existed (and really the only axioms that exist): my personal survival, my comfort, my safety, accumulation of resources to survive the lean times (even if there are no lean times), stimulation of my personal interests, and the same for my immediate 'tribe'. Since I have a slightly more developed cerebral cortex I can abstract that 'tribe' to include more than 10 or 12 people, which judging by your post you can too. And fortunately for us, because that little abstraction let us get past smashing each other with rocks, mostly.
I think the only difference between our outlooks is I don't think there's any reason that my 'tribe' shouldn't include non-biological intelligence. Why not shift your priorities to the expansion of general intelligence?
We have Neanderthal, Denisovan DNA (and two more besides). Our cousins are not exactly extinct - we are a blend of them. Sure no pure strains exist, but we are not a pure strain either!
> If we manage to make a 'better' replacement for ourselves, is it actually a bad thing?
It's bad for all the humans alive at the time. Do you want to be replaced and have your life cut short? For that matter, why should something better replace us instead of coexist? We don't think killing off all other animals would be a good thing.
> Our cousins on the hominid family tree are all extinct, yet we don't consider that a mistake.
It's just how evolution played out. But if there were another hominid still alive alongside us, advocating for its extinction because we're a bit smarter would be considered genocidal and deeply wrong.
I prefer this less sterile framing of it. It was the most fun that I ever had with a puzzle, so to anyone scrolling around on this page, I would recommend not jumping straight to the solution :)
Self-correcting because frustrated users will simply start their own Google? Even if that happens, their second generation of employees will start a revolt if their company doesn't follow the latest DIE best practices.
I honestly think that only the Russian/Chinese model of a nationalized IT ecosystem has a chance to resist these trends.
Self-correcting because Google already has competition in the cloud editor space. Office suites are valuable software with an obvious business model for monetization.
> I honestly think that only the Russian/Chinese model of a nationalized IT ecosystem
That wouldn't solve the underlying problem if the nation decides some words are inappropriate (in fact, if we're thinking anti-censorship you may have chosen two particularly bad examples ;) ). It's power structures all the way down.
According to this Guardian article, the "Center for Countering Digital Hate" (also mentioned in your article) would be happy if Substack hadn't given it the platform:
The idea that Yet Another Generic NGO needs to "counter hate" because someone on the internet thinks the vaccine is not going to stop the pandemic is completely bonkers.
What I want to say is, a lot of the controversy around Substack seems to be that it is not aligned with the Right Side in the Culture War. I think they have some great writers, but they're not the next WikiLeaks or anything.
I disagree. The appeal to solidarity w.r.t. masks would never have worked because a tiny minority of scalpers can ruin it all. But the white lie didn't work either, so it seems like the worse option regardless.
Appeals to solidarity w.r.t. staying at home, wearing masks, getting vaccinated to flatten the curve all worked quite well in my bubble. Projects like Zero Covid would have required insanely high levels of compliance. If you say "anything involving solidarity with others does not work anymore", how high is your bar? What used to work but now doesn't?
(I'd guess that solidarity is lower than it used to be, but if anything I am surprised how much goodwill still exists considering growing inequality, atomization, erosion of public trust and all that.)
Agree with everything you said, but want to add one more point:
I have made my peace with all of these bureaucratic failures. Maybe a certain amount of contradiction and chaos is part of living in a democracy.
The scary part is that at the same time there has been an authoritarian push to crack down on "disinformation", you know, like the lab leak theory, or the conspiracy theory that vaccination will become mandatory. That has sent my trust from low into negative territory.
Yup. It is kind of a massive engineering failure. Everybody was doing exactly what they were incentivized to do. Everybody thought they were doing the right thing. But combined they drove the train right off the track. There was nothing in place to throw the brakes on and tell society to chill the fuck out. No amount of data or leadership could fix it.
The worst part is anybody who dared point this out was met with fierce vitriol, shaming and general hatred. We were all just expected to fall in line and those who asked questions got shunned by society.
In the first month or so it would have been possible for a good leader to chill things out. But eventually the thing took on a life of its own and nobody could put an end to it. Society would just have to wear itself out and one by one come to its senses. Two years later we are just now truly coming to our senses.
Unfortunately it will take quite some time for this event to be remembered for what it truly was: the first true mass hysteria of the internet age.
I agree about the engineering failure, but I'm not sure if mass hysteria is the right term. It was messier than that, we also saw a few leaders chilling out too much too early. My favorite: https://twitter.com/billdeblasio/status/1234648718714036229
And if anything, governments have been too relaxed about the origin of the pandemic (which has now officially killed 6 million!), while panicking about other details.
Notably, the Mac Mini does not have a stupid external power adapter, but just uses a standard cable that costs like $1. I did carry mine around for a while, and it was glorious! I'm not sure why Apple's competitors insist on including a brick that's almost as large as the PC itself.
It would have been really cool if the portable profile concept had caught on with iPods. Then we could just have a room full of Mac Minis and plug in our iPhones for (literally!) roaming profiles. But then the company would have to provide me with an iPhone/iPad. Sadly we got iCloud instead, which is radioactive from a corporate infosec perspective.
I went from a 16" MacBook Pro with an i9 and 32 GB of RAM to the MacBook Air with 16 GB of RAM, and my build times at work were almost exactly the same. But the single-core speed makes everything _feel_ so much faster than the stupid slow i9 cores. (At less than half the price, and without a fan!)
In my opinion, the Air is the perfect addition to a heavy Linux/Windows workstation. You get the full spectrum from iOS development to gaming, albeit at the cost of having to babysit two or three operating systems. You can use the Air for meetings or conferences, because everything that's not a game has a macOS version (think Microsoft Teams etc.)