I think they're misusing "forward propagation" and "backward propagation" to mean, basically, "post-training inference" and "training".
They seem to be assuming n iterations of the backward pass, which is why it's larger...
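To make that concrete, here's a rough back-of-the-envelope sketch of why training dwarfs a single forward pass. It assumes the common rule of thumb of ~2N FLOPs per token for a forward pass and ~6N per token for forward + backward; the parameter and token counts are made up for illustration, not taken from the paper being discussed:

    # Rough compute comparison: one forward pass (inference) vs. a full training run.
    # Assumes ~2N FLOPs/token forward and ~6N FLOPs/token forward+backward;
    # the numbers below are illustrative only.

    n_params = 70e9        # hypothetical model size (parameters)
    train_tokens = 1e12    # hypothetical number of training tokens

    flops_per_token_forward = 2 * n_params   # inference cost per token
    flops_per_token_train   = 6 * n_params   # forward + backward per token

    inference_flops = flops_per_token_forward            # generating ONE token
    training_flops  = flops_per_token_train * train_tokens

    print(f"one forward pass:  ~{inference_flops:.2e} FLOPs")
    print(f"full training run: ~{training_flops:.2e} FLOPs "
          f"(~{training_flops / inference_flops:.0e}x larger)")

The per-token gap is only ~3x, but multiplying by the number of training tokens (iterations over the data) is what makes the training figure so much bigger.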
Which material ongoing issues are we ignoring? The paper is mainly talking about how the mundane problems we're already starting to have could lead to an irrecoverable catastrophe, even without any sudden betrayal or omnipotent AGI.
So I think we might be on the same side on this one.
One of the authors here. I don't think we anthropomorphize AI as some sort of God.
Here's a more prosaic analogy that might be helpful. Imagine tomorrow there's a new country full of billions of extremely conscientious, skilled workers. They're willing to work for extremely low wages, to immigrate to any country, and they don't even demand political representation.
Various countries start allowing them to immigrate because they are great for the economy. In fact, they're so great for economies and militaries that countries compete to integrate them as quickly and widely as possible.
At first this is great for most of the natives, especially business owners. But the floor for being employable is suddenly really high, and most people end up in a sort of soft retirement. The government, still favoring natives, introduces various make-work and affirmative action programs. But for anything important, it's clear that having a human in the loop is a net drag, and humans tend to cause problems.
The immigrant population grows endlessly, and while GDP is going through the roof and services are all cheaper than ever, people's savings eventually dwindle as the cost of basic resources like land gets bid up. There are always more lucrative uses for capital among the immigrants and capital owners than among the natives. Educating new native humans in important new skills gets harder and harder as the economy becomes more sophisticated.
I don't have strong opinions about what happens from here, but the point is that this is a much worse position for the native population to be in than currently.
Does that make sense? Even if this scenario doesn't seem plausible, do you agree that I'm not talking about anything omnipotent, just more competitive?
Thanks for co-writing an insightful paper! Something I put together around 2010 on possibilities for what happens from here: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."
This is a fantastic analogy -- thanks for sharing.
Another way to understand AI, in my view, is to look at (often smaller) resource-rich countries around the world (oil, minerals, etc.). Often the government is more worried about the resource than about the people that live in the country. The government often does not bother to educate them, take good care of them, give them a voice in the future of the country, etc., because those citizens are not the ones that pay the bills, or the main source of GDP output, or the source of political power.
Similarly, in an AI-heavy economy, unless systems are designed right, governments might start ignoring their citizens. If democracy is not robust, or money has a big role in elections, the majority voice of humans is likely to matter less and less going forward.
Norway is a good example of a resource-rich country that still looks out for its citizens. So it should be possible to be resource-rich/AI-rich and have a happy citizenry. I suppose balancing all the moving parts would be difficult.
The way to deal with the risks of AI is to make AI available to all -- this is my strong belief. There is more risk in AI being walled off to select nations / classes of citizens on grounds of various real / imagined risks. This could create a very privileged class of countries and people that have AI while other citizens don't / can't. With such huge advantages, AI would wreak greater havoc on the "have-nots". (Over)regulation of AI can have worse consequences under some conditions.
> as we notice these things, we pass laws against it
Well, the claim is that that's the sort of thing that will get harder once humans aren't involved in most important decisions.
> which is we created all this stuff, and we can (and have) modified the system to suit human flourishing
Why did we create North Korea? Why did we create WWI? We create horrible traps for ourselves all the time by accident. But so far there has been a floor on how bad things can get (sort of, most of the time), because eventually you run out of people to maintain the horrible system. But soon horrible systems will be more self-sustaining.
> we have categorized the ways it goes rogue (monopoly, extortion, etc) and responded adequately.
This objection is a reasonable one. But the point of the paper is that a lot of the ways we have of addressing these systemic problems will probably not work once we're mostly unemployed (or doing make-work). E.g. going on strike, holding a coup, or even organizing politically will become less viable. And once we're a net drag on growth, the government will have incentives to route resources away from us. Right now they have to invest in their human capital to remain competitive.
Driven by the most direct, tangible cause: capitalists pursuing their own narrow interests. And AI fits in there as well ($500B in funding to AI, says Trump).
> What would it look like to face these issues but more directly?
Socialism. It doesn’t matter that jobs are automated away under socialism since there is no capital/worker social relation. If jobs are automated everyone just works less.
> Ending capitalism and competition?
You put those two together for some reason.
Capitalism and the state sector have led to amazing improvements in the productive capacity of society overall. A lack of productive capacity, at least from our First World perspective, doesn't seem to be the issue. Instead the problems are (1) unsustainable growth (climate change) and (2) directing the productive capacity towards pro-social goals. So yes, a change is long overdue.
Productive competition happens under capitalism. And sometimes it doesn’t. There’s plenty of accusations of Big Tech being anti-competitive on this site.
I appreciate your engagement, and I don't really have a plan myself to address these problems, but I don't really know what to do with "Socialism" as a recommendation. Care to elaborate what you think I, or anyone should do or advocate for more concretely?
The reason I mentioned competition is that, even under a complete command economy, there is still internal competition for control, which I think would still lead to human disempowerment eventually for similar reasons. Though probably on a longer time scale, which might still be a win. The only way to avoid being outcompeted is to have a total state ruled by something sufficiently agentic to resist all attempts to even adjust its policies. Which sounds terrifying and hard to get right on the first try.
> I appreciate your engagement, and I don't really have a plan myself to address these problems, but I don't really know what to do with "Socialism" as a recommendation. Care to elaborate what you think I, or anyone should do or advocate for more concretely?
A typical/classic way to build socialism is to organize the working class.
In many senses, yes. But the empowered ones still needed to keep most of the rest of the people happy and healthy enough to work, most of the time. That's what we're saying will change.
In fact, it'll be worse: Humans are currently a net source of growth, but they'll switch to being a net drag on growth. So the decision-makers will be forced to sideline humans in order to compete.
I agree that the loss of control we are threatened with is qualitatively different and far more absolute than the disenfranchisement we feel today, for sure.
Still, I wonder if any humans have been in control, really, since the agricultural revolution. Billionaires building bug-out shelters? They seem even more scared than the rest of us.
Surely if we were really in control we could have come up with a better system than this.
Yep, a major missing piece in this entire problem / discussion is how to characterize how much "power" "humans" have had. My best idea so far is to characterize the sorts of outcomes that can be feasibly steered towards under what conditions.
> In many senses, yes. But the empowered ones still needed to keep most of the rest of the people happy and healthy enough to work, most of the time. That's what we're saying will change.
One of Western society's glaring cognitive dissonances: the conviction that "keeping people happy and healthy enough to work" is empowering them. Even assuming that the word "empowering" makes sense; even assuming that we can make sense of the notion of an authority "empowering" someone (which I personally cannot).
Directionally agree with rhelz, but would push it further: any technique, even one that may have preceded agriculture, already does all the things you're claiming AI is going to do. Even a procedure entirely implemented by humans can keep its weighting of any unwanted form of "human input" beneath any epsilon.
------
> 2. There are effectively two ways these systems maintain their alignment: through explicit human actions (like voting and consumer choice), and implicitly through their reliance on human labor and cognition. The significance of the implicit alignment can be hard to recognize because we have never seen its absence.
> 3. If these systems become less reliant on human labor and cognition, that would also decrease the extent to which humans could explicitly or implicitly align them. As a result, these systems—and the outcomes they produce—might drift further from providing what humans want.
You talk about empowerment, but many of your arguments seem oriented toward alignment. Voting and consumer choice may indeed be techniques for aligning (and thereby binding and scaling) society (i.e., a given group of people), but they have very little ability to "empower" any given individual. The power of the individual voice literally decreases in proportion to the success of these techniques (i.e., in proportion to the growth of those groups of humans which compose them). In other words, your "explicit" techniques are alignment techniques and have little to do with empowerment.
Your "implicit" category (labor, cognition, etc.), on the other hand, does seem to me to be oriented toward something like individual power. Unlike voting and market-making, labor and cognition do seem to be (or can naively be viewed as being) oriented more toward something like our everyday notion of individual power than they are toward these notions of social "alignment" and top-down "techniques of empowerment." That is, without much mental gymnastics, we can imagine labor, cognition, etc., as coming from within the individual and radiating outward — which is probably as good a criterion of power (individuality, sentience, free will, subjectivity, ego, humanity, etc.) as we're ever going to get.
You seem to be claiming that AI is a relatively new threat to this category of "implicitly empowering forces." This is where you're going to lose the brighter minds in your audience. Because has there ever been a more dominant and monotonous trend in human society than the reduction of the dimensionality of human labor and cognition, the reduction of the degrees of freedom in which the human mind and body can play? Almost by definition, almost as the criterion for its existence, a society attempts to make itself less dependent on each of its individual components. So, in a society composed of humans, what would be a fairer mechanism for dissolving these snowflake dependencies than the invention or discovery of techniques by which to make the system as a whole less dependent on any possible human input?
These are all good points about our use of language, thanks for the feedback.
Maybe "disempowerment" is a bit of a red herring, or a misleading problem to focus on. The reason we didn't spend more time on clarifying that is that we're just using it to gesture towards a different set of mechanisms that lead to extinction-like outcomes than usual. So even if you think our definition of empowerment is poor, or that empowerment isn't a great goal - that's kind of OK.
The thing we want to emphasize is that right now there are some mechanisms that steer our civilization towards keeping us alive, and free in some senses, that might stop operating. Though I take your point in the last paragraph that this might not change things much specifically regarding the implicitly empowering forces. We'll think about this one some more!
> The thing we want to emphasize is that right now there are some mechanisms that steer our civilization towards keeping us alive, and free in some senses, that might stop operating.
I agree with this formulation. What I am emphasizing is that, insofar as mechanisms are steering, the system in which they are operating can be said to have largely decoupled from the human mind.
"Alive and free in some senses."
A society whose tagline is "alive and free in some senses" is already dystopian! Far scarier, to me, than its extinction or my own early death.
> A society whose tagline is "alive and free in some senses" is already dystopian!
Haha. Well we might agree about that - that description covers a wide range of possibilities. If you have ideas about what a plausible and good future looks like, please let me know. One of my next projects is trying to articulate "what is the best we can hope for?" and talk about which sets of goals are even possible to jointly fulfill. But certainly "everyone is free in all senses" is incoherent, or at the very least unstable.
Last author here. Good point, I agree that the move to an entirely self-sustaining machine economy would require extra time, and that would drag out the time to extinction by this mechanism, even in the worst-case scenario. And, if caught early enough, it's possible that a revolution could reverse the trend, at least temporarily and locally.
However, we tried to address the point about why a revolution would be difficult: We're assuming we're in a world where there are better machine alternatives for almost everything. So the police and military would already have been hollowed out. And the power of a general strike would be greatly diminished. It'd also be much less costly for the state to harshly punish early signs of dissent.
What incentives do any humans have to so totally delegate the functioning of the core levers of societal power that they're unable to prevent their own extinction?
"Better machine alternatives" implies that the police and military aren't first and foremost evaluated through their loyalty. A powerful army that doesn't listen to you is not a "better" one for your purposes. The same isn't true of the economy: one could argue that our current economic system is beyond any one person's ken, but even if I don't understand how my coffee came to me and no one person would be an expert on that entire pipeline it works.
The idea that AI could lead to power concentrating in the hands of a few oligarchs who use a robot army as a more effective version of the janissaries or praetorian guard of the past certainly seems broadly plausible, although I'm not sure that the effectiveness of the Stasi is the limiting factor on autocracy or oligarchy. I don't understand how that links to human extinction. For most of human history, most people have been unable to meaningfully impact the way their society operates. That is responsible for an incalculable amount of suffering, and it's not a threat to be taken lightly, but if anything one might argue it's likely to ensure some human survival for longer than a less stable, freer system.
> What incentives do any humans have to so totally delegate the functioning of the core levers of societal power that they're unable to prevent their own extinction?
Because it'll be more effective at every step than the alternative. Just like specialization is more effective, so anyone who wants to avoid poverty needs to outsource their food growing to giant mechanized farms.
> "Better machine alternatives" implies that the police and military aren't first and foremost evaluated through their loyalty. A powerful army that doesn't listen to you is not a "better" one for your purposes.
Ah, I think there's a confusion - I'm saying that the police and military will stay loyal to the state, or head of state. But even if there is a human nominally still in charge, that human's hands will be tied by competitive pressures to gradually marginalize their own human citizens in favor of more productive machines. Maybe a good analogy, though unpopular today, would be free trade deals or immigration policies enacted for economic reasons.
I think the objection of "wouldn't the few remaining humans in charge become ever-more powerful, so they could enact UBI by fiat" is a good one. But I think it's just hard for third parties to treat unpromising, unproductive beings well - others will be constantly proposing other, more lucrative, uses of their resources.
I agree that priors over aspects of the world would be more useful, but I don't think that they're important in making natural intelligence powerful. In my experience, the important thing is to make your prior really broad, but containing all kinds of different hypotheses with different kinds of rich structure.
I claim that knowing a priori about things like agents and objects just doesn't save you all that much data, as long as you have the imagination to consider all structures at least that complex.
This approach characterizes a different type of uncertainty than BNNs do, and the approaches can be combined. The BNN tracks uncertainty about parameters in the NN, and mixture density nets track the noise distribution _conditional on knowing the parameters_.
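To make the distinction concrete, here's a minimal sketch in PyTorch (the MDNHead class and mdn_nll function are my own illustrative names, not from any library or the thread): given fixed network weights, the mixture density head parameterizes the predictive noise distribution p(y | x, weights), while a BNN would additionally maintain a distribution over the weights themselves, and the two can be stacked.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Minimal mixture density head: models the noise distribution p(y | x, weights)
    # as a Gaussian mixture, *conditional on* the network weights. A BNN would add
    # a distribution over those weights on top; the two kinds of uncertainty combine.
    class MDNHead(nn.Module):
        def __init__(self, in_dim, n_components):
            super().__init__()
            self.pi = nn.Linear(in_dim, n_components)         # mixture weight logits
            self.mu = nn.Linear(in_dim, n_components)         # component means
            self.log_sigma = nn.Linear(in_dim, n_components)  # component log-stddevs

        def forward(self, h):
            return self.pi(h), self.mu(h), self.log_sigma(h)

    def mdn_nll(pi_logits, mu, log_sigma, y):
        # Negative log-likelihood of y under the predicted Gaussian mixture.
        log_pi = F.log_softmax(pi_logits, dim=-1)
        comp = torch.distributions.Normal(mu, log_sigma.exp())
        log_prob = comp.log_prob(y.unsqueeze(-1))  # per-component log-density
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

    # Usage sketch: h = backbone(x); loss = mdn_nll(*head(h), y)

Sampling the weights (from a BNN posterior, an ensemble, or MC dropout) and averaging the resulting mixtures would then capture both parameter uncertainty and the conditional noise.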