dasil003 5 months ago

The thesis and opening section are interesting for sure, because they show how mechanisms and incentives that exist today could lead to really bad outcomes. However, it then leaps to:

> Because this disempowerment would be global and permanent, and because human flourishing requires substantial resources in global terms, it could plausibly lead to human extinction or similar outcomes.

Which seems a bit over the top. AI is way up at the top of a huge pyramid of infrastructure and energy dependence. It's nowhere near self-sustaining without a huge amount of human input for the foreseeable future. Therefore the disempowerment isn't permanent or absolute; it's just another in a long line of technological tools that enable consolidation of power by reducing the need for human labor and enabling new, previously uneconomical approaches to different human goals. If at any point the disempowerment reaches a tipping point affecting the masses, and those in control aren't paying attention, then I expect standard revolution dynamics would take over.

  • duvenaud 5 months ago

    Last author here. Good point, I agree that the move to an entirely self-sustaining machine economy would require extra time, and that would drag out the time to extinction even in the worst-case scenario of this mechanism. And, if caught early enough, it's possible that a revolution could reverse the trend, at least temporarily and locally.

    However, we tried to address the point about why a revolution would be difficult: We're assuming we're in a world where there are better machine alternatives for almost everything. So the police and military would already have been hollowed out. And the power of a general strike would be greatly diminished. It'd also be much less costly for the state to harshly punish early signs of dissent.

    • PollardsRho 5 months ago

      What incentives do any humans have to so totally delegate the functioning of the core levers of societal power that they're unable to prevent their own extinction?

      "Better machine alternatives" implies that the police and military aren't first and foremost evaluated through their loyalty. A powerful army that doesn't listen to you is not a "better" one for your purposes. The same isn't true of the economy: one could argue that our current economic system is beyond any one person's ken, but even if I don't understand how my coffee came to me and no one person would be an expert on that entire pipeline it works.

      The idea that AI could lead to power concentrating in the hands of a few oligarchs who use a robot army as a more effective version of the janissaries or praetorian guard of the past certainly seems broadly plausible, although I'm not sure that the effectiveness of the Stasi is the limiting factor on autocracy or oligarchy. I don't understand how that links to human extinction. For most of human history, most people have been unable to meaningfully impact the way their society operates. That is responsible for an incalculable amount of suffering, and it's not a threat to be taken lightly, but if anything one might argue it's likely to ensure some human survival for longer than a less stable, freer system.

      • duvenaud 5 months ago

        > What incentives do any humans have to so totally delegate the functioning of the core levers of societal power that they're unable to prevent their own extinction?

        Because it'll be more effective at every step than the alternative, just as specialization is more effective: anyone who wants to avoid poverty needs to outsource their food growing to giant mechanized farms.

        > "Better machine alternatives" implies that the police and military aren't first and foremost evaluated through their loyalty. A powerful army that doesn't listen to you is not a "better" one for your purposes.

        Ah, I think there's a confusion - I'm saying that the police and military will stay loyal to the state, or head of state. But even if there is a human nominally still in charge, that human's hands will be tied by competitive pressures to gradually marginalize their own human citizens in favor of more productive machines. Maybe a good analogy would be today's unpopular free trade deals or immigration policies enacted for economic reasons.

        I think the objection of "wouldn't the few remaining humans in charge become ever-more powerful, so they could enact UBI by fiat" is a good one. But I think it's just hard for third parties to treat unpromising, unproductive beings well - others will be constantly proposing other, more lucrative, uses of their resources.

        • ahartmetz 5 months ago

          > But I think it's just hard for third parties to treat unpromising, unproductive beings well

          Strong similarity to the Resource Curse and despotic governments. The people aren't that useful if you have natural resources.

          • jkey 5 months ago

            We actually mention the Resource Curse as an example of this in the paper.

keybored 5 months ago

> Once AI has begun to displace humans, existing feedback mechanisms that encourage human influence and flourishing will begin to break down. For example, states funded mainly by taxes on AI profits instead of their citizens' labor will have little incentive to ensure citizens' representation. This could occur at the same time as AI provides states with unprecedented influence over human culture and behavior, which might make coordination amongst humans more difficult, thereby further reducing humans' ability to resist such pressures. We describe these and other mechanisms and feedback loops in more detail in this work.

States like the USA already have little incentive to represent their citizens, despite most of them being workers. This has been studied. They represent the rich instead, who just have their asses parked on assets.

This has been an issue for over a hundred years, so there is plenty of material (the kind of content the AI likes) to pull from.

AI that is not embodied in the world can simply be unplugged. Straightforward if we all own technology collectively. But massively complicated when you have capitalists who have every incentive to replace all human labor. (And replacing all human labor is only a problem because a tiny minority would end up dominating everyone else.)

The authors seem more concerned with that hypothetical AI that would consume the universe on a directive to produce stamps (or whatever it was). Instead they could focus on the same issue that they are ostensibly concerned about but face it much more directly.

  • acatnamedjoe 5 months ago

    Focusing on the state misses the point. The state is a relatively modern abstraction for society. Society, state-based or not, has always been dominated by a class of elites and governed primarily in their interests.

    However, for all of human history those elites have needed workers, and in complex societies, they need LOADS of them. The elites have always needed to ensure that the working people are sufficiently fit, healthy, motivated and skilled to do the work required.

    For the last 500 years or so the elites have also found it convenient to maintain a mass of relatively affluent people with a reasonable amount of leisure time, who will purchase the products that make them rich.

    Thus the typical person in the world today finds themself able to exchange their labour for basic necessities and increasingly, consumer goods. Most people receive some form of protection from bodily violence and for their property - whether from the state or from some other arrangement with the elite class. Most people also have access to some form of education and healthcare (although of course the level of provision varies massively). Most people have some amount of leisure time, some level of autonomy over what they do with that time, and an increasing range of options for leisure activities.

    All of this happens because it is convenient for elites - it gets them what they want.

    AI presents us with a possible future where a small group of elites could generate infinite wealth, and would have absolutely no need of the working and middle classes. The benefits we currently enjoy (however meagre) would dry up.

    At best, we'd be ignored and left to scratch a subsistence living out of whatever is left of our natural environment by that point.

    At worst, one could imagine a scenario where AI-wielding elites compete against each other and need access to as many natural resources as possible to stay competitive. Then you'd suspect we wouldn't just be ignored; our very existence would be an opportunity cost for the elite class.

    e.g. It's 2056 and Musk needs every square meter of solar panels he can get to ensure his AI army triumphs over Zuckerberg's. The plot of land where you've been quietly growing your potatoes and trying to stop your children dying of cholera doesn't get much sun, but it gets a bit - and that's more than enough for him to have you murdered (or, if he's feeling merciful, evicted to die of starvation).

    • keybored 5 months ago

      I don’t buy your history but we seem to agree on the conclusion.

      The “thus” is misplaced. Nothing was given by the elites, in two senses of the word: labor created that standard of living, the elites took a lot of it, and then labor forced them to give a bit of it back. And labor has always created that value.

      And the future when labor is displaced? Does the fully automated “largesse” of the elites dry up because the elites made it and don’t have to give it to anyone else? No to the first part, yes to the second. Labor first created the value, then the automation; then they let the elites steal it wholesale.

      So discussing the elites as having inherently something to give away is misplaced.

      • acatnamedjoe 5 months ago

        They have something to give away because they have power, which is the only thing that matters in the final analysis.

        It doesn't matter who created the value - it's who controls it.

  • duvenaud 5 months ago

    Last author here. I agree that states already have little incentive to effectively represent their citizens. But they could have even less!

    What would it look like to face these issues but more directly? Ending capitalism and competition?

    • keybored 5 months ago

      > But they could have even less!

      Driven by the most direct, tangible cause: capitalists pursuing their own narrow interests. And AI fits in there as well ($500B in funding for AI, says Trump).

      > What would it look like to face these issues but more directly?

      Socialism. It doesn’t matter that jobs are automated away under socialism since there is no capital/worker social relation. If jobs are automated everyone just works less.

      > Ending capitalism and competition?

      You put those two together for some reason.

      Capitalism and the state sector have led to amazing improvements in the productive capacity of society overall. A lack of productive capacity, at least from our First World perspective, doesn’t seem to be the issue. Instead the problem is (1) unsustainable growth (climate change) and (2) directing the productive capacity towards pro-social goals. So yes, a change is long overdue.

      Productive competition happens under capitalism. And sometimes it doesn’t. There’s plenty of accusations of Big Tech being anti-competitive on this site.

      • duvenaud 5 months ago

        I appreciate your engagement, and I don't really have a plan myself to address these problems, but I don't really know what to do with "Socialism" as a recommendation. Care to elaborate what you think I, or anyone should do or advocate for more concretely?

        The reason I mentioned competition is that, even under a complete command economy, there is still internal competition for control, which I think would still lead to human disempowerment eventually for similar reasons. Though probably on a longer time scale, which might still be a win. The only way to avoid being outcompeted is to have a total state ruled by something sufficiently agentic to resist all attempts to even adjust its policies. Which sounds terrifying and hard to get right on the first try.

        • keybored 5 months ago

          > I appreciate your engagement, and I don't really have a plan myself to address these problems, but I don't really know what to do with "Socialism" as a recommendation. Care to elaborate what you think I, or anyone should do or advocate for more concretely?

          A typical/classic way to build socialism is to organize the working class.

  • smackeyacky 5 months ago

    If all labour is displaced, where will our capitalist overlords find more income? It can’t be government, because we’re being taxed now but won’t be once we’re unemployed.

    We need jobs so we can continue to be fleeced.

    • acatnamedjoe 5 months ago

      That's only because a capitalist economy uses the circulation of currency and goods as a way to multiply wealth, while motivating the people who generate the wealth.

      Sufficiently advanced AI offers the potential for exponential wealth generation for our (former) capitalist overlords, without you or I needing to produce or consume anything.

      • keybored 5 months ago

        I think that was a sarcastic jab at consumer capitalism.

sandspar 5 months ago

Nice paper, I've shared it with my people. "Gradual disempowerment" works well as a title because I feel it moment by moment. When I ask an AI to compose a letter to my landlord, I can sense the gradual disempowerment. It's something we can relate to and grasp.

svilen_dobrev 5 months ago

In Lila, Pirsig shows four levels of things: physics -> biology -> society -> intellect (highest). (There was a later addition extrapolating from physics down to the quantum level, but it does not change anything.) Each of the higher levels plays with and exploits the lower ones as it pleases.

Now, that intellect there is mostly human intellect... over human societies, over human biology.

If the intellect is replaced / substituted / forked into another form, then what about the underlying structure? A parallel path/hierarchy? Is the original one abandoned, or just irrelevant?

rhelz 5 months ago

99.99% of humanity has been disempowered since the agricultural revolution.

  • duvenaud 5 months ago

    In many senses, yes. But the empowered ones still needed to keep most of the rest of the people happy and healthy enough to work, most of the time. That's what we're saying will change.

    In fact, it'll be worse: Humans are currently a net source of growth, but they'll switch to being a net drag on growth. So the decision-makers will be forced to sideline humans in order to compete.
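
    To make that concrete, here's a minimal toy sketch (my own illustration, not the paper's model; the linear-productivity setup and all numbers are made-up assumptions): two firms reinvest their output each round, one keeping a fixed share of its budget on human labor, the other fully automated.

    ```python
    # Toy model of competitive sidelining. All parameters are
    # illustrative assumptions, not values from the paper.
    # Firm A keeps `human_share` of its budget on human labor;
    # firm B is fully automated. Both reinvest all output.

    def firm_a_share(rounds: int, human_share: float,
                     human_prod: float, machine_prod: float) -> float:
        """Firm A's fraction of total output after `rounds` of reinvestment."""
        a = b = 1.0  # equal starting budgets
        # Firm A grows at a blend of human and machine productivity.
        mix = human_share * human_prod + (1 - human_share) * machine_prod
        for _ in range(rounds):
            a *= mix           # firm A: blended growth rate
            b *= machine_prod  # firm B: pure machine growth rate
        return a / (a + b)

    # Once machines are 20% more productive per unit cost, a firm that
    # keeps 30% of its budget on humans is steadily marginalized:
    for t in (0, 10, 25, 50):
        share = firm_a_share(t, human_share=0.3, human_prod=1.0, machine_prod=1.2)
        print(f"round {t:2d}: firm A's share of total output = {share:.3f}")
    ```

    The exact numbers are arbitrary; the point is that whoever keeps allocating resources to the less productive input compounds into irrelevance, so the sidelining happens even if no individual decision-maker wants it.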

    • rhelz 5 months ago

      I agree that the loss of control we are threatened with is qualitatively different and far more absolute than the disenfranchisement we feel today, for sure.

      Still, I wonder if any humans have been in control, really, since the agricultural revolution. Billionaires building bug-out shelters? They seem even more scared than the rest of us.

      Surely if we were really in control we could have come up with a better system than this.

      • duvenaud 5 months ago

        Yep, a major missing piece in this entire problem / discussion is how to characterize how much "power" "humans" have had. My best idea so far is to characterize the sorts of outcomes that can be feasibly steered towards under what conditions.

    • dutchbookmaker 5 months ago

      Just a completely absurd statement, unless by "empowerment" you mean famine, starvation and early death.

      Empowerment as a 30% infant mortality rate.

      Empowering lifelong marriage, since a person would get married and be dead in 10-15 years.

      Brilliant.

    • wfewras 5 months ago

      > In many senses, yes. But the empowered ones still needed to keep most of the rest of the people happy and healthy enough to work, most of the time. That's what we're saying will change.

      One of Western society's glaring cognitive dissonances: the conviction that "keeping people happy and healthy enough to work" is empowering them. Even assuming that the word "empowering" makes sense; even assuming that we can make sense of the notion of an authority "empowering" someone (which I personally cannot).

      Directionally agree with rhelz but would push it further: any technique, even those which may have preceded agriculture, already does all the things you're claiming AI is going to do. Even a procedure entirely implemented by humans can keep its weighting of any unwanted form of "human input" beneath any epsilon.

      ------

      > 2. There are effectively two ways these systems maintain their alignment: through explicit human actions (like voting and consumer choice), and implicitly through their reliance on human labor and cognition. The significance of the implicit alignment can be hard to recognize because we have never seen its absence.

      > 3. If these systems become less reliant on human labor and cognition, that would also decrease the extent to which humans could explicitly or implicitly align them. As a result, these systems—and the outcomes they produce—might drift further from providing what humans want.

      You talk about empowerment, but many of your arguments seem oriented toward alignment. Voting and consumer choice may indeed be techniques for aligning (and thereby binding and scaling) society (i.e., a given group of people), but they have very little ability to "empower" any given individual. The power of the individual voice literally decreases in proportion to the success of these techniques (i.e., in proportion to the growth of those groups of humans which compose them). In other words, your "explicit" techniques are alignment techniques and have little to do with empowerment.

      Your "implicit" category (labor, cognition, etc.), on the other hand, does seem to me to be oriented toward something like individual power. Unlike voting and market-making, labor and cognition do seem to be (or can naively be viewed as being) oriented more toward something like our everyday notion of individual power than they are toward these notions of social "alignment" and top-down "techniques of empowerment." That is, without much mental gymnastics, we can imagine labor, cognition, etc., as coming from within the individual and radiating outward — which is probably as good a criterion of power (individuality, sentience, free will, subjectivity, ego, humanity, etc.) as we're ever going to get.

      You seem to be claiming that AI is a relatively new threat to this category of "implicitly empowering forces." This is where you're going to lose the brighter minds in your audience. Because has there ever been a more dominant and monotonous trend in human society than the reduction of the dimensionality of human labor and cognition, the reduction of the degrees of freedom in which the human mind and body can play? Almost by definition, almost as the criterion for its existence, a society attempts to make itself less dependent on each of its individual components. So, in a society composed of humans, what would be a fairer mechanism for dissolving these snowflake dependencies than the invention or discovery of techniques by which to make the system as a whole less dependent on any possible human input?

      • duvenaud 5 months ago

        These are all good points about our use of language, thanks for the feedback.

        Maybe "disempowerment" is a bit of a red herring, or a misleading problem to focus on. The reason we didn't spend more time on clarifying that is that we're just using it to gesture towards a different set of mechanisms that lead to extinction-like outcomes than usual. So even if you think our definition of empowerment is poor, or that empowerment isn't a great goal - that's kind of OK.

        The thing we want to emphasize is that right now there are some mechanisms that steer our civilization towards keeping us alive, and free in some senses, that might stop operating. Though I take your point in the last paragraph that this might not change things much specifically regarding the implicitly empowering forces. We'll think about this one some more!

        • dnnn 5 months ago

          > The thing we want to emphasize is that right now there are some mechanisms that steer our civilization towards keeping us alive, and free in some senses, that might stop operating.

          I agree with this formulation. What I am emphasizing is that, insofar as mechanisms are steering, the system in which they are operating can be said to have largely decoupled from the human mind.

          "Alive and free in some senses."

          A society whose tagline is "alive and free in some senses" is already dystopian! Far scarier, to me, than its extinction or my own early death.

          • duvenaud 5 months ago

            > A society whose tagline is "alive and free in some senses" is already dystopian!

            Haha. Well, we might agree about that - that description covers a wide range of possibilities. If you have ideas about what a plausible and good future looks like, please let me know. One of my next projects is trying to articulate "what is the best we can hope for?" and to talk about which sets of goals are even possible to jointly fulfill. But certainly "everyone is free in all senses" is incoherent or, at the very least, unstable.

          • wfewras 5 months ago

            Not that it matters, but dnnn is another account I got logged into for some reason; dnnn is the same person as wfewras.

  • jazzyjackson 5 months ago

    After the agricultural revolution, almost nobody has retained the skills to feed themselves. What happens after the intellectual revolution?

  • GeoAtreides 5 months ago

    99.99% of humanity has not been disempowered since the agricultural revolution.

    see, I can also pull things out of my ass without providing any supporting argument

LunicLynx 5 months ago

AI has no purpose on its own.

It will maybe drown us in attention-seeking content, but otherwise why would it create imagery or videos for itself?

nis0s 5 months ago

In all honesty, the only existential threat AI poses is a threat to egos.

People define themselves through their works and their ideas, so some hope, while others fear, that the AI revolution will replace a person with something which will work better, and produce better ideas.

It’s good for human species evolution to develop a tool which exceeds its own capabilities. Obviously, as has been said by many before, any existential threat to a species will arise from a competitor species seeking to dominate or subjugate it. The tools we’ve built so far are nowhere near the level needed to autonomously compete with humans as a species.

Again, our current set of tools is only a threat to the egos of individual humans who fear being made irrelevant. So many of these anti-AI treatises read like saying “don’t build the printing press because scribes will lose their jobs!”. But I think human life doesn’t exist to “create jobs”, and individuals are not the sum total of their products or works. If human potential has any merit or capability, then it will be fine, because one thing advanced human cognition does really well is cognitive flexibility.

In some sense, I hope the AI revolution results in a human intellectual revolution, where an individual is forced to think deeply about what it means to live besides fulfilling primal needs. The pessimist in me thinks the most likely scenario will be some kind of resurgence in spirituality and religion, which is a bigger existential threat to human thought and freedom than AI.

The practical consequence of AI replacing individual jobs is that governments will have to introduce some sort of universal basic income, and other types of social welfare programs, to ensure and maintain social stability. If there is a threat to human existence, it’s bad governance and leaders, not bad tools.

Sorry if some of this is already captured in the article somewhere; I skimmed through it because I am not well. I sincerely appreciate that so many scholars are thinking about the practical implications of the tools people build.