> Once AI has begun to displace humans, existing feedback mechanisms that encourage human influence and flourishing will begin to break down. For example, states funded mainly by taxes on AI profits instead of their citizens' labor will have little incentive to ensure citizens' representation. This could occur at the same time as AI provides states with unprecedented influence over human culture and behavior, which might make coordination amongst humans more difficult, thereby further reducing humans' ability to resist such pressures. We describe these and other mechanisms and feedback loops in more detail in this work.
States like the USA already have little incentive to represent their citizens, despite most of those citizens being workers. This has been studied: they represent the rich instead, whose power comes from sitting on assets rather than from labor.

This has been an issue for well over a hundred years, so there is plenty of material (including, presumably, in the AI's training data) to draw from.
AI that is not embodied in the world can simply be unplugged. That is straightforward if we all own technology collectively, but massively complicated when capitalists have every incentive to replace all human labor. (And replacing all human labor is only a problem because a tiny minority would end up dominating everyone else.)
The authors seem more concerned with the hypothetical AI that consumes the universe on a directive to produce stamps (or whatever it was). Instead, they could confront the same issue they are ostensibly concerned about, but face it much more directly.