I think the key issue for liberalism under AGI/ASI is that AGI/ASI makes value alignment matter far, far more to a polity. In particular, you cannot count on a polity to keep you alive under AGI/ASI if the AGI/ASI does not want you to live, because you are economically useless.
Liberalism's goal is to avoid the value alignment question, and to mostly avoid the question of who should control society, but AGI/ASI makes that question unavoidable for your basic survival.
Indeed, I think part of the difficulty of AI alignment is that many people have trouble realizing that the basic things they take for granted under the current liberal order would absolutely fall away if AIs had selfish utility functions and did not value human lives intrinsically.
The goal of liberalism is to make a society where people with vastly different values can interact and trade peacefully rather than fall into negative-sum or zero-sum conflict, but this is not possible once we create a society where AIs can do all the work and human labor is no longer necessary.
I like Vladimir Nesov's comment, and while I have disagreements, they're not central to his point; the point still works, just in amended form:
https://www.lesswrong.com/posts/Z8C29oMAmYjhk2CNN/non-superintelligent-paperclip-maximizers-are-normal#FTfvrr9E6QKYGtMRT
Hard agree. It’s ironic that it took hundreds of years to get people to accept the unintuitive positive-sum-ness of liberalism, libertarianism, and trade. But now we might have to convince everyone that those seemingly-robust effects are likely to go away, and that governments and markets are going to be unintuitively harsh.
There are several important “happy accidents” that allowed almost everyone to thrive under liberalism, that are likely to go away:
- Not usually enough variation in ability to allow sheer domination (though this is not surprising, due to selection—everyone who was completely dominated is mostly not around anymore).
- Predictable death from old age as a leveler preventing power lock-in.
- Sexual reproduction (and deleterious effects of inbreeding) giving gains to intermixing beyond family units, and reducing the all-or-nothing stakes of competition.
- Not usually enough variation in reproductive rates to pin us to Malthusian equilibria.