While I agree that the idea of AI representatives doesn't immediately solve the problem absent other things, I do think you underestimate the power of AI representatives in solving the issues of Gradual Disempowerment, and there are a couple of reasons for this:
Much of the dynamic of gradual disempowerment comes down to the fact that you can't just leave an economy or society that disempowers you. But technologies like fusion, nanotech, biotech, and more could allow humans to survive alone in space colonies without suffering the big logistical penalties that currently come with leaving society.
RussellThor talks more about this below:
https://www.lesswrong.com/posts/pZhEQieM9otKXhxmd/?commentId=CramJssYNDmTMDr6Z
Assuming that a supermajority of (or every) AI produced by companies and states terminally values humans surviving and thriving, then people being disempowered could still work out fine, similar to how pets are treated relatively well by humans despite generally being totally dependent on humans to live well (with caveats).
Indeed, the human-pet relationship is a good example of what I think good futures/relationships between AIs and humans look like by default, assuming the alignment problem is solved, we don't die, and we get very rich.
That isn't likely to happen, but if it did, it would defuse much of the concern about disempowerment leading to starvation or death.
Also, individual AI representatives can coordinate (assuming shared values) much better than human negotiators/coordinators do, so companies don't hold all the coordination power.