I feel like there are some critical metrics or factors here that are getting overlooked in the details.
I agree with your assessment that it’s very likely that many people will lose power. I think it’s fairly likely that most humans won’t be able to provide much economic value at some point, and won’t be able to command many resources in return. So I could see an argument for incredibly high levels of inequality.
However, there is a key question in that case: could the people who own the most resources guide AIs using those resources to do what they want, or will these people lose power as well?
I don’t see a strong reason why these people would lose power or control. That would seem like a fundamental AI alignment issue—in a world where a small group of people own all the world’s resources, and there’s strong AI, can those people control their AIs in ways that would provide this group a positive outcome?
2. There are effectively two ways these systems maintain their alignment: through explicit human actions (like voting and consumer choice), and implicitly through their reliance on human labor and cognition. The significance of the implicit alignment can be hard to recognize because we have never seen its absence.
3. If these systems become less reliant on human labor and cognition, that would also decrease the extent to which humans could explicitly or implicitly align them. As a result, these systems—and the outcomes they produce—might drift further from providing what humans want.
There seems to be a key assumption here that people are able to maintain control because their labor and cognition are important.
I think this makes sense for people who need to work for money, but not for those who are rich.
Our world has a long history of dumb rich people who provide neither labor nor cognition, yet still seem to do pretty fine. I’d argue that power often matters more than human output, and would expect the importance of power to increase over time.
I think that many rich people now are able to maintain a lot of control, with very little labor/cognition. They have been able to decently align other humans to do things for them.