Most actors in society (businesses, governments, corporations, even families) aren’t monolithic entities with a single hierarchy of goals. They’re composed of many individuals, each with their own diverse goals.
The diversity of goals among the component entities is a valuable protection. In the case of an AI, do we still have the same diversity? Is there a reason why a monolithic AI with a single hierarchy of goals cannot operate on the level of a many-human collective actor?
I’m not sure how the solutions our society has evolved apply to an AI, because an AI isn’t necessarily a diverse collective of individually motivated actors.
Even more importantly, the biggest reason our world is stable is that humans have a very narrow range of capabilities. This applies in particular to intelligence, which is normally distributed, so societies can usually defeat outlier humans. AI capabilities will not be nearly this constrained, and the variance is worrying: there is a real chance that one AI will be far more intelligent than any human who has ever lived, and it’s relatively easy to blow past the human range, à la Go and StarCraft. This is similar to why superpowers in the real world would doom us by default.
EDIT: I no longer think superpowers would doom us by default.
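To make the thin-tails point concrete, here’s a minimal sketch (my own illustration, not part of the original argument; it assumes a world population of roughly 8 billion and the conventional IQ scale of mean 100, SD 15). Under a normal distribution, even the single most capable human out of 8 billion sits only about 6 standard deviations above the mean:

```python
from scipy.stats import norm

POPULATION = 8_000_000_000  # assumed rough world population

# The expected maximum of n i.i.d. normal draws is approximately
# the (1 - 1/n) quantile of the distribution.
top_human_sd = norm.ppf(1 - 1 / POPULATION)
print(f"Most extreme human out of {POPULATION:,}: "
      f"~{top_human_sd:.1f} standard deviations above the mean")
# -> ~6.3 SD. On an IQ-style scale (mean 100, SD 15) that is roughly 195.
# Thin normal tails keep even the best human close to everyone else;
# no analogous distribution constrains AI capabilities.
```

So under these assumptions, societies only ever face opponents a few standard deviations out, which is exactly the constraint that wouldn’t apply to AI.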