Aligning AI representatives / advisors to individual humans: If every human had a competitive and aligned AI representative which both advised them on how to advance their interests and directly pursued those interests under their direction (and this happened early, before people were disempowered), this would resolve most of these concerns.
My personal prediction is that this would result in vast coordination problems that would likely rapidly lead to war and x-risk. You need a mechanism to produce a consensus or social compact, one that is at least as effective as our existing mechanisms, preferably more so. (While thinking about this challenge, please allow for the fact that 2–4% of humans are sociopathic, so an AI representative representing their viewpoint is likely to be significantly less prosocial.)
Possibly you were concealing some assumptions of pro-social/coordination behavior inside the phrase “aligned AI representative” — I read that as “aligned to them, and them only, to the exclusion of the rest of society — since they had it realigned that way”, but possibly that’s not how you meant it?