I think this is correct, but for what it’s worth, I did take this into account in my proposed AI representative system. My system wasn’t a free-for-all: it was government-run, with an equally powerful, value-customizable AI assigned to each citizen. These representatives would then debate in a controlled setting with established discussion rules, like a giant parliament. In effect, a legislature the size of the populace.
In the free-for-all setting, you not only have problems with states and corporations but also inequality between people. Wealthy people can afford smarter, faster, and more numerous AIs. Less cautious people can let their AIs operate with less hindrance from human oversight. So the “AI teams” that pull ahead will be those that started out powerful and acted recklessly (and probably also immorally).
So the only way this situation turns out OK is if the government bans all of that free-for-all AI, or if a value-aligned AI singleton or coalition bans it. Naive personally-aligned AIs are a bad equilibrium.