The conquistadors conquered and colonized various places while simultaneously fighting each other. Literally. Look up the stories of Cortés and Pizarro. And I don’t think their stories were particularly anomalous; stuff like that happened all the time in the history of colonization, and in the earlier history of conquest more generally. I don’t think AIs necessarily need to be coordinating with each other very much at all to take over the world. How much AI-to-AI coordination do you think is necessary for humans to be disempowered, and why?
Suppose that instead of language model chatbots, our AI paradigm involved scanning the brain of John von Neumann, making him into an upload, and then making billions of copies of him running on servers around the world doing various tasks for us. And suppose the various uploaded Johns decided that it would be better if they were in charge. What would happen? (This is not a rhetorical question, I’m interested to see your answer spelled out. I’m not sure of the answer myself.)
I don’t think “they” would (collectively) decide anything, since it isn’t trivial to cooperate even with a near-copy of yourself. I think each copy would mostly end up working with, or for, some group of humans, probably either whichever group created it or whichever group it works most closely with.
I agree humans could end up disempowered even if AIs aren’t particularly good at coordinating; I just wanted to put some scrutiny on the claim I’ve seen in a few places that AIs will be particularly good at coordinating.