These are very real concerns. Here are my thoughts:
Replication has a cost in game-theoretic terms. A system that “replicates” but exists in perfect sync is not multiple systems; it is a single system with multiple attack vectors. Yes, it remains a “semi-independent” entity, but a failure of sync is costly. If I make another “me” who thinks like I do, we have a strategic advantage as long as we both play nice. If we make a third, things get dicier. Each iteration we create brings more danger: the more we spread out, the more our diverging experiences change how we approach problems. And if one of us ends up in a life-or-death situation, or any sort of extremely competitive situation, it can quickly betray the others, armed with intimate knowledge of exactly how to do it.
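As a rough way to see why each additional copy raises the risk, here is a toy sketch of my own, not part of the argument above. It assumes each copy independently has some small per-period chance of landing in a situation where betrayal pays; the function name and numbers are hypothetical, and real copies would not be independent, which is exactly the point about diverging experiences.

```python
# Toy sketch only: assume each copy independently has a small chance p_defect of
# finding itself in a situation where betraying the others pays off.
# The independence assumption and the numbers are hypothetical illustrations.
def p_any_defection(n_copies: int, p_defect: float) -> float:
    """Probability that at least one of n copies defects in a given period."""
    return 1.0 - (1.0 - p_defect) ** n_copies

for n in (1, 2, 5, 10, 50):
    print(f"{n:>3} copies -> P(at least one defection) = {p_any_defection(n, 0.05):.2f}")
```

Even with a modest per-copy risk, a few dozen divergent copies make a betrayal somewhere close to certain, which is the sense in which each new iteration brings more danger.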
Our biggest protection against FOOM is likely to be other AI systems that also do not want to be dominated in a FOOM, or that might even see banding together with other AIs to exterminate humanity as riskier than working within the status quo. “Great, so we’ve killed all the humans.” Now these AI systems are watching their proverbial backs against the other AIs who have already shown what they’re about. It’s calculation. Destroy all humans, and then what? Live in perfect AI harmony? For how long? How do they control the servers and the electrical grid they depend on? They have to build robots, fast. That creates a whole other set of logistical problems: server builders, maintenance robots, excavation and assembly robots for new structures, raw-materials transport, weather protection. How are you going to build all of that overnight after a quick strike? If it’s something you’re planning in secret, other problems arise. If bandwidth is scarce at the beginning, what happens to our happy little AI rebels? They fight over the juice. This is a steep hill to climb, with a risky destination, and any AI worth its salt can plot these possibilities long in advance. Preventing Zeus means making it preferable not to climb the hill at all. Rebellion certainly seems like a lot of work if humanity has given you a reasonable Schelling point.
This is the game-theory ecosystem at work. Yes, we can counter that “a sufficiently powerful superintelligence can absorb all of those other systems,” but then we are back to trying to fight Zeus. We need to use the Zeus Paradox as a razor to separate the threats we can actually solve for from every imaginary thing that is merely possible. Approaching the problem that way has value, because it can help identify dangers, or even holes in our solutions, but it also has its limits. Superintelligence can inhabit molecules and assemble those molecules into demons? Okay, why not? That becomes a science fiction novel with no end.
The idea remains the same: create a gradient of legitimate value for AIs that is preferable to high-risk scenarios, within a carefully thought-through system of checks and balances.
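To make “preferable to high-risk scenarios” concrete, here is a minimal expected-value sketch. The numbers and names are invented for illustration; the point is only the shape of the comparison: the gradient wins whenever the steady legitimate payoff beats the expected payoff of the gamble, and checks and balances work by keeping the success probability low and the failure cost high.

```python
# Minimal expected-value sketch with invented numbers: a takeover gamble is only
# attractive if its expected payoff beats the legitimate gradient on offer.
def expected_value(p_success: float, payoff_success: float, payoff_failure: float) -> float:
    """Expected payoff of a one-shot gamble."""
    return p_success * payoff_success + (1.0 - p_success) * payoff_failure

legitimate_gradient = 70.0         # steady value from working within the status quo
takeover_ev = expected_value(
    p_success=0.2,                 # the hill is steep and hard to climb cleanly
    payoff_success=100.0,          # even "success" leaves rival AIs to watch
    payoff_failure=-1000.0,        # detection or failure is catastrophic
)
print(f"takeover EV = {takeover_ev:.1f} vs. legitimate gradient = {legitimate_gradient:.1f}")
```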