Is it worth thinking about not just a single AGI system, but technological development in general? Here is an outline for an argument—the actual argument requires a lot of filling in.
Definitions:
- A “singleton” is a system with enough global control to prevent potential competitor AGIs from being developed. I’m assuming this would also imply the ability to obtain control in many other spheres, even if the singleton chooses not to use that power.
- “Disruptive” is a vaguely defined term that is strictly weaker than “singleton”: every singleton is disruptive, but not conversely. Disruptive doesn’t necessarily imply bad.
- The argument should work for any reasonable definition of “AGI”.
Someone creates an AGI. Then one of the following is true:
1. The AGI becomes a singleton. This isn’t a job we would trust to any current human, so for it to be safe the AGI would need not just human-level ethics but really amazing ethics. This is where arguments about the fragility of value, and Kaj_Sotala’s document, come in.
2. The AGI doesn’t become a singleton, but it creates another AGI that does. This can be rolled into case 1 (if we don’t distinguish between “creates” and “becomes”) or into case 3 (if we don’t distinguish between humans and AGIs acting as programmers).
3. The AGI neither becomes a singleton nor creates one. Then we just wait for someone to develop the next AGI, and the argument repeats.
Notes on point 3:
- New AGIs are likely to arrive thick and fast, due to arms races, shared or leaked source code, and the rising technological waterline. This is the differential technological development argument: even if your AGI won’t be disruptive, working on it might accelerate the development of more disruptive AGIs.
- A possible loophole: we might cycle round here for a long time if singleton creation is an unlikely event. Worth digging up arguments for and against this; a toy model is sketched below.
- Probably worth mentioning that AGIs could cause a lot of disruption, up to and including human extinction, even without becoming a singleton (e.g. by out-competing humans economically), but this is only incidental to the argument.
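As a minimal sketch of that loophole, suppose (purely for illustration; this parameter appears nowhere in the original argument) that each new AGI independently has some fixed probability p of becoming or creating a singleton. The wait is then geometric: about 1/p AGIs in expectation, with the chance of still cycling after n AGIs shrinking as (1 − p)^n. So cycling for a long time is possible, but under these toy assumptions cycling forever is not.

```python
import random

def agis_until_singleton(p_singleton: float, max_agis: int = 10**6):
    """Count successive AGI creations until one becomes (or creates)
    a singleton. p_singleton is an assumed, fixed per-AGI probability;
    returns None if no singleton arises within max_agis creations."""
    for count in range(1, max_agis + 1):
        if random.random() < p_singleton:
            return count
    return None

# With p = 0.01 the mean wait is ~1/p = 100 AGIs, but individual runs
# vary widely: this is the "cycle round for a long time" loophole.
p = 0.01
runs = [agis_until_singleton(p) for _ in range(10_000)]
print(sum(runs) / len(runs))  # should print roughly 100
```

Whether a fixed, independent per-AGI probability is a reasonable model is exactly what the for/against arguments mentioned above would need to settle.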