Most of your post is well reasoned, but I disagree with the opening paragraph:
It’s agreed on this list that general intelligences—those that are capable of displaying high cognitive performance across a whole range of domains—are those that we need to be worrying about. This is rational: the most worrying AIs are those with truly general intelligences, and so those should be the focus of our worries and work.
A dangerous intelligence doesn’t have to be smart in all domains, it can just be smart in a single dangerous domain. For example, I’d say that a group of uploaded minds running at 1000x speed is an existential risk.
it can just be smart in a single dangerous domain.
Possibly, if that domain can overwhelm other areas. But it’s still the general intelligences—those capable of using their weather-prediction modules to be socially seductive instead—that have the most potential to go wrong.
There are some ways of taking over human society that are much easier than others (though we might not know which are easiest at the moment). A narrow intelligence gets to try one thing, and that has to work, while a general intelligence can search through many different approaches.
Yeah, I agree that a truly general intelligence would be the most powerful thing, if it could exist. But that doesn’t mean it’s the main thing to worry about, because non-general intelligences can be powerful enough to kill everyone, and degrees of power beyond that point probably don’t matter as much.
For example, fast uploads aren’t general by your definition, because they’re only good at the same things that humans are good at, but that’s enough to be dangerous. And even a narrow tool AI can be dangerous, if the domain is something like designing weapons or viruses or nanotech. Sure, a tool AI is only dangerous in the wrong hands, but it will fall into wrong hands eventually, if something like FAI doesn’t happen first.
We seem to have drifted into agreement.
I’m not worrying about 1000x chicken minds. Do you think that the only thing stopping an average person from taking over the world is the fact that they don’t live to 75000?
First of all, a mind running at 1000x speed is quite different from a person living to 75000. Imagine boxing (with gloves and all) against somebody while you can move at twice his speed—this is quite different from boxing him while you have twice his stamina. In the latter case you win after a long, evenly matched fight in which your opponent tires; in the former case I’d be surprised if your opponent managed to land a hit.
Secondly: if I have 75000 years to waste, I might as well take over the world at some point. Seems like a good return on investment. And really, how hard can it be? Maybe 300 years, tops?
Sounds like a good premise for a humorous science fiction story—someone tries that, and discovers the world has such a wide range of behavior that every effort to take it over has unmanageable side effects. Unmanageable but harmless side effects, since this is a humorous story.