This seems to imply that general intelligences are more powerful, since the model basically bakes in diminishing returns. But we haven’t accounted for effort yet. Suppose the following three intelligences require equal effort to build: (10,10,10), (20,20,5), (100,5,5). Then the specialised intelligence is clearly the one you need to build.
The (100,5,5) AI seems kind of like a Hitler-AI, very good at manipulating people and taking power over human societies, but stupid about what to do once it takes over. We can imagine lots of narrow intelligences that are better at destruction than helping us reach a positive Singularity (or any kind of Singularity). We already know that FAI is harder than AGI, and if such narrow intelligences are easier than AGI, then we’re even more screwed.
So let’s caveat the proposition above: the most effective and dangerous type of AI might be one with a bare minimum amount of general intelligence, but an overwhelming advantage in one type of narrow intelligence.
I want to point out that a purely narrow intelligence (without even a bare minimum amount of general intelligence, i.e., a Tool-AI), becomes this type of intelligence if you combine it with a human. This is why I don’t think Tool-AIs are safe.
So I would summarize my current position as this: General intelligence is possible and what we need in order to reach a positive Singularity. Narrow intelligences that are powerful and dangerous may well be easier to build than general intelligence, so yes, we should be quite concerned about them.
Mostly agreed, but why do you think a positive singularity requires a general intelligence? Why can’t we achieve a positive singularity by using intelligence amplification, uploading and/or narrow AIs in some clever way? For example, if we can have a narrow AI that kills all humans, why can’t we have a narrow AI that stops all competing AIs?
I meant general intelligence in an abstract sense, not necessarily AGI. So intelligence amplification would be covered unless it’s just amplifying a narrow area, for example making someone very good at manipulating people without increasing his or her philosophical and other abilities. (This makes me realize that IA can be quite dangerous if it’s easier to amplify narrow domains.) I think uploading is just an intermediate step to either intelligence amplification or FAI, since it’s hard to see how trillions of unenhanced human uploads all running very fast would be desirable or safe in the long run.
For example, if we can have a narrow AI that kills all humans, why can’t we have a narrow AI that stops all competing AIs?
It’s hard to imagine a narrow AI that can stop all competing AIs, but can’t be used in other dangerous ways, like as a generic cyberweapon that destroys all the technological infrastructure of another country or the whole world. I don’t know how the group that first develops such an AI would be able to keep it out of the hands of governments or terrorists. Once such an AI comes into existence, I think there will be a big arms race to develop countermeasures and even stronger “attack AIs”. Not a very good situation unless we’re just trying to buy a little bit of time until FAI or IA is developed.
General intelligence is possible and what we need in order to reach a positive Singularity. Narrow intelligences that are powerful and dangerous may well be easier to build than general intelligence, so yes, we should be quite concerned about them.
Add a “may be” to the first sentence and I’m with you.