Mostly agreed, but why do you think a positive singularity requires a general intelligence? Why can’t we achieve a positive singularity by using intelligence amplification, uploading, and/or narrow AIs in some clever way? For example, if we can have a narrow AI that kills all humans, why can’t we have a narrow AI that stops all competing AIs?
I meant general intelligence in an abstract sense, not necessarily AGI. So intelligence amplification would be covered, unless it’s just amplifying a narrow domain, for example making someone very good at manipulating people without increasing his or her philosophical and other abilities. (This makes me realize that IA could be quite dangerous if it turns out to be easier to amplify narrow domains than general ability.) I think uploading is just an intermediate step toward either intelligence amplification or FAI, since it’s hard to see how trillions of unenhanced human uploads, all running very fast, would be desirable or safe in the long run.
For example, if we can have a narrow AI that kills all humans, why can’t we have a narrow AI that stops all competing AIs?
It’s hard to imagine a narrow AI that can stop all competing AIs but can’t be used in other dangerous ways, for example as a generic cyberweapon that destroys all the technological infrastructure of another country or the whole world. I don’t know how the group that first develops such an AI could keep it out of the hands of governments or terrorists. Once such an AI comes into existence, I think there will be a big arms race to develop countermeasures and even stronger “attack AIs”. Not a very good situation, unless we’re just trying to buy a little time until FAI or IA is developed.