People do get killed by people let out on parole, but I suppose that doesn’t constitute a species-wide threat. I am left pondering: if humans grew more dangerous, would we box them correspondingly more strongly? On the one hand, events like 9/11 did effectively strip civil liberties, boxing people more tightly, so it seems that might actually be the case.
The origin of an intelligence shouldn’t bear that much on how potent it is. What is the argument, again, for thinking that AIs could be orders of magnitude more capable than humans?
Nick Bostrom answers this at length in Superintelligence, which has been widely discussed on LW. Superintelligence is a well-researched, thought-provoking and engaging book; I recommend it. I don’t think that I can give a very satisfactory summary of the argument in a short comment, however.
An unboxed AI is presumed to be an existential threat. Most human criminals are not.