The thought that machines could one day have superhuman abilities should make us nervous. Once the machines are smarter and more capable than we are, we won’t be able to negotiate with them any more than chimpanzees can negotiate with us.
Not the best-formulated example. From what I’ve read in accounts from chimpanzee owners and minders, chimpanzees do negotiate with people. And from what I’ve read and heard from dog owners, dogs negotiate with their owners as well.
I suspect that negotiation requires less than average human intelligence, plus some overlap of interests.
What if the machines don’t want the same things we do?
If we have completely non-overlapping interests, then there is no hope. I find that highly unlikely at first, though more likely after AGI. (Remember, too, that the interests of “human beings” are certain to change rapidly as well.)
I think it would be inconceivable for a medieval peasant to meet someone completely uninterested in a year’s supply of wheat. Most of us reading this wouldn’t ever ask for that and wouldn’t know what to do with it, without some Google searching. But we’d still have interests in common as human beings and fellow vertebrates. We even have interests in common with our dogs.
I think we’d have at least a common interest in not being in the vicinity of a supernova, for example. (At least at first.)
Of course we can’t have interests in common with an ant. I don’t think an ant is even aware of its own interests in the way humans or even dogs are. I wonder whether the magical-seeming powers people sometimes ascribe to future AGI are not really different degrees of intelligence, but something more like a “different order of awareness.” What would that mean? Is the existence of such a thing even falsifiable?
Superhuman intelligence is not magic. It will only seem that way to other insufficiently advanced intelligences.
Likewise, self-improving machines could perform scientific experiments and build new technologies much faster and more intelligently than humans can. Curing cancer, finding clean energy, and extending life expectancies would be child’s play for them.
I find this somewhat along the lines of magical thinking. Cancer is not one disease, and curing it is in fact just one aspect of extending life expectancies. I don’t think anything on that level will ever be “child’s play.” By the time an individual has many times the research bandwidth of all the world’s PhDs combined, “child’s play” may well have become a meaningless, archaic metaphor.
Also, don’t forget that humans will be improving just as rapidly as the machines.
My own studies (Cognitive Science and Cybernetics at UCLA) tend to support the conclusion that machine intelligence will never be a threat to humanity. Humanity will have become something else by the time that machines could become an existential threat to current humans.
So the real threat to humanity is the machines that humanity will become. (Is in the process of becoming.)