I am particularly bothered by this because it seems irrelevant to FAI. I’m fully convinced that a smarter-than-human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter-than-human AI could out-manipulate human institutions and out-plan human opponents with the same sort of ruthless efficiency with which modern computers beat humans at chess.
Your argument is extremely human-parochial. You seem to be thinking of AIs as potential supervillains who want to “rule the world” (where ruling the world = controlling humans). If you think that an AI would care about controlling humans, you are assuming that the AI would be very human-like. In the space of possible mind-designs, very few AIs care about humans as anything but raw resources.
In the space of possible mind-designs, your mind (and every human mind) is an extreme specialist in manipulating humans. So of course, to you manipulating humans seems vastly easier and more useful than building MNT or macro-sized robots, or whatever.
In the space of possible mind-designs, very few AIs care about humans as anything but raw resources.
An AGI cares about not being killed by humans.
In the space of possible mind-designs, your mind (and every human mind) is an extreme specialist in manipulating humans.
Corn manipulates humans into killing parasites that might damage the corn, in a variety of ways. An entity doesn’t need to be smart to be engaged in manipulating humans.
As long as humans have the kind of power over our world that they have at the moment, an AGI will either be skilled in dealing with humans, or humans will shut it down if there seems to be a danger of the AGI amassing power and not caring about humans.
I’m not assuming that the AI has a large final preference for controlling humans. I am stressing how the AI interacts with humans because, as a human, that’s of particular concern to me. Access to human resources may also be a useful instrumental goal for a “young” AI, as human beings control a fairly large amount of resources and gaining access to them may be the easiest route to power for an AI. My understanding is that in the context of FAI, we’re discussing AI in terms of what it means for humans, so that’s where I’m placing the emphasis. The discussion of how the AI gains resources/global control is valid even if the AI’s end game is tiling the universe in paperclips.
The question of whether an AI is likely to have more difficulty understanding humans or quantum mechanics is interesting. As a possible counterpoint, I would say that an AI programmed by human beings is likely to be close to human-style thought in the space of all possible minds, so the vastness of mind space is perhaps not totally relevant. I’m not clear on whether that’s a particularly good counterpoint.
I don’t have a problem with the AI building an army of macro-sized robots, or taking over the internet, or whatever. I don’t think human society is well-designed, or is even capable of being well-designed, with respect to significantly slowing down an AI trying to convert us all into resources. Indeed, it seems to me that any number of possible paths require fewer assumptions and less computational time than MNT. The essence of my complaint is that, of the many possible paths to power for an AI, the one that gets stressed in FAI literature is on the less likely end of the spectrum, and I’m really confused as to why that choice has been made.
Might be easier for a program as well, if one person can write a chat bot that hypnotizes people :-)
One person.
Also, an AI needs to keep itself from being shut down. Also, an AI needs humans as its manipulators until it has manipulators of its own.