I’m not assuming that the AI has a large final preference for controlling humans. I am stressing how the AI interacts with humans because, as a human, that’s of particular concern to me. Access to human resources may also be a useful instrumental goal for a “young” AI: human beings control a fairly large amount of resources, and gaining access to them may be the easiest route to power for an AI. My understanding is that in the context of FAI, we’re discussing AI in terms of what it means for humans, so that’s where I’m placing the emphasis. The discussion of how the AI gains resources/global control is valid even if the AI’s endgame is tiling the universe in paperclips.
The question of whether an AI is likely to have more difficulty understanding humans or quantum mechanics is interesting. As a possible counterpoint, I would say that an AI programmed by human beings is likely to be close to human-style thought in the space of all possible minds, so the vastness of mind space is perhaps not entirely relevant. I’m not sure whether that’s a particularly good counterpoint.
I don’t have a problem with the AI building an army of macro-scale robots, or taking over the internet, or whatever. I don’t think human society is well-designed, or is even capable of being well-designed, with respect to significantly slowing down an AI trying to convert us all into resources. Indeed, it seems to me that any number of possible paths require fewer assumptions and less computational time than MNT. The essence of my complaint is that, of the many possible paths to power for an AI, the one that gets stressed in FAI literature is on the less likely end of the spectrum, and I’m really confused as to why that choice has been made.