I would presume that the AI would know that humans are likely to try to resist a takeover attempt, and to have various safeguards against it. It might be smart enough to be able to overcome any human response, but that seems to only work if it actually puts that intelligence to work by thinking about what (if anything) it needs to do to counteract the human response.
More generally, humans are such a major influence on the world, as well as a source of potential resources, that it would seem really odd for any superintelligence to naturally arrive at a world-takeover plan without at any point considering how it will affect humanity and whether that suggests any changes to the plan.
I would presume that the AI would know that humans are likely to try to resist a takeover attempt, and to have various safeguards against it.
That assumes humans are, in fact, likely to meaningfully resist a takeover attempt. My guess is that humans are not likely to meaningfully resist a takeover attempt, and the AI will (implicitly) know that.
I mean, if the AI tries to change who’s at the top of society’s status hierarchy (e.g. the President), then sure, the humans will freak out. But what does an AI care about the status hierarchy? It’s not like being at the top of the status hierarchy conveys much real power. It’s like your “total horse takeover” thing: what the AI actually wants is to be able to control outcomes at a relatively low level. Humans, by and large, don’t even bother to track all those low-level outcomes; they mostly pay attention to purely symbolic status stuff.
Now, it is still true that humans are a major influence on the world and source of resources. An AI will very plausibly want to work with the humans, use them in various ways. But that doesn’t need to parse to human social instincts as a “takeover”.