It’s unclear what fraction of people die because killing them is expedient for the AI during takeover; it could be the majority of people, and it could also be almost no one. If AIs are less powerful, mass killing is more likely, because weaker AIs would have a harder time securing a very high chance of takeover without killing more humans.
Yeah, this (extinction to facilitate takeover) seems like by far the most plausible pathway to total or near-total extinction. An AI that is only a little bit smarter than humanity collectively has to worry about humans making a counter-move: launching missiles, building a competing AI, or various kinds of sabotage. If you’re a rogue AI, then engineering a killer virus (something that smart humans can already do, or almost do, if they wanted to) as soon as you or humanity has built out sufficient robotics infrastructure makes all the subsequent parts of your takeover / expansion plan much less contingent and more straightforward to reason about. (And I think the analogy to historical, relatively bloodless coups is a pretty weak counter / faint hope, not least because human coup instigators generally still need humans to rule over, whereas AIs wouldn’t.)
If there are a large number of different rogue AIs, it becomes more likely that at least one of them would benefit from massive fatalities (e.g. due to a pandemic), which makes mass death substantially more likely.
I don’t see how the number of AIs makes a big difference here, rather than the absolute power level of the leading AI? An extinction or near-extinction event seems beneficial to just about any unaligned AI that is not all-powerful enough to not have to worry about humanity at all.
Put another way, takeover without extinction only feels plausible in scenarios where a single AI fooms so fast and so hard that it can leave humanity alive without really sweating it. But if I understand the landscape of the discourse / disagreement here, these fast and discontinuous takeoff scenarios are exactly the ones that you and some others find least plausible.
human coup instigators generally still need humans to rule over, whereas AIs wouldn’t
Sometimes? In countries where most wealth is mineral wealth, they actually need very few citizens. The more basic point is that human coups aren’t particularly helped by indiscriminate killing while AI takeover might be. But I still think coups are surprisingly bloodless and often keeping coups bloodless is the optimal strategy for the person doing the coup. I think this transfers to AI.
I don’t see how the number of AIs makes a big difference here, rather than the absolute power level of the leading AI?
My argument here is basically just that even if a given rogue AI wouldn’t benefit from mass fatalities (which maybe you think is implausible, fair enough), if there are many rogue AIs in different situations, then mass fatalities become more likely.
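To put a toy number on that aggregation step (this model and its independence assumption are mine, not anything established in the thread): if each of $n$ rogue AIs independently has probability $p$ of being in a situation where mass fatalities benefit it, then the chance that at least one such AI exists is

$$P(\text{at least one benefits}) = 1 - (1 - p)^n,$$

which climbs quickly with $n$ even when $p$ is small; e.g., $p = 0.05$ and $n = 20$ already gives $1 - 0.95^{20} \approx 0.64$.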
If you think most humans dying as part of the takeover is overdetermined, then fair enough.
An extinction or near-extinction event seems beneficial to just about any unaligned AI that is not all-powerful enough to not have to worry about humanity at all.
I think this is pretty unclear because of the possibility of responses from human actors and the potential for slowly building up control over time. E.g., in the AI 2027 scenario, killing (large fractions of) humans at the end barely mattered for the AI’s takeover prospects, and at each earlier point, killing tons of humans wouldn’t have been helpful: it would just add variance and the potential for stronger responses, while the AI can simply build up power over time.
Edit: there is also a live question of “marginal returns to death”; e.g., does killing 90% of people instead of 30% actually increase your odds of takeover?
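One way to make that question precise (a sketch with an assumed functional form, not something claimed above): let $P(f)$ be the AI’s probability of successful takeover after killing a fraction $f$ of humanity. The “marginal returns” question is whether $P(0.9) - P(0.3)$ is large. If the realistic human counter-moves (missiles, rival AIs, sabotage) are concentrated in a small fraction of people and institutions, then $P(f)$ plausibly saturates well before $f = 0.9$, and the marginal return on the additional deaths is near zero.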
Even if a coup is meant to capture mineral wealth and the population is irrelevant, coup leaders recognize that mass murder will lead to sanctions that stop them from selling that mineral wealth. There are plenty of examples of regimes being sanctioned for killing even low thousands of people.
An AI that plans to take over the world does not need to trade with humans or keep them from being horrified and lashing out. Killing approximately everyone is a viable strategy, and preferable in most cases, since it removes us as an intelligent adversary.
“But I still think coups are surprisingly bloodless and often keeping coups bloodless is the optimal strategy for the person doing the coup. I think this transfers to AI.”
I think you are conflating two separate goals, one of which applies to AI and the other of which doesn’t: control of the government and control of the governed. Coups involve constrained violence because the leaders of the coup explicitly want to rule the people, and killing them reduces the rewards of winning.
This does not appear to apply to an AI coup, as an AI would not benefit from having people to rule over.