I place much higher odds on the “death due to takeover” for a pretty specific reason. We seem to have an excellent takeover mechanism in place which kills all or most of us: nukes. We have a gun pointed at our collective heads, and it’s deadlier to humans than AGIs.
I ended up feeling moderately persuaded by an argument that human coups typically kill a much smaller fraction of the people in that country. Some of the relevant reference classes don't look that deadly, and while there are some specific arguments for thinking AIs will kill lots of people as part of takeover, I overall come to a not-that-high bottom line.
The number of likely deaths given takeover seems higher than your estimate if that logic about nukes as a route to takeover mostly goes through.
My understanding is that a large-scale nuclear exchange would probably kill well under half of all people (I'd guess <25%). The initial exchange would probably directly kill <1 billion, and I don't see a strong argument for nuclear winter killing far more people. This winter would be occurring during takeoff, which has an unclear effect on the number of deaths.
But the question of extinction still hinges largely on whether AGI has any interest in humanity surviving. I think you're assuming it will have a distribution of interests like humans do.
I don’t think I’m particularly assuming this. I guess I think there is a roughly 50% chance that AI will be slightly kind in the sense needed to want to keep humans alive. Then, the rest is driven by trade. (Edit: I think trade is more important than slightly kind, but both are >50% likely.)
GPT5T roughly agrees with your estimate of nuclear winter deaths, so I'm probably way off. It looks like my information was well out of date; that conversation was from 2017 or so. And yes, deaths from winter-induced famine might be substantially mitigated if someone with AGI-developed tech bothered.
Thanks for the clarifications on trade vs. kindness. I am less optimistic about trade, causal or acausal, than you are, but it's a factor.