I think I agree that, once an AI-enabled coup has happened, the expected remaining AI takeover risk would be much lower. This is partly because it ends the race within the country where the takeover happened (though it wouldn’t necessarily end the international race), but also partly just because of the evidential update: apparently AI is now capable of taking over countries, and apparently someone could instruct the AIs to do that, and the AIs handed the power right back to that person! Seems like alignment is working.
I don’t currently agree that the remaining AI takeover risk would be much lower:
The international race seems like a big deal. Ending the domestic race is good, but I think I’d still expect reckless international competition. Maybe you’re imagining that a large chunk of power grabs are motivated by stopping the race? I’m a bit sceptical.
I don’t think the evidential update is that strong. If misaligned AIs found it convenient to take over the US using humans, why should we expect them to immediately cease to find humans useful at that point? They might keep using humans as they accumulate more power, up until some later point.
There’s another evidential update which I think is much stronger: the world has completely dropped the ball on an important thing almost no one wants (power grabs), despite there being tractable things it could have done, some of which would directly reduce AI takeover risk (infosec, alignment audits, etc.). In a world where a coup against the US government is possible, I expect we’ve failed to do basic alignment stuff too.
Curious what you think.
The international race seems like a big deal. Ending the domestic race is good, but I think I’d still expect reckless international competition.
I was thinking that AI capabilities must already be pretty high by the time an AI-enabled coup is possible. If one country also had a big lead, then they would probably soon have strong enough capabilities to end the international race too. (And the fact that they were willing to coup internally is strong evidence that they’d be willing to use those capabilities that way.)
But if the international race is very tight, that argument doesn’t work.
I don’t think the evidential update is that strong. If misaligned AIs found it convenient to take over the US using humans, why should we expect them to immediately cease to find humans useful at that point? They might keep using humans as they accumulate more power, up until some later point.
Yeah, I suppose. I think this gets into definitional issues about what counts as AI takeover and what counts as human takeover.
For example: suppose that, after the coup, the AIs are ~guaranteed to eventually come out on top, and they’re just temporarily using the human leader (who believes they’re in charge) because it’s convenient for international politics. Does that count as human takeover or AI takeover?
If it counts as “AI takeover”, then my argument would apply. (That is: “AI takeover” would be much less likely after a successful “human takeover”, but “human takeover” would mostly take probability mass from worlds where takeover wasn’t going to happen anyway.)
If it counts as “human takeover”, then my argument would not apply, and “AI takeover” would be pretty likely to happen after a temporary “human takeover”.
But either way, the practical upshot (how much “human takeover” ultimately reduces the probability of “AI takeover”) would be the same.
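To make that concrete, here’s a toy numerical sketch (the numbers are purely illustrative, not estimates). Suppose that, across the worlds where such a coup happens, the AIs are secretly positioned to eventually come out on top in 40% of them, and the human leader genuinely stays in control in the other 60%. Then the two labelling conventions give:

\begin{align*}
P(\text{AIs eventually on top} \mid \text{coup}) &= 0.4 \\
\text{Labelling 1 (those worlds were ``AI takeover'' all along):} \quad P(\text{AI takeover} \mid \text{human takeover}) &= 0 \\
\text{Labelling 2 (every coup counts as ``human takeover'' first):} \quad P(\text{AI takeover} \mid \text{human takeover}) &= 0.4
\end{align*}

Either way, the AIs end up in control in the same 40% of coup-worlds; the definitional choice moves the labels around without changing the bottom-line probability.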