We will soon enter an unstable state where the balance of military and political power will shift significantly because of advanced AI.
This evidently makes the very strong assumption that AGI is sufficient for a widely-recognized DSA before it becomes a near-term existential risk. That is, everyone behind in the AI race figures out that they have no chance to win without actually fighting a war that leads to nuclear escalation, or a war is won so decisively and so quickly by one side that nuclear escalation does not occur. These seem like big claims that aren't actually explained or explored. (Or it assumes that ASI can be aligned well enough to ensure we don't all die before power dynamics shift in favor of whoever built the ASI, which is an even bigger claim.)
I don’t think I make the claim that a DSA is likely to be achieved by a human faction before AI takeover happens. My modal prediction (~58% as written in the post) for this whole process is that the AI takes over while the nations are trying to beat each other (or failing to coordinate).
In the world where the leading project has a large secret lead and has solved superalignment (an unlikely intersection), then yes, I think a DSA is achievable.
Maybe what you're claiming is that my opening paragraphs don't emphasize AI takeover enough to properly convey my expectations of it. I'm pretty sympathetic to this point.
Thanks. It does seem like the conditional here was assumed, and there was some illusion of transparency. The way it read was that you viewed this type of geopolitical singularity as the default future, which seemed like a huge jump, as I mentioned.