In this case, the problem isn’t that superpower A is gaining an unfair fraction of resources, it’s that gaining enough of them would (presumably) allow A to assert a DSA over superpower B, threatening what B already owns. Analogously, it makes sense to precommit to nuking an offensive realist that’s attempting to build ICBM defenses, because the attempt signals that they are aiming to disempower you in the future. You also wouldn’t necessarily need to escalate to targeting the civilian population as a deterrent right away: instead, you could focus on disabling the defensive infrastructure while it’s being built, only escalating further if A undermines your efforts (such as by building their defensive systems next to civilians).
Any plan of this sort would be very difficult to enforce with humans because of private information and commitment problems, but there are probably technical solutions for AIs to verifiably prove their motivations and commitments (e.g., co-design).
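To give a flavor of what "verifiably proving commitments" could look like at the simplest level, here is a toy sketch (not the author's proposal, and far weaker than co-design): each agent publishes a hash commitment to the policy source it claims to run, and the counterparty checks the code it later receives against that commitment. All names and the policy string are hypothetical, and this only proves the source hasn't been swapped, not that the agent actually executes it; real solutions would need something stronger, like verified execution or jointly designed successors.

```python
import hashlib


def commit(policy_source: str) -> str:
    """Publish a binding commitment to a policy by hashing its source."""
    return hashlib.sha256(policy_source.encode()).hexdigest()


def verify(policy_source: str, commitment: str) -> bool:
    """Check that the code the counterparty claims to run matches its commitment."""
    return commit(policy_source) == commitment


# Hypothetical example: B commits to a conditional policy ("limited strikes on
# defensive infrastructure; escalate only if A interferes"), and A can later
# check that the source B shares matches what B committed to.
b_policy = (
    "def respond(a_action):\n"
    "    return 'escalate' if a_action == 'interfere' else 'limited_strike'\n"
)
b_commitment = commit(b_policy)

assert verify(b_policy, b_commitment)                      # unmodified: commitment holds
assert not verify(b_policy + "# tampered\n", b_commitment)  # any change is detectable
```

The design choice being illustrated is just that a cryptographic commitment makes the *content* of a promised policy tamper-evident; the harder part of the commitment problem, guaranteeing the policy is what actually gets run, is where approaches like co-design would have to do the work.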