Agree that option 1 (literal destruction) is implausible.
Option 2 is much more likely, primarily because who wins the contest is (in my model) sufficiently uncertain that in-expectation war would constitute large value destruction even for the eventual winner. In other words, if choosing “war” has a [50% probability of losing 99% of my utility over the next billion years, and a 50% probability of losing 0% of my utility], whereas choosing peace has a [100% chance of achieving 60% of my utility] (assuming some positive-sum overlap between the respective objective functions), then war’s expected utility (≈0.505) falls below peace’s (0.6), and the agents choose peace.
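A minimal sketch of that comparison (the probabilities and payoffs are the illustrative assumptions above, not derived from anything):

```python
# Toy expected-utility comparison between "war" and "peace".
# All numbers are illustrative assumptions from the paragraph above.

def expected_utility(outcomes):
    """outcomes: list of (probability, fraction_of_utility_retained)."""
    return sum(p * u for p, u in outcomes)

# War: 50% chance of losing 99% of utility (retain 0.01),
#      50% chance of losing nothing (retain 1.0).
war = expected_utility([(0.5, 0.01), (0.5, 1.0)])   # ≈ 0.505

# Peace: guaranteed 60% of utility, from the positive-sum overlap
# of the two agents' objective functions.
peace = expected_utility([(1.0, 0.60)])              # = 0.60

print(f"E[war]   = {war:.3f}")
print(f"E[peace] = {peace:.3f}")
print("choose:", "peace" if peace > war else "war")
```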
But this does depend on meaningful uncertainty persisting even post-FOOM. What is your reasoning for expecting such uncertainty to be so unlikely?
Even in board games like Go (with a much more constrained strategy space than reality) it is computationally infeasible to consider all possible future opponent strategies, and thus against a near-peer adversary action-values still carry high uncertainty. Do you just think that “game theory that allows an AGI to compute general-equilibrium solutions and certify dominant strategies for multi-agent games as complex as AGI-war” is a computationally-tractable thing for an earth-bound AGI?
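For a sense of scale, a back-of-envelope sketch; the branching factor (~250) and game length (~150 moves) are the standard rough figures for 19×19 Go, and this is an order-of-magnitude illustration, not a hardness argument:

```python
import math

# Back-of-envelope: size of Go's game tree vs. physical limits.
# Standard rough figures: ~250 legal moves per position, ~150 moves per game.
branching_factor = 250
game_length = 150

# log10 of the game-tree size, 250**150, computed in log space to avoid overflow.
log10_tree = game_length * math.log10(branching_factor)   # ≈ 360

log10_atoms_in_universe = 80   # ~10^80 atoms in the observable universe

print(f"Go game tree ~ 10^{log10_tree:.0f} lines of play")
print(f"vs. ~10^{log10_atoms_in_universe} atoms in the observable universe")
# Even one line of play per atom per Planck time (~10^43/s) over the age of
# the universe (~10^17 s) is only ~10^140 evaluations, nowhere near 10^360.
```

And Go is a two-player, zero-sum, perfect-information game; an AGI-war game is none of those.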
If that’s a crux, I wonder if we can look at hardness proofs for different games and see what this looks like in simpler environments. (Known results already point that way: computing a Nash equilibrium is PPAD-complete even for two-player games, and AGI-war is far messier than that.)
EDIT: consider even the super-simple risk that B tries to destroy A, but A manages to send out a couple of near-light-speed probes into the galaxy/nearby galaxies just to inform any other currently-hiding AGIs about B’s historical conduct/untrustworthiness/refusal to live-and-let-live. If an alien AGI C ever encounters such a probe, it would update towards non-cooperation enough to permanently worsen B-C relations should they ever meet. In this sense, the permanent loss from war approaches certainty: if B has an ongoing nonzero probability per epoch of encountering an alien superintelligence, the chance that the reputational cost is eventually paid tends to 1.
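A toy version of that last step, where q is a made-up per-epoch probability of B running into an alien superintelligence:

```python
# Toy model: if each epoch carries an independent probability q of B
# encountering an alien AGI that has seen A's probe, the chance the
# reputational cost is eventually paid is 1 - (1 - q)**t, which -> 1.

q = 1e-6          # assumed per-epoch encounter probability (made up)

for t in (10**6, 10**7, 10**8):
    p_paid = 1 - (1 - q) ** t
    print(f"after {t:.0e} epochs: P(cost paid) = {p_paid:.4f}")

# With any fixed q > 0, P(cost paid) -> 1 as t grows, so the expected
# reputational penalty from choosing war cannot be discounted to zero
# unless B also discounts the far future itself.
```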