It’s not directly relevant, but here’s my favourite fact about Chicken, which I think I found in a book by Steven Brams: If you play a game of Chicken against God (or Omega, or any other entity able to read your mind or otherwise predict your behaviour), God loses. (Because all you have to do is decide not to flinch, and your omniscient opponent knows you will not flinch, and It then has no better option than to flinch and let you win.)
Of course the correct next inference is that Omega either doesn’t play Chicken, or cheats (at which point It is in fact no longer playing Chicken but some other game). Seems reasonable enough.
This makes me curious what happens when two algorithms, each of which has access to the other’s source code, play Chicken against each other.
Well, obviously the result depends on what the algorithms actually do. If you’re going to be playing against an opponent with access to your source code, you’d quite like to be a nice simple “always-stand-firm” program, which even a very stupid opponent can prove will not flinch. But of course that leads to all algorithms doing that, and then everybody dies.
It’s not clear that this is avoidable. While it’s interesting to think in the abstract about programs analysing one another’s code, I think that in practice programs clever enough to do anything nontrivial with one another’s code are almost inevitably much too difficult to analyse, and unless there are big disparities in computational resource available simulation isn’t much help either. So we’re left with doing trivial things with one another’s code: e.g., perhaps you could say “if facing an exact copy of myself, flinch with probability p; else stand firm unconditionally”, and anything that makes a halfway-serious attempt at analysis will see that it isn’t going to win. (What’s the best value of p? Depends on the exact payoffs. With classical Chicken where “neither flinch” ⇒ “both die”, it’s probably very close to 1. But then with classical Chicken the best meta-algorithm is not to play the damn game in the first place.)
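The copy-vs-copy case above can actually be worked out. Taking the standard Chicken payoff ordering a > b > c > d (the specific numbers below are illustrative assumptions, not anything from the discussion), two copies of the “flinch with probability p against an exact copy” program each get an expected payoff that is quadratic in p, and maximising it gives a closed-form best p — which does indeed crowd toward 1 as the crash outcome d gets worse:

```python
# Two copies of "flinch with probability p against an exact copy" play Chicken.
# Standard Chicken payoffs from my perspective (numbers are illustrative):
#   a: I stand, you flinch (win)    b: both flinch
#   c: I flinch, you stand (lose)   d: neither flinches (crash)
# Chicken requires a > b > c > d.
a, b, c, d = 3, 1, 0, -10

def expected_payoff(p):
    """My expected payoff when each copy independently flinches with prob p."""
    return p * p * b + p * (1 - p) * c + (1 - p) * p * a + (1 - p) ** 2 * d

# Setting the derivative to zero gives the interior optimum
#   p* = (a + c - 2d) / (2 * (a + c - b - d)),
# clamped to [0, 1] (for some payoffs the maximum sits at the boundary p = 1).
p_star = min(1.0, max(0.0, (a + c - 2 * d) / (2 * (a + c - b - d))))
print(p_star)  # 23/24 ~ 0.958 here; as the crash payoff d worsens, p* -> 1

# Sanity check against a brute-force grid search over p.
best = max(expected_payoff(i / 10000) for i in range(10001))
assert abs(expected_payoff(p_star) - best) < 1e-6
```

With d = −10 the optimum is already about 0.96, and sending d toward −∞ (classical “both die” Chicken) pushes p* to 1, matching the remark above that the best p is probably very close to 1.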
“If you play a game of Chicken against God, God loses.”

Does he?
… and great letters of fire appeared in the Sky, visible to half a continent, while a great booming voice read them out loud: “I, The Almighty, Am About To Play Chicken Against gjm. I Will Not Flinch, And Hereby Swear That Should I Do So, I Shall Abdicate My Reign Over The Universe In Favour Of Satan, The Deceiver. I Also Swear That I Have Not And Will Not Read gjm’s Mind Until Either The Game Is Over Or He Has Died And Must Face Judgement.”
Hmm. Does God even have the option of not reading my mind?
Of course the answer is ill-defined for multiple reasons. My feeling is that the standard LW notion of Omega says that It has mind-reading (or simulating, or …) apparatus that It can use or not as It pleases, whereas God simply, automatically, ineffably knows everything that can be known. If so, then when I play them at Chicken Omega wins and God loses.
Also: it’s part of the definition of the game of Chicken that the neither-flinches outcome is worst for both players. So if God’s still playing Chicken even after the precommitment above, then the outcome if I remain unintimidated must be even worse for him than abdicating in favour of Satan. And—if he truly isn’t reading my mind—that’s got to be a real possibility. In which case, for him to play this game at all he’d have to be incredibly stupid. Which isn’t supposed to be one of God’s attributes.
Further, if I understand things correctly, this means there are NO games at which God systematically wins.