Theorem 1.1 is incorrect. The flaw in the proof is that CB(CB)=S does not imply Provable[CB(CB)=S]. Moreover, the theorem's conclusion must itself be wrong, since it would imply that the provability predicate is unsound.
AstuteBot is complicated enough that I can't see what it's supposed to be doing, and in your natural-language description of AstuteBot, the parenthetical rephrasings say something different from the parts of the description they are intended to make precise.
I wonder if there is a way that you might go about defining modal agents that reason something like:
I swerve if my opponent doesn't swerve; otherwise, I don't swerve.
and, very roughly:
If I don't swerve, then my opponent will reason that I don't swerve, and so they will swerve. So, I don't swerve.
It seemed weird to me that there would be something wrong with defining something like CarefulBot, since it seems closely analogous to FairBot from the modal Prisoner's Dilemma.
You can’t define a modal agent that swerves if and only if the other player doesn’t swerve, because “the other player doesn’t swerve” is not a modalized formula.
CarefulBot does not seem much like FairBot to me. It also seems undeserving of its name, since it doesn’t swerve unless it can prove that the other player won’t. Perhaps instead you should make it so it swerves unless it can prove that the other player does?
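If it helps to see the two self-play fixed points concretely: letterless fixed points in GL can be evaluated world-by-world along a finite linear Kripke chain, where box(p) holds at a world iff p held at every earlier world. Here is a minimal Python sketch of that evaluation (the function name fixed_point_trace and the depth are my own choices, and I'm assuming I've translated the two agents correctly):

```python
# Letterless fixed points of GL evaluated on a finite linear Kripke
# chain: box(p) holds at world k iff p held at every world < k,
# so box(p) is vacuously true at the bottom world.
def fixed_point_trace(step, depth=10):
    """Truth values of p at worlds 0..depth-1, where p <-> step(box_p)."""
    vals = []
    for _ in range(depth):
        box_p = all(vals)  # vacuously True at world 0
        vals.append(step(box_p))
    return vals

# A FairBot-style "swerve iff I can prove my opponent swerves" against
# itself: p <-> box(p), whose fixed point is True by Löb's theorem.
loebian = fixed_point_trace(lambda box_p: box_p)

# The suggested variant against itself, "swerve unless I can prove my
# opponent swerves": p <-> not box(p).
careful = fixed_point_trace(lambda box_p: not box_p)

print(loebian)  # True at every world: provably swerves against itself
print(careful)  # False at the bottom world, True above
```

If I've set this up correctly, the variant's self-play fixed point comes out false only at the bottom world, i.e. it is equivalent to ¬box(⊥): GL proves neither that the agent swerves against itself nor that it doesn't, but on the assumption that PA is sound, it does in fact swerve.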