The Standard PD is set up so there are only two agents and only their choices and values matter. I tend to think of rationality in these dilemmas as being largely a matter of reputation, even when the situation is circumscribed and described as one-shot. Hofstadter’s concept of super-rationality is part of how I think about this. If I have a reputation as someone who cooperates when that’s the game-theoretically optimal thing to do, then it’s more likely that whoever I’ve been partnered with will expect that from me, and cooperate if he understands why that strategy works.
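Hofstadter’s reasoning can be made concrete with a toy payoff matrix. This is a minimal sketch; the specific numbers are illustrative (any temptation > reward > punishment > sucker ordering works) and are not taken from the comment itself:

```python
# One-shot Prisoner's Dilemma payoffs: (my move, partner's move) -> my payoff.
# Illustrative numbers satisfying T > R > P > S.
PAYOFFS = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: mutual defection
}

def classical_choice():
    """A classical agent treats the partner's move as fixed; defection
    strictly dominates because it pays more against either response."""
    dominated = all(PAYOFFS[("D", o)] > PAYOFFS[("C", o)] for o in "CD")
    return "D" if dominated else "C"

def superrational_choice():
    """A superrational agent assumes an identical reasoner reaches the same
    conclusion, so only the diagonal outcomes (C,C) and (D,D) are reachable;
    cooperation wins because R > P."""
    return max("CD", key=lambda move: PAYOFFS[(move, move)])

print(classical_choice())      # D
print(superrational_choice())  # C
```

The two functions differ only in which outcomes they treat as reachable, which is the whole of the disagreement between the classical and superrational analyses.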
Since it would buttress that reputation, I keep hoping that rationalists in general will come to embrace some interpretation of super-rationality, but I keep seeing self-professed rationalists whose choices strike me as short-sightedly instrumentalist.
But this seems to be a completely different situation. Rather than attempting to cooperate with someone whom I can assume to be my partner, someone who has my interests at heart, I’m asked to play a game with someone who doesn’t reason the way I do and who explicitly mistrusts my reasoning. In addition, the payoff doesn’t go to me and the other player; it goes to a huge number of uninvolved people. MBlume seems to want me to think of it in terms of something valuable in my preference ranking, but he’s actually set it up so that it isn’t a prisoner’s dilemma at all: it’s a hostage situation, one in which I have a clearly superior choice and an opportunity to try to convince someone whose reasoning is alien to my own.
I defect. I do my best to convince my friend that the stakes are too high to justify declaring his belief in god. So you can get me to defect, but only by setting up a situation in which my allies aren’t sitting on the other side of the bargaining table.