Well, obviously. But the more interesting question is what if you suspect, but are not certain, that your opponent is Eliezer Yudkowsky? Assuming identity makes the problem too easy.
My position is that I’d expect a reasonable chance that an arbitrary, frequent LW participant playing this game against you would also end with 10 (C,C)s. I’d suggest actually running this as an experiment if I didn’t think I’d lose money on the deal...
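The "10 (C,C)s" outcome is easy to make concrete. Here's a minimal sketch of a 10-round iterated prisoner's dilemma in which both players run tit-for-tat; the payoff numbers are the standard illustrative values (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for a unilateral defection), not the actual stakes of the game discussed here:

```python
# Illustrative payoff matrix (assumed values, not the real stakes):
# each key is (my move, opponent's move), each value is (my payoff, theirs).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated PD and return both scores plus the round-by-round moves."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees only the other's history
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b, list(zip(history_a, history_b))

score_a, score_b, rounds_played = play(tit_for_tat, tit_for_tat)
print(score_a, score_b)  # 30 30: ten straight (C,C) rounds
```

Two sufficiently similar cooperators never give each other a defection to echo, so the whole transcript is (C,C).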
Harsher dilemmas (a more meaningful stake, or a loss from unreciprocated cooperation that may not be recoverable in the remaining iterations) would make me increasingly hesitant to assume "this person is probably like me".
This makes me feel like I’m in “no true Scotsman” territory; nobody “like me” would fail to optimistically attempt cooperation. But if caring more about the difference in outcomes makes me less optimistic about other-similarity, then in a hypothetical where I am matched up against essentially myself (but I don’t know this), I defeat myself exactly when it matters—when the payoff is the highest.
And this is exactly the problem: if your behavior on the prisoner's dilemma changes with the size of the outcome, then you aren't really playing the prisoner's dilemma. Your calculation in the low-payoff case was being confused by other terms in your utility function, terms for being someone who cooperates—terms that didn't scale.
Yes, my point was that my variable skepticism is surely evidence of bias or rationalization, and that we can’t learn much from “mild” PD. I do also agree that warm fuzzies from being a cooperator don’t scale.
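The scaling point can be made concrete with a toy calculation. The numbers below are pure assumptions for illustration: a fixed "warm fuzzies from being a cooperator" bonus gets added to the monetary payoff, which flips the decision at low stakes but is swamped once the money scales up:

```python
# A fixed utility bonus for being a cooperator (assumed value, for illustration).
WARM_FUZZIES = 2.0

def best_move(coop_payoff, defect_payoff):
    """Pick the move with higher total utility: money plus the fixed bonus."""
    u_cooperate = coop_payoff + WARM_FUZZIES
    u_defect = defect_payoff
    return "C" if u_cooperate >= u_defect else "D"

# Low stakes: a 1-unit gain from defecting is outweighed by the bonus.
print(best_move(coop_payoff=3, defect_payoff=4))      # C
# Scale the money 100x: the same fixed bonus no longer changes the answer.
print(best_move(coop_payoff=300, defect_payoff=400))  # D
```

Because the bonus is constant while the payoffs grow, the apparent "cooperativeness" at low stakes tells you little about behavior when the difference in outcomes is large.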
If we wanted to be clever, we could include Eliezer playing against himself (just report his own move back to him) as a possibility, though if the probability that he faces himself is high, the exercise seems pointless.
I’d be happy to front the (likely loss of) $10.
It might be possible to make it more like the true prisoner's dilemma if we could come up with two players, each of whom wants the money donated to a cause that they consider worthy but that the other player opposes or considers ineffective.
Though I have plenty of paperclips, sadly I lack the resources to successfully simulate Eliezer's true PD...
Meaningful results would probably require several iterations of the game, though, with different players (also, the expected loss in my scenario was $5 per game).
I seem to recall that Douglas Hofstadter ran an experiment like this with several of his more rational friends, and was distressed by the globally rather suboptimal outcome. I do wonder if we on LW would do better, with or without Eliezer?
Well, let me put it this way—if my opponent is Eliezer Yudkowsky, I would be shocked to walk away with anything but $7.50.