No such thing as future property. This isn’t a factual disagreement on my part, just a quibble over terms; disregard it.
Your example isn’t about signaling or precommitment; it changes the game into a multiple-shot one, modifying the agent’s utility function in an isolated play to account for their reputation in future plays. Yes, it works. But it doesn’t help much in true one-shot (or last-play) situations.
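(To make that concrete, here is a minimal sketch with made-up payoff numbers, not anything from the original exchange: in the bare one-shot PD defection dominates no matter what the other player does, and only after you bolt a hypothetical "reputation cost" for future plays onto the utility function does cooperating become the better reply.)

```python
# A minimal sketch: a standard one-shot PD payoff matrix, and the same
# matrix after adding a hypothetical "reputation cost" for defecting.
# All numbers are illustrative.

# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move, reputation_cost=0.0):
    """Return my best move against a fixed opponent move.

    reputation_cost is subtracted from defection payoffs, standing in
    for the value of my reputation in future plays.
    """
    score = {
        move: payoffs[(move, their_move)]
              - (reputation_cost if move == "D" else 0.0)
        for move in ("C", "D")
    }
    return max(score, key=score.get)

# Pure one-shot PD: defection dominates whatever the opponent does.
assert best_response("C") == "D" and best_response("D") == "D"

# With a large enough reputation cost (i.e. the game is no longer truly
# one-shot), cooperation becomes the better reply to cooperation.
assert best_response("C", reputation_cost=3.0) == "C"
```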
On the other hand, the ideal platonic PD is also quite rare in reality—not as rare as Newcomb’s, but still. You may remember us having an isomorphic argument about Newcomb’s some time ago, with roles reversed—you defending the ideal platonic Newcomb’s Problem, and me questioning its assumptions :-)
Me, I feel no moral qualms about defecting in the pure one-shot PD. Some situations are just bad to be in, and the best way out of them is bad too. Especially situations where something terribly important to you is controlled by a cold, uncaring alien entity, and the problem has been carefully constructed to prevent you from manipulating it (Eliezer’s “true PD”).
No such thing as future property. This isn’t a factual disagreement on my part, just a quibble over terms; disregard it.
In what sense do you mean no such thing? Clearly, there are future properties. My cat has a property of being dead in the future.
Your example isn’t about signaling or precommitment; it changes the game into a multiple-shot one, modifying the agent’s utility function in an isolated play to account for their reputation in future plays. Yes, it works. But it doesn’t help much in true one-shot (or last-play) situations.
Yes, it was just an example of how to set up cooperation without precommitment. It’s clear that signaling that you’re a one-off cooperator is a very hard problem if you are only human and there are no Omegas flying around.
My cat has a property of being dead in the future.
Not with probability one, it doesn’t.
This doesn’t place the future in a privileged position. Even though I’m certain I saw my cat 10 minutes ago, it wasn’t alive a week ago with probability one, either.
Sorry. I deleted my comment to acknowledge my stupidity in making it. By now it’s clear that we don’t disagree substantively.