Ah, I see. We may not disagree, then. My angle was simply that “continuing to agree on all decisions” might be quite robust against environmental noise, assuming the decision is one my values bear on (i.e. not chocolate versus vanilla, which I might settle with a coin flip anyway!)
In the OP’s scenario, yes, I cooperate without bothering to reflect. It’s clearly, obviously, the thing to do, says my brain.
I don’t understand the relevance of the True Prisoner’s Dilemma (TPD). How can I possibly be in a TPD against myself, when I can’t even be in one against a randomly chosen human?
The OP is assuming selfishness, which makes this a TPD: any PD is a True one for a selfish player. Is cooperating still the obvious thing to do if you’re selfish?
Yes, for a copy close enough that he will do everything that I will do and nothing that I won’t. In simple resource-gain scenarios like the OP’s, I’m selfish relative to my value system, not relative to my locus of consciousness.
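To spell out why cooperation falls out of that, here is a minimal sketch with hypothetical payoff numbers in the standard PD ordering (T > R > P > S). If the copy’s move is guaranteed to mirror mine, only the diagonal outcomes (C,C) and (D,D) are reachable, and mutual cooperation wins; against an independent opponent, defection dominates case by case.

```python
# Payoffs for the row player (hypothetical numbers, standard ordering):
# T=5 (temptation) > R=3 (reward) > P=1 (punishment) > S=0 (sucker).
PAYOFF = {
    ("C", "C"): 3,  # both cooperate
    ("C", "D"): 0,  # I cooperate, other defects
    ("D", "C"): 5,  # I defect, other cooperates
    ("D", "D"): 1,  # both defect
}

def best_move_against_copy():
    """Against an exact copy, the copy's move coincides with mine,
    so only the diagonal outcomes (C,C) and (D,D) are reachable."""
    return max("CD", key=lambda m: PAYOFF[(m, m)])

def best_move_independent(their_move):
    """Against a causally independent opponent, compare my payoffs
    holding their move fixed."""
    return max("CD", key=lambda m: PAYOFF[(m, their_move)])

print(best_move_against_copy())    # "C": cooperation wins on the diagonal
print(best_move_independent("C"))  # "D": defection dominates either way
print(best_move_independent("D"))  # "D"
```

The point of the sketch is that “he will do everything that I will do” deletes the off-diagonal rows of the matrix, which is exactly what makes the choice obvious.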
So we have different models of selfishness, then. My model doesn’t care about anything but “me”, which doesn’t include clones.