Eliezer Yudkowsky: Reciprocity in humans is an executing adaptation. It is not strategically convergent for all minds toward all other minds. It's strategic only:
1. By LDT agents
2. Toward sufficiently strong LDT-agent-predictors
3. With negotiating power.
I assume this is referring to a one-shot context? Reciprocity seems plenty strategic for other sorts of agents/counterparties in an iterated context.
I think that’s implicitly covered under #3. The ability to alter outcomes of future interactions is a form of negotiating power.
Yes, but EY’s statement implies that all (1, 2, 3) must be true for reciprocity to be strategic. There are iterated contexts where 1 and/or 2 do not hold (for example, a CDT agent playing iterated prisoner’s dilemma against a simple tit-for-tat bot).
I think I agree with your comment except for the “but.” AFAICT it doesn’t contradict mine? In your parenthetical scenario, #3 also does not hold—the CDT agent has no negotiating power against the tit-for-tat bot.
This confuses me. Are you saying the CDT agent does not have “the ability to alter outcomes of future interactions”?
I am not. I am only saying that #3 is sufficient to cover all iterative interactions where one player’s actions meaningfully alter the others’ outcomes.
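To make the parenthetical scenario above concrete, here is a minimal simulation sketch (not from the original discussion) of an iterated prisoner's dilemma against a tit-for-tat bot. The payoff values are the standard textbook ones, and the always-defect policy stands in, loosely, for an agent that treats each round as causally isolated; the payoffs, round count, and function names are all illustrative assumptions.

```python
# Sketch: iterated prisoner's dilemma against a tit-for-tat bot, comparing an
# always-defect policy with a reciprocal (tit-for-tat) policy. Payoff values
# are the standard textbook ones and are assumptions, not taken from the thread.

from typing import Callable, List, Tuple

# (my_payoff, their_payoff) for each (my_move, their_move); C = cooperate, D = defect
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

Strategy = Callable[[List[str]], str]  # maps the opponent's move history to the next move

def tit_for_tat(opponent_history: List[str]) -> str:
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history: List[str]) -> str:
    """Defect every round, regardless of history."""
    return "D"

def play(strategy_a: Strategy, strategy_b: Strategy, rounds: int = 100) -> Tuple[int, int]:
    """Run an iterated game and return cumulative payoffs for (a, b)."""
    history_a: List[str] = []  # moves made by a (visible to b)
    history_b: List[str] = []  # moves made by b (visible to a)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print("reciprocator vs tit-for-tat:", play(tit_for_tat, tit_for_tat))
    print("always-defect vs tit-for-tat:", play(always_defect, tit_for_tat))
```

With these assumed payoffs and 100 rounds, the reciprocal player scores 300 against tit-for-tat while the always-defect player scores 104: this is the sense in which reciprocity can remain strategic in an iterated context even when conditions 1 and 2 do not hold, and why the thread turns on whether that fact is already captured by condition 3.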