To the extent that iterated decision theory accurately models historical selection pressures which shaped our intuitions, I agree with you. However, moral positions like “violently victimizing someone from your own tribe for trivial personal gain is bad and should be heavily discouraged” have been uncontroversial since before decision theory was formalized.
Imagine a simple decision game: Should I eat the poisonous fruit? Yes (-100) or No (0). Obviously, No is the superior answer, and it didn't take the publication of this decision-theory result for humans to realize it. Constructing the decision game means writing down the expected payouts of the environment, not setting them.
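The one-shot game above can be sketched in a few lines (the payoff values are the illustrative ones from the example; the action names are my own labels):

```python
# One-shot decision game. The environment's payoffs are given in advance;
# the "game" merely writes them down rather than setting them.
payoffs = {"eat_poisonous_fruit": -100, "abstain": 0}

# The favored choice is simply the action with the highest payoff.
best_action = max(payoffs, key=payoffs.get)
print(best_action)  # "abstain"
```

The point is that the decision-theoretic machinery here is pure bookkeeping: the ranking of actions was fixed by the environment before anyone formalized it.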
To take your example: as long as increasing the power of the tribe benefits you (and I agree that it usually will), reducing intra-tribe squabbling is the better long-term choice. Decision theory doesn't disagree, but it isn't necessary for the conclusion. However, since the incentive is already there, there's no reason why evolution would select for a "baked-in" preference.
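The long-term incentive can be made concrete with a toy iterated game. This is a minimal sketch with made-up numbers: defecting against the tribe yields a one-time gain, but a weakened tribe pays everyone less in every later round, so the pre-existing reward structure already favors cooperation without any baked-in preference:

```python
# Toy iterated game. Parameters are illustrative assumptions, not
# derived from any real model: cooperating pays 5 per round, a single
# defection pays 8 once but permanently weakens the tribe, docking 3
# from every subsequent round's payoff.
def total_payoff(defect_round, rounds=10, coop_gain=5, defect_gain=8, penalty=3):
    total = 0
    weakened = False
    for r in range(rounds):
        if r == defect_round:
            total += defect_gain
            weakened = True
        else:
            total += coop_gain - (penalty if weakened else 0)
    return total

always_cooperate = total_payoff(defect_round=None)  # 10 * 5 = 50
defect_first = total_payoff(defect_round=0)         # 8 + 9 * (5 - 3) = 26
print(always_cooperate > defect_first)  # True
```

The compounding loss over the remaining rounds is what makes the short-term grab a losing strategy; the environment rewards restraint on its own.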
The fact that the environment rewards certain choices is a sufficient reason for those choices to be favored. I referenced decision theory only to have a way to rigorously identify which choices are favored by pre-existing reward structures.
The ‘relatively uncontroversial’ positions are such because of the extent to which they’ve been permanently wired into human intuition.
No—the “relatively uncontroversial” positions are the ones most consistent with decision theory over repeated iterations.