> That one agent’s preferences differ greatly from the norm does not automatically make cooperation impossible.
I wasn’t arguing that cooperation is impossible. From everything you said there, it looks like your understanding of morality is similar to mine:
Agents each judging possible outcomes according to their own subjective values and acting to maximize those values, where the ideal strategy can vary between cooperation, competition, and so on.
This makes sense, I think, when you say:
> For example, a society confronting a would-be suicide bomber will (morally and practically) incarcerate him
The members of that society do so because they prefer the outcome in which he does not suicide-attack them to the one in which he does.
> once thwarted from his primary goal, the would-be bomber may find that he now has some common interests with his captors
This phrasing seems exactly right to me. The would-be bomber may elect to cooperate, but only if he feels that his long-term values are best fulfilled in that manner. It is also possible that he will resent his captivity and, if released, try again to attack.
If his utility function assigns (carry out martyrdom operation against the great enemy) an astronomically higher value than it assigns his own survival or material comfort, it may be impossible for society to contrive circumstances in which he would agree to long-term cooperation.
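A toy sketch of that dominance point in Python, with every number and offer name invented purely for illustration:

```python
# Toy model: every inducement society can offer yields a bounded utility,
# but the martyrdom operation is valued astronomically higher, so no
# offer ever changes his best action. All numbers are made up.

MARTYRDOM_UTILITY = 1e9  # astronomically valued terminal goal

society_offers = {
    "comfortable imprisonment": 10.0,
    "supervised release": 50.0,
    "amnesty plus material comfort": 100.0,
}

def accepts(offer_utility: float) -> bool:
    """He cooperates only if the offer beats the martyrdom payoff."""
    return offer_utility > MARTYRDOM_UTILITY

for offer, utility in society_offers.items():
    print(f"{offer}: cooperates = {accepts(utility)}")

# Prints False for every offer: as long as the martyrdom term dominates,
# no circumstances society can contrive make long-term cooperation his
# utility-maximizing choice.
```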
This sort of morality, where agents negotiate their actions based upon their self-interest and the impact of others’ actions until they reach an equilibrium, makes perfect sense to me.
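For concreteness, here is a minimal sketch of that negotiate-to-equilibrium picture, again with entirely invented payoffs: each agent repeatedly picks the action that maximizes its own subjective payoff given the other's current action, and the process stops when neither wants to switch (a pure-strategy equilibrium):

```python
# Two agents, each with a private payoff table over joint actions.
# Payoff numbers are invented (they happen to form a prisoner's dilemma).
ACTIONS = ["cooperate", "defect"]

# payoffs[agent][(own_action, other_action)] -> subjective utility
payoffs = [
    {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
     ("defect", "cooperate"): 4, ("defect", "defect"): 1},
    {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
     ("defect", "cooperate"): 4, ("defect", "defect"): 1},
]

def best_response(agent: int, other_action: str) -> str:
    """The action maximizing this agent's own subjective payoff."""
    return max(ACTIONS, key=lambda a: payoffs[agent][(a, other_action)])

# Iterate best responses until neither agent wants to change.
state = ["cooperate", "cooperate"]
for _ in range(100):  # bound iterations; best-response dynamics can cycle
    new_state = [best_response(0, state[1]), best_response(1, state[0])]
    if new_state == state:
        break  # neither agent wants to switch: an equilibrium
    state = new_state

print("equilibrium:", state)  # -> ['defect', 'defect'] with these payoffs
```

With these particular payoffs the process settles on mutual defection, the usual prisoner's-dilemma outcome; change the tables and the same dynamic can just as well settle on cooperation.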