Kant’s Categorical Imperative, the classical Golden Rule, and Hofstadter’s superrationality all seem to me to be reflections of the same observation: Ethics rests on an algebraic symmetry among agents.
(I don’t have the philosophical or mathematical skill to formalize this. I recognize that this may make me sound like — or be — a crank on the subject. Sorry about that.)
The concept of morality doesn’t make sense without multiple agents. If your model of the world doesn’t include other entities of the same kind as you — but who are not you — then moral reasoning leads quite logically to sociopathy. If you are the only real agent, or for that matter if the universe is a dialogue between your unique soul and Almighty God, then reasoning about morality is nonsense. It is only because there are multiple agents, each capable of influencing the others’ outcomes, that it makes any sense at all to talk about morality.
Possibly the most dramatic example of this symmetry I’ve seen is Eliezer’s True Prisoner’s Dilemma, which shows that the symmetry can exist even between agents that do not share any object-level values. If you believe the paperclip-maximizer in the True Prisoner’s Dilemma is a rational agent that models the world as containing other rational agents symmetric to itself (but with different values), then you cooperate: you’re not deciding among four possible outcomes; the symmetry means you’re deciding between (C,C) and (D,D), and (C,C) is the better of those two for both of you.
(It’s not a matter of judging whether you implement the same algorithm as the other guy. It’s a matter of judging whether you’re in the same situation as the other guy, whether you correctly appraise this, whether the other guy correctly appraises it in turn, and so on recursively.)
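Here is a minimal sketch of that collapse in Python. The payoff numbers are the conventional illustrative ones for a Prisoner’s Dilemma, not values from Eliezer’s post; the point is only to contrast dominance reasoning with reasoning under the symmetry.

```python
# Illustrative Prisoner's Dilemma payoffs (conventional values, assumed):
# PAYOFFS[(my_move, their_move)] = (my_payoff, their_payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_reply(their_move: str) -> str:
    """Classical reasoning: treat the other player's move as fixed
    and pick my best response to it."""
    return max("CD", key=lambda me: PAYOFFS[(me, their_move)][0])

def symmetric_choice() -> str:
    """Reasoning under the symmetry: both agents are in the same
    situation, so only the diagonal outcomes (C,C) and (D,D) are
    reachable; pick the better diagonal."""
    return max("CD", key=lambda move: PAYOFFS[(move, move)][0])

# Dominance: D is the best reply whatever the other player does.
assert best_reply("C") == "D" and best_reply("D") == "D"
# Symmetry: the choice is between (C,C) and (D,D), and (C,C) wins.
assert symmetric_choice() == "C"
```

Nothing about the sketch requires the two players to share values; it only requires that each correctly models the other as facing the same structure of choice.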
Kant’s approach seems to be partly based on the idea of an equilibrium: acting on a rule that treats others as mere means is self-undermining; treating others as ends is the only winning choice if those others are also rational. It also seems to me that reflexive decision theories aim at a more axiomatic reflection of this same principle, by explicitly incorporating the notion that agents model other agents’ behavior.
Evolution has encoded into humankind an instinct for recognizing agentiness. This instinct is buggy as hell; it is much more sensitive than specific. It sees agentiness in non-agenty crap like the weather — “Hey you! Rain agent! Here, have a chicken … now, come rain on my crops, please!” — and if you draw two dots and a horizontal line beneath them, it sees the face of an agent. However, it is by dint of recognizing that the world contains other agents like ourselves, who also in turn recognize this fact, that humans are able to cooperate for mutual benefit in a way that other apes and mammals cannot.