The deeper, lonelier truth is that the “fragile angel” view doesn’t actually pay rent, and you do much better modeling other people as vicious reptilian warminds with transiently self-aware neocortices stapled on as an afterthought.
Edit: The cynical Hansonian/Wattsian view is that the neocortex is there to confabulate a rationalization for whatever the lizard brain already decided to do. We cooperate for game-theoretic reasons, then explain that cooperation as angelic, altruistically driven impulse. Which framing is “true”? Does it matter?
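The “we cooperate for game-theoretic reasons” point can be made concrete with a toy iterated prisoner’s dilemma. This is just an illustrative sketch (the strategy names, payoffs, and round count are my assumptions, not anything from the thread): purely self-interested reciprocators end up cooperating almost all the time and collectively outscore an unconditional defector.

```python
# Toy iterated prisoner's dilemma (illustrative sketch, not from the thread).
# Standard payoffs: mutual cooperation 3/3, mutual defection 1/1,
# defecting against a cooperator 5/0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's previous move.
    return their_hist[-1] if their_hist else 'C'

def grudger(my_hist, their_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return 'D' if 'D' in their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def play(s1, s2, rounds=100):
    """Play two strategies against each other; return their total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)  # moves are chosen simultaneously
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

strategies = {'tit_for_tat': tit_for_tat,
              'grudger': grudger,
              'always_defect': always_defect}

# Round-robin tournament: every strategy plays every strategy (itself included).
totals = {name: 0 for name in strategies}
for n1, s1 in strategies.items():
    for n2, s2 in strategies.items():
        own_score, _ = play(s1, s2)
        totals[n1] += own_score

print(totals)  # the conditional cooperators outscore the pure defector
```

Nothing in the code knows about altruism; the reciprocating strategies cooperate only because mirroring cooperation is what maximizes their own payoff against each other, which is the cynical framing in miniature.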
Both? I’ve thought for a while that people are rarely wrong about their motivations, but often think of them as more general than they really are. If someone’s going to claim altruistic reasons for doing something cooperative, it’s both simpler (from the outside) and less dissonant (from the inside) if their claims correspond to an actual altruistic impulse—yet those impulses rarely extend to outgroup members.
I agree: one can hold both of the conflicting views, “we’re all selfish replicators” and “we’re all flawed altruists,” the way one can flip between the vase and face interpretations of the Rubin vase, or between the two spatial interpretations of a Necker cube. They’re both metaphors, basically. Like you say, the danger of overweighting the altruistic interpretation is expecting too much altruistic behavior, and the danger of overweighting the cynical interpretation is failing to account for actual love and kindness when you see it.
Incidentally I think HPMOR!Harry and HPMOR!Quirrell do a good job at exemplifying the contrasting arguments for either side …
This doesn’t actually seem to be true, at least economically.
Poetic license.