One hypothesis I have for why people care so much about some distinction like this is that humans have social/mental modes for dealing with people who are explicitly malicious towards them, who are explicitly faking cordiality in attempts to extract some resource. And these are pretty different from their modes of dealing with someone who’s merely being reckless or foolish. So they care a lot about the mental state behind the act.
[...]
On this theory, most people who are in effect trying to exploit resources from your community, won’t be explicitly malicious, not even in the privacy of their own minds. (Perhaps because the content of one’s own mind is just not all that private; humans are in fact pretty good at inferring intent from a bunch of subtle signals.) Someone who could be exploiting your community, will often act so as to exploit your community, while internally telling themselves lots of stories where what they’re doing is justified and fine.
I note that while I find both paragraphs individually reasonable [and I find myself nodding along to them], there seems to be a soft contradiction between them that needs explanation.
Namely, why would human evolution (whether genetic or cultural) be maladaptive here? “Which humans are bad allies” seems close to the central kind of problem we should expect evolution in a social context to be good at solving, so I feel the burden of proof is on whoever posits a local deviation to explain why our faculties are off in this case. Some possibilities:
1. “Our” community is different [why?]
2. People in history are in fact object-level wrong about the existence (or at least prevalence) of evil actors. In reality “Almost no one is evil, almost everything is broken.” A possible evolutionarily concordant just-so story here is something in the direction of rational irrationality, perhaps humans are better at tribal ostracism etc if they collectively pretend (and/or genuinely believe) other humans who do bad things are genuinely evil and thus worthy of ostracism.
3. ???
Both explanations are possible but I don’t know which one is right (or both, or neither); I just want to highlight that there is something left to be explained in your model so far.
There’s no contradiction. There are two competing sides of the evolutionary process: one side is racing to understand intentions as well as possible, the other side is racing to obscure its intentions, in this case by not holding them consciously.
I think one aspect which softens the discrepancy is that our intuitions here might not be adapted to large-scale societies.
If everyone lives mainly within their own tribe and has only occasional, isolated interactions with other tribes and perhaps the odd tribe-switcher (similar to village life compared to city life), I could well imagine that “are they truly part of our tribe?” actually manages to filter out a large portion of harmful cases.
Also, regarding 2):
If indeed almost no one is evil and almost everyone is broken, then there are strong incentives to make sure that the social rules do not rule out your own way of exploiting the system. Because of this I would not be surprised if “common knowledge” around these things tends to be warped by the class of people who get to make the rules.
Another factor is that, as a solution to a coordination problem, “never try to harm others” seems like a very fine Schelling point to use as a common denominator.
It’s possible, but I would previously have assumed sociopathy/intentional maleficence etc. to be less common in the ancestral environment relative to other harmful social situations. My own just-so story would suggest that people’s intuitions from a tribal context are maladaptive in underpredicting sociopathy or deliberate deception.
I am not sure we disagree with regard to the prevalence of maleficence. One reason why I would imagine that
“are they truly part of our tribe?” actually manages to filter out a large portion of harmful cases.
works in more tribal contexts would be that cities provide more “ecological” niches (would the term be “sociological” here?) for this type of behaviour.
intuitions [...] are maladaptive in underpredicting sociopathy or deliberate deception
Interesting. I would mostly think that people today are far more specialized in their “professions”, such that for any given ability we will come into contact with significantly more skilled people than a typical ancestor of ours would have. If I try to think of examples where people are far too trusting, or far too ready to treat someone as an enemy, I have the impression that examples of both mistakes come to mind quite readily. Because of this, I do not agree with “underpredict” as a description and instead tend toward a more general “overwhelmed by reality”.