One big reason people don't endorse Heuristic C (though not the whole reason) is that the general population is much more selfish and has much higher time preference than LW/EA people. A related assumption I think EAs/LWers lean on far too heavily is that the public inherently cares about the future of humanity, independent of their selfish preferences.
More generally, I think Robin Hanson is right that a lot of our altruism is mostly fictional: it's really a way to signal and exploit social systems, or to cooperate with other people in the cases where it isn't fictional. The behavior we actually observe is what you'd expect in a world where people's altruism is mostly fictional and people don't know much about AI.
This is complementary to other explanations, like xpym's.
A potential crux with a lot of the post: I think something like "rationalizing why your preferred policies are correct", to quote PoignardAzur, is ultimately what ethical reasoning has to look like in general, and there's no avoiding that part, which means dealing with conflict theory is inevitable. (PoignardAzur's comment argues that the proposed examples are bad because they invoke political debates/conflict-theory issues, but contra that comment, I don't think this is avoidable in this domain.)
There are interesting questions to ask about how we got the morals we have (I'd say something like cooperation among people who need to share resources in order to survive and thrive explains why we developed any altruism or moral system that wasn't purely selfish). But the moral-objectivism assumptions embedded in the discourse are a poor fit for explaining how we arrived at the morality/values we have, and it's worth trying to frame the discussion in moral relativist terms instead.