Zero-sum expectations as an explanation of omnicide-indifference
For quite a while, I’ve been confused why (sweet nonexistent God, whyyyyy) so many[1] people intuitively believe that any risk of genocide against some ethnicity is unacceptable while being… at best lukewarm about the idea of humanity going extinct.
I went through several hypotheses, but none of them felt quite right. Yes, people don’t take extinction risks as seriously as risks of genocide, but even accounting for that, there still seems to be an unexplained gap. Even when you explain that everyone dying includes them, their family, and their friends… they express a preference against it, but no conviction in that direction.[2] Or at least that’s how I interpret the blasé attitude towards human extinction combined with the visible anger and disgust at the idea of genocide.
Recently, I realized that there is a decent explanation for why so many people believe this: if we model them as operating under a strictly zero-sum model of the world, ‘everyone loses’ is basically an incoherent statement. At best it approximates either no change, which would be morally neutral, or an equalized outcome, which would be preferable to some.
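To make that intuition a bit more precise (my own sketch of the textbook definition, not something the people I’m describing would write down): in a zero-sum game the players’ payoffs cancel out by construction, so an outcome in which every player is strictly worse off is simply not representable:

$$\sum_{i=1}^{n} u_i = 0 \quad\Longrightarrow\quad \neg\,\bigl(\forall i:\ u_i < 0\bigr)$$

If every payoff $u_i$ were negative, the sum would be negative too, contradicting the zero-sum constraint; any loss must be matched by an equal gain elsewhere, so ‘everyone loses’ has no slot in the model.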
Of course, I’m not proposing that these people explicitly believe that the world is best modeled as a zero-sum game. No, we shouldn’t expect random Joe to even know what a zero-sum game is. Rather, we should notice that many things in human experience throughout history are zero-sum, so it should be no surprise that some people have those expectations baked into their intuitions.
Once I thought of this, I started seeing this maladaptation everywhere: from naive redistribution schemes[3] (“No, you don’t understand, we have to stop making the rich richer! I don’t care if there will be less stuff made to go around, as long as the rich get poorer it’ll be fine.”) and excuses for corruption (“We finally have power, of course we have to enrich ourselves and our supporters, that’s the whole point of gaining power!”), to the admittedly weaker connection to mistake vs. conflict theorists (if you believe the whole world works as a zero-sum game, you would reasonably expect nearly every failure to be enemy action).
Assuming this is the root cause, it does present an interesting question: would those people internalise the danger of omnicide if the causes of said omnicide were anthropomorphised? I guess it is not that easy for x-risks along the lines of an asteroid impact (“Angry big asteroid is gonna willfully murder us all!”), but for AI this should be quite a bit simpler.
To be clear, I’m not advocating for doing that; frankly, people are gonna anthropomorphize AI on their own anyway. Which leads me to the prediction that, if all of this is true, the median objection will at some point shift from “Humanity going extinct? Meh, I have more important political issues to discuss!” to “That would be bad, but AI is my best friend and would never do that! Unless someone sabotages it somehow!”
[1] I have no statistical data on the prevalence of this perspective. It certainly feels like a significant chunk of the populace, but that’s an anecdote. So yes, this whole thing is based on vibes, not data, sorry.
[2] Where by preference I mean one arising from purely System 2 thinking about morality, while conviction arises from a System 1 gut feeling.
[3] I’m not against redistribution in general (quite the opposite!). I do believe that redistribution that sharply decreases the overall amount of wealth is bad, though.
I think there’s another element to this: moral judgement. Genocide is seen as an active choice. Somebody (or some group) is perpetrating this assault, which is horrible and evil. Many views of extinction don’t have a moral agent as the proximal cause—it’s an accident, or a fragile ecosystem that tips over via distributed little pieces, or something else that may be horrible but isn’t evil.
Ignorant destruction is far more tolerated than intentional destruction, even if the scales are such that the former is the more harmful.
It shouldn’t matter to those who die, but it does matter to the semi-evolved house apes who are pontificating about it at arm’s length.
I agree that this explains at least some of it; it was one of the hypotheses I considered, but it still didn’t sound exactly right.
After all, most of the people this post describes would (I presume; again, no hard statistical data) assume that a genocide was not accidental (and proceed to find where to assign blame). Maybe that’s enough to explain why x-risks like an asteroid impact or ecosystem collapse are treated as acts of God, but in the general case of misfortune the same people would quickly look for a guilty party, even when one doesn’t exist. That makes me sceptical that this explanation is the full story, as most AGI-apocalypse scenarios have plenty of folks to potentially blame. The question then remains why they would presume ignorance rather than willful risk-taking in this particular case, which is what I tried to address here.
Oh, willful risk-taking ALSO gets a pass, or at least less-harsh judgement. The distinction is between “this is someone’s intentional outcome” for genocide, and “this is an unfortunate side-effect” for x-risk.
Most people distinguish between intentional acts and shit that happens.
Edit:
“I’ve thought about this in the context of wondering why people are so much more bothered by police brutality (which kills about a thousand people a year in the US) than traffic fatalities (which kill well over 30,000 people a year). I think there’s some sense in which they see police killings as our societal collective action, while traffic killings are just byproducts of a system we use, but none of that matters to the victims.”
Intention matters because an undesirable outcome that isn’t brought about intentionally is a tragedy, not a wrongdoing.
Which is not to say consequences don’t matter, only that they do a different job. People get more het up over police brutality than traffic accidents because it seems intentional, voluntary, and avoidable... and there is a set of social emotions that function to alter those kinds of behaviours.
People also don’t approve of traffic fatalities, but they think about the subject in a more technical, less emotive way... because you can’t solve the problem by blaming one person or group.
They are different kinds of “bad”, so there isn’t a problem of failing to trade them off.
One reason is that the intentionality implies different ongoing risks. If a friend dies in a traffic accident, that’s bad. If a friend is assassinated by the secret police, that’s bad, but I also have to wonder if I’ll be next.
I agree with those who say that genocide is seen as deliberate, extinction as an act of God/Nature. I think extinction is also seen as too big to stop, whereas genocide might be stoppable by political or military means. And finally, genocide is seen as something that happens, extinction as something fictional or hypothetical.
Since we are talking about extinction being caused by human creation of out-of-control AIs… you need to recognize that this is a scenario way outside common sense. It’s like asking people to think about alien invasion or being in the Matrix. Genocide may be shocking but in the end it’s just murder multiplied by thousands or millions, and murder is something that all adults know to be real, even if only from the media.
So when you judge how people feel about genocide and about omnicide… it’s like asking how they feel about Hitler, versus asking how they feel about Thanos. Genocide is a thing that happened in the waking world of families and jobs, the world where normal adults spend most of their lives. Omnicide is something we only know about from movies and paleontology.
There is more to reality than the human life cycle, and the boundaries of what is normal do shift, or sometimes are just forcibly invaded by external factors. COVID was an external factor; AI is an external factor even if there are attempts to domesticate and normalize it. Modernity in general, going back more than a hundred years, has constantly been haunted by strange new things and strange new ideas.
It’s not impossible that AI doomerism will become a force sufficient to slow or stop the global race towards superintelligence (though if it’s going to happen, it had better happen soon). Stranger things have taken place. But for that to happen, people would have to regard it as a serious possibility, and they would have to not be seduced by dreams of AI Heaven over fears of AI Hell. So there are specific psychological barriers that would need to be overcome.