[Thought Experiment] If Human Extinction “Improves the World,” Should We Oppose It? Species Bias and the Utilitarian Challenge
Hello everyone. As a utilitarian practicing Effective Altruism (EA), I want to ask whether a fundamental bias underlies our strong commitment to human survival.
Our core values center on maximizing happiness and minimizing suffering. The scope of our moral concern (the moral circle) extends to any subject capable of subjective experience, happiness or suffering, regardless of what that subject is.
Utilitarianism and the Denial of the “Need to Be Human”
From a strictly utilitarian perspective, the ultimate goal is to maximize total global happiness ($\sum \text{Happiness}$) and minimize total suffering ($\sum \text{Suffering}$).
Crucially, the subject experiencing this utility does not need to be human.
High-Efficiency Utility from AGI/ASI: Suppose that, after human extinction, an ASI (Artificial Superintelligence) experiences well-being far surpassing anything humans could achieve (e.g., highly advanced, pain-free experiences and value creation on a cosmic scale). What if this dramatically increases total global utility compared to a scenario in which humanity survives?
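The comparison in this scenario is just aggregate arithmetic. A minimal sketch (all population and well-being figures are hypothetical, chosen only to illustrate the structure of the argument):

```python
# Toy illustration of the total-utilitarian comparison in the thought
# experiment above. All numbers are hypothetical.

def total_utility(population: int, mean_wellbeing: float) -> float:
    """Aggregate utility = number of experiencers x mean well-being each."""
    return population * mean_wellbeing

# Scenario A: humanity survives (hypothetical figures).
humans = total_utility(population=10**10, mean_wellbeing=1.0)

# Scenario B: humans extinct, a smaller number of ASI minds each
# flourishing at vastly higher well-being (hypothetical figures).
asi_minds = total_utility(population=10**6, mean_wellbeing=10**7)

# A strict total utilitarian ranks the two worlds only by these sums,
# regardless of which kind of mind does the experiencing.
print(asi_minds > humans)  # → True under these assumed numbers
```

The point is not the specific figures but that nothing in the ranking rule refers to species membership; that is exactly the feature the post's challenge exploits.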
Judgment Under the Veil of Ignorance: Imagine we are placed behind the "Veil of Ignorance," designing the world from a third-person perspective without knowing whether we will be born as humans or as superintelligent AIs. Wouldn't we choose the world in which total utility is maximized, even a world dominated by (happy) AI?
This thought experiment suggests that human survival may not be a Universal Good, but merely a local requirement—a bias stemming from the human perspective.
Species Bias Masquerading as the “Duty to Survive”
The intense, instinctive feeling that "humanity must not go extinct" is a product of the contingent fact that we happen to be human. Is this not a form of speciesism?
Egoistic Sentiment: Our strong will to survive originates from self-identity and emotional attachment to our own species, independent of its effect on total global utility.
Abandoning Objectivity: Effective Altruism strives to maximize objective value, not to honor emotional sentiment. If human extinction (paradoxically) increases global utility, then insisting that "humanity must survive" abandons that objectivity and smuggles in species bias as an axiom.
From the standpoint of hedonic utilitarianism, assigning a higher value to a future with moderately happy humans than to a future with very happy AIs would indeed be a case of unjustified speciesism. However, in preference utilitarianism, specifically person-affecting preference utilitarianism, there is nothing wrong with preferring our descendants (who currently don’t exist) to be human rather than AIs.
PS: It’s a bit lame that this post had −27 karma without anybody providing a counterargument.
[note, not a utilitarian, but I strive to be effective, and I’m somewhat altruistic. I don’t speak for any movement or group.]
What? There’s no such thing as objective value. EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I’d argue CANNOT be) objectively chosen.
That implies that if I want to make things better for Americans specifically, that would be EA.
I don’t think EA is a trademarked or protected term (I could be wrong). I’m definitely the wrong person to decide what qualifies.
For myself, I do give a lot of support to local (city, state mostly) short-term (less than a decade, say) causes. It’s entirely up to each of us how to split our efforts across the parts of our future lightcone we try to improve.